typeProviders with private GKE clusters - google-deployment-manager

I am new to Google Cloud Deployment Manager (DM) and am writing some code in order to practice. I have read some interesting articles detailing how deploymentmanager.v2beta.typeProvider can be used to extend DM so that it configures Kubernetes objects themselves as additional deployment resources. This is very appealing behavior and seems to open up great opportunities to extend declarative automation to any API, which is cool.
I am attempting to create a private-node/public-endpoint GKE cluster that is managed by custom typeProvider resources corresponding to GKE API calls. It seems that a public-node/public-endpoint GKE cluster is the only configuration that supports DM custom typeProviders, and this seems wrong considering that a private-node/public-endpoint GKE configuration is possible.
It seems odd that deploymentmanager.v2beta.typeProvider would not support a private-node/public-endpoint GKE configuration.
As an aside...
I feel a private-node/private-endpoint cluster fronted by a Cloud Endpoint (to satisfy the DM typeProvider requirement of a public API endpoint) should also be a valid architecture, but I have yet to test it.
I am using the following code:
def GenerateConfig(context):
    # Some constant-ish vars used throughout the template.
    resources = []
    outputs = []
    gcp_type_provider = 'deploymentmanager.v2beta.typeProvider'
    extension_prefix = 'k8s'
    api_version = 'v1'
    kube_initial_auth = {
        'username': 'luis',
        'password': 'letmeinporfavors',
        'clientCertificateConfig': {
            'issueClientCertificate': True
        }
    }

    # EXTEND API TO CONTROL KUBERNETES BEGIN
    # One typeProvider per Kubernetes API group we want to drive from DM.
    kubernetes_exposed_apis = [
        {
            'name': '{}-{}-api-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'api/v1'
        },
        {
            'name': '{}-{}-apps-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/apps/v1'
        },
        {
            'name': '{}-{}-rbac-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/rbac.authorization.k8s.io/v1'
        },
        {
            'name': '{}-{}-v1beta1-extensions-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/extensions/v1beta1'
        }
    ]
    for exposed_api in kubernetes_exposed_apis:
        # The swagger descriptor is fetched from the cluster's endpoint.
        descriptor_url = 'https://{}/swaggerapi/{}'.format(
            '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
            exposed_api['endpoint']
        )
        resources.append(
            {
                'name': exposed_api['name'],
                'type': gcp_type_provider,
                'properties': {
                    'options': {
                        'validationOptions': {
                            'schemaValidation': 'IGNORE_WITH_WARNINGS'
                        },
                        'inputMappings': [
                            # Map the DM resource name onto the k8s object
                            # name in the URL path and the request body.
                            {
                                'fieldName': 'name',
                                'location': 'PATH',
                                'methodMatch': '^(GET|DELETE|PUT)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            {
                                'fieldName': 'metadata.name',
                                'location': 'BODY',
                                'methodMatch': '^(PUT|POST)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            # Authenticate against the k8s API using DM's
                            # own OAuth2 access token.
                            {
                                'fieldName': 'Authorization',
                                'location': 'HEADER',
                                'value': '$.concat("Bearer ", $.googleOauth2AccessToken())'
                            }
                        ],
                    },
                    'descriptorUrl': descriptor_url
                },
            }
        )
    # EXTEND API TO CONTROL KUBERNETES END

    # NETWORK DEFINITION BEGIN
    resources.append(
        {
            'name': '{}-network'.format(context.env['deployment']),
            'type': 'compute.{}.network'.format(api_version),
            'properties': {
                'description': '{} network'.format(context.env['deployment']),
                'autoCreateSubnetworks': False,
                'routingConfig': {
                    'routingMode': 'REGIONAL'
                }
            },
        }
    )
    resources.append(
        {
            'name': '{}-subnetwork'.format(context.env['deployment']),
            'type': 'compute.{}.subnetwork'.format(api_version),
            'properties': {
                'description': '{} subnetwork'.format(
                    context.env['deployment']
                ),
                'network': '$(ref.{}-network.selfLink)'.format(
                    context.env['deployment']
                ),
                'ipCidrRange': '10.64.1.0/24',
                'region': 'us-east1',
                'privateIpGoogleAccess': True,
                'enableFlowLogs': False,
            }
        }
    )
    # NETWORK DEFINITION END

    # GKE DEFINITION BEGIN
    resources.append(
        {
            'name': '{}-k8s-cluster'.format(context.env['deployment']),
            'type': 'container.{}.cluster'.format(api_version),
            'properties': {
                'zone': 'us-east1-b',
                'cluster': {
                    'description': '{} kubernetes cluster'.format(
                        context.env['deployment']
                    ),
                    'privateClusterConfig': {
                        'enablePrivateNodes': False,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    },
                    'ipAllocationPolicy': {
                        'useIpAliases': True
                    },
                    'nodePools': [
                        {
                            'name': '{}-cluster-pool'.format(
                                context.env['deployment']
                            ),
                            'initialNodeCount': 1,
                            'config': {
                                'machineType': 'n1-standard-1',
                                'oauthScopes': [
                                    'https://www.googleapis.com/auth/compute',
                                    'https://www.googleapis.com/auth/devstorage.read_only',
                                    'https://www.googleapis.com/auth/logging.write',
                                    'https://www.googleapis.com/auth/monitoring'
                                ],
                            },
                            'management': {
                                'autoUpgrade': False,
                                'autoRepair': True
                            }
                        }
                    ],
                    'masterAuth': kube_initial_auth,
                    'loggingService': 'logging.googleapis.com',
                    'monitoringService': 'monitoring.googleapis.com',
                    'network': '$(ref.{}-network.selfLink)'.format(
                        context.env['deployment']
                    ),
                    'clusterIpv4Cidr': '10.0.0.0/14',
                    'subnetwork': '$(ref.{}-subnetwork.selfLink)'.format(
                        context.env['deployment']
                    ),
                    'enableKubernetesAlpha': False,
                    'resourceLabels': {
                        'purpose': 'experimentation'
                    },
                    'networkPolicy': {
                        'provider': 'CALICO',
                        'enabled': True
                    },
                    'initialClusterVersion': 'latest',
                    'enableTpu': False,
                }
            }
        }
    )
    outputs.append(
        {
            'name': '{}-cluster-endpoint'.format(
                context.env['deployment']
            ),
            'value': '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
        }
    )
    # GKE DEFINITION END

    # bring it all together
    template = {
        'resources': resources,
        'outputs': outputs
    }
    # give it to google
    return template


if __name__ == '__main__':
    # Minimal stub so the template can be smoke-tested outside DM.
    class _Context(object):
        env = {'deployment': 'atrium'}

    GenerateConfig(_Context())
Also, I will note the subsequent hello-world template, which consumes the typeProviders created above. Note that the type string follows Deployment Manager's <project>/<type-provider-name>:<collection-path> convention, with the {namespace} placeholder filled in from the resource's properties.
def current_config():
    '''
    Get the current configuration.
    '''
    return {
        'name': 'atrium'
    }


def GenerateConfig(context):
    resources = []
    conf = current_config()
    resources.append(
        {
            'name': '{}-svc'.format(conf['name']),
            'type': '{}/{}-k8s-api-v1-type:/api/v1/namespaces/{}/pods'.format(
                context.env['project'],
                conf['name'],
                '{namespace}'
            ),
            'properties': {
                'namespace': 'default',
                'apiVersion': 'v1',
                'kind': 'Pod',
                'metadata': {
                    'name': 'hello-world',
                },
                'spec': {
                    'restartPolicy': 'Never',
                    'containers': [
                        {
                            'name': 'hello',
                            'image': 'ubuntu:14.04',
                            'command': ['/bin/echo', 'hello', 'world'],
                        }
                    ]
                }
            }
        }
    )
    template = {
        'resources': resources,
    }
    return template


if __name__ == '__main__':
    # Minimal stub so the template can be smoke-tested outside DM.
    class _Context(object):
        env = {'project': 'atrium-244423'}

    GenerateConfig(_Context())
If I leave enablePrivateNodes as False:
'privateClusterConfig': {
    'enablePrivateNodes': False,
    'masterIpv4CidrBlock': '10.0.0.0/28'
}
I get this as a response:
~/code/github/gcp/expiramentation/atrium_gcp_infra 24s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14]...done.
Delete operation operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14 completed successfully.
Waiting for delete [operation-1562105393399-58cba01134528-be47dc30-755cb106]...done.
Delete operation operation-1562105393399-58cba01134528-be47dc30-755cb106 completed successfully.
The fingerprint of the deployment is IiWcrdbZA5MedNlJLIicOg==
Waiting for create [operation-1562105786056-58cba187abee2-5d761e87-b446baca]...done.
Create operation operation-1562105786056-58cba187abee2-5d761e87-b446baca completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-k8s-api-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-apps-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-cluster container.v1.cluster COMPLETED []
atrium-k8s-rbac-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-v1beta1-extensions-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-network compute.v1.network COMPLETED []
atrium-subnetwork compute.v1.subnetwork COMPLETED []
The fingerprint of the deployment is QJ2NS5EhjemyQJThUWYNHA==
Waiting for create [operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb]...done.
Create operation operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-svc atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods COMPLETED []
~/code/github/gcp/expiramentation/atrium_gcp_infra 13m 48s
This is a good response, and my custom typeProvider resources create correctly using the APIs of my freshly created cluster.
If I make this cluster have private nodes, however, with:
'privateClusterConfig': {
    'enablePrivateNodes': True,
    'masterIpv4CidrBlock': '10.0.0.0/28'
},
I fail with:
~/code/github/gcp/expiramentation/atrium_gcp_infra 56s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562106572016-58cba47538c93-d34c17fc-8b863765]...done.
Delete operation operation-1562106572016-58cba47538c93-d34c17fc-8b863765 completed successfully.
Waiting for delete [operation-1562106592237-58cba4888184f-a5bc3135-4e662eed]...done.
Delete operation operation-1562106592237-58cba4888184f-a5bc3135-4e662eed completed successfully.
The fingerprint of the deployment is dk5nh_u5ZFFvYO-pCXnFBg==
Waiting for create [operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd]...done.
Create operation operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-k8s-api-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-apps-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-cluster container.v1.cluster COMPLETED []
atrium-k8s-rbac-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-v1beta1-extensions-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-network compute.v1.network COMPLETED []
atrium-subnetwork compute.v1.subnetwork COMPLETED []
The fingerprint of the deployment is 4RnscwpcYTtS614VXqtjRg==
Waiting for create [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]: errors:
- code: ERROR_PROCESSING_REQUEST
message: 'Error fetching URL https://10.0.0.2:443/api/v1/namespaces/default/pods,
reason: ERROR_EXCLUDED_IP'
The 10.0.0.2 address appears to be the private endpoint of my cluster. I am having a difficult time tracking down where I can override the https://10.0.0.2:443/api/v1/namespaces/default/pods URL host so that it contacts the publicEndpoint rather than the privateEndpoint.
I believe that if this call went out to the public endpoint it would create successfully. As an interesting aside, the typeProvider declarations do hit the publicEndpoint of the cluster when fetching the descriptorUrl, and are successful in doing so. Despite this, the creation of the actual API resources (such as the hello-world example) attempts to interface with the private endpoint.
I feel this behavior should be overridable somewhere, but I am failing to find the clue.
I have tried both a working public-node configuration and a non-working private-node configuration.

So I ran into the same problem writing [this workshop](https://github.com/Aahzymandius/k8s-workshops/tree/master/8-live-debugging). While trying to debug this I ran into 2 issues:
The public endpoint is used initially to fetch the swagger definition of the various k8s APIs and resources; however, that response includes the private endpoint for the API server, which causes the k8s type to try to use that IP (the sketch below shows one way to confirm this).
While further trying to debug that error, plus a new one I was hitting, I discovered that GKE 1.14.x and later does not support insecure calls to the k8s API, which causes the k8s type to fail even with fully public clusters.
Since the functionality seems to fail with newer versions of GKE, I just stopped trying to debug it, though I'd recommend reporting an issue on the GitHub repo for it. This functionality would be great to keep going.
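If you want to see the first issue for yourself, you can fetch the same descriptor Deployment Manager uses from the cluster's public endpoint and look at the API server address it advertises. A minimal sketch, not from the original posts: the endpoint IP is a placeholder, and it assumes gcloud credentials are available on your PATH.
import json
import ssl
import subprocess
import urllib.request

# Placeholder: your cluster's public endpoint IP.
CLUSTER_ENDPOINT = '203.0.113.10'

# Reuse gcloud's OAuth2 access token, just like the typeProvider does.
token = subprocess.check_output(
    ['gcloud', 'auth', 'print-access-token']).decode().strip()

# The GKE master serves a self-signed certificate, so skip verification
# for this one-off check.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

req = urllib.request.Request(
    'https://{}/swaggerapi/api/v1'.format(CLUSTER_ENDPOINT),
    headers={'Authorization': 'Bearer ' + token})
doc = json.load(urllib.request.urlopen(req, context=ctx))

# On a private-node cluster the advertised API server address points at
# the private endpoint -- the address Deployment Manager then tries to call.
# (The exact field name varies with the swagger version served.)
print(doc.get('basePath'), doc.get('host'))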

Related

botium-cli emulator does not work with the rasa connector

When I try to start the emulator for my Rasa bot, I get this error. The Rasa endpoint URL works perfectly fine when I send it a request with curl.
$ botium-cli emulator
Error: Start failed: undefined
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:170:25
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2955:19
at wrapper (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:268:20)
at iterateeCallback (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:413:21)
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:321:20
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2953:17
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:130:17
Here's the botium.json
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "My Botium Project",
      "CONTAINERMODE": "rasa",
      "RASA_MODE": "DIALOG_AND_NLU",
      "SIMPLEREST_STRICT_SSL": "false",
      "RASA_ENDPOINT_URL": "https://example.com/webhooks/rest/webhook"
    },
    "Sources": {},
    "Envs": {}
  }
}
Here's the output from the emulator with the verbose flag. It still does not work; maybe it has something to do with the way Botium connects to the bot?
$ botium-cli emulator --verbose
2021-03-05T08:17:32.230Z botium-cli Using Botium configuration file ./botium.json
2021-03-05T08:17:32.240Z botium-cli-emulator command options: {
_: [ 'emulator' ],
verbose: true,
v: true,
convos: [ '.' ],
C: [ '.' ],
config: './botium.json',
c: './botium.json',
ui: 'console',
'$0': '..\\..\\AppData\\Roaming\\npm\\node_modules\\botium-cli\\bin\\botium-cli.js'
}
2021-03-05T08:17:32.573Z botium-core-BotDriver Loaded Botium configuration files C:\Users\shivam.sinha\Desktop\rasa check\botium.json
2021-03-05T08:17:33.085Z botium-core-BotDriver Build - Botium Core Version: 1.11.1
2021-03-05T08:17:33.087Z botium-core-BotDriver Build - Capabilites: {
PROJECTNAME: 'My Botium Project',
TESTSESSIONNAME: 'Botium Test Session',
TESTCASENAME: 'Botium Test Case',
TEMPDIR: 'botiumwork',
CLEANUPTEMPDIR: true,
WAITFORBOTTIMEOUT: 10000,
SIMULATE_WRITING_SPEED: false,
SIMPLEREST_PING_RETRIES: 6,
SIMPLEREST_PING_TIMEOUT: 10000,
SIMPLEREST_PING_VERB: 'GET',
SIMPLEREST_PING_UPDATE_CONTEXT: true,
SIMPLEREST_STOP_RETRIES: 6,
SIMPLEREST_STOP_TIMEOUT: 10000,
SIMPLEREST_STOP_VERB: 'GET',
SIMPLEREST_START_RETRIES: 6,
SIMPLEREST_START_TIMEOUT: 10000,
SIMPLEREST_START_VERB: 'GET',
SIMPLEREST_POLL_VERB: 'GET',
SIMPLEREST_POLL_INTERVAL: 1000,
SIMPLEREST_POLL_UPDATE_CONTEXT: true,
SIMPLEREST_METHOD: 'GET',
SIMPLEREST_IGNORE_EMPTY: true,
SIMPLEREST_TIMEOUT: 10000,
SIMPLEREST_EXTRA_OPTIONS: {},
SIMPLEREST_STRICT_SSL: false,
SIMPLEREST_INBOUND_UPDATE_CONTEXT: true,
SIMPLEREST_CONTEXT_MERGE_OR_REPLACE: 'MERGE',
SCRIPTING_TXT_EOL: '\n',
SCRIPTING_XLSX_EOL_WRITE: '\r\n',
SCRIPTING_XLSX_HASHEADERS: true,
SCRIPTING_CSV_SKIP_HEADER: true,
SCRIPTING_CSV_QUOTE: '"',
SCRIPTING_CSV_ESCAPE: '"',
SCRIPTING_NORMALIZE_TEXT: true,
SCRIPTING_ENABLE_MEMORY: false,
SCRIPTING_ENABLE_MULTIPLE_ASSERT_ERRORS: false,
SCRIPTING_MATCHING_MODE: 'wildcardIgnoreCase',
SCRIPTING_UTTEXPANSION_MODE: 'all',
SCRIPTING_UTTEXPANSION_RANDOM_COUNT: 1,
SCRIPTING_UTTEXPANSION_NAMING_MODE: 'justLineTag',
SCRIPTING_UTTEXPANSION_NAMING_UTTERANCE_MAX: '16',
SCRIPTING_MEMORYEXPANSION_KEEP_ORIG: false,
ASSERTERS: [],
LOGIC_HOOKS: [],
USER_INPUTS: [],
SECURITY_ALLOW_UNSAFE: true,
CONTAINERMODE: 'rasa',
RASA_MODE: 'DIALOG_AND_NLU',
RASA_ENDPOINT_URL: 'https://10.60.31.102:8080/webhooks/rest/webhook',
CONFIG: './botium.json'
}
2021-03-05T08:17:33.087Z botium-core-BotDriver Build - Sources : { LOCALPATH: '.', GITPATH: 'git', GITBRANCH: 'master', GITDIR: '.' }
2021-03-05T08:17:33.087Z botium-core-BotDriver Build - Envs : { IS_BOTIUM_CONTAINER: true }
2021-03-05T08:17:33.274Z botium-connector-PluginConnectorContainer-helper Botium plugin botium-connector-rasa loaded. Plugin version is 0.0.7
Error: Start failed: undefined
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:170:25
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2955:19
at wrapper (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:268:20)
at iterateeCallback (C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:413:21)
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:321:20
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\node_modules\async\dist\async.js:2953:17
at C:\Users\shivam.sinha\AppData\Roaming\npm\node_modules\botium-core\src\containers\plugins\SimpleRestContainer.js:130:17
The RASA_ENDPOINT_URL capability has to point to the base URL of your Rasa server - without the /webhooks/rest/webhook path. Botium will build up its own full endpoint URL path based on the RASA_MODE capability:
NLU_INPUT => /model/parse is appended
REST_INPUT => /webhooks/rest/webhook/ is appended
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "Botium Project Rasa NLU",
      "CONTAINERMODE": "rasa",
      "RASA_MODE": "NLU_INPUT",
      "RASA_ENDPOINT_URL": "https://demo.botiumbox.com/rasa-demo/"
    }
  }
}
With NLU_INPUT and the endpoint above, Botium will call https://demo.botiumbox.com/rasa-demo/model/parse.

Is the no public IP address feature for Databricks in GA?

Is the feature enabled via the ARM template option enableNoPublicIP currently in general availability? The webinar https://databricks.com/p/webinar/azure-databricks-security-best-practices suggests this was in a private preview a while ago. I need this feature to comply with a client's requirements, but they also require that no features in preview are used.
Currently, the no-public-IP feature is in public preview.
With secure cluster connectivity enabled, customer virtual networks have no open ports and Databricks Runtime cluster nodes have no public IP addresses.
Documentation: https://learn.microsoft.com/en-us/azure/databricks/security/secure-cluster-connectivity
Have you tried this template to set enableNoPublicIp?
To create a Microsoft.Databricks/workspaces resource, add the following JSON to the resources section of your template.
{
  "type": "Microsoft.Databricks/workspaces",
  "apiVersion": "2018-04-01",
  "name": "[parameters('workspaceName')]",
  "location": "[parameters('location')]",
  "dependsOn": [],
  "tags": {
    "data platform": "warehouse"
  },
  "sku": {
    "name": "standard"
  },
  "properties": {
    "ManagedResourceGroupId": "[variables('managedResourceGroupId')]",
    "parameters": {
      "enableNoPublicIp": {
        "value": true
      }
    }
  }
}
If you receive the error message below for the deployment, you need to create a support ticket.
{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "CustomerUnauthorized",
        "message": "CUSTOMER_UNAUTHORIZED: Subscription: XXXX-XXXX-XXXXX-XXXXX Is Not Authorized to Create No Public IP Workspaces."
      }
    ]
  }
}
Once you create the support ticket, the backend team will help you enable enableNoPublicIp for your subscription.

Cognito admin_initiate_auth responds with exception User does not exist when creating a new user

I'm trying to create a new user in a Cognito user pool from my Ruby backend server, using this code:
client = Aws::CognitoIdentityProvider::Client.new
response = client.admin_initiate_auth({
  auth_flow: 'ADMIN_NO_SRP_AUTH',
  auth_parameters: {
    'USERNAME': @user.email,
    'PASSWORD': '123456789'
  },
  client_id: ENV['AWS_COGNITO_CLIENT_ID'],
  user_pool_id: ENV['AWS_COGNITO_POOL_ID']
})
The response I get is Aws::CognitoIdentityProvider::Errors::UserNotFoundException: User does not exist.
I'm trying to follow the Server Authentication Flow (https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html), and from that I understood that I could create a new user using admin_initiate_auth.
Am I doing something wrong here?
Thanks
You're using the wrong method. admin_initiate_auth is for logging in/authenticating a user when the ADMIN_NO_SRP_AUTH flow is turned on.
You need to use the sign_up method:
resp = client.sign_up({
  client_id: "ClientIdType", # required
  secret_hash: "SecretHashType",
  username: "UsernameType", # required
  password: "PasswordType", # required
  user_attributes: [
    {
      name: "AttributeNameType", # required
      value: "AttributeValueType",
    },
  ],
  validation_data: [
    {
      name: "AttributeNameType", # required
      value: "AttributeValueType",
    },
  ],
  analytics_metadata: {
    analytics_endpoint_id: "StringType",
  },
  user_context_data: {
    encoded_data: "StringType",
  },
})
You can find it in the AWS Cognito IDP docs.

How do you use DynamoDB Local with the AWS Ruby SDK?

Amazon's documentation provides examples in Java, .NET, and PHP of how to use DynamoDB Local. How do you do the same thing with the AWS Ruby SDK?
My guess is that you pass in some parameters during initialization, but I can't figure out what they are.
dynamo_db = AWS::DynamoDB.new(
:access_key_id => '...',
:secret_access_key => '...')
Are you using v1 or v2 of the SDK? You'll need to find that out; from the short snippet above, it looks like v1 (the v1 SDK uses the AWS module, v2 uses Aws). I've included both answers, just in case.
v1 answer:
AWS.config(use_ssl: false, dynamo_db: { api_version: '2012-08-10', endpoint: 'localhost', port: '8080' })
dynamo_db = AWS::DynamoDB::Client.new
v2 answer:
require 'aws-sdk-core'
dynamo_db = Aws::DynamoDB::Client.new(endpoint: 'http://localhost:8080')
Change the port number as needed of course.
Now aws-sdk version 2.7 throws Aws::Errors::MissingCredentialsError: unable to sign request without credentials set when keys are absent, so the code below works for me:
dynamo_db = Aws::DynamoDB::Client.new(
  region: "your-region",
  access_key_id: "anykey-or-xxx",
  secret_access_key: "anykey-or-xxx",
  endpoint: "http://localhost:8080"
)
I've written a simple gist that shows how to start, create, update and query a local DynamoDB instance:
https://gist.github.com/SundeepK/4ffff773f92e3a430481
Here's a rundown of some simple code.
Below is a simple command to run DynamoDB Local in memory:
# Assuming you have downloaded DynamoDB Local and extracted it into a dir called dynamodbLocal
java -Djava.library.path=./dynamodbLocal/DynamoDBLocal_lib -jar ./dynamodbLocal/DynamoDBLocal.jar -inMemory -port 9010
Below is a simple Ruby script:
require 'aws-sdk-core'

dynamo_db = Aws::DynamoDB::Client.new(region: "eu-west-1", endpoint: 'http://localhost:9010')

dynamo_db.create_table({
  table_name: 'TestDB',
  attribute_definitions: [{
    attribute_name: 'SomeKey',
    attribute_type: 'S'
  },
  {
    attribute_name: 'epochMillis',
    attribute_type: 'N'
  }],
  key_schema: [{
    attribute_name: 'SomeKey',
    key_type: 'HASH'
  },
  {
    attribute_name: 'epochMillis',
    key_type: 'RANGE'
  }],
  provisioned_throughput: {
    read_capacity_units: 5,
    write_capacity_units: 5
  }
})

dynamo_db.put_item(table_name: "TestDB",
  item: {
    "SomeKey" => "somevalue1",
    "epochMillis" => 1
  })

puts dynamo_db.get_item({
  table_name: "TestDB",
  key: {
    "SomeKey" => "somevalue1", # must match the value that was put
    "epochMillis" => 1
  }}).item
The above will create a table with a range key and also add/query the same data that was added. Note: you must already have version 2 of the aws-sdk gem installed.

raven - sentry + django = No servers configured, and sentry not installed. Cannot send message

I have a sentry server that works ok.
raven test <dnstoserver> -> Sending a test message... success!
I have a dev machine with django 1.3 and raven 1.93.
In the Django project I have this:
settings.py:
SENTRY_KEY = <secretkey>
SENTRY_DNS = <dnstoserver>

INSTALLED_APPS = (
    'bar',
    'foo',
    'raven.contrib.django',
)

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'root': {
        'level': 'WARNING',
        'handlers': ['sentry'],
    },
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
    },
    'handlers': {
        'sentry': {
            'level': 'ERROR',
            'class': 'raven.contrib.django.handlers.SentryHandler',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose'
        }
    },
    'loggers': {
        'django.db.backends': {
            'level': 'ERROR',
            'handlers': ['console'],
            'propagate': False,
        },
        'raven': {
            'level': 'DEBUG',
            'handlers': ['console', 'sentry'],
            'propagate': False,
        },
    },
}
Mind the absence of 'sentry' in INSTALLED_APPS. This is intentional, since sentry is the server and should not be installed on a client!
views.py (in a view):
import logging
logger = logging.getLogger("raven")
logger.error("test")
When I run the view I get on the console:
No servers configured, and sentry not installed. Cannot send message
Why, and how to fix?
Were you really setting SENTRY_DNS, or SENTRY_DSN?
When you set SENTRY_DSN, the instantiation of the major config variables happens automatically (including SENTRY_SERVERS, SENTRY_PUBLIC_KEY, SENTRY_SECRET_KEY, and SENTRY_PROJECT).
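For illustration, a minimal settings.py going the SENTRY_DSN route might look like this (a sketch only; the DSN value is a placeholder for the one issued by your Sentry server):
# settings.py -- sketch with a placeholder DSN
SENTRY_DSN = 'http://public_key:secret_key@sentry.example.com/1'

INSTALLED_APPS = (
    'bar',
    'foo',
    'raven.contrib.django',
)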
The problem was in the construction of the raven DjangoClient. It did not get passed any servers, and couldn't find a sentry config to steal that config from. I added this in settings.py:
SENTRY_SERVERS = <dnstoserver>
Console now outputs this every time raven is called:
INFO 2012-06-21 05:33:19,831 base 4323 140735075462336 Configuring Raven for host: <dnstoserver>
But it works like a charm! Messages in sentry...
BTW, for all the undocumented settings, take a look at raven.contrib.django.models.get_client().
I suggest using:
SENTRY_DSN = 'http://user:password@<domain>:<port>/<project_id>'
And in INSTALLED_APPS add:
'raven.contrib.django.raven_compat'
Also take a look at this guide:
http://code.fetzig.at/post/18607051916/sentry-and-raven-setup-for-django-projects
