How do you use DynamoDB Local with the AWS Ruby SDK? - ruby

Amazon's documentation provides examples in Java, .NET, and PHP of how to use DynamoDB Local. How do you do the same thing with the AWS Ruby SDK?
My guess is that you pass in some parameters during initialization, but I can't figure out what they are.
dynamo_db = AWS::DynamoDB.new(
  :access_key_id => '...',
  :secret_access_key => '...')

Are you using v1 or v2 of the SDK? You'll need to find that out; from the AWS:: namespace in the snippet above, it looks like v1 (the v2 SDK uses Aws::). I've included both answers, just in case.
v1 answer:
AWS.config(use_ssl: false, dynamo_db: { api_version: '2012-08-10', endpoint: 'localhost', port: '8080' })
dynamo_db = AWS::DynamoDB::Client.new
v2 answer:
require 'aws-sdk-core'
dynamo_db = Aws::DynamoDB::Client.new(endpoint: 'http://localhost:8080')
Change the port number as needed, of course.

Now aws-sdk version 2.7 raises Aws::Errors::MissingCredentialsError: unable to sign request without credentials set when keys are absent, so the code below works for me:
dynamo_db = Aws::DynamoDB::Client.new(
  region: "your-region",
  access_key_id: "anykey-or-xxx",
  secret_access_key: "anykey-or-xxx",
  endpoint: "http://localhost:8080"
)
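As a quick sanity check (a minimal sketch, assuming DynamoDB Local is listening on port 8080), listing tables should now succeed with those dummy values:
require 'aws-sdk-core'

# DynamoDB Local accepts any credentials; they only need to be present for signing.
dynamo_db = Aws::DynamoDB::Client.new(
  region: "us-east-1",
  access_key_id: "xxx",
  secret_access_key: "xxx",
  endpoint: "http://localhost:8080"
)
puts dynamo_db.list_tables.table_names.inspect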

I've written a simple gist that shows how to start, create, update and query a local DynamoDB instance.
https://gist.github.com/SundeepK/4ffff773f92e3a430481
Here's a rundown of some simple code:
Below is a simple command to run DynamoDB Local in memory:
# Assuming you have downloaded DynamoDB Local and extracted it into a dir called dynamodbLocal
java -Djava.library.path=./dynamodbLocal/DynamoDBLocal_lib -jar ./dynamodbLocal/DynamoDBLocal.jar -inMemory -port 9010
Below is a simple Ruby script:
require 'aws-sdk-core'
dynamo_db = Aws::DynamoDB::Client.new(region: "eu-west-1", endpoint: 'http://localhost:9010')
dynamo_db.create_table({
  table_name: 'TestDB',
  attribute_definitions: [
    {
      attribute_name: 'SomeKey',
      attribute_type: 'S'
    },
    {
      attribute_name: 'epochMillis',
      attribute_type: 'N'
    }
  ],
  key_schema: [
    {
      attribute_name: 'SomeKey',
      key_type: 'HASH'
    },
    {
      attribute_name: 'epochMillis',
      key_type: 'RANGE'
    }
  ],
  provisioned_throughput: {
    read_capacity_units: 5,
    write_capacity_units: 5
  }
})
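# (Hedged aside, not part of the original gist: on the real DynamoDB service you
# would wait for the table to become ACTIVE before writing, e.g.
#   dynamo_db.wait_until(:table_exists, table_name: 'TestDB')
# DynamoDB Local creates tables immediately, so the script can continue right away.)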
dynamo_db.put_item(
  table_name: "TestDB",
  item: {
    "SomeKey" => "somevalue1",
    "epochMillis" => 1
  })
puts dynamo_db.get_item({
  table_name: "TestDB",
  key: {
    "SomeKey" => "somevalue1", # must match the value written above
    "epochMillis" => 1
  }}).item
The above will create a table with a range key and also add and query the same data that was added. Note: you must already have version 2 of the aws-sdk gem installed.
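Since the table has a range key, you can also query it rather than fetching a single item. A minimal sketch against the same local instance (key_condition_expression is part of the 2012-08-10 API; the values here just match the item written above):
resp = dynamo_db.query(
  table_name: 'TestDB',
  key_condition_expression: 'SomeKey = :k AND epochMillis BETWEEN :t1 AND :t2',
  expression_attribute_values: { ':k' => 'somevalue1', ':t1' => 0, ':t2' => 10 }
)
resp.items.each { |item| puts item.inspect }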

Related

How to register a new user using AWS Cognito Ruby SDK?

I would like to know how to register a new user using the AWS Cognito Ruby SDK.
So far I have tried:
Input
AWS_KEY = "MY_AWS_KEY"
AWS_SECRET = "MY_AWS_SECRET"
client = Aws::CognitoIdentityProvider::Client.new(
  access_key_id: AWS_KEY,
  secret_access_key: AWS_SECRET,
  region: 'us-east-1',
)
resp = client.sign_up({
  client_id: "4d2c7274mc1bk4e9fr******", # required
  username: "test@test.com", # required
  password: "Password23sing", # required
  user_attributes: [
    {
      name: "app", # required
      value: "my app name",
    },
  ],
  validation_data: [
    {
      name: "username", # required
      value: "true",
    },
  ]
})
Output
Aws::CognitoIdentityProvider::Errors::NotAuthorizedException (Unable to verify secret hash for client 4d2c7274mc1bk4e9fr*****)
References
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/CognitoIdentityProvider/Client.html#sign_up-instance_method
If your app client is configured with a client secret, most of the client requests require you to include a 'secret hash' in the options parameters of the request. The Cognito docs describe the secret hash as follows:
The SecretHash value is a Base 64-encoded keyed-hash message authentication code (HMAC) calculated using the secret key of a user pool client and username plus the client ID in the message. The following pseudocode shows how this value is calculated.
Base64 ( HMAC_SHA256 ( "Client Secret Key", "Username" + "Client Id" ) )
The docs also make it clear via a glob of sample Java that you are expected to roll your own. After a bit of experimenting I was able to successfully complete a sign_up call with the following (my test pool was set up to require email and name attributes):
require 'base64'
require 'openssl'

def secret_hash(client_secret, username, client_id)
  Base64.strict_encode64(OpenSSL::HMAC.digest('sha256', client_secret, username + client_id))
end
client = Aws::CognitoIdentityProvider::Client.new(
  access_key_id: AWS_KEY,
  secret_access_key: AWS_SECRET,
  region: REGION)
username = 'bob.scum@example.com'
resp = client.sign_up({
  client_id: CLIENT_ID,
  username: username,
  password: 'Password23sing!',
  secret_hash: secret_hash(CLIENT_SECRET, username, CLIENT_ID),
  user_attributes: [{ name: 'email', value: username },
                    { name: 'name', value: 'Bob' }],
  validation_data: [{ name: 'username', value: 'true' },
                    { name: 'email', value: 'true' }]
})
CLIENT_SECRET is the app client secret that can be found under General Settings > App Clients.
Result:
#<struct Aws::CognitoIdentityProvider::Types::SignUpResponse
user_confirmed=false,
code_delivery_details=nil,
user_sub="c87c2ac8-1480-4d15-a28d-6998d9260e73">

Cognito admin_initiate_auth responds with exception User does not exist when creating a new user

I'm trying to create a new user in a Cognito user pool from my ruby backend server. Using this code:
client = Aws::CognitoIdentityProvider::Client.new
response = client.admin_initiate_auth({
  auth_flow: 'ADMIN_NO_SRP_AUTH',
  auth_parameters: {
    'USERNAME': @user.email,
    'PASSWORD': '123456789'
  },
  client_id: ENV['AWS_COGNITO_CLIENT_ID'],
  user_pool_id: ENV['AWS_COGNITO_POOL_ID']
})
The response I get is Aws::CognitoIdentityProvider::Errors::UserNotFoundException: User does not exist.
I'm trying to follow the Server Authentication Flow (https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html), and from that I understood that I could create a new user using admin_initiate_auth.
Am I doing something wrong here?
Thanks
You're using the wrong method. admin_initiate_auth is for logging in/authenticating an existing user with the ADMIN_NO_SRP_AUTH flow turned on.
You need to use the sign_up method:
resp = client.sign_up({
  client_id: "ClientIdType", # required
  secret_hash: "SecretHashType",
  username: "UsernameType", # required
  password: "PasswordType", # required
  user_attributes: [
    {
      name: "AttributeNameType", # required
      value: "AttributeValueType",
    },
  ],
  validation_data: [
    {
      name: "AttributeNameType", # required
      value: "AttributeValueType",
    },
  ],
  analytics_metadata: {
    analytics_endpoint_id: "StringType",
  },
  user_context_data: {
    encoded_data: "StringType",
  },
})
You can find it in the AWS Cognito IDP docs, linked in the question's references above.
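For a runnable starting point, here is a minimal sketch of sign_up with placeholder values, assuming an app client without a client secret (if your client has one, add secret_hash as shown in the previous answer):
require 'aws-sdk-cognitoidentityprovider' # v3 modular gem; the v2 all-in-one gem is 'aws-sdk'

client = Aws::CognitoIdentityProvider::Client.new(region: 'us-east-1')
resp = client.sign_up({
  client_id: ENV['AWS_COGNITO_CLIENT_ID'],
  username: 'new.user@example.com',
  password: 'Password23sing!',
  user_attributes: [{ name: 'email', value: 'new.user@example.com' }]
})
puts resp.user_sub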

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256

I get the error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.
Script:
backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
  access_key_id: AMAZONS3['access_key_id'],
  secret_access_key: AMAZONS3['secret_access_key']
)
s3_bucket = s3.buckets['test-frankfurt']
# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"
file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)
aws-sdk (1.56.0)
How to fix it?
Thank you.
AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.
All regions support V4, but US-Standard¹, and many -- but not all -- other regions, also support the other, older scheme, Signature Version 2 ("V2").
According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.
Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.
I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written,
"Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.
With node, try
var s3 = new AWS.S3({
  endpoint: 's3-eu-central-1.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-central-1'
});
You should set signatureVersion: 'v4' in the config to use the new signature version:
AWS.config.update({
  signatureVersion: 'v4'
});
This works for the JS SDK.
For people using boto3 (the Python SDK), use the code below:
import boto3
from botocore.client import Config

s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    config=Config(signature_version='s3v4')
)
I have been using Django, and I had to add these extra config variables to make this work (in addition to the settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html):
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Similar issue with the PHP SDK, this works:
$s3Client = S3Client::factory(array(
    'key' => YOUR_AWS_KEY,
    'secret' => YOUR_AWS_SECRET,
    'signature' => 'v4',
    'region' => 'eu-central-1'
));
The important bits are the signature and the region.
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
This also saved my time, after searching for 24 hours.
Code for Flask (boto3)
Don't forget to import Config. Also, if you have your own Config class, rename it to avoid a clash.
import boto3
from botocore.client import Config

s3 = boto3.client('s3', config=Config(signature_version='s3v4'),
                  region_name=app.config["AWS_REGION"],
                  aws_access_key_id=app.config['AWS_ACCESS_KEY'],
                  aws_secret_access_key=app.config['AWS_SECRET_KEY'])
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url('get_object',
                                Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename},
                                ExpiresIn=10000)
In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))
With boto3, this is the code:
s3_client = boto3.resource('s3', region_name='eu-central-1')
or
s3_client = boto3.client('s3', region_name='eu-central-1')
For thumbor-aws, which uses the boto config, I needed to put this in the $AWS_CONFIG_FILE:
[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3
So for anything that uses boto directly without changes, this may be useful.
Supernova's answer for django/boto3/django-storages worked for me:
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Just add them to your settings.py and change the region code accordingly. You can look up AWS region codes in the AWS documentation.
For Android SDK, setEndpoint solves the problem, although it's been deprecated.
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");
Basically, the error was because I was using an old version of the aws-sdk; I updated the version and this error was resolved.
In my case with Node.js, I was using signatureVersion inside the params object, like this:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    signatureVersion: 'v4',
    region: process.env.AWS_S3_REGION
  }
});
Then I moved signatureVersion out of the params object and it worked like a charm:
const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    region: process.env.AWS_S3_REGION
  },
  signatureVersion: 'v4'
});
Check your AWS S3 bucket region and pass the proper region in the connection request. In my scenario, I set 'APSouth1' for Asia Pacific (Mumbai):
using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(50),
    };
    urlString = client.GetPreSignedURL(request1);
}
In my case, the request type was wrong. I was using GET (dumb); it must be PUT.
Here is the function I used with Python:
import os
import boto3

# settings and logger are this app's own config module and logger
def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client('s3',
                      endpoint_url=settings.BUCKET_ENDPOINT_URL,
                      aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
                      aws_secret_access_key=settings.BUCKET_SECRET_KEY,
                      region_name=settings.BUCKET_REGION_NAME)
    try:
        s3.upload_file(filePath, settings.BUCKET_NAME, s3FileName)
        # remove file from local to free up space
        os.remove(filePath)
        return True
    except Exception as e:
        logger.error('uploadFileToS3#Error')
        logger.error(e)
        return False
Sometimes the default version will not update. Add this setting
AWS_S3_SIGNATURE_VERSION = "s3v4"
in settings.py.
For boto3, use this code:
import boto3
from botocore.client import Config

s3 = boto3.resource('s3',
                    aws_access_key_id='xxxxxx',
                    aws_secret_access_key='xxxxxx',
                    region_name='us-south-1',
                    config=Config(signature_version='s3v4'))
Try this combination.
const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com', // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1' // Bucket region
});
I was stuck for 3 days and finally, after reading a ton of blogs and answers I was able to configure Amazon AWS S3 Bucket.
On the AWS Side
I am assuming you have already
Created an S3 bucket
Created a user in IAM
Steps
Configure CORS settings
your bucket > permissions > CORS configuration
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Generate a bucket policy
your bucket > permissions > bucket policy
It should be similar to this one
{
  "Version": "2012-10-17",
  "Id": "Policy1602480700663",
  "Statement": [
    {
      "Sid": "Stmt1602480694902",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
    }
  ]
}
PS: Bucket policy should say `public` after this
Configure Access Control List
your bucket > permissions > access control list
give public access
PS: Access Control List should say public after this
Unblock public Access
your bucket > permissions > Block Public Access
Edit and turn all options Off
On a side note, if you are working with Django, add the following lines to the settings.py file of your project:
#S3 BUCKETS CONFIG
AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# look for files first in aws
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Also coming from: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
For me this was the solution:
AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'
This needs to be added to settings.py in your Django project
Using the PHP SDK, follow below:
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$client = S3Client::factory(array(
    'signature' => 'v4',
    'region' => 'me-south-1',
    'key' => YOUR_AWS_KEY,
    'secret' => YOUR_AWS_SECRET
));
Nodejs
var aws = require("aws-sdk");

aws.config.update({
  region: process.env.AWS_REGION,
  secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
  accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});

var s3 = new aws.S3({
  signatureVersion: "v4",
});

// inside an async function:
let data = await s3.getSignedUrl("putObject", {
  ContentType: mimeType, // image mime type from request
  Bucket: "MybucketName",
  Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
  Expires: 300,
});
console.log(data);
AWS S3 Bucket Permission Configuration
Deselect Block All Public Access
Add Below Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::MybucketName/*"]
    }
  ]
}
Then paste the returned URL and make a PUT request to it with the binary image file.
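For illustration, a hedged Ruby sketch of that PUT against the presigned URL (the URL and file name are placeholders; any HTTP client works):
require 'net/http'
require 'uri'

presigned_url = URI('https://MybucketName.s3.amazonaws.com/...') # URL returned above
request = Net::HTTP::Put.new(presigned_url)
request['Content-Type'] = 'image/png' # must match the ContentType used when signing
request.body = File.binread('image.png')

response = Net::HTTP.start(presigned_url.host, presigned_url.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code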
Full working nodejs version:
const AWS = require('aws-sdk');

var s3 = new AWS.S3({
  endpoint: 's3.eu-west-2.amazonaws.com',
  signatureVersion: 'v4',
  region: 'eu-west-2'
});

const getPreSignedUrl = async () => {
  const params = {
    Bucket: 'some-bucket-name/some-folder',
    Key: 'some-filename.json',
    Expires: 60 * 60 * 24 * 7
  };
  try {
    const presignedUrl = await new Promise((resolve, reject) => {
      s3.getSignedUrl('getObject', params, (err, url) => {
        err ? reject(err) : resolve(url);
      });
    });
    console.log(presignedUrl);
  } catch (err) {
    console.log(err);
  }
};

getPreSignedUrl();

Setting up Django on Elastic Beanstalk with Postgres

How should I set up my settings.py file for Django on EC2 Elastic Beanstalk to use a Postgres RDS ?
These docs only give the settings.py for MySQL.
You will probably just need to change the engine setting in your databases object. You will need to install psycopg2 in your environment. Here is what mine looks like; just fill in your db's info.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': '',     # Or path to database file if using sqlite3.
        'USER': '',     # Not used with sqlite3.
        'PASSWORD': '', # Not used with sqlite3.
        'HOST': '',     # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',     # Set to empty string for default. Not used with sqlite3.
    }
}
Use psycopg2, and use environment variables (made available for you within Elastic Beanstalk):
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ['RDS_DB_NAME'],
        'USER': os.environ['RDS_USERNAME'],
        'PASSWORD': os.environ['RDS_PASSWORD'],
        'HOST': os.environ['RDS_HOSTNAME'],
        'PORT': os.environ['RDS_PORT'],
    }
}
You'll need to include psycopg2 in a pip requirements.txt file (made using pip freeze > requirements.txt) and likely also install a Postgres dependency, postgresql-devel, by including the following in an .ebextensions/packages.config file (the filename doesn't have to be packages.config, that's just what I use):
packages:
  yum:
    postgresql-devel: []

Ruby AWS SNS SDK: unexpected option message_attributes

I'm trying to publish a message to an endpoint using Ruby's SDK for AWS SNS. The documentation suggests that I can add a TTL to the message attributes. However, the following code gives an ArgumentError exception:
# ArgumentError:
# unexpected option message_attributes
@client.publish(:target_arn => endpoint_arn,
                :subject => title,
                :message_structure => "json",
                :message => get_message(title, message).to_json,
                :message_attributes => {
                  "AWS.SNS.MOBILE.APNS.TTL" => {
                    :data_type => "String",
                    :string_value => TTL_SECONDS
                  }
                })
This option isn't available in older versions of the API. Upgrading to the newest version (1.48.1) solved the problem.
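If you are not sure which v1 release you are on, a quick check (AWS::VERSION is the v1 gem's version constant, to my recollection):
require 'aws-sdk' # the v1 gem
puts AWS::VERSION # should be 1.48.1 or later for message_attributes support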
