This is the URL generated by the Ruby aws-sdk gem for a PUT (used here with curl):
curl --upload-file "/Users/README.rdoc" \
  -H "x-amz-acl=public-read" \
  "http://videos.s3.amazonaws.com/6c06517c-64f1-45ed-b07f-8c4c4edec6e3?AWSAccessKeyId={key}&Expires=1384519899&Signature=MKtBESBklYXFT%2B48EKLSoBiQpNA%3D"
-H "x-amz-acl=public-read" is not present in the signature. The signature is OK (Amazon doesn't show any errors).
But the "public-read" permission is not applied, please advise me as to how I can generate a put signed URL which will be public-read after upload.
Thanks!
Updated:
require 'aws-sdk'        # aws-sdk v1 (AWS::S3 namespace)
require 'securerandom'

s3 = AWS::S3.new
bucket = s3.buckets['some_videos']
id = SecureRandom.uuid
object = bucket.objects[id]
url = object.url_for(:put, expires_in: 30 * 60)
It looks like you can specify this with the acl method (documented here).
If you want to set your bucket to public-read, you can call:
s3.buckets['some_videos'].acl = :public_read
If you would like to apply the permission directly to an object, you can call:
bucket.objects[id].acl = :public_read
The Amazon team added this to their SDK. Thanks, guys!
https://github.com/aws/aws-sdk-ruby/issues/412
https://github.com/aws/aws-sdk-ruby/commit/15e900c0918a67e20bbb6dd9509c112aa01a95ee
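For reference, that change makes it possible to bake the ACL into the presigned URL itself. A minimal sketch of how this would look (the :acl option here is an assumption based on the linked issue; verify the option name against your aws-sdk version):

# Sketch only: assumes url_for accepts :acl after the linked commit
url = object.url_for(:put, expires_in: 30 * 60, acl: :public_read)
# The upload must then send the matching header, e.g.:
#   curl --upload-file "/Users/README.rdoc" -H "x-amz-acl: public-read" "$url"

Note the colon: a curl header takes the form x-amz-acl: public-read; the = form used in the question is not valid header syntax.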
Minio container startup command:
sudo docker run -p 13456:9000 --name minio-manisha-manual -e "MINIO_ACCESS_KEY=manisha" -e "MINIO_SECRET_KEY=xxxx" -v /home/manisha/files/images:/data minio/minio:RELEASE.2021-05-26T00-22-46Z server /data
I am trying to get a presigned url and upload an image using presigned URL.
# Getting presigned url
from datetime import timedelta
from minio import Minio

minio_client = Minio("localhost:13456", access_key="manisha", secret_key="xxxx", secure=False)
resp = minio_client.presigned_put_object("nudjur", "nginx.png", expires=timedelta(days=1))
Result I get is http://localhost:13456/nudjur/nginx.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=manisha%2F20210526%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210526T190513Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=c491440d5be935e80371d15be30a695328beab6d434ba26ce8782fe93858d7a5
As the DNS name for my server is manisha.something.com, I would like to use manisha.something.com as the host for the presigned upload. So I tried manually changing the host of the presigned URL to my DNS name, like below:
http://manisha.something.com:13456/nudjur/nginx.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=manisha%2F20210526%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210526T190513Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=c491440d5be935e80371d15be30a695328beab6d434ba26ce8782fe93858d7a5
When I tried to upload to this URL, I got a SignatureDoesNotMatch error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
  <Key>nginx.png</Key>
  <BucketName>nudjur</BucketName>
  <Resource>/nudjur/nginx.png</Resource>
  <RequestId>1682B2C4E4049CC6</RequestId>
  <HostId>c53990e5-e9ad-46aa-bd28-87482444d77b</HostId>
</Error>
Can someone help me to overcome this issue?
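Note that X-Amz-SignedHeaders=host in the query string means the hostname is part of what was signed, so hand-editing the host invalidates the signature; the URL has to be presigned against the host clients will actually use. For illustration, the same idea in Ruby with the aws-sdk gem used elsewhere in this document (MinIO speaks the S3 API; the endpoint, credentials, bucket, and object are taken from the question):

require 'aws-sdk'

# Presign against the public DNS name so the signed host matches the request
s3 = Aws::S3::Resource.new(
  endpoint: 'http://manisha.something.com:13456',
  access_key_id: 'manisha',
  secret_access_key: 'xxxx',
  region: 'us-east-1',
  force_path_style: true  # keep /bucket/key URLs, as MinIO uses
)
url = s3.bucket('nudjur').object('nginx.png').presigned_url(:put, expires_in: 86_400)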
I would like to store my credentials in ~/.aws/credentials and not in environment variables, but I am struggling.
To load the credentials I use (from here):
credentials = Aws::SharedCredentials.new({region: 'myregion', profile_name: 'myprofile'})
My ~/.aws/credentials is
[myprofile]
AWS_ACCESS_KEY = XXXXXXXXXXXXXXXXXXX
AWS_SECRET_KEY = YYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config is
[myprofile]
output = json
region = myregion
I then define a resource with
aws = Aws::EC2::Resource.new(region: 'eu....', credentials: credentials)
but if I try for example
aws.instances.first
I get the error: #<Aws::Errors::MissingCredentialsError: unable to sign request without credentials set>
Everything works if I hard-code the keys.
According to the source code, aws-sdk loads credentials automatically only from ENV.
You can create credentials with custom attributes.
# Build the credentials object from the key strings directly
credentials = Aws::Credentials.new(AWS_ACCESS_KEY, AWS_SECRET_KEY)
aws = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
In your specific case, it seems there is no way to pass custom credentials to SharedCredentials.
If you just do
credentials = Aws::SharedCredentials.new()
it loads the default profile. You should be able to load myprofile by passing in :profile_name as an option.
I don't know whether you can also override the region that way, though. You might want to try dropping that option and see how it works.
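Putting the pieces together, a minimal sketch (profile name from the question; note that the region belongs on the resource, not on the credentials object, and that the shared file format expects the key names aws_access_key_id and aws_secret_access_key):

require 'aws-sdk'

# Load the named profile from ~/.aws/credentials
credentials = Aws::SharedCredentials.new(profile_name: 'myprofile')
# Region is a resource/client option, not a credentials option
aws = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
p aws.instances.first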
I am trying to list and download folders from a bucket under the path "aaa/bbb/" using the aws-sdk gem v2, but I can't figure out how to do it.
This is what I tried:
require 'aws-sdk'

Aws.config.update({
  region: 'us-west-2',
  credentials: Aws::Credentials.new('akid', 'secret')
})

s3 = Aws::S3::Resource.new

# reference an existing bucket by name
bucket = s3.bucket('aaa')

bucket.objects(prefix: '/bbb/').each do |folder|
  p folder
end
It fails with: Access Denied (Aws::S3::Errors::AccessDenied)
But, if I use the command line AWS CLI instead, and execute:
aws s3 ls aaa/bbb/
it works...
Any suggestions?
Many thanks.
The convention in S3 is that the "root" of a bucket's keyspace is a zero-length empty string... it is not /, as some people naturally assume.
The prefix you are looking for would therefore be expressed as bbb/ rather than /bbb/.
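Applied to the snippet from the question, that one-character change is all that is needed:

bucket.objects(prefix: 'bbb/').each do |obj|  # no leading slash
  p obj
end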
According to the documentation, you have to set up the credentials slightly differently:
require 'aws-sdk'

Aws.config.update({
  region: 'us-west-2',
  credentials: Aws::Credentials.new('akid', 'secret')
})
Maybe try this to list the contents of the bucket (note that list_objects lives on the client, not the resource):
client = Aws::S3::Client.new
client.list_objects(bucket: 'aaa').each do |response|
  puts response.contents.map(&:key)
end
Currently, I am sending GET requests to S3 using the aws-sdk ruby gem as follows:
#!/usr/bin/ruby
require 'aws-sdk'

s3 = Aws::S3::Resource.new(region: 'test', endpoint: 'http://10.0.23.45:8081')
my_bucket = s3.bucket('test.bucket-name')

my_bucket.objects.limit(50).each do |obj|
  puts " #{obj.key} => #{obj.etag}"
end
But the request is hitting this URL endpoint (virtual-hosted style):
http://test.bucket-name.10.0.23.45:8081
I would like to use path-style addressing instead. This is what I want the request URL endpoint to look like:
http://10.0.23.45:8081/test.bucket-name/
Any idea how to set path-style addressing instead of the virtual-hosted style? Thanks.
I found the answer to my own question after looking at the ruby aws-sdk source code:
Aws.config[:s3] = { force_path_style: true }
Adding the above line forces path-style addressing.
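The same option can also be passed per client or resource rather than set globally; a small sketch using the endpoint from the question:

require 'aws-sdk'

# force_path_style is accepted as a constructor option as well
s3 = Aws::S3::Resource.new(
  region: 'test',
  endpoint: 'http://10.0.23.45:8081',
  force_path_style: true
)
# requests now go to http://10.0.23.45:8081/test.bucket-name/...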
You need to set the :virtual_host option to true, according to the documentation.
So in your case something like this should work:
s3.bucket('10.0.23.45:8081').object('test.bucket-name').public_url(virtual_host: true)
#=> "http://10.0.23.45:8081/test.bucket-name/"
I would like to sign a passbook on my Ruby server hosted on AWS. What is the best way to store the .pem files or the .p12 file in AWS, and to retrieve them when signing the passbook?
I'm using the passbook gem from https://github.com/frozon/passbook, but note that in its example the files come from a local path:
Passbook.configure do |passbook|
  passbook.wwdc_cert = Rails.root.join('wwdc_cert.pem')
  passbook.p12_key = Rails.root.join('key.pem')
  passbook.p12_certificate = Rails.root.join('certificate.pem')
  passbook.p12_password = 'cert password'
end
In my case I want to read them from AWS
Just use the URL of your files hosted on Amazon, like:
https://<bucket-name>.s3.amazonaws.com/<key>
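If the gem only accepts filesystem paths, as the configure example above suggests, one option is to download the certificates from S3 to a temporary directory first and point the gem at those files. A sketch with the aws-sdk gem (the bucket and key names here are hypothetical):

require 'aws-sdk'
require 'tmpdir'

s3 = Aws::S3::Resource.new(region: 'us-east-1')
certs_dir = Dir.mktmpdir('passbook-certs')

# Hypothetical bucket and key names; adjust to your setup
%w[wwdc_cert.pem key.pem certificate.pem].each do |name|
  s3.bucket('my-cert-bucket').object("certs/#{name}")
    .get(response_target: File.join(certs_dir, name))
end

Passbook.configure do |passbook|
  passbook.wwdc_cert = File.join(certs_dir, 'wwdc_cert.pem')
  passbook.p12_key = File.join(certs_dir, 'key.pem')
  passbook.p12_certificate = File.join(certs_dir, 'certificate.pem')
  passbook.p12_password = 'cert password'
end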