MinIO container startup command:
sudo docker run -p 13456:9000 --name minio-manisha-manual -e "MINIO_ACCESS_KEY=manisha" -e "MINIO_SECRET_KEY=xxxx" -v /home/manisha/files/images:/data minio/minio:RELEASE.2021-05-26T00-22-46Z server /data
I am trying to get a presigned URL and upload an image with it.
# Getting the presigned URL
from datetime import timedelta
from minio import Minio

minio_client = Minio("localhost:13456", access_key="manisha", secret_key="xxxx", secure=False)
resp = minio_client.presigned_put_object("nudjur", "nginx.png", expires=timedelta(days=1))
The result I get is:
http://localhost:13456/nudjur/nginx.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=manisha%2F20210526%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210526T190513Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=c491440d5be935e80371d15be30a695328beab6d434ba26ce8782fe93858d7a5
The DNS name for my server is manisha.something.com, and I would like to use it as the host in the presigned upload URL. So I tried manually changing the host in the presigned URL to my DNS name, like below:
http://manisha.something.com:13456/nudjur/nginx.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=manisha%2F20210526%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210526T190513Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=c491440d5be935e80371d15be30a695328beab6d434ba26ce8782fe93858d7a5
When I try to upload to this URL, I get a SignatureDoesNotMatch error:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><Key>nginx.png</Key><BucketName>nudjur</BucketName><Resource>/nudjur/nginx.png</Resource><RequestId>1682B2C4E4049CC6</RequestId><HostId>c53990e5-e9ad-46aa-bd28-87482444d77b</HostId></Error>
Can someone help me overcome this issue?
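One detail worth noting: the URL above carries X-Amz-SignedHeaders=host, meaning the Host header is part of the data that gets signed, so hand-editing the hostname is guaranteed to break the signature. The following is a simplified sketch of the Signature Version 4 steps (not MinIO's actual code; the real canonical request also covers the query string and payload hash) showing why two hosts can never share a signature:

```python
import hashlib
import hmac

def sigv4_signature(secret_key, host, date="20210526", region="us-east-1"):
    # Canonical request: method, path, query (omitted in this sketch),
    # canonical headers (host only here), signed-header list, payload hash.
    canonical_request = "\n".join([
        "PUT",
        "/nudjur/nginx.png",
        "",
        f"host:{host}\n",
        "host",                   # SignedHeaders
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        f"{date}T000000Z",
        f"{date}/{region}/s3/aws4_request",
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key from the secret key, then sign the string.
    key = f"AWS4{secret_key}".encode()
    for part in (date, region, "s3", "aws4_request"):
        key = hmac.new(key, part.encode(), hashlib.sha256).digest()
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

sig_localhost = sigv4_signature("xxxx", "localhost:13456")
sig_dns = sigv4_signature("xxxx", "manisha.something.com:13456")
print(sig_localhost == sig_dns)  # False: editing the host invalidates the URL
```

Because of this, the usual fix is to construct the client against the public name in the first place, e.g. Minio("manisha.something.com:13456", ...), so the signature is computed for the host the uploader will actually send.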
Related
I can't access a file created in a Docker container with localstack (which mimics AWS S3)
I can see that the file exists when I access localstack:5000, but if I actually go to it I get a 403:
<Contents>
<Key>json/fa8bd17360193232d16a031b977387e7.json</Key>
<LastModified>2020-08-04T08:35:46.000Z</LastModified>
<ETag>"fa8bd17360193232d16a031b977387e7"</ETag>
<Size>6593</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>webfile</DisplayName>
</Owner>
</Contents>
And if I try to access it:
Access to localstack.test was denied. You don't have authorization to view this page.
HTTP ERROR 403
I also tried finding the file in the Docker container with find / -name "FILE NAME", with no luck.
Any ideas?
I am trying to access the Google Search Console API. I tried the sample at https://github.com/googleapis/google-api-python-client/blob/master/samples/searchconsole/search_analytics_api_sample.py
I followed the instructions:
1) Install the Google Python client library, as shown at https://developers.google.com/webmaster-tools/v3/libraries.
2) Sign up for a new project in the Google APIs console at https://code.google.com/apis/console.
3) Register the project to use OAuth 2.0 for installed applications.
4) Copy your client ID, client secret, and redirect URL into the client_secrets.json file included in this package.
5) Run the app on the command line as shown below.
Sample usage: $ python search_analytics_api_sample.py 'https://www.example.com/' '2015-05-01' '2015-05-30'
Of course I used my own site and newer dates.
I received this warning in cmd:
\AppData\Local\Programs\Python\Python38\lib\site-packages\oauth2client\_helpers.py:255: UserWarning: Cannot access webmasters.dat: No such file or directory
In the browser window that opened, I got this message:
Error 400: redirect_uri_mismatch The redirect URI in the request,
http://localhost:8080/, does not match the ones authorized for the
OAuth client. To update the authorized redirect URIs, visit:
https://console.developers.google.com/apis/credentials/oauthclient/xxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com?project=xxxxxxxxxxxx
I configured the redirect URI as http://localhost:8080/, but I still get the same error.
I'd appreciate any help. Thanks!
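For reference, a client_secrets.json for an installed (desktop) application looks roughly like the fragment below; all values here are placeholders. The redirect URI registered in the console has to match the one in the request (http://localhost:8080/) character for character, including the trailing slash.

```json
{
  "installed": {
    "client_id": "xxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com",
    "client_secret": "YOUR_CLIENT_SECRET",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "redirect_uris": ["http://localhost:8080/"]
  }
}
```

Note also that the credential type matters: a client created as a "Web application" enforces its own redirect URI list, while one created as a "Desktop app" is what the installed-application flow expects.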
Here is what I did:
I created the front end using Angular 2 and the backend using Spring Boot. In my front end code I hard-code the API address using the EC2 address, and in the backend I hard-code the front end address (the S3 address) in the Spring Boot Filter class to get rid of the CORS issue.
Then in the front end I run ng build --prod to get the deployable static files, and in the backend I run mvn package to get the jar file. I upload the jar to the EC2 instance, and the backend starts successfully.
Now I upload all the static files to S3 storage, and when I open the S3 domain I get the following error:
Failed to load ec2-35-182-225-61.ca-central-1.compute.amazonaws.com:8080/api/refresh: Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https.
Is there any tutorial on linking a front end and backend using S3 and EC2?
Edit:
After I added the CORS configuration on the S3 side per trichetriche's answer, I got a new error:
main.e4936af900574d09c368.bundle.js:1 ERROR DOMException: Failed to execute 'open' on 'XMLHttpRequest': Invalid URL
at http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/polyfills.b3f1cff0521ef8205be4.bundle.js:1:56874
at XMLHttpRequest.o.(anonymous function) [as open] (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/polyfills.b3f1cff0521ef8205be4.bundle.js:1:20687)
at t._subscribe (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:424047)
at t._trySubscribe (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:7030)
at t.subscribe (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:6859)
at e.a (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:745892)
at e._innerSub (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:748287)
at e._tryNext (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:748211)
at e._next (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:748034)
at e.next (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:9365)
Did you set up CORS on your S3 bucket?
In your S3 configuration, you should add this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
This is under the Permissions tab, in CORS configuration.
(And once you're done testing, remember to restrict the origin to your domain.)
I would like to sign a Passbook pass on my Ruby server hosted on AWS. What is the best way to store the .pem files or .p12 file in AWS, and to retrieve them to sign the passbook?
I'm using the passbook gem from https://github.com/frozon/passbook, but note that in its example it uses files from a local path:
Passbook.configure do |passbook|
passbook.wwdc_cert = Rails.root.join('wwdc_cert.pem')
passbook.p12_key = Rails.root.join('key.pem')
passbook.p12_certificate = Rails.root.join('certificate.pem')
passbook.p12_password = 'cert password'
end
In my case I want to read them from AWS.
Just use the URL of your files hosted on Amazon, like:
https://<bucket-name>.s3.amazonaws.com/<key>
This is the URL generated by the Ruby aws-sdk gem for a PUT:
curl --upload-file "/Users/README.rdoc" \
  -H "x-amz-acl=public-read" \
  "http://videos.s3.amazonaws.com/6c06517c-64f1-45ed-b07f-8c4c4edec6e3?AWSAccessKeyId={key}&Expires=1384519899&Signature=MKtBESBklYXFT%2B48EKLSoBiQpNA%3D"
-H "x-amz-acl=public-read" is not covered by the signature, yet the signature itself is accepted (Amazon doesn't report any errors).
But the "public-read" permission is not applied. Please advise me on how to generate a signed PUT URL that makes the object public-read after upload.
Thanks!
Updated:
s3 = AWS::S3.new
bucket = s3.buckets['some_videos']
id = SecureRandom.uuid
object = bucket.objects["#{id}"]
url = object.url_for(:put, expires_in: 30*60)
It looks like you can specify this with the acl method (documented here).
If you want to set your bucket to public-read, you can call:
s3.buckets['some-videos'].acl = :public_read
If you would like to apply this permission directly to an object, you can call:
bucket.objects["#{id}"].acl = :public_read
The Amazon team added this to their SDK. Thanks, guys!
https://github.com/aws/aws-sdk-ruby/issues/412
https://github.com/aws/aws-sdk-ruby/commit/15e900c0918a67e20bbb6dd9509c112aa01a95ee
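For background on why the unsigned header was ignored, here is a rough sketch (in Python, not the SDK's actual code) of the legacy Signature Version 2 scheme used by URLs like the one above: Signature = Base64(HMAC-SHA1(secret, StringToSign)). Any x-amz-* header is only enforced if it was folded into StringToSign when the URL was generated, which is why the ACL option has to be supplied at URL-generation time:

```python
import base64
import hashlib
import hmac

def sigv2_signature(secret_key, expires, resource, amz_headers=None):
    # StringToSign: VERB, Content-MD5, Content-Type, Expires,
    # canonicalized x-amz-* headers, canonicalized resource.
    canonical_amz = ""
    for name, value in sorted((amz_headers or {}).items()):
        canonical_amz += f"{name.lower()}:{value}\n"
    string_to_sign = f"PUT\n\n\n{expires}\n{canonical_amz}{resource}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

without_acl = sigv2_signature("secret", 1384519899, "/videos/some-id")
with_acl = sigv2_signature("secret", 1384519899, "/videos/some-id",
                           {"x-amz-acl": "public-read"})
print(without_acl == with_acl)  # False: the ACL header changes the signature
```

Two practical notes. First, curl header syntax uses a colon (-H "x-amz-acl: public-read"); with = the header likely never reaches S3 in a form it recognizes, which would explain why no signature error was raised. Second, once the ACL is included when generating the URL (as the SDK change linked above allows), the uploader must send the matching x-amz-acl header, or the signature check will fail.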