Laradock LocalStack can't access S3 local file (error 403) - Laravel

I can't access a file created in a Docker container with LocalStack (which mimics AWS S3).
I can see that the file exists when I browse to localstack:5000, but when I actually go to it I get a 403.
<Contents>
<Key>json/fa8bd17360193232d16a031b977387e7.json</Key>
<LastModified>2020-08-04T08:35:46.000Z</LastModified>
<ETag>"fa8bd17360193232d16a031b977387e7"</ETag>
<Size>6593</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>webfile</DisplayName>
</Owner>
</Contents>
And if I try to access it:
Access to localstack.test was denied. You don't have authorization to view this page.
HTTP ERROR 403
I also tried finding the file inside the Docker container with find / -name "FILE NAME", with no luck.
Any ideas?
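For what it's worth, a 403 from a LocalStack bucket is often an addressing problem rather than a real permissions one: unless the Laravel disk points at the LocalStack endpoint with path-style URLs, the SDK builds bucket.hostname virtual-host URLs that LocalStack cannot resolve. A minimal sketch of the s3 disk in config/filesystems.php, assuming the LocalStack container is reachable from the app as localstack on the port shown above (the endpoint and env names are assumptions to adjust):

's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    'bucket' => env('AWS_BUCKET'),
    // assumption: the LocalStack edge is published as http://localstack:5000
    'endpoint' => env('AWS_ENDPOINT', 'http://localstack:5000'),
    // build http://host/bucket/key URLs instead of http://bucket.host/key,
    // which S3 stand-ins generally require
    'use_path_style_endpoint' => true,
],

With that in place, Storage::disk('s3')->get('json/fa8bd17360193232d16a031b977387e7.json') fetches the object through the SDK instead of hand-built URLs against the container.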

Related

Download file in Laravel from AWS S3 (non-public bucket)

I am able to save all my files in the bucket, but I am having difficulties with downloads.
My code is:
$url = Storage::disk('s3')->temporaryUrl(
    $request->file, now()->addMinutes(10)
);
return Storage::disk('s3')->download($url);
The full file path is stored in $request->file.
Example path: https://bucket_name.privacy_region_info/folder_inside_bucket/cTymyY2gzakfczO3j3H2TtbJX4eeRW4Uj073CZUW
I am getting the following: https://prnt.sc/1ip4g77
Did I misunderstand the purpose of generating a temporaryUrl? How can I download files from a non-public S3 bucket?
BTW I am using Laravel 8 and league/flysystem-aws-s3-v3 1.0.29.
The error message you have shown suggests your user does not have the correct permissions, or that the file does not exist.
If you are sure the file exists, I would suspect a permissions issue.
In AWS IAM, make sure the user has a policy attached to it that grants the correct permissions.
In this case, from the comments, I can see the user only has "Write" permissions. You will need explicit "Read" permissions too.
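As an aside on the code in the question: download() expects a path relative to the bucket root, not a URL, while temporaryUrl() already returns a link the client can fetch by itself. A short sketch of the two usual patterns, assuming $path holds the object key (e.g. 'folder_inside_bucket/cTymyY...') rather than the full https:// address:

// Option 1: hand the browser a short-lived signed URL and redirect to it
$url = Storage::disk('s3')->temporaryUrl($path, now()->addMinutes(10));
return redirect()->away($url);

// Option 2: stream the object through the application by its key
return Storage::disk('s3')->download($path);

Mixing the two (passing the signed URL into download()) makes the adapter treat the whole URL as a key that does not exist.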

How to use presigned URLs with different domain names

Minio container startup command
sudo docker run -p 13456:9000 --name minio-manisha-manual -e "MINIO_ACCESS_KEY=manisha" -e "MINIO_SECRET_KEY=xxxx" -v /home/manisha/files/images:/data minio/minio:RELEASE.2021-05-26T00-22-46Z server /data
I am trying to get a presigned URL and upload an image using it.
# Getting a presigned URL
from datetime import timedelta
from minio import Minio

minio_client = Minio("localhost:13456", access_key="manisha", secret_key="xxxx", secure=False)
resp = minio_client.presigned_put_object("nudjur", "nginx.png", expires=timedelta(days=1))
The result I get is http://localhost:13456/nudjur/nginx.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=manisha%2F20210526%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210526T190513Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=c491440d5be935e80371d15be30a695328beab6d434ba26ce8782fe93858d7a5
As the DNS for my server is manisha.something.com, I would like to use manisha.something.com as the host in the presigned upload URL. So I tried manually changing the presigned URL's host to my DNS, like below:
http://manisha.something.com:13456/nudjur/nginx.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=manisha%2F20210526%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210526T190513Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=c491440d5be935e80371d15be30a695328beab6d434ba26ce8782fe93858d7a5
When I try to upload to this URL, I get a SignatureDoesNotMatch error:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><Key>nginx.png</Key><BucketName>nudjur</BucketName><Resource>/nudjur/nginx.png</Resource><RequestId>1682B2C4E4049CC6</RequestId><HostId>c53990e5-e9ad-46aa-bd28-87482444d77b</HostId></Error>
Can someone help me to overcome this issue?
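The host is part of what gets signed (note X-Amz-SignedHeaders=host in the URL above), so a URL presigned for localhost:13456 can never validate for manisha.something.com:13456; rewriting the host by hand is exactly what breaks the signature. The usual way out is to presign against the public hostname in the first place. A sketch of that idea using the AWS SDK for PHP rather than the question's Python client (bucket, key, credentials and region are taken from the question):

use Aws\S3\S3Client;

// Point the client at the public DNS name so that host is what gets signed.
$client = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1',
    'endpoint' => 'http://manisha.something.com:13456',
    'use_path_style_endpoint' => true, // Minio serves /bucket/key paths
    'credentials' => ['key' => 'manisha', 'secret' => 'xxxx'],
]);

$command = $client->getCommand('PutObject', ['Bucket' => 'nudjur', 'Key' => 'nginx.png']);
// The resulting URL only validates when requested with Host: manisha.something.com:13456.
$url = (string) $client->createPresignedRequest($command, '+24 hours')->getUri();

The same applies to the Python client: constructing it with the public hostname instead of localhost yields a URL signed for that host.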

Cannot access webmasters.dat: No such file or directory

I am trying to access the Google Search Console API. I tried the sample at https://github.com/googleapis/google-api-python-client/blob/master/samples/searchconsole/search_analytics_api_sample.py
I followed the instructions:
1) Install the Google Python client library, as shown at https://developers.google.com/webmaster-tools/v3/libraries.
2) Sign up for a new project in the Google APIs console at https://code.google.com/apis/console.
3) Register the project to use OAuth 2.0 for installed applications.
4) Copy your client ID, client secret, and redirect URL into the client_secrets.json file included in this package.
5) Run the app in the command line as shown below.
Sample usage: $ python search_analytics_api_sample.py 'https://www.example.com/' '2015-05-01' '2015-05-30'
Of course, I used my own site and newer dates.
I received this warning in cmd:
\AppData\Local\Programs\Python\Python38\lib\site-packages\oauth2client\_helpers.py:255:
UserWarning: Cannot access webmasters.dat: No such file or directory
In the browser window that opened, I got this message:
Error 400: redirect_uri_mismatch The redirect URI in the request,
http://localhost:8080/, does not match the ones authorized for the
OAuth client. To update the authorized redirect URIs, visit:
https://console.developers.google.com/apis/credentials/oauthclient/xxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com?project=xxxxxxxxxxxx
I configured the redirect URI as http://localhost:8080/, but I still get the same error.
I'd appreciate any help. Thanks.

AWS S3 rejecting deleteDirectory() request even though delete() works

I'm on a Laravel 5.4 project, and in my composer.json file I'm using the following dependencies:
"league/flysystem-aws-s3-v3": "~1.0",
"aws/aws-sdk-php-laravel": "3.*",
When I try running the below, everything works fine:
Storage::disk('s3')->delete('exports/a_few_subdirectories/file.xls');
However, when I try running this line of code in an Artisan command, I get the below error:
Storage::disk('s3')->deleteDirectory('exports');
Here is the error message I receive:
Aws\S3\Exception\S3Exception Error executing "DeleteObjects" on "https://mybucket.s3.us-west-2.amazonaws.com/?delete"; AWS HTTP error: Client error: `POST https://mybucket.s3.us-west-2.amazonaws.com/?delete` resulted in a `400 Bad Request` response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>MalformedXML</Code><Message>The XML you provided was not well-formed (truncated...)
MalformedXML (client): The XML you provided was not well-formed or did not validate against our published schema - <?xml version="1.0" encoding="UTF-8"?>
After googling around, I found a couple of users who had this same error for significantly different reasons. One user said they had hit a 1,000 key limit, and another user said they were missing a required parameter in their post request, but these users were both using Ruby and not Laravel. Since I'm using these composer packages, I'm assuming the packages are working correctly, and it also seems like everything on S3 is set up correctly (permissions and security settings, for instance). Another fun fact is that I can successfully use deleteDirectory with the local driver instead of the S3 driver, so deleteDirectory is working in that instance.
Any ideas as to what might be amiss? I'd rather delete the directory than loop through files and delete them one by one on S3. Thanks in advance.
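One way to narrow this down, using only standard Storage methods: build the key list yourself and hand it to delete(), which accepts an array of paths. If this succeeds where deleteDirectory() fails, the problem lies in the prefix listing that feeds the DeleteObjects request rather than in permissions:

// Diagnostic sketch: remove the directory contents explicitly.
$files = Storage::disk('s3')->allFiles('exports'); // list every key under the prefix
Storage::disk('s3')->delete($files);               // delete() accepts an array of paths

If the explicit delete works, comparing count($files) against the 1,000-key DeleteObjects limit mentioned above is a reasonable next check.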

Amazon AWS S3 and EC2 CORS for full-stack web (Angular 2 / Spring Boot)

Here is what I did:
I created the front end using Angular 2 and the backend using Spring Boot. In my front-end code I hard-coded my API address (the EC2 address), and in the Filter class in Spring Boot I hard-coded my front-end address (the S3 address) to get rid of the CORS issue.
Then in my front end I ran ng build --prod to get the deployable static files, and in my backend I ran mvn package to get the jar file. I uploaded the jar file to the EC2 instance, and the backend starts successfully.
I then uploaded all the static files to S3, and when I open the S3 domain I get the following error:
Failed to load ec2-35-182-225-61.ca-central-1.compute.amazonaws.com:8080/api/refresh: Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https.
Is there any tutorial that links a front end and backend using S3 and EC2?
Edit:
After I added the CORS configuration on S3 per trichetriche's answer, I got a new error:
main.e4936af900574d09c368.bundle.js:1 ERROR DOMException: Failed to execute 'open' on 'XMLHttpRequest': Invalid URL
at http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/polyfills.b3f1cff0521ef8205be4.bundle.js:1:56874
at XMLHttpRequest.o.(anonymous function) [as open] (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/polyfills.b3f1cff0521ef8205be4.bundle.js:1:20687)
at t._subscribe (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:424047)
at t._trySubscribe (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:7030)
at t.subscribe (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:6859)
at e.a (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:745892)
at e._innerSub (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:748287)
at e._tryNext (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:748211)
at e._next (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:748034)
at e.next (http://cloud.eatr.com.s3-website.ca-central-1.amazonaws.com/main.e4936af900574d09c368.bundle.js:1:9365)
Did you set up the CORS configuration of your S3 bucket?
In your S3 configuration, you should add this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
This is under the Permissions tab, in "CORS configuration".
(And once you're done testing, remember to set the origin to your domain.)
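If you prefer to manage this from code rather than the console, the same rules can be applied with the AWS SDK for PHP; a sketch, with the bucket name as a placeholder and the region taken from the URLs above:

use Aws\S3\S3Client;

$client = new S3Client(['version' => 'latest', 'region' => 'ca-central-1']);
$client->putBucketCors([
    'Bucket' => 'your-bucket',
    'CORSConfiguration' => [
        'CORSRules' => [[
            'AllowedOrigins' => ['*'], // tighten to your domain after testing
            'AllowedMethods' => ['GET', 'POST', 'PUT'],
            'AllowedHeaders' => ['*'],
        ]],
    ],
]);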
