I'm trying to upload a report to an AWS S3 server, and the solutions I have found aren't working: I keep getting either a SHA-256 error or "The authorization header is malformed". I'm clueless about shell scripting, making a POST with curl, and file uploads in general. After uploading, I also need to generate a download link. This shell script is to be run on Jenkins.
This is one of the solutions I looked into:
s3 bash upload
That solution is invalid, since the authorization mechanism it uses is no longer supported.
Below is the error I get:
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the authorization component "Signature=" is malformed.</Message><RequestId>182183F5B97F9258</RequestId><HostId>s3MwGZUpioyk+3Qfj0q51LqY4iosCEC84xThxscQFPwX4SbvJk66oi4qIyEaVkdNLUGL1CciXlY=</HostId></Error>
Better to use a proper command, s3cmd:
Check http://s3tools.org/s3cmd
(the right tool for the right job...)
Or the aws command:
aws s3 sync /tmp/foo s3://bucket/
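Since the question also asks for a download link, here is a minimal sketch using the AWS CLI (assuming the CLI is installed and has credentials on the Jenkins agent; the bucket and file names are placeholders):
# upload; the CLI handles Signature Version 4 signing for you
aws s3 cp report.html s3://my-bucket/reports/report.html
# print a time-limited download URL (valid for 24 hours here)
aws s3 presign s3://my-bucket/reports/report.html --expires-in 86400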
Please excuse me, as there are already many questions on the same topic.
First, some background: I started working on a Laravel 9 web application a few months ago, created AWS S3 buckets and an IAM user with full access to AmazonS3, and used that user's access credentials in the .env file. I was able to upload files to the specified bucket and to access the uploaded files in my web application.
Last week, I worked on the SMTP setup for sending emails and made some changes to the .env file (though I am sure that I did not change the AWS settings). Now I notice that uploading files to the AWS S3 bucket fails with the message:
exception: "League\\Flysystem\\UnableToWriteFile"
file:"/var/www/vhosts/silkweb.ca/vendor/league/flysystem/src/UnableToWriteFile.php"
line: 24
message: "Unable to write file at location: user/profile/3/P2jFdBHTE49mxym6jxa4LHTAPvV0qDiFZ9SsYtZt.png. Error executing \"PutObject\"
I use the following code to put the file in the AWS S3 bucket:
$filepath = "/user/profile/".$user_id;
$upload_path = Storage::disk('s3')->put($filepath, $request->file('file'));
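As a quick check outside the framework, the AWS CLI can confirm whether the credentials and bucket are reachable at all (assuming the CLI is available; the bucket name is taken from the settings below):
aws s3 ls s3://silkweb
If that also fails, the problem is with the credentials or bucket policy rather than the Laravel wiring.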
I even created a new IAM user and used that user's credentials in my .env file, but still no luck. My current .env settings for AWS are as follows:
AWS_ACCESS_KEY_ID=AKI***************DAY
AWS_SECRET_ACCESS_KEY=jotz*************************ru
AWS_DEFAULT_REGION=us-east-2
AWS_BUCKET=silkweb
AWS_URL=silkweb.s3.us-east-2.amazonaws.com
AWS_ENDPOINT=http://silkweb-s3.us-east-2.amazonaws.com
AWS_USE_PATH_STYLE_ENDPOINT=false
I have run php artisan cache:clear and php artisan config:clear several times.
Any idea why I am not able to create a file in the AWS S3 bucket?
After I commented out the AWS_URL and AWS_ENDPOINT lines, the file upload started working. The following are the working settings:
AWS_ACCESS_KEY_ID=AKI***********DG
AWS_SECRET_ACCESS_KEY=l36***************0C
AWS_DEFAULT_REGION=us-east-2
AWS_BUCKET=silkweb
AWS_USE_PATH_STYLE_ENDPOINT=false
AWS_S3_SILKWEB_URL="https://silkweb.s3.us-east-2.amazonaws.com/"
#AWS_URL=silkweb.s3.us-east-2.amazonaws.com
#AWS_ENDPOINT=http://silkweb-s3.us-east-2.amazonaws.com
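A plausible explanation, based only on the settings shown above: the commented-out AWS_ENDPOINT pointed at silkweb-s3.us-east-2.amazonaws.com (hyphen) over plain http, while the bucket's virtual-hosted address is silkweb.s3.us-east-2.amazonaws.com (dot), so the SDK was presumably signing requests for the wrong host. If an explicit endpoint is ever needed again, the regional S3 endpoint is the safer value, for example:
AWS_ENDPOINT=https://s3.us-east-2.amazonaws.com
AWS_URL=https://silkweb.s3.us-east-2.amazonaws.com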
I'm working on a project that takes MongoDB database backups from S3 and puts them onto a staging box for use that day. During a manual run today I noticed this output. Normally it shows a successful copy of each file, but this time I got a connection reset error, and one of the files, *.15, was not copied over after the operation had completed.
Here is the AWS CLI command that I'm using:
aws s3 cp ${S3_PATH} ${BACKUP_PRODUCTION_PATH}/ --recursive
And here is an excerpt of the output I got back:
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.10 to ../../data/db/myorg-production/myorg-production.10
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.11 to ../../data/db/myorg-production/myorg-production.11
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.12 to ../../data/db/myorg-production/myorg-production.12
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.13 to ../../data/db/myorg-production/myorg-production.13
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.14 to ../../data/db/myorg-production/myorg-production.14
download failed: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.15 to ../../data/db/myorg-production/myorg-production.15 ("Connection broken: error(104, 'Connection reset by peer')", error(104, 'Connection reset by peer'))
download: s3://myorg-mongo-backups-raw/production/daily/2018-09-10/080001/data/s-ds063192-a1/myorg-production/myorg-production.16 to ../../data/db/myorg-production/myorg-production.16
How can I ensure that the data from the given S3 path was fully copied over to the target path without any connection issues, missing files, etc? Is the sync command for the AWS tool a better option? Or should I try something else?
Thanks!
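To sketch the sync option from the question: aws s3 sync compares each object's size and timestamp against the destination, so re-running it after a partial cp --recursive should fetch only what is missing or incomplete (same variables as above):
aws s3 sync ${S3_PATH} ${BACKUP_PRODUCTION_PATH}/
The AWS CLI also exits non-zero when any transfer fails, so checking the exit status in the backup script is a simple way to detect a bad run and retry.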
This error appears on Elastic Beanstalk after uploading a new version (as a zip) that includes a .ebextensions/singlehttps.config file, which sets up HTTPS for a single-instance server.
If you're doing the Amazon AWS workshop lab:
https://github.com/awslabs/eb-node-express-signup
i.e. uploading and deploying your Elastic Beanstalk app,
and getting this error:
ERROR: Failed to deploy application.
ERROR: The configuration file __MACOSX/.ebextensions/._setup.config in application version 1.1.0 contains invalid YAML or JSON. YAML exception: Invalid Yaml: unacceptable character '' (0x0): special characters are not allowed in "", position 0. JSON exception: Invalid JSON: Unexpected character () at position 0. Update the configuration file.
INFO: Environment update is starting.
SOLUTION
This is because macOS includes some extra hidden folders in the archive, which you need to exclude from your zip file. To do this, run this command in a terminal against your zip:
$ zip -d nameofyourzipfile.zip __MACOSX/\*
Now re-upload, and you should get a success message:
INFO Environment update completed successfully.
INFO New application version was deployed to running EC2 instances.
Hope this solved your issue!
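As a quick sanity check before re-uploading (an extra step, not part of the original answer), list the archive to confirm the metadata entries are gone:
unzip -l nameofyourzipfile.zip | grep __MACOSX
No output means the zip is clean.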
The reason for this problem in the Elastic Beanstalk system was in fact the zip created on the macOS platform.
If you upload the new version with the eb deploy command, rather than by zipping the application yourself, the problem doesn't appear!
Hope this helps someone, as it has been troubling me for so long!!
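For reference, a minimal sketch of that route with the EB CLI (the application name is a placeholder):
eb init my-app
eb deploy
Because the EB CLI builds the source bundle itself (from the git index when the project is under git), Finder's __MACOSX and ._* metadata entries never make it into the upload.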
When you zip folders on macOS, it will add its own hidden files alongside yours.
If you want to make a zip without those invisible Mac resource files, such as __MACOSX or ._Filename entries and .DS_Store files, use the -X option in the zip command:
$ zip -r -X archive_name.zip folder_to_compress
If this is a pre-existing zip file, you can use the command others here have mentioned:
$ zip -d nameofyourzipfile.zip __MACOSX/\*
Workaround on Mac
macOS may automatically unzip the downloaded file, and when you compress it again, Elastic Beanstalk gives the error mentioned above. Running the command from the previous answers to remove the macOS-related entries still gave me an error about one of the files not being found.
The workaround is to rename the zip file to some other extension before downloading it, and change it back to .zip once it's on the Mac.
When you upload this file to Elastic Beanstalk, it will work fine.
I am using Parse.com CloudCode to create some custom functionality for an iOS app.
To deploy a CloudCode app you use:
terminal% parse deploy
Everything was working fine and I was able to upload my Cloud Code to the backend, but suddenly it stopped working and I absolutely don't know why. I am getting this response:
requested resource was not found
EDIT:
The main.js is there; that is the file that has the changes. I tried deleting and recreating the app. The config is also correct. I restarted my computer and reset the connection, but I am still getting that response. Here is the full Terminal output after running % parse deploy:
% parse deploy
Uploading source files
Uploading recent changes to scripts...
The following files will be uploaded:
/path/to/main.js
Deploy failed. Retrying deploy...
Uploading source files
Uploading recent changes to scripts...
The following files will be uploaded:
/path/to/main.js
Deploy failed. Retrying deploy...
Uploading source files
Uploading recent changes to scripts...
The following files will be uploaded:
/path/to/main.js
Deploy failed. Retrying deploy...
Uploading source files
Uploading recent changes to scripts...
The following files will be uploaded:
/path/to/main.js
Deploy failed. Retrying deploy...
Uploading source files
Uploading recent changes to scripts...
The following files will be uploaded:
/path/to/main.js
requested resource was not found
Did anybody have the same or similar problem? If yes, what steps helped to resolve this issue?
Thanks in advance.
The newly released update has fixed this problem for me. You can either update using parse update, or use curl:
curl -s https://www.parse.com/downloads/cloud_code/installer.sh | sudo /bin/bash
I'm using it with JavaScript, and just spamming "parse deploy" until main.js is updated does the work.
It's not nice, but it works :)
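If you want to automate that brute-force approach, a small shell loop does the same thing (a sketch, assuming the parse CLI exits non-zero when a deploy fails):
# keep retrying until parse deploy succeeds
until parse deploy; do
    echo "deploy failed, retrying in 5 seconds..."
    sleep 5
done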
I have three files:
1_timestamp.mp4,
2_timestamp.mp4,
3_timestamp.mp4
By combining these files I am creating a combined_timestamp.mp4 file; after that I am uploading combined_timestamp.mp4 to AWS S3. The upload functionality works fine.
But now I have to upload the 1_timestamp.mp4, 2_timestamp.mp4, and 3_timestamp.mp4 files along with combined_timestamp.mp4.
Is it possible to upload all these files under one timestamp folder, so that I can group them in one place?
If it is possible, then please guide me.
I am using Ruby 1.9.3, fog, and gem "aws-sdk", ">= 1.8.1.2".
From this answer:
There is no concept of folders or directories in S3. You can create file names like "abc/xys/uvw/123.jpg", which many S3 access tools like S3Fox show like a directory structure, but it's actually just a single file in a bucket.
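So grouping the files is just a matter of giving them a shared key prefix. A minimal illustration with the AWS CLI rather than the Ruby SDK (the bucket name and timestamp value are placeholders):
# each key starts with the same "folder" prefix, which S3 tools display as a directory
aws s3 cp 1_timestamp.mp4 s3://my-bucket/20130101/1_timestamp.mp4
aws s3 cp 2_timestamp.mp4 s3://my-bucket/20130101/2_timestamp.mp4
aws s3 cp 3_timestamp.mp4 s3://my-bucket/20130101/3_timestamp.mp4
aws s3 cp combined_timestamp.mp4 s3://my-bucket/20130101/combined_timestamp.mp4
The same idea applies with fog or aws-sdk: build the "folder" into the key string when creating each object.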