I have been trying to find a solution for this, but I need to ask you all. Do you know of a Windows desktop application that would sync (in real time) objects from a local folder into a predefined AWS S3 bucket? This could work just one way: upload from local to S3.
Setting it up
Install the AWS CLI for Windows: https://aws.amazon.com/cli/
Through the AWS website/console, create an IAM user with a strict policy that allows access only to the required S3 bucket.
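A minimal policy for this kind of one-way sync might look roughly like the sketch below; the bucket name is a placeholder, and s3:DeleteObject is only needed if you plan to use the --delete flag mentioned further down.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}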
Run aws configure in PowerShell or cmd and set the region, access key, and secret key for the IAM user that you created.
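The prompts look like this; the values shown are placeholders for your own key, secret, and region.

aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: (paste the secret key here)
Default region name [None]: eu-west-1
Default output format [None]: json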
Test that your setup is correct by running aws s3 ls on the command line and verifying that you see a list of your account's S3 buckets.
If not, then you probably configured the IAM permissions incorrectly; note that listing buckets with aws s3 ls also needs the s3:ListAllMyBuckets permission on all of S3.
How to sync examples
aws s3 sync path/to/yourfolder s3://mybucket/
aws s3 sync path/to/yourfolder s3://mybucket/images/
aws s3 sync path/to/yourfolder s3://mybucket/images/ --delete
The --delete flag removes files from S3 that are no longer present on your local path.
Not sure what this has to do with Electron, but you could set up a trigger in your application to invoke these commands. For example, in Atom or VS Code you could bind this to saving a document with Ctrl+S.
If you are building an application with Electron, then you should consider using the AWS SDK for JavaScript instead of the AWS CLI, but that is a whole different story.
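For what it's worth, a single-file upload with the AWS SDK for JavaScript (v3) might look roughly like the sketch below; the bucket name, region, and function name are placeholders.

import { readFileSync } from "fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Credentials are picked up from the same profile that `aws configure` wrote.
const s3 = new S3Client({ region: "eu-west-1" });   // placeholder region

// Upload one local file to the bucket; call this from whatever trigger your app uses.
async function uploadFile(localPath: string, key: string): Promise<void> {
  await s3.send(new PutObjectCommand({
    Bucket: "mybucket",               // placeholder bucket name
    Key: key,                         // e.g. "images/photo.jpg"
    Body: readFileSync(localPath),    // fine for small files; use streams for large ones
  }));
}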
And lastly, back up your files somewhere else before trying potentially destructive commands such as sync, until you get a feel for how they work.
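The sync command also has a --dryrun flag that only prints what it would upload or delete, which is a safe way to check a command before running it for real:

aws s3 sync path/to/yourfolder s3://mybucket/ --delete --dryrun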
Related
We maintain dozens of developer accounts on AWS, and for maintenance purposes it would be great to have a set of scripts available in all of the CloudShell environments.
It is possible to upload files to a CloudShell environment manually using the Actions -> Upload file feature in the web console, but that is not feasible for managing dozens of environments.
Is there an Ansible module or some other way to upload files to CloudShell? Probably via an S3 bucket, but we're missing the last mile into the CloudShell environment.
I'd like to create a way (using shell scripts and AWS's CLI) so that the following can be automated:
Copy specific files from an S3 bucket
Paste them into a different S3 bucket.
Would the below 'sync' command work?
aws s3 sync s3://directory1/bucket1 s3://directory2/bucket2 --exclude "US*.gz" --exclude "CA*.gz" --include "AU*.gz"
The goal here is to ONLY transfer files whose filenames begin with "AU" and exclude everything else, in as automated a fashion as possible. Also, is it possible to exclude very old files?
The second part of the question is: what do I need to add to my shell script to automate this process as much as possible, given that "AU" files get dropped into this folder every day?
Copy objects
The AWS CLI can certainly copy objects between buckets. In fact, it does not even require files to be downloaded — S3 will copy directly between buckets, even if they are in different regions.
The aws s3 sync command is certainly an easy way to do it, since it will replicate any files from the source to the destination without having to specifically state which files to copy.
To only copy AU* files, use: --exclude "*" --include "AU*"
See: Use of Exclude and Include Filters
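So, assuming placeholder bucket names and prefixes, a command along these lines should do it (the order matters: --exclude "*" comes first, then --include "AU*" overrides it for matching keys):

aws s3 sync s3://source-bucket/path/ s3://destination-bucket/path/ --exclude "*" --include "AU*"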
You asked about excluding old files: sync only copies files that are new or changed, so any files that were previously copied will not be copied again. By default, files deleted from the source will not be deleted in the destination unless specifically requested (the --delete flag).
Automate
How to automate this? The most cloud-worthy way to do this would be to create an AWS Lambda function. The Lambda function can be automatically triggered by an Amazon CloudWatch Events rule on a regular schedule.
However, the AWS CLI is not installed by default in Lambda, so it might be a little more challenging. See: Running aws-cli Commands Inside An AWS Lambda Function - Alestic.com
It would be better to have the Lambda function do the copy itself, rather than calling the AWS CLI.
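A rough sketch of such a function, using the AWS SDK for JavaScript (v3) on the Node.js Lambda runtime; the bucket names are placeholders and the AU prefix is taken from your question.

import { S3Client, ListObjectsV2Command, CopyObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const SOURCE_BUCKET = "source-bucket";        // placeholder
const DEST_BUCKET = "destination-bucket";     // placeholder

// Invoked on a schedule; lists AU* objects in the source bucket and copies them server-side.
export const handler = async (): Promise<void> => {
  // Note: ListObjectsV2 returns at most 1000 keys per call; paginate for larger buckets.
  const listed = await s3.send(new ListObjectsV2Command({
    Bucket: SOURCE_BUCKET,
    Prefix: "AU",
  }));
  for (const obj of listed.Contents ?? []) {
    if (!obj.Key) continue;
    await s3.send(new CopyObjectCommand({
      Bucket: DEST_BUCKET,
      Key: obj.Key,
      CopySource: `${SOURCE_BUCKET}/${obj.Key}`,   // S3 copies bucket-to-bucket, no download
    }));
  }
};

Note that this copies every matching object on every run; if that matters, track what has already been copied or use the event-driven approach below.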
Alternative idea
Amazon S3 can be configured to trigger an AWS Lambda function whenever a new object is added to an S3 bucket. This way, as soon as the object is added in S3, it will be copied to the other Amazon S3 bucket. Logic in the Lambda function can determine whether or not to copy the file, such as checking that it starts with AU.
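A sketch of that variant: the handler receives the S3 event notification, decodes the key, checks the prefix, and copies the object. The destination bucket name is a placeholder, and the startsWith check assumes the files sit at the root of the bucket.

import { S3Event } from "aws-lambda";           // type definitions from @types/aws-lambda
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const DEST_BUCKET = "destination-bucket";       // placeholder

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const sourceBucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded in S3 event notifications.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    if (!key.startsWith("AU")) continue;        // only copy AU* files
    await s3.send(new CopyObjectCommand({
      Bucket: DEST_BUCKET,
      Key: key,
      CopySource: `${sourceBucket}/${key}`,
    }));
  }
};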
I have parse-server running on Heroku. When I first created this app, I didn't specify a files adapter in index.js, so all uploaded files have been getting stored on Heroku.
So I have now run out of room, and I have set up an AWS S3 bucket to store my files. This is working fine except for the fact that any files which were originally stored on Heroku can no longer be accessed through the application.
At the moment I am thinking about looping through all objects which have a relation to a file stored on Heroku, then uploading each of those files to the S3 bucket. I'm just hoping that there may be some tool out there, or that someone has an easier process for doing this.
thanks
There are migration guides for migrating Parse Server itself, but unfortunately I don't see anything in the documentation about migrating hosted files.
I did find one migration tool, but it appears to keep using the previous file adapter (on your Heroku instance) and only store anything new on the new adapter (S3 storage).
parse-server-migrating-adapter
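If you do end up writing the loop described in the question, a very rough sketch with the Parse JavaScript SDK could look like the following; the class name, field name, keys, and server URL are all placeholders, it assumes Node 18+ for the global fetch, and you should run it against a copy of your data first.

import Parse from "parse/node";

Parse.initialize("APP_ID", undefined, "MASTER_KEY");          // placeholders
Parse.serverURL = "https://your-app.herokuapp.com/parse";     // placeholder

// Re-save every stored file so it is written through the new (S3) files adapter.
async function migrateFiles(): Promise<void> {
  const query = new Parse.Query("Photo");    // placeholder class name
  query.exists("file");                      // placeholder field name
  query.limit(1000);                         // paginate if you have more rows than this
  const objects = await query.find({ useMasterKey: true });
  for (const obj of objects) {
    const oldFile = obj.get("file") as Parse.File;
    // Download the file from its current (Heroku-served) URL...
    const res = await fetch(oldFile.url());
    const bytes = Buffer.from(await res.arrayBuffer());
    // ...and save it again as a new Parse.File, which now goes to the S3 adapter.
    const newFile = new Parse.File(oldFile.name(), { base64: bytes.toString("base64") });
    await newFile.save({ useMasterKey: true });
    obj.set("file", newFile);
    await obj.save(null, { useMasterKey: true });
  }
}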
My web application runs on an AWS EC2 instance.
I'm using the MEAN stack.
I'd like to upload images to the EC2 instance (e.g. /usr/local/web/images).
I can't find out how to get the credentials for this.
Everything I find is only about AWS S3.
How can I upload an image file to an EC2 instance?
If you do file transfers repeatedly, try unison. It is bidirectional, a kind of sync, and has options for handling conflicts.
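For example, something roughly like this, assuming unison is installed on both your machine and the instance; the user, host, and paths are placeholders:

unison ./images ssh://ubuntu@your-ec2-host//usr/local/web/images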
I've found the easiest way to do this as a one-off is to upload the file to Google Drive and then download the file from there. View this thread to see how simply this can be done!
I am using Amazon's official aws-sdk gem, but I can't seem to find any functionality that works like the command line tool's aws s3 sync <path> <bucket>. Does it exist, or am I forced to upload each file separately (which is slow)?
There isn't an API call that achieves that.
Sync is basically a call to list the objects, a scan of your local path, and then uploads/downloads to bring the two locations in sync. That's what the AWS CLI tool does under the hood.
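If you want to hand-roll it, the flow is roughly the sketch below; it's written with the AWS SDK for JavaScript purely for illustration, but the Ruby gem's S3 client exposes the equivalent list_objects_v2 and put_object calls. The region is a placeholder, and a real sync would also compare sizes or timestamps rather than just key names.

import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";
import { S3Client, ListObjectsV2Command, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-west-1" });   // placeholder region

// One-way "sync": upload every local file whose key is not already in the bucket.
async function syncUp(localDir: string, bucket: string): Promise<void> {
  // 1. List what is already in the bucket (paginate beyond 1000 keys).
  const listed = await s3.send(new ListObjectsV2Command({ Bucket: bucket }));
  const existing = new Set((listed.Contents ?? []).map(o => o.Key));

  // 2. Walk the local path and upload anything that is missing.
  for (const name of readdirSync(localDir)) {
    const fullPath = join(localDir, name);
    if (!statSync(fullPath).isFile()) continue;     // this sketch skips subdirectories
    if (existing.has(name)) continue;
    await s3.send(new PutObjectCommand({ Bucket: bucket, Key: name, Body: readFileSync(fullPath) }));
  }
}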