Laravel advice on unzipping files on S3

I have a form where users upload a zip file. I am using the filesystem.
Is it possible to upload the file to S3 and then unzip it there?
OR
Should I unzip the files first and then upload them to S3?
The zipped folder has lots of files in it (around 500-600 small files), so how does Laravel work with such a large number of files? Will the system halt while the files are being uploaded, or does it carry on in the background, like a queue?

No, AWS S3 does not provide functionality to unzip files on S3.
If you have an EC2 instance within the same region, upload your zip file to EC2, unzip it there, and then move the files to S3.
There are no data-transfer charges between EC2 and S3 in the same region, so EC2 can handle the unzipping and then write the results out to your S3 bucket without additional transfer charges.
S3 provides storage only.
EDIT - To transfer files from EC2 to S3
You can use the following AWS CLI command on your EC2 instance:
aws s3 cp myfolder s3://mybucket/myfolder --recursive
Here is the reference for it:
http://aws.amazon.com/cli/
To copy the files from EC2 to S3 without interrupting execution:
Create a script to transfer the files from EC2 to S3. After the files are uploaded to EC2, use a Laravel queue to execute the script, so that the user doesn't have to wait while the files are being transferred.
https://laravel.com/docs/5.1/queues
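As a rough sketch, the unzip step that such a queued job would run could look like this in Python (paths are placeholders; the actual transfer to S3 would still go through the AWS CLI or an SDK afterwards):

```python
import os
import zipfile

def extract_zip(zip_path, dest_dir):
    """Extract every entry of the archive into dest_dir and
    return the list of extracted file paths."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return [os.path.join(dest_dir, name) for name in zf.namelist()]
```

The queued job would then hand dest_dir to `aws s3 sync` (or the SDK), so a user uploading 500-600 small files never waits on the transfer.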

Related

Is there any way to transfer Windows files to an AWS S3 bucket by automatically converting them to zip format, in an automated way, daily?

Looking for an automated process so that files are compressed automatically and then transferred to an AWS S3 bucket from the local system.
Just create a script that will:
Zip the files
Use the AWS Command-Line Interface (CLI) aws s3 cp command to copy the file to Amazon S3
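The zip step of such a script can also be done with Python's standard library; a minimal sketch (folder and archive names are placeholders):

```python
import shutil

def zip_folder(src_dir, archive_base):
    """Create <archive_base>.zip containing src_dir's contents
    and return the path of the created archive."""
    # The archive can then be pushed with: aws s3 cp <archive> s3://mybucket/
    return shutil.make_archive(archive_base, "zip", src_dir)
```

Scheduling it daily would then just be a cron entry (or Task Scheduler job on Windows) calling the script.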

How to move files from Amazon S3 to an SFTP server using shell scripts

We have a requirement to automate the process of moving CSV files from Amazon S3 to an SFTP server using shell scripts (bash). Can we achieve this requirement using shell scripting? If yes, can someone help by sharing sample code?
An SFTP server simply makes local files accessible via the SFTP protocol. Therefore, all you need to do is to copy the files from Amazon S3 to the local disk.
This can be done via the AWS Command-Line Interface (CLI) aws s3 cp command or, better yet, aws s3 sync. This can be called from shell scripts.

Move files from S3 to FTP using a bash script

I want to move files from Amazon S3 to FTP using a bash script command...
I already tried
rsync --progress -avz -e ssh s3://folder//s3://folder/
Can anyone please suggest the correct command?
Thanks in advance
AWS built sync into their CLI:
aws s3 sync ./localdir s3://mybucket
This syncs your local directory with the remote bucket.
How to install the AWS CLI:
https://docs.aws.amazon.com/cli/latest/userguide/installing.html
If you don't want to take the CLI installation route, you can use Docker: connect to a container, share your local directory with that container, and perform the sync there.
https://hub.docker.com/r/mesosphere/aws-cli/
Hope it helps.
You can't copy objects from S3 that way because S3 is not an SSH service; it is file storage. So the easiest way is to mount the S3 bucket. Then you can use it like a normal volume and copy all the files to the target.
You should do that on the target system, otherwise you would have to copy all the files via a third server or computer.
https://www.interserver.net/tips/kb/mount-s3-bucket-centos-ubuntu-using-s3fs/

Downloading folders from aws s3, cp or sync?

If I want to download all the contents of a directory on S3 to my local PC, which command should I use, cp or sync?
Any help would be highly appreciated.
For example,
if I want to download all the contents of "this folder" to my desktop, would it look like this?
aws s3 sync s3://"myBucket"/"this folder" C:\\Users\Desktop
Using aws s3 cp from the AWS Command-Line Interface (CLI) will require the --recursive parameter to copy multiple files.
aws s3 cp s3://myBucket/dir localdir --recursive
The aws s3 sync command will, by default, copy a whole directory. It will only copy new/modified files.
aws s3 sync s3://mybucket/dir localdir
Just experiment to get the result you want.
Documentation:
cp command
sync command
If you use version 2 of the AWS CLI: for the s3 commands there is also a --dryrun option now to show you what would happen:
aws s3 --dryrun cp s3://bucket/filename /path/to/dest/folder --recursive
In case you need to use another profile, especially for cross-account access, you need to add the profile to the config file:
[profile profileName]
region = us-east-1
role_arn = arn:aws:iam::XXX:role/XXXX
source_profile = default
Then, if you are accessing only a single file:
aws s3 cp s3://crossAccountBucket/dir localdir --profile profileName
In case you want to download a single file, you can try the following command:
aws s3 cp s3://bucket/filename /path/to/dest/folder
You have many options to do that, but the best one is using the AWS CLI.
Here's a walk-through:
Download and install AWS CLI in your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account.
Sync the S3 bucket using:
aws s3 sync s3://yourbucket/yourfolder /local/path
In the above command, replace the following fields:
yourbucket/yourfolder >> your S3 bucket and the folder that you want to download.
/local/path >> path in your local system where you want to download all the files.
The sync method first lists both source and destination paths and copies only the differences (name, size, etc.).
The cp --recursive method lists the source path and copies (overwrites) everything to the destination path.
If you may already have matching files in the destination path, I would suggest sync: one LIST request on the destination path can save you many unnecessary PUT requests, meaning cheaper and possibly faster.
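To make the difference concrete, here is a local sketch of the sync idea in Python, comparing only file size (the real aws s3 sync also considers timestamps):

```python
import os
import shutil

def sync_dirs(src, dst):
    """Copy only files that are missing or whose size differs,
    mimicking the incremental behaviour of `aws s3 sync`."""
    os.makedirs(dst, exist_ok=True)
    copied = []
    for name in os.listdir(src):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if os.path.isfile(s) and (
            not os.path.exists(d) or os.path.getsize(d) != os.path.getsize(s)
        ):
            shutil.copy2(s, d)
            copied.append(name)
    return copied
```

Running it twice against the same source copies everything the first time and nothing the second, which is exactly why sync is cheaper than cp --recursive on repeated transfers.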
Question: will aws s3 sync s3://myBucket/this_folder/object_file C:\\Users\Desktop also create "this_folder" in C:\Users\Desktop?
If not, what would be the solution to copy/sync while keeping the folder structure of S3? I mean, I have many files in different S3 bucket folders sorted by year, month, and day. I would like to copy them locally with the folder structure kept.

Shell script to Upload files from AWS EC2 to S3

I am executing JMeter on AWS EC2, the result of which is returned in the form of a CSV file.
I need to upload this CSV file to an AWS S3 bucket.
Since I am creating a number of EC2 instances dynamically and executing JMeter on those instances, it's better to automate this process.
So I want to write a shell script (as user data) to execute JMeter and upload the result CSV file to the S3 bucket from each EC2 instance.
How can I write a script for this?
Consider using command-line S3 clients.
S3 command line tools
Also go through some of these sites:
Shell Script To Transfer Files From Amazon S3 Bucket.
aws command line tools
python script to upload file to s3
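As a sketch of the upload step, the user-data script could call a small helper like this (bucket and prefix names are placeholders; the AWS CLI must be installed on the instance, and the `run` parameter is injectable purely so the command can be inspected without touching AWS):

```python
import subprocess

def upload_results(csv_path, bucket, key_prefix, run=subprocess.run):
    """Build the AWS CLI command that pushes a JMeter result file
    to S3 and execute it via `run`."""
    cmd = ["aws", "s3", "cp", csv_path, f"s3://{bucket}/{key_prefix}/"]
    run(cmd, check=True)
    return cmd
```

With the instance's IAM role granting s3:PutObject, no credentials need to be baked into the user data.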
You can use this library for managing objects on AWS S3 from shell scripts.
Universal Docs Manager is a pure shell-script-based object manager which currently supports local disk, MySQL, and AWS S3.