Transfer large number of large files to s3 - bash

I am transferring around 31 TB of data, made up of about 4,500 files ranging from 69 MB to 25 GB, from a remote server to an S3 bucket. I am using s4cmd put for this, wrapped in a bash script, upload.sh:
#!/bin/bash
FILES="/path/to/*.fastq.gz"
for i in $FILES
do
    echo "$i"
    s4cmd put --sync-check -c 10 "$i" s3://bucket-name/directory/
done
Then I use qsub to submit the job:
qsub -cwd -e error.txt -o output.txt -l h_vmem=10G -l mem_free=8G -l m_mem_free=8G -pe smp 10 upload.sh
This is taking way too long - it took 10 hours to upload ~20 files. Can someone suggest alternatives or modifications to my command?
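For reference, a minimal sketch of running several of these uploads in parallel with xargs (assuming GNU xargs is available; the -P count is only illustrative, and each s4cmd still does its own multipart work via -c):
#!/bin/bash
# Keep four s4cmd uploads in flight at once instead of one at a time.
printf '%s\n' /path/to/*.fastq.gz | \
    xargs -P 4 -I {} s4cmd put --sync-check -c 10 {} s3://bucket-name/directory/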
Thanks!

Your case may be one of those where copying the data onto physical media and shipping it by regular mail is faster and cheaper than transferring it over the internet. AWS supports such a "protocol" and has a special name for it: AWS Snowball.
Snowball is a petabyte-scale data transport solution that uses secure
appliances to transfer large amounts of data into and out of the AWS
cloud. Using Snowball addresses common challenges with large-scale
data transfers including high network costs, long transfer times, and
security concerns. Transferring data with Snowball is simple, fast,
secure, and can be as little as one-fifth the cost of high-speed
Internet.
With Snowball, you don’t need to write any code or purchase any
hardware to transfer your data. Simply create a job in the AWS
Management Console and a Snowball appliance will be automatically
shipped to you*. Once it arrives, attach the appliance to your local
network, download and run the Snowball client to establish a
connection, and then use the client to select the file directories
that you want to transfer to the appliance. The client will then
encrypt and transfer the files to the appliance at high speed. Once
the transfer is complete and the appliance is ready to be returned,
the E Ink shipping label will automatically update and you can track
the job status via Amazon Simple Notification Service (SNS), text
messages, or directly in the Console.
* Snowball is currently available in select regions. Your location will be verified once a job has been created in the AWS Management
Console.
The capacity of their smaller device is 50TB, a good fit for your case.
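If you prefer scripting the job creation over clicking through the console, the AWS CLI also exposes Snowball; a rough sketch (the bucket, role ARN, and address ID are placeholders you would create first, e.g. the address with aws snowball create-address, and the exact values depend on your account and region):
# Create a 50 TB Snowball import job; all identifiers below are placeholders.
aws snowball create-job \
    --job-type IMPORT \
    --resources 'S3Resources=[{BucketArn=arn:aws:s3:::bucket-name}]' \
    --address-id ADID00000000-0000-0000-0000-000000000000 \
    --role-arn arn:aws:iam::123456789012:role/snowball-import-role \
    --snowball-capacity-preference T50 \
    --description "31 TB fastq import"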
There is also a similar service, AWS Import/Export Disk, where you ship your own hardware (hard drives) instead of their special appliance:
To use AWS Import/Export Disk:
Prepare a portable storage device (see the Product Details page for supported devices).
Submit a Create Job request. You’ll get a job ID with a digital signature used to authenticate your device.
Print out your pre-paid shipping label.
Securely identify and authenticate your device. For Amazon S3, place the signature file on the root directory of your device. For
Amazon EBS or Amazon Glacier, tape the signature barcode to the
exterior of the device.
Attach your pre-paid shipping label to the shipping container and ship your device along with its interface connectors, and power supply
to AWS.
When your package arrives, it will be processed and securely
transferred to an AWS data center, where your device will be attached
to an AWS Import/Export station. After the data load completes, the
device will be returned to you.

Related

Fastest way to transfer files to EC2 over Session Manager

I regularly need to move large files to and from an EC2 instance connected via Session Manager. File transfers within AWS are fast, as are transfers between local machines and non-AWS assets over our fiber connection.
However, upstream and downstream speeds with EC2 over Session Manager are really slow -- around 1 MB/s. I proxy ssh over Session Manager, which allows me to use regular utilities to move things around. Is this a Session Manager thing, a function of how I'm using it, or something else?
If this is the best I can do, I'll have to deal with it, but I'd love to use a better way if there's one available.
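For context, proxying ssh over Session Manager is usually set up with an SSH config entry like the sketch below (it assumes the AWS CLI and the Session Manager plugin are installed; AWS-StartSSHSession is the standard document name), after which scp, sftp, and rsync to the instance ID work as normal:
# ~/.ssh/config -- route ssh (and therefore scp/sftp/rsync) for instance IDs through SSM
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
# usage example:
#   scp big-file.tar.gz ec2-user@i-0123456789abcdef0:/tmp/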
I discovered exactly the same issue when using rsync and other file transfer tools via SSM. Upload speeds to an EC2 instance that were ~15 MB/s when connecting directly (using its public IP, not SSM) appeared limited to between 300 and 800 KB/s when going via SSM.
I contacted AWS support for clarifications, and their response included:
"After discussing this situation with our SSM service team, they have mentioned that there will be some delay in SCP over Session Manager compared to direct SCP as there are extra hops in communication in SCP via SSM. Apart from the extra hops, there are other limits imposed in this feature which controls the rate of packet transfer and size of packet. These restrictions are placed to prevent misuse on the feature.
Therefore, there is not a way to mitigate this speed limitation you have encountered due to this."
This GitHub issue from 2019 on the aws-ssm-agent repo reports slow performance, which they claimed was resolved, but it seems they do not expect users to manage large file uploads/downloads via SSM.

Using parallel-level in azcopy when transferring data from an Azure VM to an AWS VM

How does one precisely utilize --parallel-level when using azcopy on Ubuntu 14.04 to speed up download performance?
I chose a value of 100, but without any reason (just to see what happens). I can't find related documentation online.
I'm using it to transfer files from an Azure blob to an AWS EC2 VM. It's just a t2.micro instance; however, I'm using it for testing purposes, and once I get the hang of azcopy I'm open to using a bigger instance. I have to transfer ~50 GB of data, mostly low-res images (i.e. lots of files).
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-linux?toc=%2fazure%2fstorage%2fblobs%2ftoc.json
Option --parallel-level specifies the number of concurrent copy
operations. By default, AzCopy starts a certain number of concurrent
operations to increase the data transfer throughput. The number of
concurrent operations is equal to eight times the number of processors
you have. If you are running AzCopy across a low-bandwidth network,
you can specify a lower number for --parallel-level to avoid failure
caused by resource competition.
In most cases you don't need to specify this option; only when you're running AzCopy across a low-bandwidth network should you specify a lower value.
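For illustration, this is roughly how the option is passed on the Linux AzCopy of that generation (the storage account, container, and key are placeholders; 32 is just an arbitrary lower value):
# Download a container with a reduced number of concurrent operations.
azcopy \
    --source https://myaccount.blob.core.windows.net/mycontainer \
    --destination /data/images \
    --source-key <storage-account-key> \
    --recursive \
    --parallel-level 32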

Why is a download manager required to utilize the full download speed available via ISP from a computer in California accessing an EC2 instance in Virginia?

So far I get an average of 700 kilobytes per second for downloads via Chrome hitting an EC2 instance in Virginia (us-east region). If I download directly from S3 in Virginia (us-east region) I get 2 megabytes per second.
I've simplified this way down to simply running apache and reading a file from a mounted ebs volume. Less than one percent of the time I've seen the download hit around 1,800 kilobytes per second.
I also tried nginx; no difference. I also tried running a large instance with 7 GB of RAM. I tried allocating 6 GB of RAM to the JVM and running Tomcat, streaming the files in memory from S3 to avoid the disk. I tried enabling sendfile in Apache. None of this helps.
When I run Apache reading from the file system and use a download manager such as DownThemAll, I always get 2 megabytes per second when downloading from an EC2 instance in Virginia (us-east region). It's as if my Apache is configured to only allow 700 kilobytes per second per thread. I don't see any configuration options relating to this, though.
What am I missing here? I also benchmarked Dropbox downloads, as they use EC2 as well, and I noticed I get roughly 700 kilobytes per second there too, which is way slow as well. I imagine they must host their EC2 instances in the Virginia / us-east region too, based on the speed. If I use a download manager to download files from Dropbox I get 2 megabytes per second as well.
Is this just the case with TCP, where if you are far away from the server you have to split transfers into chunks and download them in parallel to saturate your network connection?
I think your last sentence is right: your 700 KB/s is probably a limitation of a single TCP connection ... maybe a throttle imposed by EC2, or perhaps your ISP, or the browser, or a router along the way -- dunno. Download managers likely split the request over multiple connections (I think this is called "multi-source"), gluing things together in the right order after they arrive. Whether this happens depends on the software you're using, of course.
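To make the "multi-source" idea concrete, here is a rough sketch of what such a tool does under the hood, using curl range requests (the URL is a placeholder, and it assumes the server reports Content-Length and honours Range headers):
#!/bin/bash
# Fetch the same file as four byte ranges over four TCP connections, then reassemble.
URL="https://example.s3.amazonaws.com/large-file.bin"
SIZE=$(curl -sI "$URL" | awk 'tolower($1)=="content-length:" {print $2}' | tr -d '\r')
CHUNK=$((SIZE / 4))
for n in 0 1 2 3; do
    start=$((n * CHUNK))
    end=$((start + CHUNK - 1))
    [ "$n" -eq 3 ] && end=$((SIZE - 1))              # last chunk picks up the remainder
    curl -s -r "$start-$end" -o "part.$n" "$URL" &   # each range gets its own connection
done
wait
cat part.0 part.1 part.2 part.3 > large-file.bin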

Testing file transfer speed across LAN/WAN

Is there a utility for Windows that allows you to test different aspects of file transfer operations across a LAN or a WAN?
Example...
How long does it take to move a file of a known size (500 MB or 1 GB) from Server A (on-site) to Server B (on-site) or to Server C (off-site satellite location)?
D-ITG will allow you to test many aspects of your links. It does not necessarily allow you to transfer a file directly, but it lets you control almost all aspects of the transmission of data across the wire.
If all you are interested in is bulk transfer time (and not all the nitty-gritty details) you could just use a basic FTP application and time the transfer.
Probably nothing you've not already figured out. You could get some coarse grain metrics using a batch file to coordinate:
start monitoring
copy file
stop monitoring
Copy file might just be initiating a file copy between two nodes on the LAN, or it might initiate an FTP copy between two nodes on the WAN.
Monitoring could be as basic as writing the current time to output or file, or it could be as complex as adding performance counter metrics from the network adapter on the two machines.
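As a concrete example of that coarse-grained approach, written here as a shell script rather than a batch file (host, user, and file name are placeholders):
#!/bin/bash
# Record wall-clock time around a single transfer: start time, copy, end time.
FILE=test-500MB.bin
start=$(date +%s)
scp "$FILE" user@serverB:/tmp/        # swap in an FTP or SMB copy for the case you care about
end=$(date +%s)
echo "$FILE transferred in $((end - start)) seconds"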
A commercial WAN emulator would also give you the information you're looking for. I've used the Shunra appliance successfully in the past. It's pretty expensive, so I'd only recommend it if critical business success is riding on understanding how application behavior could change based on network conditions, and if it's something you could incorporate into regular testing activities.

Best approach to collecting log files from remote machines?

I have over 500 machines distributed across a WAN covering three continents. Periodically, I need to collect text files which are on the local hard disk of each blade. Each server is running Windows Server 2003 and the files are on a share which can be accessed remotely as \\server\Logs. Each machine holds many files which can be several MB each, and the size can be reduced by zipping.
Thus far I have tried using PowerShell scripts and a simple Java application to do the copying. Both approaches take several days to collect the 500 GB or so of files. Is there a better solution which would be faster and more efficient?
I guess it depends what you do with them ... if you are going to parse them for metrics data into a database, it would be faster to have that parsing utility installed on each of those machines to parse and load into your central database at the same time.
Even if all you are doing is compressing and copying to a central location, set up those commands in a .cmd file and schedule it to run on each of the servers automatically. Then you will have distributed the work amongst all those servers, rather than forcing your one local system to do all the work. :-)
The first improvement that comes to mind is to not ship entire log files, but only the records from after the last shipment. This of course is assuming that the files are being accumulated over time and are not entirely new each time.
You could implement this in various ways: if the files have date/time stamps you can rely on, running them through a filter that removes the older records from consideration and dumps the remainder would be sufficient. If there is no such discriminator available, I would keep track of the last byte/line sent and advance to that location prior to shipping.
Either way, the goal is to ship only new content. In our own system, logs are shipped via a service that replicates them as they are written. That required a small service to handle the log files as they are written, but it reduced the latency of capturing logs and cut bandwidth use immensely.
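A minimal sketch of the byte-offset approach (paths are placeholders; assumes GNU stat and tail):
#!/bin/bash
# Ship only the bytes appended since the last run, tracked in a small state file.
LOG=/var/logs/app.log
STATE=/var/logs/app.log.offset
sent=$(cat "$STATE" 2>/dev/null || echo 0)
size=$(stat -c %s "$LOG")
if [ "$size" -gt "$sent" ]; then
    tail -c +"$((sent + 1))" "$LOG" | gzip > "/tmp/app.$(date +%s).log.gz"   # new content only
    # ...push the .gz file to the central collection point here...
    echo "$size" > "$STATE"
fi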
Each server should probably:
manage its own log files (start new logs before uploading and delete sent logs after uploading)
name the files (or prepend metadata) so the server knows which client sent them and what period they cover
compress log files before shipping (compress + FTP + uncompress is often faster than FTP alone)
push log files to a central location (FTP is faster than SMB; the Windows FTP command can be automated with "-s:scriptfile")
notify you when it cannot push its log for any reason
do all the above on a staggered schedule (to avoid overloading the central server)
Perhaps use the server's last IP octet multiplied by a constant to offset in minutes from midnight?
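That staggering is just arithmetic; in shell terms it might look like this (the 5-minute spacing is illustrative, and since the servers in question are Windows, this only shows the calculation):
# Offset this server's upload by (last IP octet * 5) minutes past midnight.
ip=$(hostname -I | awk '{print $1}')   # assumes the first address is the relevant one
octet=${ip##*.}
offset=$((octet * 5))
echo "this server starts its upload $offset minutes after midnight"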
The central server should probably:
accept log files sent and queue them for processing
gracefully handle receiving the same log file twice (should it ignore or reprocess?)
uncompress and process the log files as necessary
delete/archive processed log files according to your retention policy
notify you when a server has not pushed its logs lately
We have a similar product on a smaller scale here. Our solution is to have the machines generating the log files push them to a NAS on a daily basis in a randomly staggered pattern. This solved a lot of the problems of a more pull-based method, including bunched-up read/write times that kept a server busy for days.
It doesn't sound like the storage server's bandwidth would be saturated, so you could pull from several clients at different locations in parallel. The main question is: what is the bottleneck that slows the whole process down?
I would do the following:
Write a program to run on each server, which will do the following:
Monitor the logs on the server
Compress them at a particular defined schedule
Pass information to the analysis server.
Write another program which sits on the core server and does the following:
Pulls compressed files when the network/cpu is not too busy.
(This can be multi-threaded.)
This uses the information passed to it from the end computers to determine which log to get next.
Uncompress and upload to your database continuously.
This should give you a solution which provides up to date information, with a minimum of downtime.
The downside will be relatively consistent network/computer use, but tbh that is often a good thing.
It will also allow easy management of the system, to detect any problems or issues which need resolving.
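The pull side of that design could be sketched roughly as below (hostnames, paths, and the rsync transport are placeholders; the servers in the question are Windows, so treat this purely as an illustration of pulling from a few machines at a time):
#!/bin/bash
# Pull compressed logs from a list of servers, a few at a time, and unpack them.
MAX_PARALLEL=4
while read -r host; do
    (
        rsync -az "$host:/logs/outbox/" "incoming/$host/" && gunzip -f "incoming/$host"/*.gz
    ) &
    while [ "$(jobs -rp | wc -l)" -ge "$MAX_PARALLEL" ]; do
        wait -n    # wait for any one pull to finish (bash 4.3+)
    done
done < servers.txt
wait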
NetBIOS copies are not as fast as, say, FTP. The problem is that you don't want an FTP server on each server. If you can't process the log files locally on each server, another solution is to have all the servers upload the log files via FTP to a central location, which you can process from. For instance:
Set up an FTP server as a central collection point. Schedule tasks on each server to zip up the log files and FTP the archives to your central FTP server. You can write a program which automates the scheduling of the tasks remotely using a tool like schtasks.exe:
KB 814596: How to use schtasks.exe to Schedule Tasks in Windows Server 2003
You'll likely want to stagger the uploads back to the FTP server.
