atopsar extraction to text file with additional info - shell

As part of performance metrics collection, I'm trying to get the CPU, IO, memory, etc. usage of the system for a particular run using atop.
To achieve this, I start atop to generate the atop data file with the command below:
/usr/bin/atop -a -w /venki/atop_temp 2
Once the data file is generated, I extract the information I'm interested in. For example, to get the memory usage details I apply the command below:
atopsar -b 20:39:45 -e 20:42:45 -r /venki/atop_temp -S -x -a -m > /venki/atop_mem4
It produces the following output:
sdl00999 2.6.32.54-0.7.TDC.1.R.4-default #1 SMP 2012-04-19 16:07:40 +0200 x86_64 2016/11/15
-------------------------- analysis date: 2016/11/15 --------------------------
20:39:37 memtotal memfree buffers cached dirty slabmem swptotal swpfree _mem_
20:39:41 3700M 2386M 9M 353M 0M 121M 309M 302M
20:39:43 3700M 2385M 9M 353M 0M 121M 309M 302M
20:39:47 3700M 2385M 9M 353M 2M 121M 309M 302M
20:39:49 3700M 2385M 9M 353M 2M 121M 309M 302M
But I need an additional Date column (e.g. 2016/11/16) at the beginning of each row.
I need this because my test may run for multiple days (e.g. 3 days), so I need to know which date each timestamped row belongs to.
Can anyone help me with this?
Thanks in advance.

You can start with the following:
atopsar -b 20:39:45 -e 20:42:45 -r /venki/atop_temp -S -x -a -m | awk 'BEGIN {DATE_STAMP=""; } /analysis date: /{DATE_STAMP=$4;} /^[0-9]/ {print DATE_STAMP, $0;}' > /venki/atop_mem4
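The awk filter remembers the date from the most recent "analysis date:" line and prefixes it to every row that starts with a digit. Based on the sample above, the rows written to /venki/atop_mem4 should then look roughly like this:
2016/11/15 20:39:41 3700M 2386M 9M 353M 0M 121M 309M 302M
2016/11/15 20:39:43 3700M 2385M 9M 353M 0M 121M 309M 302M
For a capture spanning several days, each row is prefixed with the most recently seen analysis date.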

Limit Speed in Ubuntu based on traffic

I have come across a script, Wondershaper.
The script is terrific, but is there any way to make it smarter, so that it only kicks in after a certain amount of traffic has gone through?
Say a 1 TB daily limit is set; once 1 TB is hit, the script turns on automatically.
I have thought about setting up a cron job: at 12 am it clears wondershaper, and every 15 minutes it checks whether the server has crossed the 1 TB limit for the day and, if so, runs the limiter.
But I am not sure how to set up the second part: how can I make the limiter run only after 1 TB is crossed?
Remove Code
wondershaper -ca eth0
Limit Code
wondershaper -a eth0 -u 154000
I have made a custom script for this. As it is not possible to do it within the system itself, I had to get creative: I make an API call to the datacenter and run the script from a cron job.
I also used bashjson to parse the response. I have attached the script below.
date=$(date +%F)
url='API URL /metrics/datatraffic?from='
url1='T00:00:00Z&to='
url2='T23:59:59Z&aggregation=SUM'
final="$url$date$url1$date$url2"
wget --no-check-certificate -O output.txt \
  --method GET \
  --timeout=0 \
  --header 'X-Lsw-Auth: API AUTH' \
  "$final"
# remove '[]' from the response just to make things easier for bashjson to understand
sed 's/[][]//g' output.txt > test1.json
# read the traffic values into variables
down=$(/root/bashjson/bashjson.sh test1.json metrics DOWN_PUBLIC values value)
up=$(/root/bashjson/bashjson.sh test1.json metrics UP_PUBLIC values value)
# expand the scientific notation, then round to an integer, as bash arithmetic does not accept it
newdown=$(printf "%.14f" "$down")
newup=$(printf "%.14f" "$up")
upp=$(printf "%.0f\n" "$newup")
downn=$(printf "%.0f\n" "$newdown")
if (( upp > 800000000000 ))
then
    wondershaper -a eth0 -u 100000   # main command to limit upload
else
    echo uppworks
fi
if (( downn > 500000000000 ))
then
    wondershaper -a eth0 -d 100000
else
    echo downworks
fi
rm -f output.txt test1.json
echo "$upp"
echo "$downn"
You can always update it as per your preference.
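To wire this into the midnight-reset-plus-periodic-check idea from the question, a minimal crontab sketch could look like the following (the path /root/limit_check.sh is just a placeholder for wherever you save the script above; under cron you may need full paths to wondershaper as well):
# clear any existing limit at midnight
0 0 * * * wondershaper -ca eth0
# re-check the day's traffic every 15 minutes and apply the limit if needed
*/15 * * * * /root/limit_check.sh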

Run million of list in PBS with parallel tool

I have a huge job list (a few million entries) and want to run a Java-written tool on each entry to perform a feature comparison. The tool completes one calculation in
real 0m0.179s
user 0m0.005s
sys 0m0.000s
Running on 5 nodes (each with 72 CPUs) under the PBS Torque scheduler with GNU Parallel, the tool runs fine and produces results, but since I set 72 jobs per node it should run 72 x 5 jobs at a time, yet I can see only 25-35 jobs running!
Checking CPU utilization on each node also shows low utilization.
I want to run 72 x 5 jobs or more at a time and produce the results by utilizing all the available resources (72 x 5 CPUs).
As mentioned, I have ~200 million jobs to run, and I want to complete them faster (1-2 hours) by using/increasing the number of nodes/CPUs.
Current code, input and job state:
example.lst (it has ~300 million lines)
ZNF512-xxxx_2_N-THRA-xxtx_2_N
ZNF512-xxxx_2_N-THRA-xxtx_3_N
ZNF512-xxxx_2_N-THRA-xxtx_4_N
.......
cat job_script.sh
#!/bin/bash
#PBS -l nodes=5:ppn=72
#PBS -N job01
#PBS -j oe
#work dir
export WDIR=/shared/data/work_dir
cd $WDIR;
# use available 72 cpu in each node
export JOBS_PER_NODE=72
#gnu parallel command
parallelrun="parallel -j $JOBS_PER_NODE --slf $PBS_NODEFILE --wd $WDIR --joblog process.log --resume"
$parallelrun -a example.lst sh run_script.sh {}
cat run_script.sh
#!/bin/bash
# parallel command options
i=$1
data=/shared/TF_data
# create tmp dir and work in
TMP_DIR=/shared/data/work_dir/$i
mkdir -p $TMP_DIR
cd $TMP_DIR/
# get file name
mk=$(echo "$i" | cut -d- -f1-2)
nk=$(echo "$i" | cut -d- -f3-6)
#run a tool to compare the features of pair files
/shared/software/tool_v2.1/tool -s1 $data/inf_tf/$mk -s1cf $data/features/$mk-cf -s1ss $data/features/$mk-ss -s2 $data/inf_tf/$nk.pdb -s2cf $data/features/$nk-cf.pdb -s2ss $data/features/$nk-ss.pdb > $data/$i.out
# move output files
mv matrix.txt $data/glosa_tf/matrix/$mk"_"$nk.txt
mv ali_struct.pdb $data/glosa_tf/aligned/$nk"_"$mk.pdb
# move back and remove tmp dir
cd $TMP_DIR/../
rm -rf $TMP_DIR
exit 0
PBS submission
qsub job_script.sh
Logging in to one of the nodes: ssh ip-172-31-9-208
top - 09:28:03 up 15 min, 1 user, load average: 14.77, 13.44, 8.08
Tasks: 928 total, 1 running, 434 sleeping, 0 stopped, 166 zombie
Cpu(s): 0.1%us, 0.1%sy, 0.0%ni, 98.4%id, 1.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 193694612k total, 1811200k used, 191883412k free, 94680k buffers
Swap: 0k total, 0k used, 0k free, 707960k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15348 ec2-user 20 0 16028 2820 1820 R 0.3 0.0 0:00.10 top
15621 ec2-user 20 0 169m 7584 6684 S 0.3 0.0 0:00.01 ssh
15625 ec2-user 20 0 171m 7472 6552 S 0.3 0.0 0:00.01 ssh
15626 ec2-user 20 0 126m 3924 3492 S 0.3 0.0 0:00.01 perl
.....
top on all of the nodes shows a similar state, and results are produced with only ~26 jobs running at a time!
I have an aws-parallelcluster setup with 5 nodes (each with 72 CPUs), the Torque scheduler, and GNU Parallel 2018 (March 2018 release).
Update
Introducing a new function that takes input on stdin and running the script in parallel works great and utilizes all the CPUs on the local machine.
However, when it runs on remote machines it produces:
parallel: Error: test.lst is neither a file nor a block device
MCVE:
A simple script that just echoes the list gives the same error when run on remote machines, but works great on the local machine:
cat test.lst # contains list
DNMT3L-5yx2B_1_N-DNMT3L-5yx2B_2_N
DNMT3L-5yx2B_1_N-DNMT3L-6brrC_3_N
DNMT3L-5yx2B_1_N-DNMT3L-6f57B_2_N
DNMT3L-5yx2B_1_N-DNMT3L-6f57C_2_N
DNMT3L-5yx2B_1_N-DUX4-6e8cA_4_N
DNMT3L-5yx2B_1_N-E2F8-4yo2A_3_P
DNMT3L-5yx2B_1_N-E2F8-4yo2A_6_N
DNMT3L-5yx2B_1_N-EBF3-3n50A_2_N
DNMT3L-5yx2B_1_N-ELK4-1k6oA_3_N
DNMT3L-5yx2B_1_N-EPAS1-1p97A_1_N
cat test_job.sh # GNU parallel submission script
#!/bin/bash
#PBS -l nodes=1:ppn=72
#PBS -N test
#PBS -k oe
# introduce new function and Run from ~/
dowork() {
parallel sh test_work.sh {}
}
export -f dowork
parallel -a test.lst --env dowork --pipepart --slf $PBS_NODEFILE --block -10 dowork
cat test_work.sh # run/work script
#!/bin/bash
i=$1
data=$(pwd)
#create temporary folder in current dir
TMP_DIR=$data/$i
mkdir -p $TMP_DIR
cd $TMP_DIR/
# split list
mk=$(echo "$i" | cut -d- -f1-2)
nk=$(echo "$i" | cut -d- -f3-6)
# echo list and save in echo_test.out
echo $mk, $nk >> $data/echo_test.out
cd $TMP_DIR/../
rm -rf $TMP_DIR
From your timing:
real 0m0.179s
user 0m0.005s
sys 0m0.000s
it seems the tool uses very little CPU power. When GNU Parallel runs local jobs it has an overhead of 10 ms of CPU time per job. Your jobs take 179 ms of wall-clock time and 5 ms of CPU time, so GNU Parallel's overhead accounts for quite a bit of the time spent.
The overhead is much worse when running jobs remotely. Here we are talking 10 ms + running an ssh command. This can easily be in the order of 100 ms.
So how can we minimize the number of ssh commands, and how can we spread the overhead over multiple cores?
First let us make a function that can take input on stdin and run the script - one job per CPU thread in parallel:
dowork() {
[...set variables here; that becomes particularly important when we run remotely...]
parallel sh run_script.sh {}
}
export -f dowork
Test that this actually works by running:
head -n 1000 example.lst | dowork
Then let us look at running jobs locally. This can be done similar to described here: https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Running-more-than-250-jobs-workaround
parallel -a example.lst --pipepart --block -10 dowork
This will split example.lst into 10 blocks per CPU thread. So on a machine with 72 CPU threads this will make 720 blocks. It will then start 72 doworks, and when one is done it will get another of the 720 blocks. The reason I choose 10 instead of 1 is that if one of the jobs "gets stuck" for a while, you are unlikely to notice it.
This should make sure 100% of the CPUs on the local machine are busy.
If that works, we need to distribute this work to remote machines:
parallel -j1 -a example.lst --env dowork --pipepart --slf $PBS_NODEFILE --block -10 dowork
This should in total start 10 sshs per CPU thread (i.e. 5*72*10) - namely one for each block - with 1 running per server listed in $PBS_NODEFILE in parallel.
Unfortunately this means that --joblog and --resume will not work. There is currently no way to make that work, but if it is valuable to you, contact me via parallel@gnu.org.
I am not sure what tool does. But if the copying takes most of the time and if tool only reads the files, then you might just be able to symlink the files into $TMP_DIR instead of copying.
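As an illustration only (the run_script.sh shown above passes absolute paths straight to the tool, so this is a hypothetical variant): if your real script copies input files into $TMP_DIR first, replacing the copies with symlinks might look like:
# hypothetical: link the inputs into the temp dir instead of copying them
ln -s "$data/inf_tf/$mk" "$TMP_DIR/"
ln -s "$data/features/$mk-cf" "$TMP_DIR/"
ln -s "$data/features/$mk-ss" "$TMP_DIR/"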
A good indication of whether you can do it faster is to look at top of the 5 machines in the cluster. If they are all using all cores at >90% then you cannot expect to get it faster.

Create a new batch .txt file with specified content for every current file in a directory

I have a huge list of files on a cluster and I need to create a .txt file for each "pair". Each pair is specified by filename_R1.fq.gz and filename_R2.fq.gz. For each pair of R1 and R2 files I need to create a text file that contains:
#!/bin/bash
#$ -N align.$i
#$ -j y
#$ -l h_rt=4:00:00
#$ -pe omp 12
bowtie2 \
--phred33 \
--fast-local \
-X 1000 \
-p 12 \
-x /usr3/graduate/dhc285/reference_files/21G6 \
-1 $i -2 ${i%_R1.fq.gz}_R2.fq.gz \
| samtools view -bS - > ${i%_R1.fq.gz}.bam
Here $i refers to my filenames. I would also like each file to be named ${i%_R1.fq.gz}.txt. Thanks!
Using GNU Parallel it looks like this:
sge_jobfile() {
i="$1"
cat <<EOF > ${i%_R1.fq.gz}.txt
#!/bin/bash
#$ -N align.$i
#$ -j y
#$ -l h_rt=4:00:00
#$ -pe omp 12
bowtie2 \\
--phred33 \\
--fast-local \\
-X 1000 \\
-p 12 \\
-x /usr3/graduate/dhc285/reference_files/21G6 \\
-1 $i -2 ${i%_R1.fq.gz}_R2.fq.gz \\
| samtools view -bS - > ${i%_R1.fq.gz}.bam
EOF
}
export -f sge_jobfile
parallel sge_jobfile ::: *_R1.fq.gz
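If you then want to submit all of the generated job files (assuming an SGE-style cluster where qsub accepts a script path), something like this should work:
parallel qsub {} ::: *.txt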
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU.
GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel

Speed up rsync with Simultaneous/Concurrent File Transfers?

We need to transfer 15TB of data from one server to another as fast as we can. We're currently using rsync but we're only getting speeds of around 150Mb/s, when our network is capable of 900+Mb/s (tested with iperf). I've done tests of the disks, network, etc and figured it's just that rsync is only transferring one file at a time which is causing the slowdown.
I found a script to run a different rsync for each folder in a directory tree (allowing you to limit to x number), but I can't get it working, it still just runs one rsync at a time.
I found the script here (copied below).
Our directory tree is like this:
/main
- /files
- /1
- 343
- 123.wav
- 76.wav
- 772
- 122.wav
- 55
- 555.wav
- 324.wav
- 1209.wav
- 43
- 999.wav
- 111.wav
- 222.wav
- /2
- 346
- 9993.wav
- 4242
- 827.wav
- /3
- 2545
- 76.wav
- 199.wav
- 183.wav
- 23
- 33.wav
- 876.wav
- 4256
- 998.wav
- 1665.wav
- 332.wav
- 112.wav
- 5584.wav
So what I'd like to happen is to create an rsync for each of the directories in /main/files, up to a maximum of, say, 5 at a time. So in this case, 3 rsyncs would run, for /main/files/1, /main/files/2 and /main/files/3.
I tried with it like this, but it just runs 1 rsync at a time for the /main/files/2 folder:
#!/bin/bash
# Define source, target, maxdepth and cd to source
source="/main/files"
target="/main/filesTest"
depth=1
cd "${source}"
# Set the maximum number of concurrent rsync threads
maxthreads=5
# How long to wait before checking the number of rsync threads again
sleeptime=5
# Find all folders in the source directory within the maxdepth level
find . -maxdepth ${depth} -type d | while read dir
do
# Make sure to ignore the parent folder
if [ `echo "${dir}" | awk -F'/' '{print NF}'` -gt ${depth} ]
then
# Strip leading dot slash
subfolder=$(echo "${dir}" | sed 's#^\./##g')
if [ ! -d "${target}/${subfolder}" ]
then
# Create destination folder and set ownership and permissions to match source
mkdir -p "${target}/${subfolder}"
chown --reference="${source}/${subfolder}" "${target}/${subfolder}"
chmod --reference="${source}/${subfolder}" "${target}/${subfolder}"
fi
# Make sure the number of rsync threads running is below the threshold
while [ `ps -ef | grep -c [r]sync` -gt ${maxthreads} ]
do
echo "Sleeping ${sleeptime} seconds"
sleep ${sleeptime}
done
# Run rsync in background for the current subfolder and move on to the next one
nohup rsync -a "${source}/${subfolder}/" "${target}/${subfolder}/" </dev/null >/dev/null 2>&1 &
fi
done
# Find all files above the maxdepth level and rsync them as well
find . -maxdepth ${depth} -type f -print0 | rsync -a --files-from=- --from0 ./ "${target}/"
Updated answer (Jan 2020)
xargs is now the recommended tool to achieve parallel execution. It's pre-installed almost everywhere. For running multiple rsync tasks the command would be:
ls /srv/mail | xargs -n1 -P4 -I% rsync -Pa % myserver.com:/srv/mail/
This will list all folders in /srv/mail, pipe them to xargs, which will read them one by one and run 4 rsync processes at a time. The % character is replaced by the input argument in each command call.
Original answer using parallel:
ls /srv/mail | parallel -v -j8 rsync -raz --progress {} myserver.com:/srv/mail/{}
Have you tried using rclone.org?
With rclone you could do something like
rclone copy "${source}/${subfolder}/" "${target}/${subfolder}/" --progress --multi-thread-streams=N
where --multi-thread-streams=N represents the number of threads you wish to spawn.
rsync transfers files as fast as it can over the network. For example, try using it to copy one large file that doesn't exist at all on the destination. That speed is the maximum speed rsync can transfer data. Compare it with the speed of scp (for example). rsync is even slower at raw transfer when the destination file exists, because both sides have to have a two-way chat about what parts of the file are changed, but pays for itself by identifying data that doesn't need to be transferred.
A simpler way to run rsync in parallel would be to use parallel. The command below would run up to 5 rsyncs in parallel, each one copying one directory. Be aware that the bottleneck might not be your network, but the speed of your CPUs and disks, and running things in parallel just makes them all slower, not faster.
run_rsync() {
# e.g. copies /main/files/blah to /main/filesTest/blah
rsync -av "$1" "/main/filesTest/${1#/main/files/}"
}
export -f run_rsync
parallel -j5 run_rsync ::: /main/files/*
You can use xargs which supports running many processes at a time. For your case it will be:
ls -1 /main/files | xargs -I {} -P 5 -n 1 rsync -avh /main/files/{} /main/filesTest/
There are a number of alternative tools and approaches for doing this listed around the web. For example:
The NCSA Blog has a description of using xargs and find to parallelize rsync without having to install any new software for most *nix systems.
And parsync provides a feature rich Perl wrapper for parallel rsync.
I've developed a python package called: parallel_sync
https://pythonhosted.org/parallel_sync/pages/examples.html
Here is sample code showing how to use it:
from parallel_sync import rsync
creds = {'user': 'myusername', 'key':'~/.ssh/id_rsa', 'host':'192.168.16.31'}
rsync.upload('/tmp/local_dir', '/tmp/remote_dir', creds=creds)
parallelism by default is 10; you can increase it:
from parallel_sync import rsync
creds = {'user': 'myusername', 'key':'~/.ssh/id_rsa', 'host':'192.168.16.31'}
rsync.upload('/tmp/local_dir', '/tmp/remote_dir', creds=creds, parallelism=20)
However, note that ssh typically has MaxSessions set to 10 by default, so to increase parallelism beyond 10 you'll have to modify your ssh settings.
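For example, a sketch of what that change could look like on the remote host (assuming you can edit its sshd configuration and restart the service):
# /etc/ssh/sshd_config on the remote host
MaxSessions 20
followed by restarting sshd for the setting to take effect.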
The simplest I've found is using background jobs in the shell:
for d in /main/files/*; do
rsync -a "$d" remote:/main/files/ &
done
Beware that it doesn't limit the number of jobs! If you're network-bound this is not really a problem, but if you're waiting for spinning rust it will be thrashing the disk.
You could add
while [ $(jobs | wc -l | xargs) -gt 10 ]; do sleep 1; done
inside the loop for a primitive form of job control.
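Putting the two pieces together, a minimal sketch of the loop with that primitive job limit might be:
for d in /main/files/*; do
    # block while 10 or more background rsyncs are still running
    while [ "$(jobs -r | wc -l)" -ge 10 ]; do sleep 1; done
    rsync -a "$d" remote:/main/files/ &
done
wait   # let the remaining transfers finish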
3 tricks for speeding up rsync on local net.
1. Copying from/to local network: don't use ssh!
If you're copying from one local server to another, there is no need to encrypt the data during transfer!
By default, rsync uses ssh to transfer data over the network. To avoid this, you have to create an rsync server on the target host. You can run the daemon on demand with something like:
rsync --daemon --no-detach --config filename.conf
where minimal configuration file could look like: (see man rsyncd.conf)
filename.conf
port = 12345
[data]
path = /some/path
use chroot = false
Then
rsync -ax rsync://remotehost:12345/data/. /path/to/target/.
rsync -ax /path/to/source/. rsync://remotehost:12345/data/.
2. Using zstandard zstd for high speed compression
Zstandard can be up to 8x faster than the common gzip, so using this newer compression algorithm will significantly improve your transfer!
rsync -axz --zc=zstd rsync://remotehost:12345/data/. /path/to/target/.
rsync -axz --zc=zstd /path/to/source/. rsync://remotehost:12345/data/.
3. Multiplexing rsync to reduce inactivity due to browse time
This kind of optimisation is about disk access and filesystem structure; it has nothing to do with the number of CPUs, so it can improve the transfer even if your host has a single-core CPU.
As the goal is to keep the bandwidth saturated with data while other tasks browse the filesystem, the most suitable number of simultaneous processes depends on the number of small files present.
Here is a sample bash script using wait -n -p PID:
#!/bin/bash
maxProc=3
source=''
destination='rsync://remotehost:12345/data/'
declare -ai start elap results order
wait4oneTask() {
local _i
wait -np epid
results[epid]=$?
elap[epid]=" ${EPOCHREALTIME/.} - ${start[epid]} "
unset "running[$epid]"
while [ -v elap[${order[0]}] ];do
_i=${order[0]}
printf " - %(%a %d %T)T.%06.0f %-36s %4d %12d\n" "${start[_i]:0:-6}" \
"${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "${elap[_i]}"
order=(${order[@]:1})
done
}
printf " %-22s %-36s %4s %12s\n" Started Path Rslt 'microseconds'
for path; do
rsync -axz --zc zstd "$source$path/." "$destination$path/." &
lpid=$!
paths[lpid]="$path"
start[lpid]=${EPOCHREALTIME/.}
running[lpid]=''
order+=($lpid)
((${#running[@]}>=maxProc)) && wait4oneTask
done
while ((${#running[@]})); do
wait4oneTask
done
Output could look like:
myRsyncP.sh files/*/*
Started Path Rslt microseconds
- Fri 03 09:20:44.673637 files/1/343 0 1186903
- Fri 03 09:20:44.673914 files/1/43 0 2276767
- Fri 03 09:20:44.674147 files/1/55 0 2172830
- Fri 03 09:20:45.861041 files/1/772 0 1279463
- Fri 03 09:20:46.847241 files/2/346 0 2363101
- Fri 03 09:20:46.951192 files/2/4242 0 2180573
- Fri 03 09:20:47.140953 files/3/23 0 1789049
- Fri 03 09:20:48.930306 files/3/2545 0 3259273
- Fri 03 09:20:49.132076 files/3/4256 0 2263019
Quick check:
printf "%'d\n" $(( 49132076 + 2263019 - 44673637)) \
$((1186903+2276767+2172830+1279463+2363101+2180573+1789049+3259273+2263019))
6’721’458
18’770’978
So 6.72 seconds of wall-clock time elapsed to process 18.77 seconds of cumulative rsync time with up to three subprocesses.
Note: you could use musec2str to improve the output, by replacing the first long printf line with:
musec2str -v elapsed "${elap[i]}"
printf " - %(%a %d %T)T.%06.0f %-36s %4d %12s\n" "${start[i]:0:-6}" \
"${start[i]: -6}" "${paths[i]}" "${results[i]}" "$elapsed"
myRsyncP.sh files/*/*
Started Path Rslt Elapsed
- Fri 03 09:27:33.463009 files/1/343 0 18.249400"
- Fri 03 09:27:33.463264 files/1/43 0 18.153972"
- Fri 03 09:27:33.463502 files/1/55 93 10.104106"
- Fri 03 09:27:43.567882 files/1/772 122 14.748798"
- Fri 03 09:27:51.617515 files/2/346 0 19.286811"
- Fri 03 09:27:51.715848 files/2/4242 0 3.292849"
- Fri 03 09:27:55.008983 files/3/23 0 5.325229"
- Fri 03 09:27:58.317356 files/3/2545 0 10.141078"
- Fri 03 09:28:00.334848 files/3/4256 0 15.306145"
Going further: you could add an overall stats line with a few edits to this script:
#!/bin/bash
maxProc=3 source='' destination='rsync://remotehost:12345/data/'
. musec2str.bash # See https://stackoverflow.com/a/72316403/1765658
declare -ai start elap results order
declare -i sumElap totElap
wait4oneTask() {
wait -np epid
results[epid]=$?
local -i _i crtelap=" ${EPOCHREALTIME/.} - ${start[epid]} "
elap[epid]=crtelap sumElap+=crtelap
unset "running[$epid]"
while [ -v elap[${order[0]}] ];do # Print status lines in command order.
_i=${order[0]}
musec2str -v helap ${elap[_i]}
printf " - %(%a %d %T)T.%06.f %-36s %4d %12s\n" "${start[_i]:0:-6}" \
"${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "${helap}"
order=(${order[@]:1})
done
}
printf " %-22s %-36s %4s %12s\n" Started Path Rslt 'microseconds'
for path;do
rsync -axz --zc zstd "$source$path/." "$destination$path/." &
lpid=$! paths[lpid]="$path" start[lpid]=${EPOCHREALTIME/.}
running[lpid]='' order+=($lpid)
((${#running[@]}>=maxProc)) &&
wait4oneTask
done
while ((${#running[@]})) ;do
wait4oneTask
done
totElap=${EPOCHREALTIME/.}
for i in ${!start[@]};do sortstart[${start[i]}]=$i;done
sortstartstr=${!sortstart[*]}
fstarted=${sortstartstr%% *}
totElap+=-fstarted
musec2str -v hTotElap $totElap
musec2str -v hSumElap $sumElap
printf " = %(%a %d %T)T.%06.0f %-41s %12s\n" "${fstarted:0:-6}" \
"${fstarted: -6}" "Real: $hTotElap, Total:" "$hSumElap"
Could produce:
$ ./parallelRsync Data\ dirs-{1..4}/Sub\ dir{A..D}
Started Path Rslt microseconds
- Sat 10 16:57:46.188195 Data dirs-1/Sub dirA 0 1.69131"
- Sat 10 16:57:46.188337 Data dirs-1/Sub dirB 116 2.256086"
- Sat 10 16:57:46.188473 Data dirs-1/Sub dirC 0 1.1722"
- Sat 10 16:57:47.361047 Data dirs-1/Sub dirD 0 2.222638"
- Sat 10 16:57:47.880674 Data dirs-2/Sub dirA 0 2.193557"
- Sat 10 16:57:48.446484 Data dirs-2/Sub dirB 0 1.615003"
- Sat 10 16:57:49.584670 Data dirs-2/Sub dirC 0 2.201602"
- Sat 10 16:57:50.061832 Data dirs-2/Sub dirD 0 2.176913"
- Sat 10 16:57:50.075178 Data dirs-3/Sub dirA 0 1.952396"
- Sat 10 16:57:51.786967 Data dirs-3/Sub dirB 0 1.123764"
- Sat 10 16:57:52.028138 Data dirs-3/Sub dirC 0 2.531878"
- Sat 10 16:57:52.239866 Data dirs-3/Sub dirD 0 2.297417"
- Sat 10 16:57:52.911924 Data dirs-4/Sub dirA 14 1.290787"
- Sat 10 16:57:54.203172 Data dirs-4/Sub dirB 0 2.236149"
- Sat 10 16:57:54.537597 Data dirs-4/Sub dirC 14 2.125793"
- Sat 10 16:57:54.561454 Data dirs-4/Sub dirD 0 2.49632"
= Sat 10 16:57:46.188195 Real: 10.870221", Total: 31.583813"
Fake rsync for testing this script
Note: For testing this, I've used a fake rsync:
## Fake rsync wait 1.0 - 2.99 seconds and return 0-255 ~ 1x/10
rsync() { sleep $((RANDOM%2+1)).$RANDOM;exit $(( RANDOM%10==3?RANDOM%128:0));}
export -f rsync
The shortest version I found is to use the --cat option of parallel like below. This version avoids using xargs, only relying on features of parallel:
cat files.txt | \
parallel -n 500 --lb --pipe --cat rsync --files-from={} user@remote:/dir /dir -avPi
#### Arg explainer
# -n 500 :: split input into chunks of 500 entries
#
# --cat :: create a tmp file referenced by {} containing the 500
# entry content for each process
#
# user@remote:/dir :: the root relative to which entries in files.txt are considered
#
# /dir :: local root relative to which files are copied
Sample content from files.txt:
/dir/file-1
/dir/subdir/file-2
....
Note that this doesn't use -j 50 for job count, that didn't work on my end here. Instead I've used -n 500 for record count per job, calculated as a reasonable number given the total number of records.
I've found UDR/UDT to be an amazing tool. TL;DR: it's a UDT wrapper for rsync, utilizing multiple UDP connections rather than a single TCP connection.
References: https://udt.sourceforge.io/ & https://github.com/jaystevens/UDR#udr
If you use any RHEL distros, they've pre-compiled it for you... http://hgdownload.soe.ucsc.edu/admin/udr
The ONLY downside I've encountered is that you can't specify a different SSH port, so your remote server must use 22.
Anyway, after installing the rpm, it's literally as simple as:
udr rsync -aP user@IpOrFqdn:/source/files/* /dest/folder/
and your transfer speeds will increase drastically in most cases; depending on the server, I've easily seen a 10x increase in transfer speed.
Side note: if you choose to gzip everything first, then make sure to use the --rsyncable flag so that rsync only updates what has changed.
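For instance, assuming your gzip build supports the flag (GNU gzip 1.7+ and Debian-patched builds do), and using a placeholder filename:
gzip --rsyncable big_archive.tar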
Using parallel rsyncs on a regular disk would only cause them to compete for the I/O, turning what should be a sequential read into an inefficient random read. You could instead tar the directory into a stream over an ssh pull from the destination server, then pipe the stream to tar for extraction.
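A minimal sketch of that idea, run from the destination server (host name and paths are placeholders):
# pull /main/files from the source host as one sequential tar stream and unpack it locally
ssh user@sourcehost 'tar -C /main/files -cf - .' | tar -C /main/files -xf -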

Parsing rsync output

I am trying to parse the progress bar output of the rsync command. I want to use the percentage data from the rsync progress bar and display it in a dialog gauge utility.
The rsync progress bar data looks like:
32768 0% 0.00kB/s 0:00:00
330563584 8% 315.22MB/s 0:00:11
So far, I have tried sed to extract the data:
rsync -a --progress test.tar.gz /media/sdb1 \
| sed -u -E 's/(^|.*[^0-9])([0-9]{1,3})%.*/\2/p'
I am only able to obtain the final value, 100; I am not able to obtain the intermediate values.
This answer [1] finally led me to the main issue of why nothing worked the way I expected.
With that, the following snippet gives you all the output available for parsing: the "final" lines (e.g. the 100% line, but also the "xxx files to consider" line) as well as the intermediate progress updates, which are separated only by ^M (carriage returns).
/usr/bin/rsync --info=progress2 --no-i-r -avr /foo /boar | \
gawk '1;{fflush()}' RS='\r|\n' | \
while read -r i
do
echo Got: -$i-
done
Note the remarks about the separation of line endings in the original answer.
[1] Realtime removal of carriage return in shell
I found a possible solution to this problem. It could be done as follows:
rsync -a --progress test.tar.gz /media/sdb1 |
unbuffer -p grep -o "[0-9]*%" |
tr -d '%'
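To actually drive the gauge from those values, a minimal sketch using the dialog utility (assuming dialog and unbuffer are installed) could be:
rsync -a --progress test.tar.gz /media/sdb1 |
  unbuffer -p grep -o "[0-9]*%" |
  tr -d '%' |
  dialog --gauge "Copying test.tar.gz..." 10 70 0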
