I have come across a script, Wondershaper.
The script is terrific, but is there any way to make it smarter?
For example, can it turn on only after a certain amount of traffic has gone through?
Say a limit of 1TB is set per day; once 1TB is hit, the script turns on automatically?
I have thought about setting up a cron job:
at 12am it clears the wondershaper limit, and at 15-minute intervals it checks whether the server has crossed the 1TB limit for the day; if it has, it runs the limiter.
But I am not sure how to set up the second part. How can I set things up so the limiter runs only after 1TB is crossed?
Remove code:
wondershaper -ca eth0
Limit code:
wondershaper -a eth0 -u 154000
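A minimal crontab sketch of that idea (assuming the check-and-limit script below is saved as /root/check_traffic.sh, a placeholder path; use full paths, since cron's PATH is limited):
# clear any existing limit at midnight
0 0 * * * /usr/sbin/wondershaper -ca eth0
# every 15 minutes, check today's traffic and apply the limit if over quota
*/15 * * * * /root/check_traffic.sh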
I have made a custom script for this. As it is not possible to do it within the system itself, I had to get creative and make an API call to the datacenter, then run it from a cron job.
I also used bashjson to parse the response. I have attached the script below.
date=$(date +%F)
url='API URL /metrics/datatraffic?from='
url1='T00:00:00Z&to='
url2='T23:59:59Z&aggregation=SUM'
final="$url$date$url1$date$url2"
wget --no-check-certificate -O output.txt \
--method GET \
--timeout=0 \
--header 'X-Lsw-Auth: API AUTH' \
$final
sed 's/[][]//g' output.txt >> test1.json # remove '[]' from the output just to make things easier for bashjson to understand
down=$(/root/bashjson/bashjson.sh test1.json metrics DOWN_PUBLIC values value) # extract the download figure into a variable
up=$(/root/bashjson/bashjson.sh test1.json metrics UP_PUBLIC values value)
newdown=$(printf "%.14f" $down)
newup=$(printf "%.14f" $up)
upp=$(printf "%.0f\n" "$newup") # remove scientific notation, as bash arithmetic does not like it
downn=$(printf "%.0f\n" "$newdown")
if (( upp > 800000000000 ))
then
wondershaper -a eth0 -u 100000 # main command to limit upload
else
echo uppworks
fi
if (( downn > 500000000000 ))
then
wondershaper -a eth0 -d 100000
else
echo downworks
fi
rm -rf output.txt test1.json
echo $upp
echo $downn
You can always update it as per your preference.
On Ubuntu I am able to use aplay to play sound generated live from a script by piping the output of my script to aplay's stdin:
./generate_sound.py | aplay -r 2000 -c2 -f MU_LAW
cat sample.wav | aplay
Is there a way to do the same from the terminal in OSX? I think afplay doesn't support this...
Maybe someone knows another OSX command line sound player that would do the trick?
I had high hopes for redirection/piping, but afplay /dev/stdin <<< $(generate_sound.py) failed for all the formats I tried. Sadly afplay doesn't let you specify the format, so it tries to sniff it instead, which probably involves seeking, and that doesn't work with pipes.
I think you'd better find another command-line player. sox seems like a good candidate, and it's installable via Homebrew (brew install sox). You can pipe data to it like so:
cat whatever.raw | play -t raw -e floating-point -b 32 -c 2 -r 44100 -
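For the live-generation use case from the question, a sketch of the equivalent pipe (assuming generate_sound.py emits the same 8-bit mu-law, 2-channel, 2000 Hz raw stream the aplay invocation above expects):
./generate_sound.py | play -t raw -e mu-law -b 8 -c 2 -r 2000 -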
To listen to an FM station on a Mac:
rtl_fm -f 95.3e6 -M wbfm -s 200000 -r 48000 - | aplay -r 48k -f S16_LE
To record for 10s
export AUDIOSAMPLERATE=48000
export SAMPLERATE=200000
export FREQ="127.2m"
rtl_fm -f $FREQ -M am -s $SAMPLERATE -r $AUDIOSAMPLERATE | sox -r $AUDIOSAMPLERATE -t raw -e s -b 16 -c 1 -V1 - FILENAME.wav&
sleep 10
killall rtl_fm
On my Fedora machine I sometimes need to find out certain components of the kernel name, e.g.
VERSION=3.18.9-200.fc21
VERSION_ARCH=3.18.9-200.fc21.x86_64
SHORT_VERSION=3.18
DIST_VERSION=fc21
EXTRAVERSION=-200.fc21.x86_64
I know uname -a/-r/-m, but these do not give me all the components I need.
Of course I can just disassemble uname -r e.g.
KERNEL_VERSION_ARCH=$(uname -r)
KERNEL_VERSION=$(uname -r | cut -d '.' -f 1-4)
KERNEL_SHORT_VERSION=$(uname -r | cut -d '.' -f 1-2)
KERNEL_DIST_VERSION=$(uname -r | cut -d '.' -f 4)
EXTRAVERSION="-$(uname -r | cut -d '-' -f 2)"
But this seems very cumbersome and not future-proof to me.
Question: is there an elegant way (i.e. more readable and distribution aware) to get all kernel version/name components I need?
Something like this would be nice:
kernel-ver -f "%M.%m.%p-%e.%a"
3.19.4-200.fc21.x86_64
kernel-ver -f "%M.%m"
3.19
kernel-ver -f "%d"
fc21
Of course, the uname -r part would need a bit of sed/awk/grep magic, but there are some other options you can try:
cat /etc/os-release
cat /etc/lsb-release
Since it's Fedora, you can try: cat /etc/fedora-release
lsb_release -a is also worth a try.
cat /proc/version, but that gives nearly the same output as uname -a
In the files /etc/*-release the format is already VARIABLE=value, so you could source the file directly and access the variables later:
$ source /etc/os-release
$ echo $ID
fedora
To sum up, a command that combines the above ideas and should work on nearly every system:
cat /etc/*_ver* /etc/*-rel* 2>/dev/null
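If you'd rather stay with uname -r but avoid the chain of cut calls, bash parameter expansion can pull the pieces apart in one place. This is only a sketch and assumes a Fedora-style release string of the form version-release.dist.arch (e.g. 3.18.9-200.fc21.x86_64); other distributions lay the string out differently:
r=$(uname -r)                                               # 3.18.9-200.fc21.x86_64
VERSION_ARCH=$r
VERSION=${r%.*}                                             # strip arch -> 3.18.9-200.fc21
SHORT_VERSION=${r%%-*}; SHORT_VERSION=${SHORT_VERSION%.*}   # -> 3.18
DIST_VERSION=${VERSION##*.}                                 # last dotted field -> fc21
EXTRAVERSION=-${r#*-}                                       # after the first '-' -> -200.fc21.x86_64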
We need to transfer 15TB of data from one server to another as fast as we can. We're currently using rsync but we're only getting speeds of around 150Mb/s, when our network is capable of 900+Mb/s (tested with iperf). I've done tests of the disks, network, etc and figured it's just that rsync is only transferring one file at a time which is causing the slowdown.
I found a script to run a different rsync for each folder in a directory tree (allowing you to limit the number to x), but I can't get it working; it still just runs one rsync at a time.
I found the script here (copied below).
Our directory tree is like this:
/main
- /files
- /1
- 343
- 123.wav
- 76.wav
- 772
- 122.wav
- 55
- 555.wav
- 324.wav
- 1209.wav
- 43
- 999.wav
- 111.wav
- 222.wav
- /2
- 346
- 9993.wav
- 4242
- 827.wav
- /3
- 2545
- 76.wav
- 199.wav
- 183.wav
- 23
- 33.wav
- 876.wav
- 4256
- 998.wav
- 1665.wav
- 332.wav
- 112.wav
- 5584.wav
So what I'd like to happen is to create an rsync for each of the directories in /main/files, up to a maximum of, say, 5 at a time. So in this case, 3 rsyncs would run, for /main/files/1, /main/files/2 and /main/files/3.
I tried with it like this, but it just runs 1 rsync at a time for the /main/files/2 folder:
#!/bin/bash
# Define source, target, maxdepth and cd to source
source="/main/files"
target="/main/filesTest"
depth=1
cd "${source}"
# Set the maximum number of concurrent rsync threads
maxthreads=5
# How long to wait before checking the number of rsync threads again
sleeptime=5
# Find all folders in the source directory within the maxdepth level
find . -maxdepth ${depth} -type d | while read dir
do
# Make sure to ignore the parent folder
if [ `echo "${dir}" | awk -F'/' '{print NF}'` -gt ${depth} ]
then
# Strip leading dot slash
subfolder=$(echo "${dir}" | sed 's#^\./##g')
if [ ! -d "${target}/${subfolder}" ]
then
# Create destination folder and set ownership and permissions to match source
mkdir -p "${target}/${subfolder}"
chown --reference="${source}/${subfolder}" "${target}/${subfolder}"
chmod --reference="${source}/${subfolder}" "${target}/${subfolder}"
fi
# Make sure the number of rsync threads running is below the threshold
while [ `ps -ef | grep -c [r]sync` -gt ${maxthreads} ]
do
echo "Sleeping ${sleeptime} seconds"
sleep ${sleeptime}
done
# Run rsync in background for the current subfolder and move one to the next one
nohup rsync -a "${source}/${subfolder}/" "${target}/${subfolder}/" </dev/null >/dev/null 2>&1 &
fi
done
# Find all files above the maxdepth level and rsync them as well
find . -maxdepth ${depth} -type f -print0 | rsync -a --files-from=- --from0 ./ "${target}/"
Updated answer (Jan 2020)
xargs is now the recommended tool to achieve parallel execution. It's pre-installed almost everywhere. For running multiple rsync tasks the command would be:
ls /srv/mail | xargs -n1 -P4 -I% rsync -Pa % myserver.com:/srv/mail/
This will list all folders in /srv/mail and pipe them to xargs, which reads them one by one and runs 4 rsync processes at a time. The % char is replaced by the input argument for each command call.
Original answer using parallel:
ls /srv/mail | parallel -v -j8 rsync -raz --progress {} myserver.com:/srv/mail/{}
Have you tried using rclone.org?
With rclone you could do something like
rclone copy "${source}/${subfolder}/" "${target}/${subfolder}/" --progress --multi-thread-streams=N
where --multi-thread-streams=N represents the number of threads you wish to spawn.
rsync transfers files as fast as it can over the network. For example, try using it to copy one large file that doesn't exist at all on the destination. That speed is the maximum speed rsync can transfer data. Compare it with the speed of scp (for example). rsync is even slower at raw transfer when the destination file exists, because both sides have to have a two-way chat about what parts of the file are changed, but pays for itself by identifying data that doesn't need to be transferred.
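As a rough sketch of that comparison (hypothetical file and host names; pick any large file that does not yet exist on the destination):
rsync -a --progress /main/files/1/55/555.wav remote:/tmp/bench-rsync.wav
scp /main/files/1/55/555.wav remote:/tmp/bench-scp.wav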
A simpler way to run rsync in parallel would be to use parallel. The command below would run up to 5 rsyncs in parallel, each one copying one directory. Be aware that the bottleneck might not be your network, but the speed of your CPUs and disks, and running things in parallel just makes them all slower, not faster.
run_rsync() {
# e.g. copies /main/files/blah to /main/filesTest/blah
rsync -av "$1" "/main/filesTest/${1#/main/files/}"
}
export -f run_rsync
parallel -j5 run_rsync ::: /main/files/*
You can use xargs which supports running many processes at a time. For your case it will be:
ls -1 /main/files | xargs -I {} -P 5 -n 1 rsync -avh /main/files/{} /main/filesTest/
There are a number of alternative tools and approaches for doing this listed around the web. For example:
The NCSA Blog has a description of using xargs and find to parallelize rsync without having to install any new software for most *nix systems.
And parsync provides a feature rich Perl wrapper for parallel rsync.
I've developed a Python package called parallel_sync:
https://pythonhosted.org/parallel_sync/pages/examples.html
Here is some sample code showing how to use it:
from parallel_sync import rsync
creds = {'user': 'myusername', 'key':'~/.ssh/id_rsa', 'host':'192.168.16.31'}
rsync.upload('/tmp/local_dir', '/tmp/remote_dir', creds=creds)
parallelism by default is 10; you can increase it:
from parallel_sync import rsync
creds = {'user': 'myusername', 'key':'~/.ssh/id_rsa', 'host':'192.168.16.31'}
rsync.upload('/tmp/local_dir', '/tmp/remote_dir', creds=creds, parallelism=20)
However, note that ssh typically has MaxSessions set to 10 by default, so to go beyond 10 parallel sessions you'll have to modify your ssh server settings.
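For example, a sketch of the change in the remote host's /etc/ssh/sshd_config (reload sshd afterwards):
MaxSessions 20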
The simplest I've found is using background jobs in the shell:
for d in /main/files/*; do
rsync -a "$d" remote:/main/files/ &
done
Beware that it doesn't limit the number of jobs! If you're network-bound this is not really a problem, but if you're waiting for spinning rust it will thrash the disk.
You could add
while [ $(jobs | wc -l | xargs) -gt 10 ]; do sleep 1; done
inside the loop for a primitive form of job control.
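Putting the two together, a sketch of the loop with that limit folded in (same placeholder remote path as above):
for d in /main/files/*; do
    while [ $(jobs | wc -l | xargs) -gt 10 ]; do sleep 1; done
    rsync -a "$d" remote:/main/files/ &
done
wait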
3 tricks for speeding up rsync on a local network.
1. Copying from/to local network: don't use ssh!
If you're copying from one local server to another, there is no need to encrypt data during transfer!
By default, rsync uses ssh to transfer data over the network. To avoid this, you have to create an rsync server on the target host. You can run the daemon ad hoc with something like:
rsync --daemon --no-detach --config filename.conf
where minimal configuration file could look like: (see man rsyncd.conf)
filename.conf
port = 12345
[data]
path = /some/path
use chroot = false
Then
rsync -ax rsync://remotehost:12345/data/. /path/to/target/.
rsync -ax /path/to/source/. rsync://remotehost:12345/data/.
2. Using zstandard zstd for high speed compression
Zstandard can be up to 8x faster than the common gzip, so using this newer compression algorithm will significantly improve your transfer!
rsync -axz --zc=zstd rsync://remotehost:12345/data/. /path/to/target/.
rsync -axz --zc=zstd /path/to/source/. rsync://remotehost:12345/data/.
3. Multiplexing rsync to reduce inactivity due to browse time
This kind of optimisation is about disk access and filesystem structure; it has nothing to do with the number of CPUs! So it can improve the transfer even if your host uses a single-core CPU.
As the goal is to keep the bandwidth filled with data while other tasks browse the filesystem, the most suitable number of simultaneous processes depends on the number of small files present.
Here is a sample bash script using wait -n -p PID:
#!/bin/bash
maxProc=3
source=''
destination='rsync://remotehost:12345/data/'
declare -ai start elap results order
wait4oneTask() {
local _i
wait -np epid
results[epid]=$?
elap[epid]=" ${EPOCHREALTIME/.} - ${start[epid]} "
unset "running[$epid]"
while [ -v elap[${order[0]}] ];do
_i=${order[0]}
printf " - %(%a %d %T)T.%06.0f %-36s %4d %12d\n" "${start[_i]:0:-6}" \
"${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "${elap[_i]}"
order=(${order[@]:1})
done
}
printf " %-22s %-36s %4s %12s\n" Started Path Rslt 'microseconds'
for path; do
rsync -axz --zc zstd "$source$path/." "$destination$path/." &
lpid=$!
paths[lpid]="$path"
start[lpid]=${EPOCHREALTIME/.}
running[lpid]=''
order+=($lpid)
((${#running[@]}>=maxProc)) && wait4oneTask
done
while ((${#running[@]})); do
wait4oneTask
done
Output could look like:
myRsyncP.sh files/*/*
Started Path Rslt microseconds
- Fri 03 09:20:44.673637 files/1/343 0 1186903
- Fri 03 09:20:44.673914 files/1/43 0 2276767
- Fri 03 09:20:44.674147 files/1/55 0 2172830
- Fri 03 09:20:45.861041 files/1/772 0 1279463
- Fri 03 09:20:46.847241 files/2/346 0 2363101
- Fri 03 09:20:46.951192 files/2/4242 0 2180573
- Fri 03 09:20:47.140953 files/3/23 0 1789049
- Fri 03 09:20:48.930306 files/3/2545 0 3259273
- Fri 03 09:20:49.132076 files/3/4256 0 2263019
Quick check:
printf "%'d\n" $(( 49132076 + 2263019 - 44673637)) \
$((1186903+2276767+2172830+1279463+2363101+2180573+1789049+3259273+2263019))
6’721’458
18’770’978
So 6.72 seconds of wall time elapsed to process 18.77 seconds of work, with up to three subprocesses running.
Note: you could use musec2str to improve the output, by replacing the first long printf line with:
musec2str -v elapsed "${elap[_i]}"
printf " - %(%a %d %T)T.%06.0f %-36s %4d %12s\n" "${start[_i]:0:-6}" \
"${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "$elapsed"
myRsyncP.sh files/*/*
Started Path Rslt Elapsed
- Fri 03 09:27:33.463009 files/1/343 0 18.249400"
- Fri 03 09:27:33.463264 files/1/43 0 18.153972"
- Fri 03 09:27:33.463502 files/1/55 93 10.104106"
- Fri 03 09:27:43.567882 files/1/772 122 14.748798"
- Fri 03 09:27:51.617515 files/2/346 0 19.286811"
- Fri 03 09:27:51.715848 files/2/4242 0 3.292849"
- Fri 03 09:27:55.008983 files/3/23 0 5.325229"
- Fri 03 09:27:58.317356 files/3/2545 0 10.141078"
- Fri 03 09:28:00.334848 files/3/4256 0 15.306145"
Going further: you could add an overall stats line with a few edits to this script:
#!/bin/bash
maxProc=3 source='' destination='rsync://remotehost:12345/data/'
. musec2str.bash # See https://stackoverflow.com/a/72316403/1765658
declare -ai start elap results order
declare -i sumElap totElap
wait4oneTask() {
wait -np epid
results[epid]=$?
local -i _i crtelap=" ${EPOCHREALTIME/.} - ${start[epid]} "
elap[epid]=crtelap sumElap+=crtelap
unset "running[$epid]"
while [ -v elap[${order[0]}] ];do # Print status lines in command order.
_i=${order[0]}
musec2str -v helap ${elap[_i]}
printf " - %(%a %d %T)T.%06.f %-36s %4d %12s\n" "${start[_i]:0:-6}" \
"${start[_i]: -6}" "${paths[_i]}" "${results[_i]}" "${helap}"
order=(${order[@]:1})
done
}
printf " %-22s %-36s %4s %12s\n" Started Path Rslt 'microseconds'
for path;do
rsync -axz --zc zstd "$source$path/." "$destination$path/." &
lpid=$! paths[lpid]="$path" start[lpid]=${EPOCHREALTIME/.}
running[lpid]='' order+=($lpid)
((${#running[@]}>=maxProc)) &&
wait4oneTask
done
while ((${#running[@]})) ;do
wait4oneTask
done
totElap=${EPOCHREALTIME/.}
for i in ${!start[@]};do sortstart[${start[i]}]=$i;done
sortstartstr=${!sortstart[*]}
fstarted=${sortstartstr%% *}
totElap+=-fstarted
musec2str -v hTotElap $totElap
musec2str -v hSumElap $sumElap
printf " = %(%a %d %T)T.%06.0f %-41s %12s\n" "${fstarted:0:-6}" \
"${fstarted: -6}" "Real: $hTotElap, Total:" "$hSumElap"
Could produce:
$ ./parallelRsync Data\ dirs-{1..4}/Sub\ dir{A..D}
Started Path Rslt microseconds
- Sat 10 16:57:46.188195 Data dirs-1/Sub dirA 0 1.69131"
- Sat 10 16:57:46.188337 Data dirs-1/Sub dirB 116 2.256086"
- Sat 10 16:57:46.188473 Data dirs-1/Sub dirC 0 1.1722"
- Sat 10 16:57:47.361047 Data dirs-1/Sub dirD 0 2.222638"
- Sat 10 16:57:47.880674 Data dirs-2/Sub dirA 0 2.193557"
- Sat 10 16:57:48.446484 Data dirs-2/Sub dirB 0 1.615003"
- Sat 10 16:57:49.584670 Data dirs-2/Sub dirC 0 2.201602"
- Sat 10 16:57:50.061832 Data dirs-2/Sub dirD 0 2.176913"
- Sat 10 16:57:50.075178 Data dirs-3/Sub dirA 0 1.952396"
- Sat 10 16:57:51.786967 Data dirs-3/Sub dirB 0 1.123764"
- Sat 10 16:57:52.028138 Data dirs-3/Sub dirC 0 2.531878"
- Sat 10 16:57:52.239866 Data dirs-3/Sub dirD 0 2.297417"
- Sat 10 16:57:52.911924 Data dirs-4/Sub dirA 14 1.290787"
- Sat 10 16:57:54.203172 Data dirs-4/Sub dirB 0 2.236149"
- Sat 10 16:57:54.537597 Data dirs-4/Sub dirC 14 2.125793"
- Sat 10 16:57:54.561454 Data dirs-4/Sub dirD 0 2.49632"
= Sat 10 16:57:46.188195 Real: 10.870221", Total: 31.583813"
Fake rsync for testing this script
Note: For testing this, I've used a fake rsync:
## Fake rsync wait 1.0 - 2.99 seconds and return 0-255 ~ 1x/10
rsync() { sleep $((RANDOM%2+1)).$RANDOM;exit $(( RANDOM%10==3?RANDOM%128:0));}
export -f rsync
The shortest version I found is to use the --cat option of parallel like below. This version avoids using xargs, only relying on features of parallel:
cat files.txt | \
parallel -n 500 --lb --pipe --cat rsync --files-from={} user@remote:/dir /dir -avPi
#### Arg explainer
# -n 500 :: split input into chunks of 500 entries
#
# --cat :: create a tmp file referenced by {} containing the 500
# entry content for each process
#
# user@remote:/dir :: the root relative to which entries in files.txt are considered
#
# /dir :: local root relative to which files are copied
Sample content from files.txt:
/dir/file-1
/dir/subdir/file-2
....
Note that this doesn't use -j 50 for the job count; that didn't work on my end. Instead I've used -n 500 for the record count per job, calculated as a reasonable number given the total number of records.
I've found UDR/UDT to be an amazing tool. The TL;DR: it's a UDT wrapper for rsync, utilizing multiple UDP connections rather than a single TCP connection.
References: https://udt.sourceforge.io/ & https://github.com/jaystevens/UDR#udr
If you use any RHEL distros, they've pre-compiled it for you... http://hgdownload.soe.ucsc.edu/admin/udr
The ONLY downside I've encountered is that you can't specify a different SSH port, so your remote server must use 22.
Anyway, after installing the rpm, it's literally as simple as:
udr rsync -aP user@IpOrFqdn:/source/files/* /dest/folder/
and your transfer speeds will increase drastically in most cases; depending on the server, I've easily seen a 10x increase in transfer speed.
Side note: if you choose to gzip everything first, then make sure to use the --rsyncable arg so that rsync only updates what has changed.
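For instance, a sketch (--rsyncable is a downstream patch shipped with the gzip builds of many Linux distributions, so check that yours has it):
gzip --rsyncable -k /main/files/1/343/123.wav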
Using parallel rsync on a regular (spinning) disk would only make the processes compete for I/O, turning what should be a sequential read into an inefficient random read. You could instead tar the directory into a stream, pull it through ssh from the destination server, and pipe the stream to tar for extraction.
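A sketch of that approach with hypothetical host and path names, run on the destination server:
ssh user@sourcehost 'tar -C /main/files -cf - .' | tar -C /main/filesTest -xf -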
After much trouble I got the UDEV rule to run after inserting a USB drive. It runs a program to convert the names of pictures and movies. I use { } & to run the program in the background. The only problem is that the USB drive is easily corrupted when it is unplugged, so I would also like to run fsck. Does anybody have an idea?
Here is the UDEV rule:
CTION=="add", SUBSYSTEM=="block", ATTRS{idVendor}=="14cd", ATTRS{idProduct}=="121f", RUN+="/home/pi/bashtest.sh"
Here is the program:
#!/bin/bash
sudo umount /dev/sda1
sudo fsck -y /dev/sda1
{
dd=1234567890aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ
sleep 5
sudo mount -t vfat /dev/sda1 /media/usb1
cd /media/usb1/DCIM/Camera
sudo find /media/usb1/DCIM/Camera -regextype posix-egrep -regex ".*[^/]{13}.JPG"|
for i in *.JPG
do
ddate=$(exiv2 "${i}"|grep timestamp)
SPEC=$ddate
read X X YEAR MONTH DAY HOUR MINUTE SECOND <<<${SPEC//:/ }
d1=${YEAR:2}
d2=${dd:(10#$MONTH-1):1}
d3=${dd:(10#$DAY-1):1}
d4=${dd:(10#$HOUR-1):1}
d5=${dd:(10#$MINUTE-1):1}
d6=${dd:(10#$SECOND-1):1}
d7=0
sudo cp -nrv --preserve=all "$i" /media/usb1/DCIM/"${d1}${d2}${d3}${d4}${d5}${d6}${d7}.JPG"
find . -name '*.JPG' -size -1 -delete
done
for i in *.MP4
do
#exiftool -createdate -S -s 20140308_133017.MP4
dddate=$(exiftool "${i}" |grep "Media Create Date" | awk -F':' '{print $2, $3, $4, $5, $6, $7}')
SPEC=$dddate
read YEAR MONTH DAY HOUR MINUTE SECOND <<<${SPEC//:/ }
d1=${YEAR:2}
d2=${dd:(10#$MONTH-1):1}
d3=${dd:(10#$DAY-1):1}
d4=${dd:(10#$HOUR-1):1}
d5=${dd:(10#$MINUTE-1):1}
d6=${dd:(10#$SECOND-1):1}
d7=0
sudo cp -nrv --preserve=all "$i" /media/usb1/DCIM/"${d1}${d2}${d3}${d4}${d5}${d6}${d7}.MP4"
done
sudo umount -l /media/usb1
sleep 5
sudo shutdown -h now
} &
Probably the code can be written better, but it works for me.
Using { } && for fsck and { } & for the rest lets it all run in the background, and that helped me get it to work!