Bash concurrency when writing to file

I want to run a script against a long list of items, with each iteration running concurrently, and only when an iteration finishes should its line be written to a file.
For some reason, it writes to the file without finishing the function:
#!/bin/bash

function print_not_semver_line() {
    echo -n "$repo_name,"
    git tag -l | while read -r tag_name; do
        semver $tag_name > /dev/null || echo -n "$tag_name "
    done
    echo ""
}

csv_name=~/Scripts/all_repos/not_semver.csv

echo "Repo Name,Not Semver Versions" > $csv_name

while read -r repo_name; do
    cd $repo_dir
    print_not_semver_line >> $csv_name &
done < ~/Scripts/all_repos/all_repos.txt
Of course without &, it does what it's supposed to do, but with it, the output gets all messed up.
Ideas?

Here's an alternative that uses xargs for its natural parallelization, plus a quick script that determines all of the non-semver tags and outputs one line at the end of each repo.
The premise is that this script does nothing fancy: it just loops over the directories it is given and handles one at a time, so that you can parallelize outside of the script.
#!/bin/bash

log() {
    now=$(date -Isec --utc)
    echo "${now} $$ ${*}" > /dev/stderr
}

# I don't have semver otherwise available, so a knockoff replacement
function is_semver() {
    echo "$*" | egrep -q "^v?[0-9]+\.[0-9]+\.[0-9]+$"
}

log "Called with: ${*}"
for repo_dir in ${*} ; do
    log "Starting '${repo_dir}'"
    bad=$(
        git -C "${repo_dir}" tag -l | \
            while read tag_name ; do
                is_semver "${tag_name}" || echo -n "${tag_name} "
            done
    )
    log "Done '${repo_dir}'"
    echo "${repo_dir},${bad}"
done
log "exiting"
I have a project directory with various cloned github repos; I'll run the script with xargs here. Notice a few things:
- I am demonstrating calling the script with -L2 (two directories per call, not parallelized within a call) but -P4 (four of these scripts running simultaneously)
- everything left of xargs in the pipe should be your method of determining which dirs/repos to iterate over
- the first batch of processes starts with PIDs 17438, 17439, 17440, and 17442, and only when one of those quits (17442, then 17439) are new processes started
- if you are not concerned with too many things running at once, you might use xargs -L1 -P9999 or something equally ridiculous :-)
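The -L/-P mechanics can be sketched with a toy command standing in for the script (seq and echo here are placeholders; nothing git-specific):

```shell
# Six inputs, two per invocation (-L2), at most three invocations
# running at once (-P3); each echo receives one batch of two items.
seq 6 | xargs -L2 -P3 echo batch
```

With -P, batches may complete out of order, so don't rely on the ordering of the output lines.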
$ find . -maxdepth 2 -iname .git | sed -e 's,/\.git,,g' | head -n 12 | \
xargs -L2 -P4 ~/StackOverflow/5783481/62283574_2.sh > not_semver.csv
2020-06-09T17:51:39+00:00 17438 Called with: ./calendar ./callr
2020-06-09T17:51:39+00:00 17439 Called with: ./docker-self-service-password ./ggnomics
2020-06-09T17:51:39+00:00 17438 Starting './calendar'
2020-06-09T17:51:39+00:00 17440 Called with: ./ggplot2 ./grid
2020-06-09T17:51:39+00:00 17439 Starting './docker-self-service-password'
2020-06-09T17:51:39+00:00 17442 Called with: ./gt ./keyring
2020-06-09T17:51:39+00:00 17440 Starting './ggplot2'
2020-06-09T17:51:39+00:00 17442 Starting './gt'
2020-06-09T17:51:39+00:00 17442 Done './gt'
2020-06-09T17:51:40+00:00 17442 Starting './keyring'
2020-06-09T17:51:40+00:00 17438 Done './calendar'
2020-06-09T17:51:40+00:00 17438 Starting './callr'
2020-06-09T17:51:40+00:00 17439 Done './docker-self-service-password'
2020-06-09T17:51:40+00:00 17439 Starting './ggnomics'
2020-06-09T17:51:40+00:00 17442 Done './keyring'
2020-06-09T17:51:40+00:00 17439 Done './ggnomics'
2020-06-09T17:51:40+00:00 17442 exiting
2020-06-09T17:51:40+00:00 17439 exiting
2020-06-09T17:51:40+00:00 17515 Called with: ./knitr ./ksql
2020-06-09T17:51:40+00:00 17518 Called with: ./nanodbc ./nostalgy
2020-06-09T17:51:40+00:00 17515 Starting './knitr'
2020-06-09T17:51:40+00:00 17518 Starting './nanodbc'
2020-06-09T17:51:41+00:00 17438 Done './callr'
2020-06-09T17:51:41+00:00 17438 exiting
2020-06-09T17:51:42+00:00 17440 Done './ggplot2'
2020-06-09T17:51:42+00:00 17440 Starting './grid'
2020-06-09T17:51:43+00:00 17518 Done './nanodbc'
2020-06-09T17:51:43+00:00 17518 Starting './nostalgy'
2020-06-09T17:51:43+00:00 17518 Done './nostalgy'
2020-06-09T17:51:43+00:00 17518 exiting
2020-06-09T17:51:43+00:00 17440 Done './grid'
2020-06-09T17:51:43+00:00 17440 exiting
2020-06-09T17:51:44+00:00 17515 Done './knitr'
2020-06-09T17:51:44+00:00 17515 Starting './ksql'
2020-06-09T17:51:55+00:00 17515 Done './ksql'
2020-06-09T17:51:55+00:00 17515 exiting
The output, in not_semver.csv:
./gt,
./calendar,
./docker-self-service-password,2.7 2.8 3.0
./keyring,
./ggnomics,
./callr,
./ggplot2,ggplot2-0.7 ggplot2-0.8 ggplot2-0.8.1 ggplot2-0.8.2 ggplot2-0.8.3 ggplot2-0.8.5 ggplot2-0.8.6 ggplot2-0.8.7 ggplot2-0.8.8 ggplot2-0.8.9 ggplot2-0.9.0 ggplot2-0.9.1 ggplot2-0.9.2 ggplot2-0.9.2.1 ggplot2-0.9.3 ggplot2-0.9.3.1 show
./nanodbc,
./nostalgy,
./grid,0.1 0.2 0.5 0.5-1 0.6 0.6-1 0.7-1 0.7-2 0.7-3 0.7-4
./knitr,doc v0.1 v0.2 v0.3 v0.4 v0.5 v0.6 v0.7 v0.8 v0.9 v1.0 v1.1 v1.10 v1.11 v1.12 v1.13 v1.14 v1.15 v1.16 v1.17 v1.18 v1.19 v1.2 v1.20 v1.3 v1.4 v1.5 v1.6 v1.7 v1.8 v1.9
./ksql,0.1-pre1 0.1-pre10 0.1-pre2 0.1-pre4 0.1-pre5 0.1-pre6 0.1-pre7 0.1-pre8 0.1-pre9 0.3 v0.2 v0.2-rc0 v0.2-rc1 v0.3 v0.3-rc0 v0.3-rc1 v0.3-rc2 v0.3-rc3 v0.3-temp v0.4 v0.4-rc0 v0.4-rc1 v0.5 v0.5-rc0 v0.5-rc1 v4.1.0-rc1 v4.1.0-rc2 v4.1.0-rc3 v4.1.0-rc4 v4.1.1-rc1 v4.1.1-rc2 v4.1.1-rc3 v4.1.2-beta180719000536 v4.1.2-beta3 v4.1.2-rc1 v4.1.3-beta180814192459 v4.1.3-beta180828173526 v5.0.0-beta1 v5.0.0-beta10 v5.0.0-beta11 v5.0.0-beta12 v5.0.0-beta14 v5.0.0-beta15 v5.0.0-beta16 v5.0.0-beta17 v5.0.0-beta18 v5.0.0-beta180622225242 v5.0.0-beta180626015140 v5.0.0-beta180627203620 v5.0.0-beta180628184550 v5.0.0-beta180628221539 v5.0.0-beta180629053850 v5.0.0-beta180630224559 v5.0.0-beta180701010229 v5.0.0-beta180701053749 v5.0.0-beta180701175910 v5.0.0-beta180701205239 v5.0.0-beta180702185100 v5.0.0-beta180702222458 v5.0.0-beta180706202823 v5.0.0-beta180707005130 v5.0.0-beta180707072142 v5.0.0-beta180718203558 v5.0.0-beta180722214927 v5.0.0-beta180723195256 v5.0.0-beta180726003306 v5.0.0-beta180730183336 v5.0.0-beta19 v5.0.0-beta2 v5.0.0-beta20 v5.0.0-beta21 v5.0.0-beta22 v5.0.0-beta23 v5.0.0-beta24 v5.0.0-beta25 v5.0.0-beta26 v5.0.0-beta27 v5.0.0-beta28 v5.0.0-beta29 v5.0.0-beta3 v5.0.0-beta30 v5.0.0-beta31 v5.0.0-beta32 v5.0.0-beta33 v5.0.0-beta5 v5.0.0-beta6 v5.0.0-beta7 v5.0.0-beta8 v5.0.0-beta9 v5.0.0-rc1 v5.0.0-rc3 v5.0.0-rc4 v5.0.1-beta180802235906 v5.0.1-beta180812233236 v5.0.1-beta180824214627 v5.0.1-beta180826190446 v5.0.1-beta180828173436 v5.0.1-beta180830182727 v5.0.1-beta180902210116 v5.0.1-beta180905054336 v5.0.1-beta180909000146 v5.0.1-beta180909000436 v5.0.1-beta180911213156 v5.0.1-beta180913003126 v5.0.1-beta180914024526 v5.0.1-beta181008233543 v5.0.1-beta181018200736 v5.0.1-rc1 v5.0.1-rc2 v5.0.1-rc3 v5.0.2-beta181116204629 v5.0.2-beta181116204811 v5.0.2-beta181116205152 v5.0.2-beta181117022246 v5.0.2-beta181118024524 v5.0.2-beta181119063215 v5.0.2-beta181119185816 v5.0.2-beta181126211008 v5.1.0-beta180611231144 v5.1.0-beta180612043613 
v5.1.0-beta180612224009 v5.1.0-beta180613013021 v5.1.0-beta180614233101 v5.1.0-beta180615005408 v5.1.0-beta180618191747 v5.1.0-beta180618214711 v5.1.0-beta180618223247 v5.1.0-beta180618225004 v5.1.0-beta180619025141 v5.1.0-beta180620180431 v5.1.0-beta180620180739 v5.1.0-beta180620183559 v5.1.0-beta180622181348 v5.1.0-beta180626014959 v5.1.0-beta180627203509 v5.1.0-beta180628064520 v5.1.0-beta180628184841 v5.1.0-beta180630224439 v5.1.0-beta180701010040 v5.1.0-beta180701175749 v5.1.0-beta180702063039 v5.1.0-beta180702063440 v5.1.0-beta180702214311 v5.1.0-beta180702220040 v5.1.0-beta180703024529 v5.1.0-beta180706202701 v5.1.0-beta180707004950 v5.1.0-beta180718203536 v5.1.0-beta180722215127 v5.1.0-beta180723023347 v5.1.0-beta180723173636 v5.1.0-beta180724024536 v5.1.0-beta180730185716 v5.1.0-beta180812233046 v5.1.0-beta180820223106 v5.1.0-beta180824214446 v5.1.0-beta180828022857 v5.1.0-beta180828173516 v5.1.0-beta180829024526 v5.1.0-beta180905054157 v5.1.0-beta180911213206 v5.1.0-beta180912202326 v5.1.0-beta180917172706 v5.1.0-beta180919183606 v5.1.0-beta180928000756 v5.1.0-beta180929024526 v5.1.0-beta201806191956 v5.1.0-beta201806200051 v5.1.0-beta34 v5.1.0-beta35 v5.1.0-beta36 v5.1.0-beta37 v5.1.0-beta38 v5.1.0-beta39 v5.1.0-rc1 v6.0.0-beta181009070836 v6.0.0-beta181009071126 v6.0.0-beta181009071136 v6.0.0-beta181011024526
To reduce verbosity, you could remove the logging; most of this output was intended to demonstrate the timing and interleaving.
As another alternative, consider something like this:
log() {
    now=$(date -Isec --utc)
    echo "${now} ${*}" > /dev/stderr
}

# I don't have semver otherwise available, so a knockoff replacement
function is_semver() {
    echo "$*" | egrep -q "^v?[0-9]+\.[0-9]+\.[0-9]+$"
}

function print_something() {
    local repo_name=$1 tag_name=
    bad=$(
        git tag -l | while read tag_name ; do
            is_semver "${tag_name}" || echo -n "${tag_name} "
        done
    )
    echo "${repo_name},${bad}"
}
csvdir=$(mktemp -d not_semver_tempdir.XXXXXX)
csvdir=$(realpath "${csvdir}")/
log "Temp Directory: ${csvdir}"

while read -r repo_dir ; do
    log "Starting '${repo_dir}'"
    (
        if [ -d "${repo_dir}" ]; then
            repo_name=$(basename "${repo_dir}")
            tmpfile=$(mktemp -p "${csvdir}")
            tmpfile=$(realpath "${tmpfile}")
            cd "${repo_dir}"
            print_something "${repo_name}" > "${tmpfile}" 2> /dev/null
        fi
    ) &
done
wait

outfile=$(mktemp not_semver_XXXXXX.csv)
cat ${csvdir}* > "${outfile}"
# rm -rf "${csvdir}" # uncomment when you're comfortable/confident
log "Output: ${outfile}"
I don't like it as much, admittedly, but its premise is that it creates a temporary directory in which each repo's process writes its own file. Once all backgrounded jobs are complete (i.e., the wait near the end), all files are concatenated into a single output.
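Stripped of the git specifics, the scatter/gather skeleton looks like this (worker and the item list are placeholders for the real per-repo work):

```shell
# scatter: each background job writes to its own file in a shared temp dir
tmpdir=$(mktemp -d)

worker() {                 # placeholder for the real per-repo work
    echo "$1,result"
}

for item in alpha beta gamma; do
    worker "$item" > "$(mktemp -p "$tmpdir")" &
done
wait                       # gather only after every writer has finished

cat "$tmpdir"/* > combined.csv
rm -rf "$tmpdir"
```

Because each job has a private file, no locking is needed; the only serialization point is the final cat.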
Running it (without xargs):
$ find . -maxdepth 2 -iname .git | sed -e 's,/\.git,,g' | head -n 12 | \
~/StackOverflow/5783481/62283574.sh
2020-06-10T14:48:18+00:00 Temp Directory: /c/Users/r2/Projects/github/not_semver_tempdir.YeyaNY/
2020-06-10T14:48:18+00:00 Starting './calendar'
2020-06-10T14:48:18+00:00 Starting './callr'
2020-06-10T14:48:18+00:00 Starting './docker-self-service-password'
2020-06-10T14:48:18+00:00 Starting './ggnomics'
2020-06-10T14:48:18+00:00 Starting './ggplot2'
2020-06-10T14:48:19+00:00 Starting './grid'
2020-06-10T14:48:19+00:00 Starting './gt'
2020-06-10T14:48:19+00:00 Starting './keyring'
2020-06-10T14:48:19+00:00 Starting './knitr'
2020-06-10T14:48:19+00:00 Starting './ksql'
2020-06-10T14:48:19+00:00 Starting './nanodbc'
2020-06-10T14:48:19+00:00 Starting './nostalgy'
2020-06-10T14:48:38+00:00 Output: not_semver_CLy098.csv
r2#d2sb2 MINGW64 ~/Projects/github
$ cat not_semver_CLy098.csv
keyring,
ksql,0.1-pre1 0.1-pre10 0.1-pre2 0.1-pre4 0.1-pre5 0.1-pre6 0.1-pre7 0.1-pre8 0.1-pre9 0.3 v0.2 v0.2-rc0 v0.2-rc1 v0.3 v0.3-rc0 v0.3-rc1 v0.3-rc2 v0.3-rc3 v0.3-temp v0.4 v0.4-rc0 v0.4-rc1 v0.5 v0.5-rc0 v0.5-rc1 v4.1.0-rc1 v4.1.0-rc2 v4.1.0-rc3 v4.1.0-rc4 v4.1.1-rc1 v4.1.1-rc2 v4.1.1-rc3 v4.1.2-beta180719000536 v4.1.2-beta3 v4.1.2-rc1 v4.1.3-beta180814192459 v4.1.3-beta180828173526 v5.0.0-beta1 v5.0.0-beta10 v5.0.0-beta11 v5.0.0-beta12 v5.0.0-beta14 v5.0.0-beta15 v5.0.0-beta16 v5.0.0-beta17 v5.0.0-beta18 v5.0.0-beta180622225242 v5.0.0-beta180626015140 v5.0.0-beta180627203620 v5.0.0-beta180628184550 v5.0.0-beta180628221539 v5.0.0-beta180629053850 v5.0.0-beta180630224559 v5.0.0-beta180701010229 v5.0.0-beta180701053749 v5.0.0-beta180701175910 v5.0.0-beta180701205239 v5.0.0-beta180702185100 v5.0.0-beta180702222458 v5.0.0-beta180706202823 v5.0.0-beta180707005130 v5.0.0-beta180707072142 v5.0.0-beta180718203558 v5.0.0-beta180722214927 v5.0.0-beta180723195256 v5.0.0-beta180726003306 v5.0.0-beta180730183336 v5.0.0-beta19 v5.0.0-beta2 v5.0.0-beta20 v5.0.0-beta21 v5.0.0-beta22 v5.0.0-beta23 v5.0.0-beta24 v5.0.0-beta25 v5.0.0-beta26 v5.0.0-beta27 v5.0.0-beta28 v5.0.0-beta29 v5.0.0-beta3 v5.0.0-beta30 v5.0.0-beta31 v5.0.0-beta32 v5.0.0-beta33 v5.0.0-beta5 v5.0.0-beta6 v5.0.0-beta7 v5.0.0-beta8 v5.0.0-beta9 v5.0.0-rc1 v5.0.0-rc3 v5.0.0-rc4 v5.0.1-beta180802235906 v5.0.1-beta180812233236 v5.0.1-beta180824214627 v5.0.1-beta180826190446 v5.0.1-beta180828173436 v5.0.1-beta180830182727 v5.0.1-beta180902210116 v5.0.1-beta180905054336 v5.0.1-beta180909000146 v5.0.1-beta180909000436 v5.0.1-beta180911213156 v5.0.1-beta180913003126 v5.0.1-beta180914024526 v5.0.1-beta181008233543 v5.0.1-beta181018200736 v5.0.1-rc1 v5.0.1-rc2 v5.0.1-rc3 v5.0.2-beta181116204629 v5.0.2-beta181116204811 v5.0.2-beta181116205152 v5.0.2-beta181117022246 v5.0.2-beta181118024524 v5.0.2-beta181119063215 v5.0.2-beta181119185816 v5.0.2-beta181126211008 v5.1.0-beta180611231144 v5.1.0-beta180612043613 
v5.1.0-beta180612224009 v5.1.0-beta180613013021 v5.1.0-beta180614233101 v5.1.0-beta180615005408 v5.1.0-beta180618191747 v5.1.0-beta180618214711 v5.1.0-beta180618223247 v5.1.0-beta180618225004 v5.1.0-beta180619025141 v5.1.0-beta180620180431 v5.1.0-beta180620180739 v5.1.0-beta180620183559 v5.1.0-beta180622181348 v5.1.0-beta180626014959 v5.1.0-beta180627203509 v5.1.0-beta180628064520 v5.1.0-beta180628184841 v5.1.0-beta180630224439 v5.1.0-beta180701010040 v5.1.0-beta180701175749 v5.1.0-beta180702063039 v5.1.0-beta180702063440 v5.1.0-beta180702214311 v5.1.0-beta180702220040 v5.1.0-beta180703024529 v5.1.0-beta180706202701 v5.1.0-beta180707004950 v5.1.0-beta180718203536 v5.1.0-beta180722215127 v5.1.0-beta180723023347 v5.1.0-beta180723173636 v5.1.0-beta180724024536 v5.1.0-beta180730185716 v5.1.0-beta180812233046 v5.1.0-beta180820223106 v5.1.0-beta180824214446 v5.1.0-beta180828022857 v5.1.0-beta180828173516 v5.1.0-beta180829024526 v5.1.0-beta180905054157 v5.1.0-beta180911213206 v5.1.0-beta180912202326 v5.1.0-beta180917172706 v5.1.0-beta180919183606 v5.1.0-beta180928000756 v5.1.0-beta180929024526 v5.1.0-beta201806191956 v5.1.0-beta201806200051 v5.1.0-beta34 v5.1.0-beta35 v5.1.0-beta36 v5.1.0-beta37 v5.1.0-beta38 v5.1.0-beta39 v5.1.0-rc1 v6.0.0-beta181009070836 v6.0.0-beta181009071126 v6.0.0-beta181009071136 v6.0.0-beta181011024526
knitr,doc v0.1 v0.2 v0.3 v0.4 v0.5 v0.6 v0.7 v0.8 v0.9 v1.0 v1.1 v1.10 v1.11 v1.12 v1.13 v1.14 v1.15 v1.16 v1.17 v1.18 v1.19 v1.2 v1.20 v1.3 v1.4 v1.5 v1.6 v1.7 v1.8 v1.9
calendar,
ggplot2,ggplot2-0.7 ggplot2-0.8 ggplot2-0.8.1 ggplot2-0.8.2 ggplot2-0.8.3 ggplot2-0.8.5 ggplot2-0.8.6 ggplot2-0.8.7 ggplot2-0.8.8 ggplot2-0.8.9 ggplot2-0.9.0 ggplot2-0.9.1 ggplot2-0.9.2 ggplot2-0.9.2.1 ggplot2-0.9.3 ggplot2-0.9.3.1 show
nostalgy,
callr,
docker-self-service-password,2.7 2.8 3.0
grid,0.1 0.2 0.5 0.5-1 0.6 0.6-1 0.7-1 0.7-2 0.7-3 0.7-4
ggnomics,
nanodbc,
gt,

Use a variable or temp file for buffering lines. A random file name is used ($0 = script name, $! = PID of the most recently backgrounded process).
Make sure you have write permissions. If you are worried about eMMC flash memory wear-out or write speeds, you can also use shared memory under /run/shm.
#!/bin/bash

print_not_semver_line() {
    # random file name for line buffering
    local tmpfile="${0%.*}${!:-0}.tmp~"
    touch "$tmpfile" || return 1
    # redirect stdout into a separate tmp file
    echo -n "$repo_name," > "$tmpfile"
    git tag -l | while read -r tag_name; do
        semver $tag_name > /dev/null || echo -n "$tag_name " >> "$tmpfile"
    done
    echo "" >> "$tmpfile"
    # print the whole line in one single write
    cat "$tmpfile" && rm "$tmpfile" && return 0
}
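If util-linux's flock(1) is available on your system, another option is to keep a single output file and serialize the appends instead: each writer takes an exclusive lock before emitting its whole line. A minimal sketch (assumes flock exists; the loop body stands in for the real git/semver work):

```shell
csv_name=not_semver.csv
: > "$csv_name"

append_line() {
    # open the csv on fd 9 in append mode, lock it exclusively, write one
    # whole line; the lock is released when fd 9 closes at the block's end
    {
        flock -x 9
        printf '%s\n' "$1" >&9
    } 9>>"$csv_name"
}

for i in $(seq 20); do
    append_line "repo$i,tags" &     # 20 concurrent writers
done
wait
```

Every line arrives intact because each writer holds the lock for the duration of its single write.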
However, it is recommended to limit the maximum number of background processes. For the above example, you can count open files with lsof.
The following function waits on a given file name: it checks for similar file names and waits until the number of open files drops below the allowed maximum. Use it in your loop.
- first argument is the mandatory file name
- second argument is an optional limit (default 4)
- third argument is an optional polling interval for lsof (default 1 second)
Usage: wait_of <file> [<limit>] [<freq>]
# wait for open files (of)
wait_of() {
    local pattern="$1" limit=${2:-4} time=${3:-1} path of
    # split off the path component
    path="${pattern%/*}"
    pattern="${pattern##*/}"
    [ "$path" = "$pattern" ] && path=.
    [ -e "$path" ] && [ -d "$(realpath "$path")" ] || return 1
    # convert the file name into a glob pattern
    pattern="${pattern//[0-9]/0}"
    while [[ "$pattern" =~ "00" ]]
    do
        pattern="${pattern//00/0}"
    done
    pattern="${pattern//0/[0-9]*}"
    pattern="${pattern//[[:space:]]/[[:space:]]}"
    # count open files matching the pattern and wait while above the limit
    of=$(lsof -t "$path"/$pattern 2> /dev/null | wc -l)
    while (( ${of:-0} > $limit ))
    do
        of=$(lsof -t "$path"/$pattern 2> /dev/null | wc -l)
        sleep $time
    done
    return 0
}
# make sure to pass only one single tmp file name
wait_of "${0%.*}${!:-0}.tmp~" || exit 2
print_not_semver_line >> $csv_name &
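If you are on bash 4.3 or newer, an alternative to polling lsof is the built-in wait -n, which blocks until any one background job exits; counting jobs -pr gives the number of currently running children. A sketch with a placeholder worker:

```shell
#!/usr/bin/env bash
max_jobs=4
process() { sleep 0.1; echo "processed $1"; }   # placeholder worker

for item in a b c d e f; do
    # block while the number of running background jobs is at the limit
    while (( $(jobs -pr | wc -l) >= max_jobs )); do
        wait -n        # bash 4.3+: returns when any single job finishes
    done
    process "$item" &
done
wait   # drain whatever is still running
```

This keeps at most max_jobs workers alive without any external tools.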

Related

Parallel subshells doing work and report status

I am trying to do work in all subfolders in parallel and print a status per folder once it is done, in bash.
Suppose I have a work function which can return a couple of statuses:
#param #1 is the folder
# can return 1 on fail, 2 on success, 3 on nothing happened
work(){
    cd $1
    # some update thing
    return 1, 2, 3
}
now I call this in my wrapper function
do_work(){
    while read -r folder; do
        tput cup "${row}" 20
        echo -n "${folder}"
        (
            ret=$(work "${folder}")
            tput cup "${row}" 0
            [[ $ret -eq 1 ]] && echo " \e[0;31mupdate failed \uf00d\e[0m"
            [[ $ret -eq 2 ]] && echo " \e[0;32mupdated \uf00c\e[0m"
            [[ $ret -eq 3 ]] && echo " \e[0;32malready up to date \uf00c\e[0m"
        ) &>/dev/null
        pids+=("${!}")
        ((++row))
    done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
    echo "waiting for pids ${pids[*]}"
    wait "${pids[@]}"
}
And what I want is that it prints out all the folders, one per line, updates them independently from each other in parallel, and when they are done, I want the status written to that line.
However, I am unsure which subshell is writing, which ones I need to capture, how, and so on.
My attempt above is currently not writing correctly, and not in parallel.
If I get it to work in parallel, I get those [1] <PID> things and [1] + 3156389 done ... messages messing up my screen.
If I put the work itself in a subshell, I don't have anything to wait for.
If I then collect the pids, I don't get the return code needed to print out the text showing the status.
I did have a look at GNU Parallel, but I think I cannot get that behaviour. (I think I could hack it so that finished jobs are printed, but I want all 'running' jobs printed, and the finished ones amended.)
Assumptions/understandings:
a separate child process is spawned for each folder to be processed
the child process generates messages as work progresses
messages from child processes are to be displayed in the console in real time, with each child's latest message being displayed on a different line
The general idea is to set up a means of interprocess communication (IPC): a named pipe, normal file, queuing/messaging system, or sockets (plenty of ideas available via a web search on bash interprocess communication); the children write to this channel while the parent reads from it and issues the appropriate tput commands.
One very simple example using a normal file:
> status.msgs # initialize our IC file
child_func () {
# Usage: child_func <unique_id> <other> ... <args>
local i
for ((i=1;i<=10;i++))
do
sleep $1
# each message should include the child's <unique_id> ($1 in this case);
# parent/monitoring process uses this <unique_id> to control tput output
echo "$1:message - $1.$i" >> status.msgs
done
}
clear
( child_func 3 & )
( child_func 5 & )
( child_func 2 & )
while IFS=: read -r child msg
do
tput cup $child 10
echo "$msg"
done < <(tail -f status.msgs)
NOTES:
- the ( child_func 3 & ) construct is one way to eliminate the OS message re: 'background process completed' from showing up in stdout (there may be other ways but I'm drawing a blank at the moment)
- when using a file (normal, pipe) OP will want to look at a locking method (flock?) to ensure messages from multiple children don't stomp on each other
- OP can get creative with the format of the messages printed to status.msgs, in conjunction with the parsing logic in the parent's while loop
- assuming variable-width messages, OP may want to look at appending a tput el to the end of each printed message in order to 'erase' any leftover characters from a previous/longer message
- exiting the loop could be as simple as keeping count of the number of child processes that send a message <id>:done, or keeping track of the number of children still running in the background, or ...
Running this at my command line generates 3 separate lines of output that are updated at various times (based on the sleep $1):
# no output to line #1
message - 2.10 # messages change from 2.1 to 2.2 to ... to 2.10
message - 3.10 # messages change from 3.1 to 3.2 to ... to 3.10
# no output to line #4
message - 5.10 # messages change from 5.1 to 5.2 to ... to 5.10
NOTE: comments are not actually displayed in the console
Based on @markp-fuso's answer:
printer() {
    while IFS=$'\t' read -r child msg
    do
        tput cup $child 10
        echo "$child $msg"
    done
}

clear
parallel --lb --tagstring "{%}\t{}" work ::: folder1 folder2 folder3 | printer
echo
You can't collect exit statuses like that. Try this instead: rework your work function to echo its status:
work(){
    cd $1
    # some update thing, silenced with &> /dev/null
    echo "${1}_$status"   # status = 1, 2, or 3
}
And then collect the data from all folders like so:
data=$(
    while read -r folder; do
        work "$folder" &
    done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
    wait
)
echo "$data"
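This works because the backgrounded work calls inherit the command substitution's stdout, and the wait keeps the substitution from returning until every writer is done. A self-contained sketch with a stub work function:

```shell
work() { echo "${1}_done"; }    # stub standing in for the real work

data=$(
    for folder in one two three; do
        work "$folder" &        # writers share the substitution's stdout
    done
    wait                        # hold the capture open until all finish
)

# completion order is nondeterministic, so sort before displaying
printf '%s\n' "$data" | sort
```

Each worker's echo is a single small write, so individual lines stay intact even though their order varies.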

Cannot understand why our function calls return twice?

We have a 15-year-old (or so) script we are trying to figure out and document. We have found some errors in it, but one specific log file gives us much headache, and I would love some help figuring it out.
First the function that are run with the question:
#=========================================================#
# Define function removeOldBackupFile.                    #
#=========================================================#
removeOldBackupFile()
{
    #set -x
    echo "Removing old backups if they exists." >> "${report}"
    local RCLOC=0
    spaceBefore=$(getAvailableSpace ${backupDirectory})
    timesToWait=60 # Wait a maximum of 10 minutes before bailing
    cat ${oldDbContainer} | while read fileName
    do
        echo "Old file exists. Removing ${fileName}." >> "${report}"
        removeFileIfExist "${fileName}"
        RC=$?
        echo "Resultcode for removing old backup is: RC=$RC." >> "${report}"
        RCLOC=$(($RC+$RCLOC))
        spaceAfter=$(getAvailableSpace ${backupDirectory})
        # Wait for the OS to register that the file is removed
        cnt=0
        while [ $spaceAfter -le $spaceBefore ]; do
            cnt=$((cnt+1))
            if [ $cnt -gt $timesToWait ]; then
                echo "Waited too long for space in ${backupDirectory}" | tee -a "${report}"
                RCLOC=$(($RCLOC+1))
                return $RCLOC
            fi
            sleep 10
            spaceAfter=$(getAvailableSpace ${backupDirectory})
        done
    done
    return $RCLOC
}
The place where this function is run looks as follows:
#=========================================================#
# Remove old backupfiles if any exist.                    #
#=========================================================#
removeOldBackupFile
RC=$?
RCSUM=$(($RC+$RCSUM))
We have identified that the if condition is a bit wrong and the while loops would not work as intended if there are multiple files.
But what bothers us is output from a log file:
...
+ cnt=61
+ '[' 61 -gt 60 ']'
+ echo 'Waited too long for space in /<redacted>/backup'
+ tee -a /tmp/maintenanceBackupMessage.70927
Waited too long for space in /<redacted>/backup
+ RCLOC=1
+ return 1
+ return 0
+ RC=0
+ RCSUM=0
...
As seen in the log output, after the inner loop has run 60 times, the function returns 1 as expected... BUT it also returns 0 after!? Why is it also returning 0?
We are unable to figure out the double returns. Any help appreciated.
The first return executes in the subshell started by the pipe cat ${oldDbContainer} | while ...; it only exits that subshell, after which the function falls through to the return $RCLOC at the end, and the parent shell's RCLOC is still 0. Get rid of the useless use of cat:
removeOldBackupFile()
{
    #set -x
    echo "Removing old backups if they exists." >> "${report}"
    local RCLOC=0
    spaceBefore=$(getAvailableSpace ${backupDirectory})
    timesToWait=60 # Wait a maximum of 10 minutes before bailing
    while read fileName
    do
        ...
    done < ${oldDbContainer}
    return $RCLOC
}
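The subshell effect is easy to reproduce in isolation: a variable modified inside a piped while loop is lost when the loop ends, while the redirected form modifies the current shell's copy.

```shell
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count+1))          # increments a subshell's copy
done
echo "after pipe: $count"       # prints 0 in bash

tmp=$(mktemp)
printf 'a\nb\nc\n' > "$tmp"
count=0
while read -r line; do
    count=$((count+1))          # increments the current shell's copy
done < "$tmp"
echo "after redirect: $count"   # prints 3
rm -f "$tmp"
```

This is also why the RCLOC additions inside the original piped loop are invisible to the final return $RCLOC.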

batch processing : File name comparison error

I have written a program (Cifti_subject_fmri) which checks whether file names match in two folders and, if so, executes a set of instructions.
#!/bin/bash -- fix_mni_paths
source activate ciftify_v1.0.0

export SUBJECTS_DIR=/scratch/m/mchakrav/dev/functional_data
export HCP_DATA=/scratch/m/mchakrav/dev/tCDS_ciftify

## make the $SUBJECTS_DIR if it does not already exist
mkdir -p ${HCP_DATA}

SUBJECTS=`cd $SUBJECTS_DIR; ls -1d *`   ## list of my subjects
HCP=`cd $HCP_DATA; ls -1d *`            ## list of HCP subjects

cd $HCP_DATA

## submit the files to the queue
for i in $SUBJECTS; do
    for j in $HCP ; do
        if [[ $i == $j ]]; then
            parallel "echo ciftify_subject_fmri $i/filtered_func_data.nii.gz $j fMRI " ::: $SUBJECTS | qbatch --walltime '05:00:00' --ppj 8 -c 4 -j 4 -N ciftify_subject_fmri -
        fi
    done
done
When I run this code on the cluster I am getting an error which says:
./Cifti_subject_fmri: [[AS1: command not found
The command ciftify_subject_fmri is part of the ciftify toolbox; to execute, it requires the following arguments:
ciftify_subject_fmri <func.nii.gz> <Subject> <NameOffMRI>
I have 33 subjects [AS1 -AS33], each with its own func.nii.gz file, located in the SUBJECTS directory; the results need to be populated in the HCP directory; fMRI is the name of the file format.
Could someone kindly let me know why I am getting an error in the loop?

Waiting until all processes end to execute next line in Bash

In a script I'm writing right now, I create many background processes in an attempt to run my script on multiple devices in parallel. This functionality works, but it would appear I have no control over it. The simple wait command does not get me the results I need.
Abridged code:
#!/bin/bash
echo ""
date
echo ""
echo "Displaying devices to be configured:"
./adb devices | sed "1d ; $ d"
echo ""
echo "###########################"
echo "# #"
echo "# Starting configuration! #"
echo "# #"
echo "###########################"
echo ""
# All commands are run through this function
DeviceConfig () {
    ...
    # Large list of commands
    ...
}
# This is the loop that spawns all the processes. Note the ampersand I'm using.
for usb in $(./adb devices -l | awk '/ device usb:/{print $3}'); do ( DeviceConfig & ) ; done
echo ""
echo "###########################"
echo "# #"
echo "# Configuration complete! #"
echo "# #"
echo "###########################"
While this will successfully run all my commands in parallel, my output is not as intended.
Actual output:
Wed Oct 5 13:11:26 EDT 2016
Displaying devices to be configured:
3100c2759da2a200 device
3100c2ddbbafa200 device
###########################
# #
# Starting configuration! #
# #
###########################
###########################
# #
# Configuration complete! #
# #
###########################
Starting: Intent { cmp=com.android.settings/.Settings }
Warning: Activity not started, its current task has been brought to the front
Starting: Intent { cmp=com.android.settings/.Settings }
Warning: Activity not started, its current task has been brought to the front
...
(The ... is to imply more output from the script.)
Putting a wait in the loop does not solve the issue. Putting a wait after the loop does not solve the issue. How do I write this loop so the configurations happen in between the Starting configuration! and Configuration complete! output?
You can ask wait to wait on multiple processes, e.g.:
pids=()
for usb in $(./adb devices -l | awk '/ device usb:/{print $3}'); do DeviceConfig & pids+=($!); done
wait "${pids[@]}"
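A runnable miniature of the same pattern (an echo stands in for DeviceConfig):

```shell
#!/usr/bin/env bash
pids=()
for device in dev1 dev2 dev3; do
    { sleep 0.1; echo "configured $device"; } &   # stand-in for DeviceConfig
    pids+=($!)
done
wait "${pids[@]}"       # returns only after every collected PID has exited
echo "Configuration complete!"
```

Note the plain & rather than the ( ... & ) wrapper from the question: the extra subshell detaches the job, so $! in the parent would not refer to it and the parent would have nothing to wait on.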

How collect metadata/os info about ec2 instance to file?

I want to create a script that gathers information about an EC2 instance (id, IP, OS, users, maybe other things if needed), but I need help with getting info about the running system. I think it is easy to get OS info from /etc/os-release? And the second question, about YAML: is it possible to write the output to data.txt as YAML?
Please help me add OS info to data.txt :)
#!/bin/bash

URL="http://169.254.169.254/latest/meta-data/"

which curl > /dev/null 2>&1
if [ $? == 0 ]; then
    get_cmd="curl -s"
else
    get_cmd="wget -q -O -"
fi

get () {
    $get_cmd $URL/$1
}

data_items=(instance-id
            local-ipv4
            public-ipv4
)

yaml=""
for meta_thing in ${data_items[*]}; do
    data=$(get $meta_thing)
    entry=$(printf "%-30s%s" "$meta_thing:" "$data\n")
    yaml="$yaml$entry"
done

echo -e "$yaml" > data.txt
Maybe add
lsb_release -a >> data.txt
uname -a >> data.txt
to the end of the script.
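For the OS fields specifically, /etc/os-release is a plain KEY=VALUE file, so it can be sourced and printed in the same aligned "key: value" style the script already uses. A sketch (the helper name read_os_info is made up; the field names come from os-release(5)):

```shell
# emit selected os-release fields in the script's "key:   value" layout
read_os_info() {
    local file=${1:-/etc/os-release}
    [ -r "$file" ] || return 1
    . "$file"                      # defines NAME, VERSION_ID, ID, ...
    printf '%-30s%s\n' "os-name:" "${NAME:-unknown}"
    printf '%-30s%s\n' "os-version:" "${VERSION_ID:-unknown}"
}

read_os_info >> data.txt || echo "os-release not available" >> data.txt
```

Passing an explicit path makes the helper testable against any os-release-formatted file.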
