I wrote a readiness probe for my pod using a bash script. The readiness probe fails with Reason: Unhealthy, but when I manually exec into the pod and run the same command,
/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; fi
the script exits with code 0.
What could be the reason? I am attaching the code and the error below.
Edit: I found out that the health variable is set to 000, which is what curl's %{http_code} reports when it gets no HTTP response (e.g. a timeout or connection failure).
readinessProbe:
  exec:
    command:
      - /bin/bash
      - '-c'
      - |-
        health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
        if [[ $health -ne 401 ]]; then exit 1; fi
"kubectl describe pod {pod_name}" result:
Name: rustici-engine-54cbc97c88-5tg8s
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Tue, 12 Jul 2022 18:39:08 +0200
Labels: app.kubernetes.io/name=rustici-engine
pod-template-hash=54cbc97c88
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/rustici-engine-54cbc97c88
Containers:
rustici-engine:
Container ID: docker://f7efffe6fc167e52f913ec117a4d78e62b326d8f5b24bfabc1916b5f20ed887c
Image: batupaksoy/rustici-engine:singletenant
Image ID: docker-pullable://batupaksoy/rustici-engine@sha256:d3cf985c400c0351f5b5b10c4d294d48fedfd2bb2ddc7c06a20c1a85d5d1ae11
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 12 Jul 2022 18:39:12 +0200
Ready: False
Restart Count: 0
Limits:
memory: 350Mi
Requests:
memory: 350Mi
Liveness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=20
Readiness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=10
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whb8d (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-whb8d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24s default-scheduler Successfully assigned default/rustici-engine-54cbc97c88-5tg8s to minikube
Normal Pulling 23s kubelet Pulling image "batupaksoy/rustici-engine:singletenant"
Normal Pulled 21s kubelet Successfully pulled image "batupaksoy/rustici-engine:singletenant" in 1.775919851s
Normal Created 21s kubelet Created container rustici-engine
Normal Started 20s kubelet Started container rustici-engine
Warning Unhealthy 4s kubelet Readiness probe failed:
Warning Unhealthy 4s kubelet Liveness probe failed:
The probe could be failing because the application is slow to start or under load. To troubleshoot, make sure the probe doesn't start until the app is actually up and running inside the pod. You may need to increase the initial delay and timeout of the readiness probe, as well as the timeout of the liveness probe, as in the following example:
readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 2
  timeoutSeconds: 10
You can find more details about how to configure the readiness probe and liveness probe in the Kubernetes documentation.
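Since the edit shows %{http_code} coming back as 000 (curl got no HTTP response at all), it can also help to make the probe command handle timeouts explicitly. The following is only a minimal sketch, assuming the same /api/v2/ping endpoint that answers 401 once the app is up; the --max-time value is an arbitrary choice and should stay below the probe's timeoutSeconds:

health=$(curl -s -o /dev/null --max-time 3 --write-out "%{http_code}" http://localhost:8080/api/v2/ping)
# Print the observed code so a failing probe is easier to debug.
echo "ping returned: ${health}"
# Treat anything other than 401 (including 000, i.e. no response) as not ready.
if [[ "$health" -ne 401 ]]; then exit 1; fi
exit 0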
I want to run a script against a long list of items, with each iteration running concurrently, and have each iteration's result written to a file only once that iteration finishes.
For some reason, it writes to the file before the function has finished:
#!/bin/bash
function print_not_semver_line() {
  echo -n "$repo_name,"
  git tag -l | while read -r tag_name; do
    semver $tag_name > /dev/null || echo -n "$tag_name "
  done
  echo ""
}
csv_name=~/Scripts/all_repos/not_semver.csv
echo "Repo Name,Not Semver Versions" > $csv_name
while read -r repo_name; do
  cd $repo_dir
  print_not_semver_line >> $csv_name &
done < ~/Scripts/all_repos/all_repos.txt
Of course, without & it does what it is supposed to do, but with it the output gets all messed up.
Ideas?
Here's an alternative that uses xargs for its natural parallelization, plus a quick script that determines all of the non-semver tags and outputs one line per repo at the end.
The premise is that the script itself does nothing fancy: it just loops over the directories it is given and handles them one at a time, so the parallelization happens outside of the script.
#!/bin/bash
log() {
  now=$(date -Isec --utc)
  echo "${now} $$ ${*}" > /dev/stderr
}
# I don't have semver otherwise available, so a knockoff replacement
function is_semver() {
  echo "$*" | egrep -q "^v?[0-9]+\.[0-9]+\.[0-9]+$"
}
log "Called with: ${@}"
for repo_dir in "${@}" ; do
  log "Starting '${repo_dir}'"
  bad=$(
    git -C "${repo_dir}" tag -l | \
      while read tag_name ; do
        is_semver "${tag_name}" || echo -n "${tag_name} "
      done
  )
  log "Done '${repo_dir}'"
  echo "${repo_dir},${bad}"
done
log "exiting"
I have a project directory with various cloned GitHub repos, and I'll run the script using xargs here. Notice a few things:
I am demonstrating calling the script with -L2, i.e. two directories per call (handled sequentially within each call), and -P4, i.e. four of these scripts running simultaneously
everything to the left of xargs in the pipe should be your method of determining which dirs/repos to iterate over
the first batch of processes starts with PIDs 17438, 17439, 17440, and 17442, and only when one of those quits (17442, then 17439) are new processes started
if you are not concerned with too many things running at once, you might use xargs -L1 -P9999 or something equally ridiculous :-)
$ find . -maxdepth 2 -iname .git | sed -e 's,/\.git,,g' | head -n 12 | \
xargs -L2 -P4 ~/StackOverflow/5783481/62283574_2.sh > not_semver.csv
2020-06-09T17:51:39+00:00 17438 Called with: ./calendar ./callr
2020-06-09T17:51:39+00:00 17439 Called with: ./docker-self-service-password ./ggnomics
2020-06-09T17:51:39+00:00 17438 Starting './calendar'
2020-06-09T17:51:39+00:00 17440 Called with: ./ggplot2 ./grid
2020-06-09T17:51:39+00:00 17439 Starting './docker-self-service-password'
2020-06-09T17:51:39+00:00 17442 Called with: ./gt ./keyring
2020-06-09T17:51:39+00:00 17440 Starting './ggplot2'
2020-06-09T17:51:39+00:00 17442 Starting './gt'
2020-06-09T17:51:39+00:00 17442 Done './gt'
2020-06-09T17:51:40+00:00 17442 Starting './keyring'
2020-06-09T17:51:40+00:00 17438 Done './calendar'
2020-06-09T17:51:40+00:00 17438 Starting './callr'
2020-06-09T17:51:40+00:00 17439 Done './docker-self-service-password'
2020-06-09T17:51:40+00:00 17439 Starting './ggnomics'
2020-06-09T17:51:40+00:00 17442 Done './keyring'
2020-06-09T17:51:40+00:00 17439 Done './ggnomics'
2020-06-09T17:51:40+00:00 17442 exiting
2020-06-09T17:51:40+00:00 17439 exiting
2020-06-09T17:51:40+00:00 17515 Called with: ./knitr ./ksql
2020-06-09T17:51:40+00:00 17518 Called with: ./nanodbc ./nostalgy
2020-06-09T17:51:40+00:00 17515 Starting './knitr'
2020-06-09T17:51:40+00:00 17518 Starting './nanodbc'
2020-06-09T17:51:41+00:00 17438 Done './callr'
2020-06-09T17:51:41+00:00 17438 exiting
2020-06-09T17:51:42+00:00 17440 Done './ggplot2'
2020-06-09T17:51:42+00:00 17440 Starting './grid'
2020-06-09T17:51:43+00:00 17518 Done './nanodbc'
2020-06-09T17:51:43+00:00 17518 Starting './nostalgy'
2020-06-09T17:51:43+00:00 17518 Done './nostalgy'
2020-06-09T17:51:43+00:00 17518 exiting
2020-06-09T17:51:43+00:00 17440 Done './grid'
2020-06-09T17:51:43+00:00 17440 exiting
2020-06-09T17:51:44+00:00 17515 Done './knitr'
2020-06-09T17:51:44+00:00 17515 Starting './ksql'
2020-06-09T17:51:55+00:00 17515 Done './ksql'
2020-06-09T17:51:55+00:00 17515 exiting
The output, in not_semver.csv:
./gt,
./calendar,
./docker-self-service-password,2.7 2.8 3.0
./keyring,
./ggnomics,
./callr,
./ggplot2,ggplot2-0.7 ggplot2-0.8 ggplot2-0.8.1 ggplot2-0.8.2 ggplot2-0.8.3 ggplot2-0.8.5 ggplot2-0.8.6 ggplot2-0.8.7 ggplot2-0.8.8 ggplot2-0.8.9 ggplot2-0.9.0 ggplot2-0.9.1 ggplot2-0.9.2 ggplot2-0.9.2.1 ggplot2-0.9.3 ggplot2-0.9.3.1 show
./nanodbc,
./nostalgy,
./grid,0.1 0.2 0.5 0.5-1 0.6 0.6-1 0.7-1 0.7-2 0.7-3 0.7-4
./knitr,doc v0.1 v0.2 v0.3 v0.4 v0.5 v0.6 v0.7 v0.8 v0.9 v1.0 v1.1 v1.10 v1.11 v1.12 v1.13 v1.14 v1.15 v1.16 v1.17 v1.18 v1.19 v1.2 v1.20 v1.3 v1.4 v1.5 v1.6 v1.7 v1.8 v1.9
./ksql,0.1-pre1 0.1-pre10 0.1-pre2 0.1-pre4 0.1-pre5 0.1-pre6 0.1-pre7 0.1-pre8 0.1-pre9 0.3 v0.2 v0.2-rc0 v0.2-rc1 v0.3 v0.3-rc0 v0.3-rc1 v0.3-rc2 v0.3-rc3 v0.3-temp v0.4 v0.4-rc0 v0.4-rc1 v0.5 v0.5-rc0 v0.5-rc1 v4.1.0-rc1 v4.1.0-rc2 v4.1.0-rc3 v4.1.0-rc4 v4.1.1-rc1 v4.1.1-rc2 v4.1.1-rc3 v4.1.2-beta180719000536 v4.1.2-beta3 v4.1.2-rc1 v4.1.3-beta180814192459 v4.1.3-beta180828173526 v5.0.0-beta1 v5.0.0-beta10 v5.0.0-beta11 v5.0.0-beta12 v5.0.0-beta14 v5.0.0-beta15 v5.0.0-beta16 v5.0.0-beta17 v5.0.0-beta18 v5.0.0-beta180622225242 v5.0.0-beta180626015140 v5.0.0-beta180627203620 v5.0.0-beta180628184550 v5.0.0-beta180628221539 v5.0.0-beta180629053850 v5.0.0-beta180630224559 v5.0.0-beta180701010229 v5.0.0-beta180701053749 v5.0.0-beta180701175910 v5.0.0-beta180701205239 v5.0.0-beta180702185100 v5.0.0-beta180702222458 v5.0.0-beta180706202823 v5.0.0-beta180707005130 v5.0.0-beta180707072142 v5.0.0-beta180718203558 v5.0.0-beta180722214927 v5.0.0-beta180723195256 v5.0.0-beta180726003306 v5.0.0-beta180730183336 v5.0.0-beta19 v5.0.0-beta2 v5.0.0-beta20 v5.0.0-beta21 v5.0.0-beta22 v5.0.0-beta23 v5.0.0-beta24 v5.0.0-beta25 v5.0.0-beta26 v5.0.0-beta27 v5.0.0-beta28 v5.0.0-beta29 v5.0.0-beta3 v5.0.0-beta30 v5.0.0-beta31 v5.0.0-beta32 v5.0.0-beta33 v5.0.0-beta5 v5.0.0-beta6 v5.0.0-beta7 v5.0.0-beta8 v5.0.0-beta9 v5.0.0-rc1 v5.0.0-rc3 v5.0.0-rc4 v5.0.1-beta180802235906 v5.0.1-beta180812233236 v5.0.1-beta180824214627 v5.0.1-beta180826190446 v5.0.1-beta180828173436 v5.0.1-beta180830182727 v5.0.1-beta180902210116 v5.0.1-beta180905054336 v5.0.1-beta180909000146 v5.0.1-beta180909000436 v5.0.1-beta180911213156 v5.0.1-beta180913003126 v5.0.1-beta180914024526 v5.0.1-beta181008233543 v5.0.1-beta181018200736 v5.0.1-rc1 v5.0.1-rc2 v5.0.1-rc3 v5.0.2-beta181116204629 v5.0.2-beta181116204811 v5.0.2-beta181116205152 v5.0.2-beta181117022246 v5.0.2-beta181118024524 v5.0.2-beta181119063215 v5.0.2-beta181119185816 v5.0.2-beta181126211008 v5.1.0-beta180611231144 v5.1.0-beta180612043613 v5.1.0-beta180612224009 v5.1.0-beta180613013021 v5.1.0-beta180614233101 v5.1.0-beta180615005408 v5.1.0-beta180618191747 v5.1.0-beta180618214711 v5.1.0-beta180618223247 v5.1.0-beta180618225004 v5.1.0-beta180619025141 v5.1.0-beta180620180431 v5.1.0-beta180620180739 v5.1.0-beta180620183559 v5.1.0-beta180622181348 v5.1.0-beta180626014959 v5.1.0-beta180627203509 v5.1.0-beta180628064520 v5.1.0-beta180628184841 v5.1.0-beta180630224439 v5.1.0-beta180701010040 v5.1.0-beta180701175749 v5.1.0-beta180702063039 v5.1.0-beta180702063440 v5.1.0-beta180702214311 v5.1.0-beta180702220040 v5.1.0-beta180703024529 v5.1.0-beta180706202701 v5.1.0-beta180707004950 v5.1.0-beta180718203536 v5.1.0-beta180722215127 v5.1.0-beta180723023347 v5.1.0-beta180723173636 v5.1.0-beta180724024536 v5.1.0-beta180730185716 v5.1.0-beta180812233046 v5.1.0-beta180820223106 v5.1.0-beta180824214446 v5.1.0-beta180828022857 v5.1.0-beta180828173516 v5.1.0-beta180829024526 v5.1.0-beta180905054157 v5.1.0-beta180911213206 v5.1.0-beta180912202326 v5.1.0-beta180917172706 v5.1.0-beta180919183606 v5.1.0-beta180928000756 v5.1.0-beta180929024526 v5.1.0-beta201806191956 v5.1.0-beta201806200051 v5.1.0-beta34 v5.1.0-beta35 v5.1.0-beta36 v5.1.0-beta37 v5.1.0-beta38 v5.1.0-beta39 v5.1.0-rc1 v6.0.0-beta181009070836 v6.0.0-beta181009071126 v6.0.0-beta181009071136 v6.0.0-beta181011024526
To reduce verbosity, you could remove the logging and such; most of this output was intended to demonstrate the timing and interleaving of the runs.
As another alternative, consider something like this:
log() {
  now=$(date -Isec --utc)
  echo "${now} ${*}" > /dev/stderr
}
# I don't have semver otherwise available, so a knockoff replacement
function is_semver() {
  echo "$*" | egrep -q "^v?[0-9]+\.[0-9]+\.[0-9]+$"
}
function print_something() {
  local repo_name=$1 tag_name=
  bad=$(
    git tag -l | while read tag_name ; do
      is_semver "${tag_name}" || echo -n "${tag_name} "
    done
  )
  echo "${repo_name},${bad}"
}
csvdir=$(mktemp -d not_semver_tempdir.XXXXXX)
csvdir=$(realpath "${csvdir}")/
log "Temp Directory: ${csvdir}"
while read -r repo_dir ; do
  log "Starting '${repo_dir}'"
  (
    if [ -d "${repo_dir}" ]; then
      repo_name=$(basename "${repo_dir}")
      tmpfile=$(mktemp -p "${csvdir}")
      tmpfile=$(realpath "${tmpfile}")
      cd "${repo_dir}"
      print_something "${repo_name}" > "${tmpfile}" 2> /dev/null
    fi
  ) &
done
wait
outfile=$(mktemp not_semver_XXXXXX.csv)
cat ${csvdir}* > "${outfile}"
# rm -rf "${csvdir}" # uncomment when you're comfortable/confident
log "Output: ${outfile}"
I don't like it as much, admittedly, but its premise is that it creates a temporary directory in which each repo's process writes its own file. Once all backgrounded jobs are complete (i.e., the wait near the end), all files are concatenated into a single output file.
Running it (without xargs):
$ find . -maxdepth 2 -iname .git | sed -e 's,/\.git,,g' | head -n 12 | \
~/StackOverflow/5783481/62283574.sh
2020-06-10T14:48:18+00:00 Temp Directory: /c/Users/r2/Projects/github/not_semver_tempdir.YeyaNY/
2020-06-10T14:48:18+00:00 Starting './calendar'
2020-06-10T14:48:18+00:00 Starting './callr'
2020-06-10T14:48:18+00:00 Starting './docker-self-service-password'
2020-06-10T14:48:18+00:00 Starting './ggnomics'
2020-06-10T14:48:18+00:00 Starting './ggplot2'
2020-06-10T14:48:19+00:00 Starting './grid'
2020-06-10T14:48:19+00:00 Starting './gt'
2020-06-10T14:48:19+00:00 Starting './keyring'
2020-06-10T14:48:19+00:00 Starting './knitr'
2020-06-10T14:48:19+00:00 Starting './ksql'
2020-06-10T14:48:19+00:00 Starting './nanodbc'
2020-06-10T14:48:19+00:00 Starting './nostalgy'
2020-06-10T14:48:38+00:00 Output: not_semver_CLy098.csv
r2@d2sb2 MINGW64 ~/Projects/github
$ cat not_semver_CLy098.csv
keyring,
ksql,0.1-pre1 0.1-pre10 0.1-pre2 0.1-pre4 0.1-pre5 0.1-pre6 0.1-pre7 0.1-pre8 0.1-pre9 0.3 v0.2 v0.2-rc0 v0.2-rc1 v0.3 v0.3-rc0 v0.3-rc1 v0.3-rc2 v0.3-rc3 v0.3-temp v0.4 v0.4-rc0 v0.4-rc1 v0.5 v0.5-rc0 v0.5-rc1 v4.1.0-rc1 v4.1.0-rc2 v4.1.0-rc3 v4.1.0-rc4 v4.1.1-rc1 v4.1.1-rc2 v4.1.1-rc3 v4.1.2-beta180719000536 v4.1.2-beta3 v4.1.2-rc1 v4.1.3-beta180814192459 v4.1.3-beta180828173526 v5.0.0-beta1 v5.0.0-beta10 v5.0.0-beta11 v5.0.0-beta12 v5.0.0-beta14 v5.0.0-beta15 v5.0.0-beta16 v5.0.0-beta17 v5.0.0-beta18 v5.0.0-beta180622225242 v5.0.0-beta180626015140 v5.0.0-beta180627203620 v5.0.0-beta180628184550 v5.0.0-beta180628221539 v5.0.0-beta180629053850 v5.0.0-beta180630224559 v5.0.0-beta180701010229 v5.0.0-beta180701053749 v5.0.0-beta180701175910 v5.0.0-beta180701205239 v5.0.0-beta180702185100 v5.0.0-beta180702222458 v5.0.0-beta180706202823 v5.0.0-beta180707005130 v5.0.0-beta180707072142 v5.0.0-beta180718203558 v5.0.0-beta180722214927 v5.0.0-beta180723195256 v5.0.0-beta180726003306 v5.0.0-beta180730183336 v5.0.0-beta19 v5.0.0-beta2 v5.0.0-beta20 v5.0.0-beta21 v5.0.0-beta22 v5.0.0-beta23 v5.0.0-beta24 v5.0.0-beta25 v5.0.0-beta26 v5.0.0-beta27 v5.0.0-beta28 v5.0.0-beta29 v5.0.0-beta3 v5.0.0-beta30 v5.0.0-beta31 v5.0.0-beta32 v5.0.0-beta33 v5.0.0-beta5 v5.0.0-beta6 v5.0.0-beta7 v5.0.0-beta8 v5.0.0-beta9 v5.0.0-rc1 v5.0.0-rc3 v5.0.0-rc4 v5.0.1-beta180802235906 v5.0.1-beta180812233236 v5.0.1-beta180824214627 v5.0.1-beta180826190446 v5.0.1-beta180828173436 v5.0.1-beta180830182727 v5.0.1-beta180902210116 v5.0.1-beta180905054336 v5.0.1-beta180909000146 v5.0.1-beta180909000436 v5.0.1-beta180911213156 v5.0.1-beta180913003126 v5.0.1-beta180914024526 v5.0.1-beta181008233543 v5.0.1-beta181018200736 v5.0.1-rc1 v5.0.1-rc2 v5.0.1-rc3 v5.0.2-beta181116204629 v5.0.2-beta181116204811 v5.0.2-beta181116205152 v5.0.2-beta181117022246 v5.0.2-beta181118024524 v5.0.2-beta181119063215 v5.0.2-beta181119185816 v5.0.2-beta181126211008 v5.1.0-beta180611231144 v5.1.0-beta180612043613 v5.1.0-beta180612224009 v5.1.0-beta180613013021 v5.1.0-beta180614233101 v5.1.0-beta180615005408 v5.1.0-beta180618191747 v5.1.0-beta180618214711 v5.1.0-beta180618223247 v5.1.0-beta180618225004 v5.1.0-beta180619025141 v5.1.0-beta180620180431 v5.1.0-beta180620180739 v5.1.0-beta180620183559 v5.1.0-beta180622181348 v5.1.0-beta180626014959 v5.1.0-beta180627203509 v5.1.0-beta180628064520 v5.1.0-beta180628184841 v5.1.0-beta180630224439 v5.1.0-beta180701010040 v5.1.0-beta180701175749 v5.1.0-beta180702063039 v5.1.0-beta180702063440 v5.1.0-beta180702214311 v5.1.0-beta180702220040 v5.1.0-beta180703024529 v5.1.0-beta180706202701 v5.1.0-beta180707004950 v5.1.0-beta180718203536 v5.1.0-beta180722215127 v5.1.0-beta180723023347 v5.1.0-beta180723173636 v5.1.0-beta180724024536 v5.1.0-beta180730185716 v5.1.0-beta180812233046 v5.1.0-beta180820223106 v5.1.0-beta180824214446 v5.1.0-beta180828022857 v5.1.0-beta180828173516 v5.1.0-beta180829024526 v5.1.0-beta180905054157 v5.1.0-beta180911213206 v5.1.0-beta180912202326 v5.1.0-beta180917172706 v5.1.0-beta180919183606 v5.1.0-beta180928000756 v5.1.0-beta180929024526 v5.1.0-beta201806191956 v5.1.0-beta201806200051 v5.1.0-beta34 v5.1.0-beta35 v5.1.0-beta36 v5.1.0-beta37 v5.1.0-beta38 v5.1.0-beta39 v5.1.0-rc1 v6.0.0-beta181009070836 v6.0.0-beta181009071126 v6.0.0-beta181009071136 v6.0.0-beta181011024526
knitr,doc v0.1 v0.2 v0.3 v0.4 v0.5 v0.6 v0.7 v0.8 v0.9 v1.0 v1.1 v1.10 v1.11 v1.12 v1.13 v1.14 v1.15 v1.16 v1.17 v1.18 v1.19 v1.2 v1.20 v1.3 v1.4 v1.5 v1.6 v1.7 v1.8 v1.9
calendar,
ggplot2,ggplot2-0.7 ggplot2-0.8 ggplot2-0.8.1 ggplot2-0.8.2 ggplot2-0.8.3 ggplot2-0.8.5 ggplot2-0.8.6 ggplot2-0.8.7 ggplot2-0.8.8 ggplot2-0.8.9 ggplot2-0.9.0 ggplot2-0.9.1 ggplot2-0.9.2 ggplot2-0.9.2.1 ggplot2-0.9.3 ggplot2-0.9.3.1 show
nostalgy,
callr,
docker-self-service-password,2.7 2.8 3.0
grid,0.1 0.2 0.5 0.5-1 0.6 0.6-1 0.7-1 0.7-2 0.7-3 0.7-4
ggnomics,
nanodbc,
gt,
Use a variable or a temp file for buffering lines. A random file name is used
($0 = script name, $! = PID of the most recently backgrounded process).
Make sure you have write permissions. If you are worried about eMMC flash memory wear-out or write speeds, you can also use shared memory under /run/shm.
#!/bin/bash
print_not_semver_line() {
  # random file name for line buffering
  local tmpfile="${0%.*}${!:-0}.tmp~"
  touch "$tmpfile" || return 1
  # redirect stdout into a different tmp file
  echo -n "$repo_name," > "$tmpfile"
  git tag -l | while read -r tag_name; do
    semver $tag_name > /dev/null || echo -n "$tag_name " >> "$tmpfile"
  done
  echo "" >> "$tmpfile"
  # print the whole line in one single write
  cat "$tmpfile" && rm "$tmpfile" && return 0
}
However, it is recommended to limit the maximum number of background processes. For the example above you can count open files with lsof.
The following function waits on a given file name: it checks for similar file names and waits until the number of open files is below the allowed maximum. Use it in your loop.
The first argument is the mandatory file name.
The second argument is an optional limit (default 4).
The third argument is an optional polling interval for lsof.
Usage: wait_of <file> [<limit>] [<freq>]
# wait for open files (of)
wait_of() {
  local pattern="$1" limit=${2:-4} time=${3:-1} path of
  # check path
  path="${pattern%/*}"
  pattern="${pattern##*/}"
  [ "$path" = "$pattern" ] && path=.
  [ -e "$path" ] && [ -d "$(realpath "$path")" ] || return 1
  # convert file name into a pattern
  pattern="${pattern//[0-9]/0}"
  while [[ "$pattern" =~ "00" ]]
  do
    pattern="${pattern//00/0}"
  done
  pattern="${pattern//0/[0-9]*}"
  pattern="${pattern//[[:space:]]/[[:space:]]}"
  # check path with the pattern for open files above the limit and wait
  of=$(lsof -t "$path"/$pattern 2> /dev/null | wc -l)
  while (( ${of:-0} > $limit ))
  do
    of=$(lsof -t "$path"/$pattern 2> /dev/null | wc -l)
    sleep $time
  done
  return 0
}
# make sure to give only one single tmp file name
wait_of "${0%.*}${!:-0}.tmp~" || exit 2
print_not_semver_line >> $csv_name &
I am working on Linux kernel 3.14 and I have enabled cgroups and the blkio subsystem to check the write byte count of a block device from container and host applications.
However, I have problems getting the written bytes from the cgroup blkio throttling counters for the container application.
It works for the top-level hierarchy (e.g. /sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes), but not for the deeper ones (e.g. /sys/fs/cgroup/blkio/lxc/web, where the container name is web).
I created a small test script (checkWrite), which simply enters the cgroup it is started in (pwd) and writes 1M of data.
#!/bin/bash
SIZE=1M
DST="/home/root"
# check if we are in the /sys/fs/cgroup/ dir
if [ ! -e ./tasks ]; then
  echo "Error, this script must be started in a cgroup blkio directory"
  echo "Start in or below /sys/fs/cgroup/blkio !"
  exit -1
fi
echo "Using the cgroup: ${PWD##*/cgroup}"
# add myself to the cgroup
echo $$ > tasks
mygroup=`cat /proc/$$/cgroup | grep blkio`
echo "we're now in blkio cgroup: ${mygroup}"
# call sync to let the kernel store data
sync
sleep 1
# fetch the current written bytes count for the eMMC
before=$(cat blkio.throttle.io_service_bytes | grep "179:24 Write")
echo "before writing: ${before}"
echo "writing ${SIZE} random data to ${DST}/DELME ..."
dd if=/dev/urandom of=${DST}/DELME bs=${SIZE} count=1
sync
sleep 2
# fetch the current written bytes count for the eMMC
after=$(cat blkio.throttle.io_service_bytes | grep "179:24 Write")
echo "after writing: ${after}"
written=$((${after##* }-${before##* }))
written=$((written/1024))
echo "written = ${after##* }B - ${before##* }B = ${written}kB"
rm -rf ${DST}/DELME
The output is:
/sys/fs/cgroup/blkio# ~/checkWrite
Using the cgroup: /blkio
we're now in blkio cgroup: 3:blkio:/ <- this task is in this blkio cgroup now
before writing: 179:24 Write 200701952 <- from blkio.throttle.io_service_bytes
writing 1M random data to /var/opt/bosch/dynweb/DELME ...
1+0 records in
1+0 records out
after writing: 179:24 Write 201906176
written = 201906176B - 200701952B = 1176kB <- fairly ok
/sys/fs/cgroup/blkio/lxc/web# ~/checkWrite
Using the cgroup: /blkio/system.slice
we're now in bklio cgroup: 3:blkio:/system.slice
before writing: 179:24 Write 26064896
writing 1M random data to /var/opt/bosch/dynweb/DELME ...
1+0 records in
1+0 records out
after writing: 179:24 Write 26130432
written = 26130432B - 26064896B = 64kB <- much too little
Am I misunderstanding how this works?
If this approach cannot work, how else can I monitor/watch/read block device writes from the container applications?
I am running a for loop in which a command is run in the background using &. At the end I want all of the commands' return values.
Here is the code I tried:
for ((i=0; i<3; i++)) {
  # curl command which returns a value &
}
wait
# next piece of code
I want to get all three returned values and then proceed. But the wait command does not wait for the background processes to complete and runs the next piece of code. I need the returned values to proceed.
Shell builtins have documentation accessible with help BUILTIN_NAME.
help wait yields:
wait: wait [-n] [id ...]
Wait for job completion and return exit status.
Waits for each process identified by an ID, which may be a process ID or a
job specification, and reports its termination status. If ID is not
given, waits for all currently active child processes, and the return
status is zero. If ID is a job specification, waits for all processes
in that job's pipeline.
If the -n option is supplied, waits for the next job to terminate and
returns its exit status.
Exit Status:
Returns the status of the last ID; fails if ID is invalid or an invalid
option is given.
which implies that to get the return statuses, you need to save the pid and then wait on each pid, using wait $THE_PID.
Example:
sl() { sleep $1; echo $1; return $(($1+42)); }
pids=(); for((i=0;i<3;i++)); do sl $i & pids+=($!); done;
for pid in "${pids[@]}"; do wait $pid; echo ret=$?; done
Example output:
0
ret=42
1
ret=43
2
ret=44
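If you only need the statuses in completion order rather than per PID, the -n option mentioned in the help text can be used instead. A minimal sketch (bash 4.3 or newer), reusing the sl helper from above:
sl() { sleep $1; echo $1; return $(($1+42)); }
for ((i=0; i<3; i++)); do sl $i & done
# Reap jobs as they finish: wait -n returns the exit status of whichever
# background job terminates next.
for ((i=0; i<3; i++)); do
  wait -n
  echo "a job finished with status $?"
done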
Edit:
With curl, don't forget to pass -f (--fail) to make sure the process will fail if the HTTP request did:
CURL Example:
#!/bin/bash
URIs=(
  https://pastebin.com/raw/w36QWU3D
  https://pastebin.com/raw/NONEXISTENT
  https://pastebin.com/raw/M9znaBB2
)
pids=(); for((i=0;i<3;i++)); do
  curl -fL "${URIs[$i]}" &>/dev/null &
  pids+=($!)
done
for pid in "${pids[@]}"; do
  wait $pid
  echo ret=$?
done
CURL Example output:
ret=0
ret=22
ret=0
GNU Parallel is a great way to do high-latency things like curl in parallel.
parallel curl --head {} ::: www.google.com www.hp.com www.ibm.com
Or, filtering results:
parallel curl --head -s {} ::: www.google.com www.hp.com www.ibm.com | grep '^HTTP'
HTTP/1.1 302 Found
HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
Here is another example:
parallel -k 'echo -n Starting {} ...; sleep 5; echo done.' ::: 1 2 3 4
Starting 1 ...done.
Starting 2 ...done.
Starting 3 ...done.
Starting 4 ...done.
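If, as in the question, you also need each command's exit status, one option (my suggestion, not part of the examples above) is GNU Parallel's --joblog option, which writes one line per job including its exit value:
parallel --joblog jobs.log curl --head -s -o /dev/null {} ::: www.google.com www.hp.com www.ibm.com
# The log is tab-separated; the 7th column, Exitval, holds each job's exit status.
cut -f7 jobs.log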
Good day. I have a problem with a forever start/stop script.
CentOS 6.2
kernel 2.6.32-220.el6.x86_64
nodejs v0.6.19
npm v 1.1.24
forever@0.9.2
I created a nologin user for running my script:
/etc/passwd
node:x:501:501::/usr/sbin/nologin:/bin/bash:/usr/local/bin/node:/usr/local/bin/forever:/usr/local/bin:/usr/local/lib/node_modules/forever/bin
I created a script and named it hello2.js:
#!/bin/bash
echo "aight"
and tried running it:
[max@localhost Desktop]$ forever start hello2.js
info: Forever processing file: hello2.js
[max@localhost Desktop]$ forever list
info: Forever processes running
data: uid command script forever pid logfile uptime
data: [0] n4EB node hello2.js 2675 2728 /home/max/.forever/n4EB.log 0:0:0:0.130
Everything is all right. Next, I created a start/stop script for hello2.js and named it node:
===========================
#!/bin/bash
# processname: node
USER=node
PWD=node
node=node
forever=forever

start() {
  forever start -l forever.log -o out.log -e err.log /home/max/Desktop/hello2.js
}

stop() {
  /usr/local/bin/forever stopall
}

restart() {
  stop
  start
}

status() {
  /usr/local/bin/forever list
}

# see how we were called
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  status)
    status
    ;;
  *)
    echo $ "usage $0 {start | stop | status | restart}"
    exit 1
esac
exit 0
=========================================
I made it executable.
Next, I wanted to see how this works:
[max@localhost Desktop]$ ./node
$ usage ./node {start | stop | status | restart}
[max@localhost Desktop]$ ./node start
info: Forever processing file: /home/max/Desktop/hello2.js
[max@localhost Desktop]$ ./node status
info: No forever processes running
But
[max@localhost Desktop]$ forever start hello2.js
info: Forever processing file: hello2.js
[max@localhost Desktop]$ forever list
info: Forever processes running
data: uid command script forever pid logfile uptime
data: [0] n4EB node hello2.js 2675 2728 /home/max/.forever/n4EB.log 0:0:0:0.130
[max@localhost Desktop]$
Where is my mistake?
Try
nohup forever start -l forever.log -o out.log -e err.log /home/max/Desktop/hello2.js &