why is jq not working inside bash variable - bash

I have the following code:
#!/bin/bash
set -e
set -x
requestResponse=$(ssh jump.gcp.xxxxxxx """source dev2 spi-dev
kubectl get pods -o json | jq '.items[] |select(.metadata.name[0:3]=="fea")' | jq .status.podIP
2>&1""")
echo $requestResponse
In the above code, source dev2 spi-dev means we have moved to the spi-dev namespace inside the dev2 cluster. The kubectl get pods -o json | jq '.items[] |select(.metadata.name[0:3]=="fea")' | jq .status.podIP pipeline is meant to print the IP address of the pod whose name starts with fea. The kubectl command works if I run it manually. I have also tried escaping fea like \"fea\"

Those triple quotes """ do not work the way you expect: to the shell, """ is just an empty string followed by an opening quote, so the inner "fea" ends the outer quoted string and the jq program reaches the remote shell mangled.
Try changing it to a here-document like this:
ssh jump.gcp.xxxxxxx << 'EOF'
source dev2 spi-dev
kubectl get pods -o json |
jq '.items[] | select(.metadata.name[0:3]=="fea")' |
jq .status.podIP 2>&1
EOF
Quoting the EOF delimiter keeps the local shell from expanding anything inside the here-document before it is sent to the remote host. (Note that a backslash line continuation inside the single-quoted jq program would become part of the program and break it, so the pipeline is split only between commands.)
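You can see what the shell does to the triple-quoted string without ssh or kubectl at all; this is just a local demonstration of the quoting, using the jq filter as an ordinary string:

```shell
# Single quotes keep the inner double quotes intact:
prog='select(.metadata.name[0:3]=="fea")'
echo "$prog"
# -> select(.metadata.name[0:3]=="fea")

# The triple-quoted form lets the shell consume the inner quotes,
# so the remote jq would receive a bare, unquoted fea:
echo """select(.metadata.name[0:3]=="fea")"""
# -> select(.metadata.name[0:3]==fea)
```

jq then tries to parse fea as a filter rather than a string, which is why the select never matches.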

grep failing in gitlab CI

I am trying to finalize a script in GitLab CI but am struggling with a syntax error.
script: |
  echo "running jenkins job from user $EMAIL using following settings - $BRANCH / $TAGS in $ENV environment"
  lastbuildNumber=$(curl -s --user ${EMAIL}:${TOKEN} "$JENKINS_URL/$ENV-SmokeTests-UIOnly/lastBuild/api/json" | jq ".number")
  echo "last build was number ${lastbuildNumber}"
  currentBuild=$((lastbuildNumber + 1 ))
  echo "current build is ${currentBuild}"
  echo "view cucumber report here"
  baseurl="$JENKINS_URL/${ENV}-SmokeTests-UIOnly"
  echo $baseurl
  curl -s --user $EMAIL:$TOKEN $JENKINS_URL/$ENV-SmokeTests-UIOnly/ --output cucumber.txt
  cucumber_endpoint=$(cat cucumber.txt | grep -o -m 1 "[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*")
  full_cucumber=$baseurl$cucumber_endpoint
  echo $full_cucumber
The script works fine in my local terminal, but fails in the CI when running
cucumber_endpoint=$(cat cucumber.txt | grep -o -m 1 "[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*")
It is surely something related to quoting, but I cannot work out what the issue is.
update:
I changed to:
after_script:
  - |
    echo "view cucumber report here"
    baseurl="$JENKINS_URL/job/${ENV}-SmokeTests-UIOnly"
    curl -s --user "$EMAIL":"$TOKEN" $JENKINS_URL/"$ENV"-SmokeTests-UIOnly/ --output cucumber.txt
    cat cucumber.txt
    cucumber_endpoint=$(cat cucumber.txt | grep -o -m 1 '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*')
    full_cucumber="${baseurl}${cucumber_endpoint}"
    echo "${full_cucumber}"
and I have run the script through shellcheck.net.
It is the grep that is not working, but it is not returning any error now.
The result of the cat command is correct, the same as on my local machine, proving it is not an issue with set -e:
#!/bin/bash
set -e
echo "view cucumber report here"
baseurl="https://example"
cucumber_endpoint=$(curl -s --user "$EMAIL":"$TOKEN" ${JENKINS_URL}/"$ENV"-SmokeTests-UIOnly/ | grep -o -m 1 '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*')
# cat cucumber.txt
# cucumber_endpoint=$(cucumber.txt | grep -o -m 1 '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*')
full_cucumber="${cucumber_endpoint}"
echo "${baseurl}${full_cucumber}"
which gets what I want:
➜ ./cucumber.sh [16/02/23|11:39:59|]
view cucumber report here
https://example/cucumber-html-reports_fb3a3468-c298-3fb5-ad9a-dacbc0323763/overview-features.html
Apparently GitLab CI did not like the -m 1 option in the grep call, so I changed it to:
cucumber_endpoint=$(curl -s --user "$EMAIL":"$TOKEN" ${JENKINS_URL}/"$ENV"-SmokeTests-UIOnly/ | grep -o '[a-zA-Z.-]*/cucumber-html-reports_[a-zA-Z0-9.-]*/[a-zA-Z0-9.-]*'| sort -u)
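Note that the two forms are not equivalent: -m 1 stops grep at the first matching line, while sort -u returns every distinct match, sorted. A toy input (standing in for the Jenkins HTML, with a simplified pattern) shows the difference:

```shell
# Simplified stand-in for the Jenkins page and the report-path pattern.
page='see cucumber-html-reports_abc/overview.html here
see cucumber-html-reports_xyz/features.html here
see cucumber-html-reports_abc/overview.html again'

# -m 1: grep stops after the first matching line
printf '%s\n' "$page" | grep -o -m 1 'cucumber-html-reports_[a-z]*/[a-z.-]*'
# -> cucumber-html-reports_abc/overview.html

# sort -u: all matches, de-duplicated (and sorted)
printf '%s\n' "$page" | grep -o 'cucumber-html-reports_[a-z]*/[a-z.-]*' | sort -u
# -> cucumber-html-reports_abc/overview.html
#    cucumber-html-reports_xyz/features.html
```

So if the page can contain links to more than one report, the sort -u version will put several paths into cucumber_endpoint.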

Get logs from all pods in namespace using xargs

Is there any way to get all logs from pods in a specific namespace by running a dynamic command, like a combination of awk and xargs?
kubectl get pods | grep Running | awk '{print $1}' | xargs kubectl logs | grep value
I have tried the command above, but it fails as if kubectl logs were missing the pod name:
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
Do you have any suggestions on how to get all logs from Running pods?
Think about what your pipeline is doing:
The kubectl logs command takes as an argument a single pod name, but through your use of xargs you're passing it multiple pod names. Make liberal use of the echo command to debug your pipelines; if I have these pods in my current namespace:
$ kubectl get pods -o custom-columns=name:.metadata.name
name
c069609c6193930cd1182e1936d8f0aebf72bc22265099c6a4af791cd2zkt8r
catalog-operator-6b8c45596c-262w9
olm-operator-56cf65dbf9-qwkjh
operatorhubio-catalog-48kgv
packageserver-54878d5cbb-flv2z
packageserver-54878d5cbb-t9tgr
Then running this command:
kubectl get pods | grep Running | awk '{print $1}' | xargs echo kubectl logs
Produces:
kubectl logs catalog-operator-6b8c45596c-262w9 olm-operator-56cf65dbf9-qwkjh operatorhubio-catalog-48kgv packageserver-54878d5cbb-flv2z packageserver-54878d5cbb-t9tgr
To do what you want, you need to arrange to call kubectl logs multiple times with a single argument. You can do that by adding -n1 to your xargs command line. Keeping the echo command, running this:
kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 echo kubectl logs
Gets us:
kubectl logs catalog-operator-6b8c45596c-262w9
kubectl logs olm-operator-56cf65dbf9-qwkjh
kubectl logs operatorhubio-catalog-48kgv
kubectl logs packageserver-54878d5cbb-flv2z
kubectl logs packageserver-54878d5cbb-t9tgr
That looks more reasonable. If we drop the echo and run:
kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 kubectl logs | grep value
Then you will get the result you want. You may want to add the --prefix argument to kubectl logs so that you know which pod generated the match:
kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 kubectl logs --prefix | grep value
Not directly related to your question, but you can lose that grep:
kubectl get pods | awk '/Running/ {print $1}' | xargs -n1 kubectl logs --prefix | grep value
And even lose the awk:
kubectl get pods --field-selector=status.phase==Running -o name | xargs -n1 kubectl logs --prefix | grep value
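The -n1 behaviour itself is easy to reproduce without a cluster; a few fake pod names stand in for the kubectl output:

```shell
# One invocation receives all names as arguments:
printf 'pod-a\npod-b\npod-c\n' | xargs echo kubectl logs
# -> kubectl logs pod-a pod-b pod-c

# With -n1, xargs makes one invocation per name:
printf 'pod-a\npod-b\npod-c\n' | xargs -n1 echo kubectl logs
# -> kubectl logs pod-a
#    kubectl logs pod-b
#    kubectl logs pod-c
```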

How do I plug a script into a YML file, i.e. config.yml (CircleCI), and into a GitHub Action?

I have the following shell script:
for var in $(curl -u "abc_oOQxMv:tgMn6FYCKJcd6ujuDLEK" -X GET "https://api-cloud.browserstack.com/app-automate/recent_apps" | jq '.[] | select(.custom_id =="android")' | jq -r '.app_id')
do
echo "Deleting $var"
curl -u "abc_oOQxMv:tgMn6FYCKJcd6ujuDLEK" -X DELETE "https://api-cloud.browserstack.com/app-automate/app/delete/$var"
done
How do I write it into a YML file (i.e. config.yml in CircleCI) or into a GitHub Action?
You can run multiline commands in the workflow YAML file using run: | (with a pipe) in a step.
For example:
script-job:
  runs-on: ubuntu-latest
  steps:
    - name: Run my script
      shell: bash
      run: |
        for var in $(curl -u "abc_oOQxMv:tgMn6FYCKJcd6ujuDLEK" -X GET "https://api-cloud.browserstack.com/app-automate/recent_apps" | jq '.[] | select(.custom_id =="android")' | jq -r '.app_id')
        do
          echo "Deleting $var"
          curl -u "abc_oOQxMv:tgMn6FYCKJcd6ujuDLEK" -X DELETE "https://api-cloud.browserstack.com/app-automate/app/delete/$var"
        done
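For the CircleCI half of the question, the same loop can go under a step's command key in .circleci/config.yml. This is a sketch, not a tested pipeline: the job name, workflow name, and image are placeholders (any image with curl and jq available will do):

```yaml
version: 2.1
jobs:
  delete-apps:                    # job name is a placeholder
    docker:
      - image: cimg/base:stable   # any image with curl and jq
    steps:
      - run:
          name: Delete BrowserStack apps
          command: |
            for var in $(curl -u "abc_oOQxMv:tgMn6FYCKJcd6ujuDLEK" -X GET "https://api-cloud.browserstack.com/app-automate/recent_apps" | jq '.[] | select(.custom_id =="android")' | jq -r '.app_id')
            do
              echo "Deleting $var"
              curl -u "abc_oOQxMv:tgMn6FYCKJcd6ujuDLEK" -X DELETE "https://api-cloud.browserstack.com/app-automate/app/delete/$var"
            done
workflows:
  cleanup:
    jobs:
      - delete-apps
```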

Kubernetes: Display Pods by age in ascending order

I use below command to sort the pods by age
kubectl get pods --sort-by={metadata.creationTimestamp}
It shows the pods in descending order. How can I select the sort order, e.g. ascending?
Not supported by kubectl or the kube-apiserver as of this writing (AFAIK), but a workaround would be:
$ kubectl get pods --sort-by=.metadata.creationTimestamp | tail -n +2 | tac
or if tac is not available (MacOS X):
$ kubectl get pods --sort-by=.metadata.creationTimestamp | tail -n +2 | tail -r
If you want the header:
$ echo 'NAME READY STATUS RESTARTS AGE' && \
  kubectl get pods --sort-by=.metadata.creationTimestamp | tail -n +2 | tac
You might just have to adjust the tabs on the header accordingly. Or if you don't want to use tail -n +2 you can use --no-headers. For example:
$ kubectl get pods --sort-by=.metadata.creationTimestamp --no-headers | tac
It is quite easy: once you use the --no-headers option, the header is not part of the output (an ascending ordered listing of pods), and you can simply reverse the outcome of the command.
Here's the complete command to get exactly what is expected:
kubectl get po --sort-by={metadata.creationTimestamp} --no-headers | tac
Sorted kubectl output and awk provide the table view with a header. Installation of extra tools is not needed.
# kubectl get pods --sort-by=.status.startTime | awk 'NR == 1; NR > 1 {print $0 | "tac"}'
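The awk program behaves the same on any tabular text, so it can be checked locally: NR == 1 prints the header straight through, while the remaining lines are piped to tac, which prints them in reverse when awk closes the pipe at exit (requires GNU tac):

```shell
printf 'NAME AGE\noldest 9d\nnewest 1d\n' |
  awk 'NR == 1; NR > 1 {print $0 | "tac"}'
# -> NAME AGE
#    newest 1d
#    oldest 9d
```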
The JSON-processor approach offered by paulogrell also works, but may require more effort: for some Linux distributions you will need to download and compile jq from source. As for the jq command line, I'd suggest adding the pod "name" to the map parameters and sorting by "timestamp":
# kubectl get pods -o json | jq '.items | group_by(.metadata.creationTimestamp) | map({"name": .[0].metadata.name, "timestamp": .[0].metadata.creationTimestamp, "count": length}) | sort_by(.timestamp)'
I believe the Kubernetes API doesn't support this option yet, but as a workaround you can use a JSON processor (jq) to adjust its output.
Ascending
kubectl get pods -o json | jq '.items | group_by(.metadata.creationTimestamp) | map({"timestamp": .[0].metadata.creationTimestamp, "count": length}) | sort_by(.count)'
Descending
kubectl get pods -o json | jq '.items | group_by(.metadata.creationTimestamp) | map({"timestamp": .[0].metadata.creationTimestamp, "count": length}) | sort_by(.count) | reverse'
Hope this helps
A simpler version that works on MacOS and retains arbitrary headers:
kubectl get node --sort-by=.metadata.creationTimestamp | { read -r headers; echo "$headers"; tail -r; }
If you are looking for a way to find the latest pod, try:
kubectl get pod --selector='app=my-app-name' \
--sort-by='.metadata.creationTimestamp' \
-o=jsonpath='{.items[-1].metadata.name}'

Read line from stdin and run command on it

I'm trying to get the following command to work
gcloud compute instances list --format=json --regexp .*gluster.* | jq '.[].networkInterfaces[].networkIP' | tr -d '\"' | while read i; do gcloud compute ssh --zone $ZONE ubuntu@gluster-1 -- "sudo gluster peer probe $i && cat >> peers.txt"; done
Basically the gcloud command gives:
gcloud compute instances list --format=json --regexp .*gluster.* | jq '.[].networkInterfaces[].networkIP' | tr -d '\"'
10.128.0.2
10.128.0.3
10.128.0.4
However, the above command seems to only run on the first IP, which is the host, giving the warning
peer probe: success. Probe on localhost not needed
And none of the other nodes get connected.
Notes:
Weirdly, running the gcloud command on the second node connects to the first one; running it on the third doesn't do anything at all.
The peers.txt file on all the nodes except the third (again weirdly) contains only the two latter IPs:
ubuntu@gluster-1:~$ cat peers.txt
10.128.0.3
10.128.0.4
Running echo on the value in the loop gives
gcloud compute instances list --format=json --regexp .*gluster.* | jq '.[].networkInterfaces[].networkIP' | tr -d '\"' | while read i; do echo ip: $i; done
ip: 10.128.0.2
ip: 10.128.0.3
ip: 10.128.0.4
There is nothing wrong with piping into a loop (assuming you don't need the body of the loop to execute in the current shell). You don't want to use a for loop for something like this, though; see Bash FAQ 001 for more information. Use a while loop.
gcloud compute instances list --format=json --regexp .*gluster.* |
jq -r '.[].networkInterfaces[].networkIP' |
while IFS= read -r ipaddr; do
echo "$ipaddr"
done
(Note that using the -r option with jq eliminates the need to pipe the output into tr to remove the double quotes.)
The problem you may be seeing is that the command you put in the while loop also reads from standard input, which consumes data from your pipeline before read can read it. In that case, you can redirect standard input from /dev/null:
gcloud compute instances list --format=json --regexp .*gluster.* |
jq -r '.[].networkInterfaces[].networkIP' |
while IFS= read -r i; do
gcloud compute ssh --zone $ZONE ubuntu@gluster-1 \
-- "sudo gluster peer probe $i < /dev/null &&
cat >> peers.txt"
done
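The stdin-stealing effect is easy to reproduce locally; in this sketch cat stands in for the ssh command, since both read standard input:

```shell
# cat consumes the rest of the pipeline's data, so only one iteration runs:
printf '10.128.0.2\n10.128.0.3\n10.128.0.4\n' |
while IFS= read -r ip; do
  echo "probe $ip"
  cat > /dev/null            # stands in for the ssh command
done
# -> probe 10.128.0.2

# Redirecting the inner command's stdin restores all three iterations:
printf '10.128.0.2\n10.128.0.3\n10.128.0.4\n' |
while IFS= read -r ip; do
  echo "probe $ip"
  cat < /dev/null > /dev/null
done
# -> probe 10.128.0.2
#    probe 10.128.0.3
#    probe 10.128.0.4
```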
Or, use a process substitution to read from a different file descriptor.
while IFS= read -r i <&3; do
gcloud ...
done 3< <(gcloud compute instances .. | jq -r '...')
Got it to work with a for loop.
Also learnt that for loops aren't for piping into :)
for item in $(gcloud compute instances list --format=json --regexp .*gluster.* | jq '.[].networkInterfaces[].networkIP' | tr -d '\"'); do gcloud compute ssh --zone $ZONE ubuntu@gluster-1 -- "sudo gluster peer probe $item && echo $item >> peers.txt"; done
