kubectl exec returns unexpected error messages? - bash

I am currently trying to execute a simple bash command on my Kubernetes pod but seem to be getting some errors which do not make sense.
If I exec into the Docker container and run the command directly
I have no name!@kafka-0:/tmp$ if [ $(comm -13 <(sort selectedTopics) <(sort topics.sh) | wc -l) -gt 0 ]; then echo "hello"; fi
I get hello as output.
But if I execute the same from the outside as
kubectl exec --namespace default kafka-0 -c kafka -- bash -c "if [ $(comm -13 </tmp/selectedTopics </tmp/topics.sh| wc -l) -gt 0 ]; then echo topic does not exist && exit 1; fi"
Then I get an error message stating that /tmp/topics.sh: No such file or directory,
even though I am able to do this:
kubectl exec --namespace $namespace kafka-0 -c kafka -- bash -c "cat /tmp/topics.sh"
Why is kubectl exec causing me problems?

When you write:
kubectl ... "$(cmd)"
cmd is executed on the local host to create the string that is used as the argument to kubectl. In other words, you are executing comm -13 </tmp/selectedTopics </tmp/topics.sh | wc -l on the local host, not in the pod.
You should use single quotes if you want to avoid expanding locally:
kubectl exec --namespace default kafka-0 -c kafka -- bash -c 'if comm -13 </tmp/selectedTopics </tmp/topics.sh | grep -q . ; then echo "topic does not exist" >&2 && exit 1; fi'
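To see the difference, here is a minimal illustration using hostname, which prints different values locally and in the pod:
# Double quotes: $(hostname) is expanded by your local shell before
# kubectl even runs, so the pod just echoes your local hostname.
kubectl exec --namespace default kafka-0 -c kafka -- bash -c "echo $(hostname)"
# Single quotes: the literal string reaches the pod's bash, so
# $(hostname) is expanded inside the container.
kubectl exec --namespace default kafka-0 -c kafka -- bash -c 'echo $(hostname)'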

Related

How to run shell script via kubectl without interactive shell

I am trying to export a configuration from a service called Keycloak by using a shell script. To do that, export.sh will be run from the pipeline.
The script connects to the k8s cluster and runs the commands there.
So far everything goes okay and the export works perfectly.
But when I exit from the pod with exit, the whole shell script ends directly and moves back to the pipeline host instead of staying on the remote machine.
Running the command from the pipeline
ssh -t ubuntu@example1.com 'bash' < export.sh
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
kubectl -n keycloak exec -it keycloak-0 bash
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
exit
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
exit
exit
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
After the first exit the whole shell script stops and the remaining commands don't run; it won't stay on ubuntu@example1.com.
Is there any solution?
Run the commands inside the pod without an interactive shell, using a heredoc (EOF).
Note that it's not EOF but 'EOF': quoting the delimiter prevents variable expansion in the current shell,
so /tmp/export/master-* in the script below will expand inside the pod, as you expect.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
<put the commands here that you would otherwise type interactively>
EOF
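To see what the quoted delimiter changes, compare (a minimal illustration; HOSTNAME differs between your machine and the pod):
# Unquoted delimiter: $HOSTNAME is expanded by the local shell before
# anything is sent, so the pod echoes your local hostname.
kubectl -n keycloak exec -it keycloak-0 bash <<EOF
echo "$HOSTNAME"
EOF
# Quoted delimiter: the body is passed verbatim, so $HOSTNAME expands
# inside the pod.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
echo "$HOSTNAME"
EOF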
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
# the suggested change: run the in-pod commands via a quoted heredoc
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
EOF
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
Whether or not the scp succeeds, this script will exit when it finishes.

Detect if script is already running in bash script, and only restart if not

I'm trying to write a script that will check if a script is already running, and not run it from cron if it's still going from the last run. I found another post on here where they suggested using:
echo `pgrep -f $0` . "!=" . "$$";
if [[ `pgrep -f $0` != "$$" ]];
While this seems to work when I run it manually in SSH, it gives weird results when run via cron:
14767 14770 . != . 14770
Is this because there are 2 processes running with 2 different pids?
I have come up with this as an alternative:
if [ -n "$(ps -ef | grep -v grep | grep 'run.sh' | wc -l)" > 2 ];
then
echo "already running"
else
# do some stuff here
fi
Running the command on its own seems to work as expected:
# ps -ef | grep -v grep | grep 'run.sh' | wc -l
2
But when in the code, it always shows "already running" , even though my condition is not met:
bash run.sh
2
already running
Maybe I'm doing something wrong with the variable as an int?
UPDATE: As suggested, I am trying flock:
#!/bin/bash
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$#" || :
#... rest of code here
But I get:
flock: failed to execute run.sh: No such file or directory
You could write your code like that, but it will be complex and error-prone. (As written, the > inside [ ... ] is parsed as an output redirection to a file named 2, so the test reduces to [ -n "$(...)" ], which is always true because wc -l always prints something.) Better to use file locking. The flock command exists for this. Its man page provides various examples you can cut and paste, including:
#!/bin/bash
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$#" || :
# ... rest of code ...
This is useful boilerplate code for shell scripts. Put it at the top of the shell script you want to lock and it'll automatically lock itself on the first run. If the env var $FLOCKER is not set to the shell script that is being run, then execute flock and grab an exclusive non-blocking lock (using the script itself as the lock file) before re-execing itself with the right arguments. It also sets the FLOCKER env var to the right value so it doesn't run again.
See man flock for details. (The flock: failed to execute run.sh error in your update most likely means the script was started as bash run.sh: $0 is then the bare name run.sh, which flock tries to exec via a PATH lookup that fails; start it as ./run.sh instead.)
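If the self-exec boilerplate feels opaque, an equivalent and more explicit pattern takes the lock on a dedicated lock file through a file descriptor (a minimal sketch; the lock path is an arbitrary choice):
#!/bin/bash
# Open FD 200 on the lock file; the descriptor stays open for the
# lifetime of the script, so the lock is held until it exits.
exec 200>/var/lock/run.sh.lock
if ! flock -n 200; then
    echo "already running" >&2
    exit 1
fi
# ... rest of code ...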

Output not showing all echo commands

I'm using a bash script which is run on serverA and connects to serverB to run a file.
The results are saved in a variable and then echoed. However, it doesn't echo all of the data.
The script on serverA is running:
count=$(sshpass -p password ssh -t -q user@serverB cd /home/tom && ./count.sh)
echo "Count: $count"
This echoes 341, not Count: 341.
The count.sh script on serverB is looping through some folders and doing a count of files.
E.g.
total=0
count=$(ls -l | wc -l | xargs)
if [ "$count" > 0 ]; then
total=$(( total + count ))
fi
echo "$total"
How do I display the full echo on serverA?
You are attempting to run ./count.sh on the local machine, not the remote host. The && is a command separator that terminates the sshpass command. Use quotes to ensure your desired shell command is passed to the remote host.
count=$(sshpass -p password ssh -t -q user@serverB 'cd /home/tom && ./count.sh')
I don't see any way of producing the reported output, unless count.sh can run locally but something (are you using set -e?) prevents the following echo from executing at all.
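A minimal illustration of the splitting, using the same command:
# Unquoted: the local shell parses this as
#   (sshpass ... ssh user@serverB cd /home/tom) && (./count.sh)
# so ./count.sh runs on serverA, and only if the ssh succeeded.
count=$(sshpass -p password ssh -t -q user@serverB cd /home/tom && ./count.sh)
# Quoted: the whole string is handed to the remote shell on serverB.
count=$(sshpass -p password ssh -t -q user@serverB 'cd /home/tom && ./count.sh')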

syntax error near unexpected token `<' for shell script block in Jenkinsfile

I have the below block of shell script code in Jenkinsfile
stage("Compose Source Structure")
{
sh '''
set -x
rm -vf config
wget -nv --no-check-certificate https://test-company/k8sconfigs/test-config
export KUBECONFIG=$(pwd)/test-config
kubectl config view
ns_exists=$(kubectl get namespaces | grep ${consider_namespace})
echo "Validating k8s namespace"
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${source_cluster}"
exit 1
else
echo "scanning namespace \'${namespace}\'"
mkdir -p "${HOME}/cluster-backup/${namespace}"
while read -r resource
do
echo "scanning resource \'${resource}\'"
mkdir -p "${HOME}/sync-cluster/${namespace}/${resource}"
while read -r item
do
echo "exporting item \'${item}\'"
kubectl get "$resource" -n "$namespace" "$item" -o yaml > "${HOME}/sync-cluster/${namespace}/${resource}/${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-$item.yaml"
done < <(kubectl get "$resource" -n "$namespace" 2>&1 | tail -n +2 | awk \'{print $1}\')
done < <(kubectl api-resources --namespaced=true 2>/dev/null | tail -n +2 | awk \'{print $1}\')
fi
'''
Unfortunately, I am getting an error like the one below:
++ kubectl get namespaces
++ grep test
+ ns_exists='test Active 2d20h'
+ echo 'Validating k8s namespace'
Validating k8s namespace
/home/jenkins/workspace/k8s-sync-from-cluster@tmp/durable-852103cd/script.sh: line 24: syntax error near unexpected token `<'
I did try to escape the "<" with a backslash, like below:
\<
But I'm still having no success. Any idea what I am doing wrong here?
From the docs for the sh step (emphasis mine):
Runs a Bourne shell script, typically on a Unix node. Multiple lines are accepted.
An interpreter selector may be used, for example: #!/usr/bin/perl
Otherwise the system default shell will be run, using the -xe flags (you can specify set +e and/or set +x to disable those).
The system default shell on your Jenkins server may be sh, not bash. POSIX sh will not recognize <(command) process substitution.
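You can reproduce this outside Jenkins (assuming your system sh is a POSIX shell such as dash):
# bash understands process substitution:
bash -c 'while read -r line; do echo "$line"; done < <(echo hi)'   # prints: hi
# a plain POSIX sh does not, and fails with a syntax error at the "<(":
sh -c 'while read -r line; do echo "$line"; done < <(echo hi)'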
To specifically use the bash shell, you must include a #!/usr/bin/env bash shebang immediately after your triple quote. Putting a shebang on the next line will have no effect.
I also took the liberty of fixing shellcheck warnings for your shell code, and removing \' escapes that are not necessary.
Try this:
stage("Compose Source Structure")
{
sh '''#!/usr/bin/env bash
set -x
rm -vf config
wget -nv --no-check-certificate https://test-company/k8sconfigs/test-config
KUBECONFIG="$(pwd)/test-config"
export KUBECONFIG
kubectl config view
ns_exists="$(kubectl get namespaces | grep "${consider_namespace}")"
echo "Validating k8s namespace"
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${source_cluster}"
exit 1
else
echo "scanning namespace '${namespace}'"
mkdir -p "${HOME}/cluster-backup/${namespace}"
while read -r resource
do
echo "scanning resource '${resource}'"
mkdir -p "${HOME}/sync-cluster/${namespace}/${resource}"
while read -r item
do
echo "exporting item '${item}'"
kubectl get "$resource" -n "$namespace" "$item" -o yaml > "${HOME}/sync-cluster/${namespace}/${resource}/${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-$item.yaml"
done < <(kubectl get "$resource" -n "$namespace" 2>&1 | tail -n +2 | awk '{print $1}')
done < <(kubectl api-resources --namespaced=true 2>/dev/null | tail -n +2 | awk '{print $1}')
fi
'''
}

CasperJS pass exit code to Bash

I have a problem with running my CasperJS tests on Travis CI.
Whenever a test fails CasperJS returns status code 1, which would be the correct status code to be returned on a failed test.
I am running all my tests with a bash script and I need the exit code of the tests in the bash script. I tried $?, but this only returns whether the last command was executed properly or not. Since it is executed properly, it always returns 0.
So my question is: is there a way to pass the CasperJS test status code to my bash script?
The reason I need all this is that I am running my tests on Travis CI and Travis always exits with status 0, since the tests are executed correctly and I would need to have Travis exit with the proper exit codes.
UPDATE:
Here is my script:
#!/bin/sh
WIDGET_NAME=${1:-widget} # defaults to 'widget'
PORT=${2:-4001} # default port is 4001
SERVER_PORT=${3:-4002} # default port is 4002
TEST_CASES=${4:-./test/features/*/*/*-test.casper.js} # default run all subdirectories
# bail on errors
set -e
# switch to root folder
cd `dirname $0`/..
echo "Starting feature tests ..."
echo "- start App server on port $PORT"
WIDGET_NAME_PASCAL_CASE=`node -e "console.log(require('pascal-case') (process.argv[1]))" $WIDGET_NAME`
./node_modules/.bin/beefy app/widget.js $PORT \
--cwd public \
--index public/widget-test.html \
-- \
--standalone $WIDGET_NAME_PASCAL_CASE \
-t [ babelify --sourceMapRelative . ] \
-t browserify-shim \
--exclude moment 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process1.pid
sleep 1
echo "- start Fake API server on port $SERVER_PORT"
bin/fake-api $SERVER_PORT 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process2.pid
sleep 1
echo "- run feature tests"
mocha-casperjs $TEST_CASES --viewport-width=800 --viewport-height=600 --fail-fast | grep --line-buffered -v -e '^$' | grep --line-buffered -v "Unsafe JavaScript"
echo "- stop App and Fake API server"
kill -9 `cat /tmp/appointment-widget-tester-process*.pid`
rm /tmp/appointment-widget-tester-process*.pid
echo "done."
I have found my problem:
It lies in the nature of the | operator! The first operation is the start of my tests, and the second operation after the | is the grep; $? refers to the last command in the pipeline, so it returns the exit code of the grep, not of the mocha-casperjs runner.
A solution: Pipe output and capture exit status in Bash
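Two common bash fixes, sketched against the pipeline above (note these are bash features, so the script would need #!/bin/bash rather than #!/bin/sh):
# Option 1: with pipefail, the pipeline's exit status is the last
# non-zero status of any stage, so a test failure propagates to $?.
set -o pipefail
mocha-casperjs $TEST_CASES --fail-fast | grep --line-buffered -v "Unsafe JavaScript"
# Option 2: PIPESTATUS records each stage's exit status.
mocha-casperjs $TEST_CASES --fail-fast | grep --line-buffered -v "Unsafe JavaScript"
exit "${PIPESTATUS[0]}"   # status of mocha-casperjs, not of grep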
