Passing shell variable to command executed via kubectl exec - bash

I have a repetitive task that I do while testing which entails connecting to a cassandra pod and running a couple of CQL queries.
Here's the "manual" approach:
On the cluster controller node, I exec a shell on the pod using kubectl:
kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /bin/bash
Once in the pod I execute cqlsh:
cqlsh $(hostname -i) -u myuser
and then enter the password interactively.
Then I execute my CQL queries interactively.
Now I'd like to have a bash script to automate this. My intent is to run cqlsh directly via kubectl exec.
The problem I have is that apparently I cannot use a shell variable within the "command" section of kubectl exec. And I will need shell variables to store a) the pod's IP, b) an id which is the input to my first query, and c) intermediate query results (the latter two are not added to the script yet).
Here's what I have so far, using a dummy CQL query for now:
#!/bin/bash
CASS_IP=$(kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /usr/bin/hostname -i)
echo $CASS_IP # This prints out the IP address just fine, say 192.168.79.208
# The below does not work, errors provided below
kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /opt/cassandra/bin/cqlsh $CASS_IP -u myuser -p 'mypass' -e 'SELECT now() FROM system.local;'
# The below works just fine and returns the CQL query output
kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /opt/cassandra/bin/cqlsh 192.168.79.208 -u myuser -p 'mypass' -e 'SELECT now() FROM system.local;'
The output from the above is as follows, where the IP is echoed, the first exec'd cqlsh breaks, and the second succeeds:
192.168.79.208
Warning: Timezone defined and 'pytz' module for timezone conversion not installed. Timestamps will be displayed in UTC timezone.
Traceback (most recent call last):
  File "/opt/cassandra/bin/cqlsh.py", line 2357, in <module>
    main(*read_options(sys.argv[1:], os.environ))
  File "/opt/cassandra/bin/cqlsh.py", line 2326, in main
    encoding=options.encoding)
  File "/opt/cassandra/bin/cqlsh.py", line 463, in __init__
    load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
  File "/opt/cassandra/bin/../lib/cassandra-driver-internal-only-3.25.0.zip/cassandra-driver-3.25.0/cassandra/policies.py", line 425, in __init__
  File "/opt/cassandra/bin/../lib/cassandra-driver-internal-only-3.25.0.zip/cassandra-driver-3.25.0/cassandra/policies.py", line 426, in <listcomp>
  File "/usr/lib64/python3.6/socket.py", line 745, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
command terminated with exit code 1
Warning: Timezone defined and 'pytz' module for timezone conversion not installed. Timestamps will be displayed in UTC timezone.
system.now()
--------------------------------------
e78e75c0-0d3e-11ed-8825-1de1a1b1c128
(1 rows)
Any ideas how to get around this? I've been researching this for quite a while now, but I'm stuck...

This is a very, very frequently asked question: kubectl exec is, as its name says, using exec(3) rather than system(3). In your case that distinction wouldn't help anyway, because the $ in your kubectl exec line is interpreted by your local shell, not the pod's shell.
Thankfully the solution to both problems is the same: create your own system(3) by wrapping the command in an sh -c invocation (or bash -c if you have bash-isms and bash is available inside the pod):
kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- sh -c '/opt/cassandra/bin/cqlsh $(hostname -i) -u myuser -p "mypass" -e "SELECT now() FROM system.local;"'
As always, be cognizant of "outer" versus "inner" quoting, especially if your "mypass" or the -e statement contains shell metacharacters.
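If you also need to feed your own shell variables into the inner shell (e.g. the id that is the input to the first query), one way is to pass them as positional parameters to sh -c instead of wrestling with nested quoting. A minimal local sketch of the pattern, no cluster required (MY_ID is an illustrative name; with kubectl the quoted script and its arguments would simply follow kubectl exec ... --):

```shell
#!/bin/sh
# Expanded by the OUTER shell, then handed to the inner sh as $1.
MY_ID="42"
# The single-quoted script is not touched by the outer shell; the inner
# sh expands "$1" itself. The bare "sh" after the script fills $0.
sh -c 'echo "inner shell received id=$1"' sh "$MY_ID"
```

The arguments after the quoted script travel through untouched, so no inner quoting gymnastics are needed for the values themselves.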

How to pass env variable to a kubectl exec script call?

How do I pass an environment variable into a kubectl exec command, which is calling a script?
kubectl exec client -n namespace -- /mnt/script.sh
In my script.sh, I need the value of the passed variable.
I tried:
kubectl exec client -n namespace -- PASSWORD=pswd /mnt/script.sh
which errors with:
OCI runtime exec failed: exec failed: unable to start container process: exec: "PASSWORD=pswd": executable file not found in $PATH: unknown
You can use env(1):
kubectl exec client -n namespace -- \
env PASSWORD=pswd /mnt/script.sh
or explicitly wrap the command in an sh(1) invocation so that a shell processes it:
kubectl exec client -n namespace -- \
sh -c 'PASSWORD=pswd /mnt/script.sh'
This comes with the usual caveats around kubectl exec: a "distroless" image may not have these standard tools; you're only modifying one replica of a Deployment; your changes will be lost as soon as the Pod is deleted, which can sometimes happen outside of your control.
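The difference between the two variants can be rehearsed locally without a cluster: env(1) places the variable in the environment of the program it execs, while sh -c has a shell parse the assignment itself. A minimal sketch:

```shell
#!/bin/sh
# env(1) sets PASSWORD in the child's environment, then execs the command.
env PASSWORD=pswd sh -c 'echo "got $PASSWORD"'
# Equivalent effect: the inner shell performs the assignment itself.
sh -c 'PASSWORD=pswd; echo "got $PASSWORD"'
```

Both lines print "got pswd"; which one to use inside a container mostly depends on whether a shell is available in the image.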

Bash - Connect to Docker container and execute commands in redis-cli

I'm trying to create a simple script which would:
Connect to docker container's BASH shell
Go into redis-cli
Perform a flushall command in redis-cli
So far, I have this in my docker_script.sh (this basically copies the manual procedure):
docker exec -it redis /bin/bash
redis-cli
flushall
However, when I run it, it only connects to the container's BASH shell and doesn't do anything else. Then, if I type exit into the container's BASH shell, it outputs this:
root@5ce358657ee4:/data# exit
exit
./docker_script.sh: line 2: redis-cli: command not found
./docker_script.sh: line 3: keys: command not found
Why is the command not found if the commands redis-cli and flushall exist and work in the container when I perform the same procedure manually? How do I "automate" it by creating such a small BASH script?
Thank you
It seems like you're trying to run /bin/bash inside the redis container, while the redis-cli and flushall commands are left to run afterwards in your current shell instance, once the interactive container shell exits. Try passing your redis-cli command to bash like this:
docker exec -it redis /bin/bash -c "redis-cli FLUSHALL"
The -c is used to tell bash to read a command from a string.
Excerpt from the man page:
-c string   If the -c option is present, then commands are read from
            string.  If there are arguments after the string, they
            are assigned to the positional parameters, starting with
            $0.
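The positional-parameter detail from the excerpt is easy to verify locally, and it trips people up: the first argument after the command string becomes $0, not $1:

```shell
#!/bin/bash
# "first" lands in $0 and "second" in $1.
bash -c 'echo "0=$0 1=$1"' first second
# prints: 0=first 1=second
```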
To answer your further question in the comments: you want a single script, redis_flushall.sh, to run that command. The contents of that file are:
docker exec -it redis /bin/bash -c redis-cli auth MyRedisPass; flushall
Breaking that down, you are calling redis-cli auth MyRedisPass as a bash command, and flushall as another bash command. The issue is that flushall on its own is not a valid command; you'd want to call redis-cli flushall instead. Command chaining is something that has to be implemented in a CLI application deliberately, not something that comes for free.
If you replace the contents of your script with the following, it should work, i.e., after ; add a redis-cli call before specifying the flushall command.
docker exec -it redis /bin/bash -c redis-cli auth MYSTRONGPASSWORD; redis-cli FLUSHALL
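Worth spelling out why the unquoted form misbehaves: only the single word immediately after -c is taken as the command string, and the ; splits the line in your local shell, not inside the container. The pitfall can be reproduced with plain sh standing in for docker:

```shell
#!/bin/sh
# Only 'echo' is the -c command string; 'inner' becomes $0 and is never printed.
sh -c echo inner
# The part after ';' runs in the OUTER shell, not inside the sh -c invocation.
sh -c 'echo ran-inside' ; echo ran-outside
```

The first line prints nothing but a blank line, because echo runs with no arguments; quoting the whole command string is what keeps it together.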
The above proposed solution with auth still got me an error
(error) NOAUTH Authentication required
This worked for me:
docker exec -it redis /bin/sh -c 'redis-cli -a MYPASSWORD FLUSHALL'

How to run db migration script in a kubernetes pod from bash?

I would like to run database migration scripts in an Ubuntu pod automatically.
How I am doing this manually:
$ kubectl run -i --tty ubuntu --image=ubuntu:focal -- bash
$ apt install -y postgresql-client
$ psql "hostaddr=addr port=5432 user=username password=pass dbname=dbname"
COPY persons(first_name, last_name, dob, email)
FROM './persons.csv'
DELIMITER ','
CSV HEADER;
$ exit
I would like to create a bash script for this purpose, to run locally. Could you please advise how to script it? The first command connects to a remote bash session, and then I am not able to execute the other commands. I am definitely doing something wrong.
Thank you.
Use here documents.
#!/bin/bash
kubectl run -i --tty ubuntu --image=ubuntu:focal -- bash <<EOF
apt install -y postgresql-client
psql "hostaddr=addr port=5432 user=username password=pass dbname=dbname" <<EOF2
COPY persons(first_name, last_name, dob, email)
FROM './persons.csv'
DELIMITER ','
CSV HEADER;
EOF2
EOF
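The same technique works with any command that reads from stdin and can be tried locally. One caveat worth knowing: with an unquoted delimiter (<<EOF), $variables in the body are expanded by your local shell before the text is sent; quoting the delimiter (<<'EOF') sends the body verbatim:

```shell
#!/bin/sh
WHO="local"
# Unquoted delimiter: $WHO is expanded by THIS shell before sh reads the text.
sh <<EOF
echo "expanded: [$WHO]"
EOF
# Quoted delimiter: the body is passed through verbatim; the inner sh sees
# $WHO itself, which is empty there because WHO was never exported.
sh <<'EOF'
echo "verbatim: [$WHO]"
EOF
```

This matters for the migration script above: any $variable in the SQL body will be expanded on your machine, not in the pod, unless you quote the delimiter.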
Let's assume we have a command that is supposed to be run to execute some SQL query on a postgresql server in a Kubernetes cluster:
export pgcmd="PGPASSWORD=pass1234 psql -U username -d mydatabase -h addr -p port -c \"COPY persons(first_name, last_name, dob, email) FROM './persons.csv' DELIMITER ',' CSV HEADER;\" "
or by using URL syntax
export pgcmd="psql postgresql://username:pass@addr:5432/mydatabase -c \"COPY persons(first_name, last_name, dob, email) FROM './persons.csv' DELIMITER ',' CSV HEADER;\" "
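A caveat with this string approach: when $pgcmd is later expanded unquoted, the shell splits it on every space, so the escaped quotes do not group the SQL back into a single argument. In bash, storing the command as an array preserves argument boundaries; a sketch of the idea (nothing here actually invokes psql or kubectl, and the query is shortened for illustration):

```shell
#!/bin/bash
# Each array element remains exactly one argument, embedded spaces and all.
pgcmd=(psql -U username -d mydatabase -c "SELECT 1;")
echo "number of arguments: ${#pgcmd[@]}"
echo "the SQL stays intact: ${pgcmd[6]}"
# It would then be invoked as, e.g.:
#   kubectl exec -it <postgres-pod> -- "${pgcmd[@]}"
```

This prints 7 arguments, with the whole SELECT statement held as the seventh, which is exactly what psql needs to receive.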
Actually, it's more convenient to use the official postgres docker image instead of installing the postgresql client on the Ubuntu image:
(if I use the same image as the one used to spin up the postgresql server, I can save some time on pulling the image from the repository)
kubectl run -it --rm pgclient --image=postgres -- $pgcmd
Alternatively you can run the command using the postgresql server pod itself
kubectl exec -it postgresql-server-pod-name -- $pgcmd
or proxy the connection to the postgresql server and execute the command there
kubectl port-forward postgresql-server-pod-name 8888:5432 &
#or we can use the parent object to connect
#kubectl port-forward deployment/postgresql-server-deploy-name 8888:5432 &
# save ID of the background process
proxyid=$!
# run postgres command locally
$pgcmd
# switch off port forwarding and cleanup environment variables
unset PGPASSWORD
kill $proxyid && unset proxyid
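The background-process bookkeeping in this snippet is plain shell job control and can be rehearsed with any long-running command standing in for kubectl port-forward:

```shell
#!/bin/sh
# Stand-in for 'kubectl port-forward ... &'
sleep 30 &
proxyid=$!
# ... the client command would run here ...
# Tear down the background process and reap it.
kill "$proxyid"
wait "$proxyid" 2>/dev/null
echo "background process $proxyid cleaned up"
```

The wait reaps the killed process so no zombie is left behind; in a longer script a trap on EXIT is a more robust place for this cleanup.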

Unable to issue long sql query to postgres pod in kubernetes via bash script

I am trying to execute a query on a postgres pod in k8s via a bash script, but I cannot get results when I select a large number of columns. Here is my query:
kubectl exec -it postgres-pod-dcd-wvd -- bash -c "psql -U postgres -c \"Select json_build_object('f_name',json_agg(f_name),'l_name',json_agg(l_name),'email',json_agg(email),'date_joined',json_agg(date_joined),'dep_name',json_agg(dep_name),'address',json_agg(address),'zip_code',json_agg(zip_code),'city',json_agg(city), 'country',json_agg(country)) from accounts WHERE last_name='ABC';\""
When I reduce the number of columns to be selected in the query, I get the results, but if I use all the column names, the query just hangs indefinitely. What could be wrong here?
Update:
I tried using the query as:
kubectl exec -it postgres-pod-dcd-wvd -- bash -c "psql -U postgres -c \"Select last_name,first_name,...(other column names).. row_to_json(accounts) from register_account WHERE last_name='ABC';\""
But this also hangs.
When I try from inside the pod, it works, but I need to execute it via a bash script.
That means it is almost certainly the result pagination: when you run exec -t, it sets up a TTY in the Pod, just as if you were connected interactively, so it is likely waiting for you to press space or "n" for the next page.
You can disable the pagination with env PAGER=cat psql -c "select ..." or use --pset pager=off, as in psql --pset pager=off -c "Select ..."
Also, there's no need to run bash -c unless your .bashrc sets variables or otherwise performs work in the Pod. Using exec -- psql should work just fine, all other things being equal. You will need the env command if you want to go with the PAGER=cat approach, because PAGER=cat some_command is shell syntax, and thus cannot be fed directly into exec.
As the resulting columns involve a lot of JSON processing, I think the time taken to execute these two queries differs. Maybe you can log into the pod and execute the queries there to see:
kubectl exec -it postgres-pod-dcd-wvd -- bash
Now we are inside the pod and can execute the queries directly:
# psql -U postgres -c "Select json_build_object('f_name',json_agg(f_name),'l_name',json_agg(l_name),'email',json_agg(email),'date_joined',json_agg(date_joined),'dep_name',json_agg(dep_name),'address',json_agg(address),'zip_code',json_agg(zip_code),'city',json_agg(city), 'country',json_agg(country)) from accounts WHERE last_name='ABC';"
# psql -U postgres -c "Select last_name,first_name,...(other column names).. row_to_json(accounts) from register_account WHERE last_name='ABC';"
Now we will be able to see whether one query takes longer to execute.
Also, the kubectl exec command can be run with a request timeout value (--request-timeout=5m) to see if there is slowness.

Correctly passing arguments to a docker entrypoint

I have a super dumb script
$ cat script.sh
cat <<EOT > entrypoint.sh
#!/bin/bash
echo "$@"
EOT
docker run -it --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh --entrypoint /root/entrypoint.sh bash:4 Hello World
But when I run the script I get a strange error:
$ sh script.sh
standard_init_linux.go:207: exec user process caused "no such file or directory"
Why does the script not print Hello World?
standard_init_linux.go:207: exec user process caused "no such file or directory"
The above error means one of:
Your script actually doesn't exist. This isn't likely with your volume mount, but it doesn't hurt to run the container without the entrypoint: just open a shell with the same volume mount and list the file to be sure it's there. It's possible for the volume mount to fail on desktop versions of docker where the directory isn't shared to the docker VM, and you end up with empty folders being created inside the container instead of your file being mounted. When checking from inside another container, also make sure you have execute permissions on the script.
If it's a script, the first line pointing to the interpreter is invalid. Make sure that command exists inside the container. E.g. alpine containers typically do not ship with bash and you need to use /bin/sh instead. This is the most common issue that I see.
If it's a script, similar to above, make sure your first line has linux linefeeds. A windows linefeed adds an extra \r to the name of the command trying to be run, which won't be found on the linux side.
If the command is a binary, it can refer to a missing library. I often see this with "statically" compiled go binaries that didn't have CGO disabled and have links to libc appear when importing networking libraries.
If you use json formatting to run your command, I often see this error with invalid json syntax. This doesn't apply to your use case, but may be helpful to others googling this issue.
This list is pulled from a talk I gave at last year's DockerCon: https://sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#59
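Items 2 and 3 on that list can be reproduced without docker at all: a script whose shebang names a nonexistent interpreter (here because a Windows \r is glued onto /bin/sh) fails with exactly this class of "no such file or directory" error. The /tmp path is just for illustration:

```shell
#!/bin/sh
# Write a script with Windows (CRLF) line endings: the kernel parses the
# shebang as "/bin/sh\r", an interpreter that does not exist.
printf '#!/bin/sh\r\necho hello\r\n' > /tmp/crlf-demo.sh
chmod +x /tmp/crlf-demo.sh
if ! /tmp/crlf-demo.sh 2>/dev/null; then
    echo "execution failed, as expected"
fi
```

Rewriting the file with plain \n line endings (e.g. via dos2unix or an editor setting) makes the same script run normally.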
First of all:
Request
docker run -it --rm bash:4 which bash
Output
/usr/local/bin/bash
So
#!/bin/bash
Should be changed to
#!/usr/local/bin/bash
And
docker run -it --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh --entrypoint /root/entrypoint.sh bash:4 Hello World
Gives you
Hello World
Update
Code
cat <<EOT > entrypoint.sh
#!/bin/bash
echo "$@"
EOT
Should be fixed as
#!/usr/bin/env bash
cat <<EOT > entrypoint.sh
#!/usr/bin/env bash
echo "\$@"
EOT
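The fix works because the escaped variable survives the here-document expansion literally and is only evaluated when the generated script runs. This can be checked without docker (using $@ to echo the script's arguments; the /tmp path is illustrative):

```shell
#!/bin/sh
# The unquoted heredoc turns \$@ into a literal $@ in the generated file.
cat <<EOT > /tmp/ep-demo.sh
#!/bin/sh
echo "\$@"
EOT
sh /tmp/ep-demo.sh Hello World
# prints: Hello World
```

Without the backslash, the generating shell would expand the variable at file-creation time, baking its own (empty) argument list into the entrypoint.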
