I want to run a local script within a Kubernetes pod and then assign the output to a Linux variable.
Here is what I tried:
# if I directly run -c "netstat -pnt |grep ssh", I get output assigned to $result:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "netstat -pnt |grep ssh")
echo "result is $result"
What I want is something like this:
#script to be called:
cat netstat_tcp_conn.sh
#!/bin/bash
netstat -pnt |grep ssh
#script to call netstat_tcp_conn.sh:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "./netstat_tcp_conn.sh")
echo "result is $result"
The result showed: result is /bin/bash: ./netstat_tcp_conn.sh: No such file or directory.
How can I let the Kubernetes pod execute netstat_tcp_conn.sh, which is on my local machine?
You can use the following command to execute your local script in your pod:
kubectl exec POD -- /bin/sh -c "`cat netstat_tcp_conn.sh`"
You can copy local files into the pod using the kubectl cp command, e.g. kubectl cp /tmp/foo <pod_name>:/tmp/
Then you can change its permissions to make it executable and run it using kubectl exec.
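For example, a minimal sketch of that copy-and-run approach, reusing the <pod_name> placeholder and the netstat_tcp_conn.sh script from the question (the /tmp target path is just an assumption):
kubectl cp netstat_tcp_conn.sh <pod_name>:/tmp/netstat_tcp_conn.sh
kubectl exec <pod_name> -- chmod +x /tmp/netstat_tcp_conn.sh
result=$(kubectl exec <pod_name> -- /bin/bash -c "/tmp/netstat_tcp_conn.sh")
echo "result is $result"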
I would like to create a variable named POD inside the script, assign the kubectl output to it, and then pass this variable when running kubectl port-forward pods/...
But I received the error below:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 151: illegal string body character after dollar sign;
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" # line 151, column 80.
e-context ${KUBE_CLUSTER_STAGE}
Here is my script.
environment {
POD = ''
}
steps {
script {
withCredentials([file(credentialsId: 'mbtkubeconfig', variable: 'config')]){
try {
// Expose PostreSQL
sh '''#!/bin/sh
chmod ug+w ${config}
export KUBECONFIG=\${config}
kubectl config use-context ${KUBE_CLUSTER_STAGE}
kubectl config set-context --current --namespace=database
POD = `$(kubectl get po -n database --selector='role==master' -o jsonpath="{.items[0].metadata.name}")`
kubectl port-forward pods/$POD 5432:64000 & echo \$! > filename.txt
'''
When I tried it without the variable, there was no error. Here is the script running without any error:
sh """#!/bin/sh
chmod ug+w ${config}
export KUBECONFIG=\${config}
kubectl config use-context ${KUBE_CLUSTER_STAGE}
kubectl config set-context --current --namespace=database
kubectl get pods -n database
kubectl port-forward pods/my-postgres-postgresql-helm-0 5432:64000 & echo \$! > filename.txt
"""
When you run commands with sh, make sure you are using " rather than '. Groovy variables will only be resolved inside double-quoted strings such as "${config}".
By the way, it is considered best practice to reference environment variables with the env. prefix, although it is not needed to resolve the variable. For instance, refer to your cluster stage as ${env.KUBE_CLUSTER_STAGE}.
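For instance, a sketch of the failing step using double quotes, with the shell's dollar signs escaped for Groovy and no spaces around the shell assignment (assuming the same credentials binding and names from the question):
sh """#!/bin/sh
chmod ug+w ${config}
export KUBECONFIG=${config}
kubectl config use-context ${env.KUBE_CLUSTER_STAGE}
kubectl config set-context --current --namespace=database
POD=\$(kubectl get po -n database --selector='role==master' -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward pods/\$POD 5432:64000 & echo \$! > filename.txt
"""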
I have created a postgres container which is running detached.
I would like to be able to create a command in a Makefile make psql where I can connect from my host machine to the container via psql and check data is being inserted correctly.
I am struggling with how to compose the makefile command. So far I got:
Makefile
PG_CONTAINER=project_ch_pg_run_1

test_ip_1:
	docker exec -it project_ch_pg_run_1 hostname -i

test_ip_2:
	docker exec -it $(PG_CONTAINER) hostname -i

test_ip_3:
	IP=$$(docker exec -it $(PG_CONTAINER) hostname -i); \
	echo "Here's the IP of the container:$(IP)"

psql:
	IP=$$(docker exec -it project_ch_pg_run_1 hostname -i); \
	psql postgres://ch_user:ch_pass@$(IP):5432/ch_dib
Results:
1 works fine.
make test_ip_1
docker exec -it project_ch_pg_run_1 hostname -i
192.168.96.2
2 variable substitution works.
docker exec -it project_ch_pg_run_1 hostname -i
192.168.96.2
3 storing the result of the command in the IP variable and performing substitution does not work.
IP=$(docker exec -it project_ch_pg_run_1 hostname -i); \
echo "Here's the IP of the container:"
Here's the IP of the container:
4 storing the result of the command in the IP variable and using it to compose the pg URI does not work.
IP=$(docker exec -it project_ch_pg_run_1 hostname -i); \
psql postgres://ch_user:ch_pass@:5432/ch_dib
psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Makefile:15: recipe for target 'psql' failed
make: *** [psql] Error 2
I have been going around in circles for hours, but I cannot seem to find the right syntax to chain these commands; any help would be really appreciated.
You have to keep clear in your mind the difference between make variables and shell variables.
Here:
test_ip_3:
IP=$$(docker exec -it $(PG_CONTAINER) hostname -i); \
echo "Here's the IP of the container:$(IP)"
You correctly escape the invocation of the docker program using $$(...) so that this syntax is not considered a make variable.
But then you set the shell variable IP, and in the next line you use $(IP) which is a reference to the make variable IP, which you've never set.
You need to use:
test_ip_3:
IP=$$(docker exec -it $(PG_CONTAINER) hostname -i); \
echo "Here's the IP of the container:$$IP"
to print the value of the shell variable IP.
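Applying the same fix to the psql target would look something like this (a sketch reusing the container name and credentials from the question; -it is dropped because no TTY is needed inside command substitution, and a TTY can append a carriage return to the captured output):
psql:
	IP=$$(docker exec $(PG_CONTAINER) hostname -i); \
	psql postgres://ch_user:ch_pass@$$IP:5432/ch_dib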
I'm trying to make my life easier and am writing some bash scripts. One of them lets me exec into a pod with postgres access, get the credentials I need, and run the interactive psql shell.
However, upon running
kubectl <flags> exec $podname -- bash -c 'get_credentials && psql <psql args> -i -t'
the terminal hangs.
I can't directly connect to the database, and the process to get the credentials is kinda cumbersome. Is there some bash concept I'm not understanding?
kubectl <flags> exec $podname
That exec is missing its -i and -t (for --stdin=true and --tty=true) to tell Kubernetes that you wish your terminal and the remote terminal to be associated with one another:
kubectl exec -it $podname -- etc etc
If you are intending the -i and -t present at the end of your cited example above to be passed to exec, be aware that the double dashes explicitly switch off argument parsing by kubectl, so there is no way it will see them.
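Putting it together, the command from the question would become something like this (a sketch keeping the get_credentials helper and the placeholder flags from the question, with -i and -t moved onto kubectl exec):
kubectl <flags> exec -it $podname -- bash -c 'get_credentials && psql <psql args>'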
What's the simplest way to get an environment variable from a docker container that has not been declared in the Dockerfile?
For instance, an environment variable that has been set through some docker exec container /bin/bash session?
I can do docker exec container env | grep ENV_VAR, but I would prefer something that just returns the value.
I've tried using docker exec container echo "$ENV_VAR", but the substitution seems to happen outside of the container, so I don't get the env var from the container, but rather the env var from my own computer.
Thanks.
To view all env variables:
docker exec container env
To get one:
docker exec container env | grep VARIABLE | cut -d'=' -f2
The proper way to run echo "$ENV_VAR" inside the container so that the variable substitution happens in the container is:
docker exec <container_id> bash -c 'echo "$ENV_VAR"'
You can use printenv VARIABLE instead of /bin/bash -c 'echo "$VARIABLE"'. It's much simpler and does not rely on shell substitution:
docker exec container printenv VARIABLE
The downside of using docker exec is that it requires a running container, so docker inspect -f might be handy if you're unsure a container is running.
Example #1. Output a list of space-separated environment variables in the specified container:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' container_name
the output will look like this:
ENV_VAR1=value1 ENV_VAR2=value2 ENV_VAR3=value3
Example #2. Output each env var on a new line and grep the needed items; for example, the mysql container's settings could be retrieved like this:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{println $value}}{{end}}' \
container_name | grep MYSQL_
will output:
MYSQL_PASSWORD=secret
MYSQL_ROOT_PASSWORD=supersecret
MYSQL_USER=demo
MYSQL_DATABASE=demodb
MYSQL_MAJOR=5.5
MYSQL_VERSION=5.5.52
Example #3. Let's modify the example above to get bash-friendly output which can be used directly in your scripts:
docker inspect -f \
'{{range $index, $value := .Config.Env}}export {{$value}}{{println}}{{end}}' \
container_name | grep MYSQL
will output:
export MYSQL_PASSWORD=secret
export MYSQL_ROOT_PASSWORD=supersecret
export MYSQL_USER=demo
export MYSQL_DATABASE=demodb
export MYSQL_MAJOR=5.5
export MYSQL_VERSION=5.5.52
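As a hypothetical usage example (assuming the same container_name), you can load those exports into your current shell with eval and then use the variables directly:
eval "$(docker inspect -f '{{range $index, $value := .Config.Env}}export {{$value}}{{println}}{{end}}' container_name | grep MYSQL)"
echo "$MYSQL_USER"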
If you want to dive deeper, see the documentation of Go's text/template package for all the details of the format.
Since docker inspect outputs JSON, and unlike the accepted answer, we don't need to exec into the container.
docker inspect <NAME|ID> | jq '.[] | .Config.Env'
Output sample
[
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.19.4",
"NJS_VERSION=0.4.4",
"PKG_RELEASE=1~buster"
]
To retrieve a specific variable
docker inspect <NAME|ID> | jq -r '.[].Config.Env[]|select(match("^<VAR_NAME>"))|.[index("=")+1:]'
See the jq manual for details.
None of the above answers show you how to extract a variable from a non-running container (if you use the echo approach with run, you won't get any output).
Simply run with printenv, like so:
docker run --rm <container> printenv <MY_VAR>
(Note that docker-compose run instead of docker run works too.)
If by any chance you use VSCode and have installed the Docker extension, just right-click the container you want to check (within the Docker extension), click Inspect, and search for Env there; you will find all your env variable values.
We can override the entrypoint when starting a new container with the docker run command.
Example: show the PATH environment variable.
Using bash and echo (the answer above claims echo will not produce any output, which is incorrect):
docker run --rm --entrypoint bash <container> -c 'echo "$PATH"'
Using printenv:
docker run --rm --entrypoint printenv <container> PATH
@aisbaa's answer works if you don't care when the environment variable was declared. If you want the environment variable even if it has been declared inside an exec /bin/bash session, use something like:
IFS="=" read -a out <<< $(docker exec container /bin/bash -c "env | grep ENV_VAR" 2>&1)
It's not very pretty, but it gets the job done.
To then get the value, use:
echo ${out[1]}
This command inspects the environments of the docker stack processes on the host:
pidof dockerd containerd containerd-shim | tr ' ' '\n' \
| xargs -L1 -I{} -- sudo xargs -a '/proc/{}/environ' -L1 -0
The first way to find the ENV variables is docker inspect <container_name>.
The second way is docker exec <first 4 characters of the container id> bash -c 'echo "$ENV_VAR"'
There is a misconception in the question that causes confusion:
you cannot access a "running session", so no bash session can change anything.
docker exec -ti container /bin/bash
starts a new console process in the container, so if you do export VAR=VALUE this will go away as soon as you leave the shell, and it won't exist anymore.
Perhaps a good example:
# assuming TESTVAR did not exist previously, this is empty
docker exec container env | grep TESTVAR
# -> TESTVAR=a new value!
docker exec container /bin/bash -c 'TESTVAR="a new value!" env' | grep TESTVAR
# again empty
docker exec container env | grep TESTVAR
The variables shown by env come from the Dockerfile or the docker command, from docker itself, and from whatever the entrypoint sets.
The other answers here are good. But if you really need to get the environment variables that were set when a program was started, you can inspect the contents of /proc/pid/environ in the container, where pid is the in-container process id of the running command.
# environmental props
docker exec container cat /proc/pid/environ | tr '\0' '\n'
# you can check this is the correct pid by checking the ran command
docker exec container cat /proc/pid/cmdline | tr '\0' ' '