How to guess the shell type that a pod is running in k8s - shell

I've been trying to write a script to programmatically execute some commands in a set of pods across different namespaces, as part of an internal debugging process.
But I've hit a problem: not all of my pods use the same shell; some of them have /bin/bash, others only /bin/sh, and so on.
I need the script to detect which shell is available, since this tool will run across several namespaces and different sets of pods; all the filtering, execution, and data retrieval is done by a custom script that I'm working on.
./custom_script.sh <namespace-name>
For now, I've been checking the exit status $? of each attempt: I run kubectl exec ... -- /bin/bash, and if that fails I issue another call with a different shell, kubectl exec ... -- /bin/sh, but this can get ugly really fast at the code level.
Is there a way to pass a set of candidate shells via the CLI, i.e.:
kubectl exec ... -- bash,sh,zsh
I think there should be a better way, but I haven't found one; any advice on how to improve this would be highly appreciated!
TIA!
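There is no built-in kubectl flag that accepts a list of fallback shells, but the retry logic can be kept tidy by probing candidates in a loop and keeping the first one that exists. A minimal sketch; the candidate list and the pod/namespace variables in the usage comment are placeholders, and it assumes the container image has a test binary:

```shell
#!/bin/sh
# Return the first shell from a candidate list that is executable in the
# target. The probe command prefix is passed as arguments, so the same
# function works locally (no prefix) or inside a pod via kubectl exec.
detect_shell() {
    for candidate in /bin/bash /bin/sh /bin/ash; do
        if "$@" test -x "$candidate" >/dev/null 2>&1; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1   # none of the candidates exist
}

# Hypothetical usage inside the custom script:
#   shell=$(detect_shell kubectl exec "$pod" -n "$namespace" --)
#   kubectl exec "$pod" -n "$namespace" -- "$shell" -c 'echo debugging'
```

This costs one extra exec per pod, but keeps the fallback chain in one place instead of scattering $? checks through the script.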

Related

How to send input to a console/CLI program running on remote host using bash?

I have a script that I normally launch using the following syntax:
ssh -Yq user@host "xterm -e '. /home/user/bin/prog1 $arg1;prog2'"
(note: I've removed some of the complexities of the command, so please excuse if there are any syntax errors in the ssh command; it should not be relevant to the question)
This launches an xterm window that runs prog1, and after completion runs prog2. prog2 is a console-style program that performs some setup, then several seconds later waits for user input.
Is there a way via bash script (preferably without downloading external packages) that I can send data to prog2 that's running on $host?
I've looked into here-documents (<<) and expect, but they're way over my head. My intuition is that there's probably a straightforward way of doing this, but I can't figure out what terms to search for. I also understand that I can remotely send keystrokes to a host using xdotool or something similar, but I'm hesitant to request a new package installation unless I know that's the only reasonable solution.
Thanks!

kubectl exec commands are not recorded in pod's bash history

Is there a way for the kubectl exec commands to be recorded by history command in the pod?
I want to collect all the commands executed on the container by monitoring history command.
Example:
kubectl exec podname -n namespace -c container -- bash -c "ls" ==> Can this be recorded by the history command?
A couple of things to clarify in order to get the full context of this behavior:
First, kubectl exec is a neat API-based wrapper for docker exec.
This is essential, as it means kubectl uses the Kubernetes API to relay your commands to Docker. The behavior you see, shell-related in this case, is therefore directly linked to how Docker implements command execution within containers, and has little to do with kubectl itself.
The second thing to keep in mind is the command itself: history, which is a shell feature and not a separate binary.
The implementation depends on the shell used, but generally the command history is cached in memory and only written to the history file after the session is closed. Writing can be forced using the shell's history builtins, but the details vary between shell implementations, so this is not a uniform, reliable approach when working with the docker exec command.
Considering all this, and since your goal seems to be monitoring the actions performed in your containers, a better approach would be to use Linux audit to record the commands executed by users.
Not only does this avoid the issues above; it also writes its logs to a file, which lets your Kubernetes logging strategy pick them up and export them to whatever you use as a log sink, making later inspection easier.
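As a sketch of the Linux audit approach, a node-level rule can log every execve(); the file path and the key name below are assumptions, and auditd must be installed on the node:

```
# /etc/audit/rules.d/exec.rules -- log every command execution on the node
-a always,exit -F arch=b64 -S execve -k container-exec
```

After loading the rules (augenrules --load), recorded commands can be inspected with ausearch -k container-exec -i, or shipped off the node by your logging stack.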

How do I run a multi-step cron job, but still make it able to execute a single step manually?

I have a data pipeline in Go with steps A, B and C. Currently those are three binaries. They share the same database but write to different tables. When developing locally, I have been just running ./a && ./b && ./c. I'm looking to deploy this pipeline to our Kubernetes cluster.
I want A -> B -> C to run once a day, but sometimes (for debugging etc.) I may just want to manually run A or B or C in isolation.
Is there a simple way of achieving this in Kubernetes?
I haven't found many resources on this, so maybe that demonstrates an issue with my application's design?
Create a docker image that holds all three binaries and a wrapper script to run all three.
Then deploy a Kubernetes CronJob that runs all three sequentially (using the wrapper script as entrypoint/command), with the appropriate schedule.
For debugging you can then just run the same image manually:
kubectl -n XXX run debug -it --rm --image=<image> -- /bin/sh
$ ./b
...
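The wrapper can be written so that it serves both the CronJob and manual debugging. A minimal sketch, where the /app install path and the step names a, b, c are assumptions:

```shell
#!/bin/sh
# run_pipeline.sh -- hypothetical wrapper baked into the image next to a, b, c.
set -eu

BIN_DIR="${BIN_DIR:-/app}"   # where the three binaries live (assumption)

# Run the given steps in order; set -e aborts on the first failure,
# matching the ./a && ./b && ./c behavior from local development.
run_steps() {
    for step in "$@"; do
        "$BIN_DIR/$step"
    done
}

# The CronJob entrypoint would call:   run_steps a b c
# Manual debugging in a pod:           run_steps b
```

The CronJob's command points at this script for the full pipeline, while the kubectl run debug session can invoke a single step in isolation.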

Python Script Not Starting Bash Script when running as service

I have a Python script that gets started automatically as a service (activated with systemd). In this Python script, I call a bash script using subprocess.call(script_file, shell=True).
When I call the Python script manually ($ python my_python_script.py), everything works perfectly. However, the automatically started program does not execute the bash script (the Python script itself does run; I checked this by making it edit a text file, which it does).
I think I gave everyone read-write permissions to the bash scripts. Does anyone have ideas as to what I'm doing wrong?
Addendum: I want to write a small script that sends me my public IP address via telegram. The service file looks like this:
[Unit]
Description=IPsender
After=networking.service
[Service]
Type=simple
User=root
WorkingDirectory=/home/pi/projects/tg_bot
ExecStart=/home/pi/miniconda3/bin/python /home/pi/projects/tg_bot/ip_sender_tg.py
Restart=always
[Install]
WantedBy=multi-user.target
Protawn, welcome to the Unix and Linux Stack Exchange.
Why scripts work differently under systemd is a common question. Check out this answer to the general question elsewhere on the site.
Without the source code for your Python and Bash scripts it's hard to guess which difference you have encountered.
My personal guess is that your bash script calls some other binaries without full paths, and those binaries are found on your shell's $PATH but not on systemd's default path.
Add set -x to the top of your bash script so that every action is logged to standard output, which is captured in the systemd journal. Then, after it fails, use journalctl -u your-service-name to view the logs for your service and find the last command that bash executed successfully. Also consider adding set -e to the bash script so that it stops at the first error.
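Putting that advice together, the top of the called bash script might look like this; the exported PATH value is just an example of a known-good path, not a requirement:

```shell
#!/bin/bash
# Trace every command to stdout (which ends up in the systemd journal)
# and stop at the first error:
set -ex

# systemd starts services with a minimal PATH, so either export an
# explicit one ...
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# ... or call every external binary by absolute path:
/bin/echo "telegram send would go here"
```

With this header in place, journalctl -u IPsender shows each command as it runs, so the first missing binary is easy to spot.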
Despite the two "off-topic" close votes on this question, why things work differently under systemd is on topic for this Stack Exchange site.

How to ssh into a shell and run a script and leave myself at the prompt

I am using Elastic MapReduce from Amazon. I am sshing into the hadoop master node and executing a script like:
$EMR_BIN/elastic-mapreduce --jobflow $JOBFLOW --ssh < hivescript.sh. It sshes me into the master node and runs the hive script. The hive script contains the following lines:
hive
add jar joda-time-1.6.jar;
add jar EmrHiveUtils-1.2.jar;
and some commands to create hive tables. The script runs fine and creates the hive tables and everything else, but then returns to the prompt from which I ran the script. How do I stay sshed into the hadoop master node, at the hive prompt?
Consider using Expect, then you could do something along these lines and interact at the end:
/usr/bin/expect <<EOF
spawn ssh ... YourHost
expect "password"
send "password\n"
send javastuff
interact
EOF
These are the most common answers I've seen (with the drawbacks I ran into with them):
Use expect
This is probably the most well-rounded solution for most people.
However, I cannot control whether expect is installed in my target environments.
Just to try this out anyway, I put together a simple expect script to ssh to a remote machine, send a simple command, and turn control over to the user. There was a long delay before the prompt showed up, and after fiddling with it with little success I decided to move on for the time being.
Eventually I came back to this as the final solution after realizing I had violated one of the 3 virtues of a good programmer -- false impatience.
Use screen / tmux to start the shell, then inject commands from an external process.
This works OK, but if the terminal window dies it leaves a screen/tmux instance hanging around. I could certainly try to come up with a way to re-attach to prior instances or kill them; screen (and probably tmux) can be made to die instead of auto-detaching, but I didn't fiddle with it.
If using gnome-terminal, use its -x or --command flag (I'm guessing xterm and others have similar options)
I'll go into more detail on problems I had with this on #4
Make a bash script with #!/bin/bash --init-file as the shebang; this causes your script to execute and then leaves an interactive shell running afterward.
This and #3 had issues with programs that require user interaction before the shell is presented. Some programs (like ssh) worked fine; others (telnet, vxsim) presented a prompt, but no text was passed along to the program, only control characters like ^C.
Do something like this: xterm -e 'commands; here; exec bash'. This will cause it to create an interactive shell after your commands execute.
This is fine as long as the user doesn't attempt to interrupt with ^C before the last command executes.
Currently, the only thing I've found that gives me the behavior I need is to use cmdtool from the OpenWin project.
/usr/openwin/bin/cmdtool -I 'commands; here'
# or
/usr/openwin/bin/cmdtool -I 'commands; here' /bin/bash --norc
The resulting terminal injects the list of commands passed with -I into the program executed (no parameters means the default shell), so those commands show up in that shell's history.
What I don't like is that the terminal cmdtool provides feels so clunky ... but alas.