kubectl exec commands are not recorded in pod's bash history

Is there a way for commands run via kubectl exec to be recorded by the history command in the pod?
I want to collect all the commands executed in the container by monitoring the history command.
Example:
kubectl exec podname -n namespace -c container -- bash -c "ls" ==> Can this be recorded by the history command?

A couple of things to clarify in order to get the full context of this behavior:
First, kubectl exec is a neat API-based wrapper for docker exec.
This is essential because it means kubectl uses the Kubernetes API to relay your commands to Docker. Whatever shell-related behavior you see is therefore determined by how Docker implements command execution inside containers, and has little to do with kubectl itself.
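For illustration, the rough Docker-level equivalent of the example above, run directly on the node hosting the container (the container name is a placeholder), would be:
docker exec containername bash -c "ls"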
The second thing to keep in mind is the command itself: history, which is a shell feature (a built-in), not a separate binary.
Its implementation depends on the shell used, but generally the shell caches the command history in memory and only writes it to the history file when the session is closed. Writing can be "forced" using history features, but the details vary between shell implementations, so this is not a uniform, reliable approach when working through the docker exec command.
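For illustration only, an untested sketch of what "forcing" this might look like with bash (it assumes bash is the pod's shell, that HISTFILE is writable, and that enabling history in a non-interactive shell behaves as hoped):
kubectl exec podname -n namespace -c container -- bash -c 'export HISTFILE=/root/.bash_history; set -o history; ls; history -a'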
Considering these points, and since your goal seems to be monitoring the actions performed in your containers, a better approach might be to use Linux audit to record the commands executed by users.
Not only does this avoid the issues above, it also writes its logs to a file, which allows your Kubernetes logging strategy to pick them up and export them to whatever you use as a log sink, making later inspection much easier.
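As a sketch of that direction (assuming auditd/auditctl are available where the workloads run; the key name exec-monitor is arbitrary), a rule like this logs every execve() call, which covers commands started through kubectl exec:
auditctl -a always,exit -F arch=b64 -S execve -k exec-monitor
# later, inspect what was recorded
ausearch -k exec-monitor -i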

Related

How to guess the terminal type that a pod is running in k8s

I've been trying to automate a script to programmatically execute some commands in a set of pods across different namespaces for an internal debugging process.
But I've hit a problem: not all of my pods use the same shell; some of them have /bin/bash, others only /bin/sh, etc.
I need to be able to guess the type of shell, since this tool will run across several namespaces and different sets of pods, and all the filtering, execution, and data retrieval is done by a custom script that I'm working on.
./custom_script.sh <namespace-name>
For now, I've been checking the exit status $? for each shell. For example, I run kubectl exec ... -- /bin/bash, and if it fails I make another call with a different shell, kubectl exec ... -- /bin/sh, but this can get ugly really fast at the code level.
Is there a way to pass a set of possible shells to try via the CLI, i.e.:
kubectl exec .. - bash,sh,zsh
I think there should be a better way, but I haven't found how to do it; any advice to improve this will be highly appreciated!
TIA!
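A sketch of the probe-then-execute fallback the question describes (POD, NS, and the candidate shell list are placeholders):
for sh in /bin/bash /bin/sh /bin/ash; do
  # probe each candidate shell once; remember the first one the pod actually has
  if kubectl exec "$POD" -n "$NS" -- "$sh" -c 'true' >/dev/null 2>&1; then
    found_shell="$sh"
    break
  fi
done
# run the real debugging commands with whichever shell was found
kubectl exec "$POD" -n "$NS" -- "$found_shell" -c 'your debugging commands here'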

Run two executables in docker

I would like my container to launch two processes when it is run;
the generic "this process is runnning to keep the container awake" process (keep_awake.sh), and
node app.js
Is there any way to have both of these launch at the start, based on the Dockerfile?
I'm thinking of some sort of abuse of bash, but don't know specifically which one yet.
Further complicating things, keep_awake.sh is in a directory different than app.js.
You should never need an artificial “keep this container alive” process. This is doubly true in the situation you’re describing, where you have a single long-running application process.
Best practice is for a Docker container to run a single process, and run it as a foreground job. If that process ever exits, the container will exit too — and you want this. (It’d be kind of embarrassing for your Node app to die but for you to not notice, because Docker sees that tail -f /dev/null is still up and running.)
In short, end your Dockerfile with
CMD ["node", "app.js"]
and ignore the second do-nothing process.
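For context, a minimal Dockerfile sketch along these lines (the base image and install steps here are assumptions, not taken from the question):
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "app.js"]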

Chain dependent bash commands

I'm trying to chain together two commands:
The first in which I start up postgres
The second in which I run a command meant for postgres (a benchmark, in this case)
As far as I know, the '||', ';', and '&/&&' operators all require that the first command terminate or exit somehow. This isn't the case with a server that's been started, so I'm not sure how to proceed. I can't run the two completely in parallel, as the server has to be started.
Thanks for the help!
I would recommend something along the lines of the following in a single bash script:
Start the Postgres server via a command like /etc/init.d/postgresql start or similar
Sleep for a period of time to give the server time to startup; perhaps a minute or two
Then run a psql command that connects to the server to test its up-ness
Tie that command together with your benchmark via &&, so the benchmark runs only if the psql command succeeds (depending on the exact return codes from psql, you may need to inspect the command's output instead of its return code). The command run via psql is best kept to a simple query that connects to the server and returns a simple value that can be cross-checked.
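Put together, a sketch of such a script (the init-script path, database user, and benchmark command are assumptions):
#!/bin/bash
# start the Postgres server (path varies by distribution; systemd setups would use systemctl)
/etc/init.d/postgresql start
# give the server time to start up
sleep 60
# run a trivial query to test up-ness, and only run the benchmark if it succeeds
psql -U postgres -c 'SELECT 1;' && ./run_benchmark.sh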
Edit in response to comment from OP:
It depends on what you want to benchmark. If you just want to benchmark a command after the server has started, and don't want to restart the server every time, then I would tweak the code to run the psql up-ness test in a separate block, starting the server if not up, and then afterward, run the benchmark test command unconditionally.
If you do want to start the server up fresh each time (to test cold-start performance, or similar), then I would add another command after the benchmarked command to shutdown the server, and then sleep, re-running the test command to check for up-ness (where this time no up-ness is expected).
Otherwise, you should be able to run the script multiple times.
A slight aside: if your test is destructive (that is, it writes to the DB), you may want to dump a "clean" copy of the DB -- the DB in its pre-test state -- and then, on each run of the script, drop and re-create a fresh DB (with a different name from the original) from that dump.
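For example (a sketch; the database names, user, and file path are placeholders):
# one-time: capture the pre-test ("clean") state
pg_dump -U postgres mydb > clean_copy.sql
# per run: drop and rebuild a scratch database from that dump, then benchmark against it
dropdb -U postgres --if-exists mydb_bench
createdb -U postgres mydb_bench
psql -U postgres mydb_bench < clean_copy.sql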

How to ssh into a shell and run a script and leave myself at the prompt

I am using Elastic MapReduce from Amazon. I am sshing into the hadoop master node and executing a script like:
$EMR_BIN/elastic-mapreduce --jobflow $JOBFLOW --ssh < hivescript.sh
It sshes me into the master node and runs the hive script. The hive script contains the following lines:
hive
add jar joda-time-1.6.jar;
add jar EmrHiveUtils-1.2.jar;
and some commands to create hive tables. The script runs fine and creates the hive tables and everything else, but then returns to the prompt from where I ran the script. How do I stay sshed into the hadoop master node, at the hive prompt?
Consider using Expect; then you could do something along these lines and interact at the end:
/usr/bin/expect <<EOF
spawn ssh ... YourHost
expect "password"
send "password\n"
send "javastuff\n"
interact
EOF
These are the most common answers I've seen (with the drawbacks I ran into with them):
1. Use expect.
This is probably the most well-rounded solution for most people, but I cannot control whether expect is installed in my target environments.
Just to try this out anyway, I put together a simple expect script to ssh to a remote machine, send a simple command, and turn control over to the user. There was a long delay before the prompt showed up, and after fiddling with it with little success I decided to move on for the time being.
Eventually I came back to this as the final solution, after realizing I had violated one of the 3 virtues of a good programmer -- false impatience.
2. Use screen / tmux to start the shell, then inject commands from an external process.
This works OK, but if the terminal window dies it leaves a screen/tmux instance hanging around. I could certainly try to come up with a way to re-attach to prior instances or kill them; screen (and probably tmux) can be made to die instead of auto-detaching, but I didn't fiddle with it.
3. If using gnome-terminal, use its -x or --command flag (I'm guessing xterm and others have similar options).
I'll go into more detail on the problems I had with this under #4.
4. Make a bash script with #!/bin/bash --init-file as the shebang; this will cause your script to execute, then leave an interactive shell running afterward (a sketch follows this list).
This and #3 had issues with programs that require user interaction before the shell is presented. With some programs (like ssh) it worked fine; with others (telnet, vxsim) a prompt appeared but no text was passed along to the program, only ctrl characters like ^C.
5. Do something like this: xterm -e 'commands; here; exec bash'. This will create an interactive shell after your commands execute.
This is fine as long as the user doesn't attempt to interrupt with ^C before the last command executes.
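A sketch of approach #4 (the commands are placeholders): the shebang's single argument makes bash read the script itself as its init file, so it executes the commands and then leaves you at an interactive prompt.
#!/bin/bash --init-file
# these commands run first, then the shell stays interactive
echo "environment prepared"
cd /tmp
export SOME_VAR=value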
Currently, the only thing I've found that gives me the behavior I need is to use cmdtool from the OpenWin project.
/usr/openwin/bin/cmdtool -I 'commands; here'
# or
/usr/openwin/bin/cmdtool -I 'commands; here' /bin/bash --norc
The resulting terminal injects the list of commands passed with -I into the program it executes (no arguments means the default shell), so those commands show up in that shell's history.
What I don't like is that the terminal cmdtool provides feels so clunky ... but alas.

How can I run two commands at exactly the same time on two different unix servers?

My requirement is that I have to reboot two servers at the same time (exactly the same timestamp). So my plan is to create two shell scripts that will ssh to the servers and trigger the reboot. My doubts are:
How can I run the same shell script on two servers at the same time (same timestamp)?
Even if I run Script1 & Script2, this will not ensure that the reboot is issued at the same time; there will be a minor time difference.
If you are doing it remotely, you could use a terminal emulator with broadcast input, so that what you type is sent to all sessions of the open terminal. On Linux tmux is one such emulator.
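For example, with tmux you could open one pane per server, ssh into each, and then broadcast your keystrokes to every pane (a sketch; setting up the panes is left to you):
# inside a tmux session with one ssh pane per server
tmux setw synchronize-panes on
# type the reboot command once; it is sent to all panes
tmux setw synchronize-panes off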
Another easy way is to write a shell script on each machine that waits for the same timestamp and then reboots.
First, make sure both machines' clocks are aligned (use a good implementation of http://en.wikipedia.org/wiki/Network_Time_Protocol and your system's related utilities).
Then,
If you need this just one time: on each server do a
echo /path/to/your/script | at ....
(.... being when you want it. See man at).
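For example, a one-time schedule to set up on each server (the time and command here are placeholders; see man at for the accepted time formats):
echo /sbin/reboot | at 03:00 tomorrow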
If you need to do it several times: use crontab instead of at
(see man cron and man crontab)
