How do I run a multi-step cron job, but still make it able to execute a single step manually? - go

I have a data pipeline in Go with steps A, B and C. Currently those are three binaries. They share the same database but write to different tables. When developing locally, I have been just running ./a && ./b && ./c. I'm looking to deploy this pipeline to our Kubernetes cluster.
I want A -> B -> C to run once a day, but sometimes (for debugging etc.) I may just want to manually run A or B or C in isolation.
Is there a simple way of achieving this in Kubernetes?
I haven't found many resources on this, so maybe that demonstrates an issue with my application's design?

Create a docker image that holds all three binaries and a wrapper script to run all three.
Then deploy a Kubernetes CronJob that runs all three sequentially (using the wrapper script as entrypoint/command), with the appropriate schedule.
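For example, a minimal wrapper script and CronJob manifest could look like the sketch below (image name, file paths, and schedule are placeholders, not taken from your setup):

#!/bin/sh
# run_pipeline.sh: run the three steps in order, aborting on the first failure
set -e
/app/a
/app/b
/app/c

apiVersion: batch/v1
kind: CronJob
metadata:
  name: pipeline
spec:
  schedule: "0 3 * * *"        # once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: pipeline
            image: <image>
            command: ["/app/run_pipeline.sh"]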
For debugging you can then just run the same image manually:
kubectl -n XXX run debug -it --rm --image=<image> -- /bin/sh
$ ./b
...

Related

How to guess the terminal type that a pod is running in k8s

I've been trying to automate a script to programmatically execute some commands in a set of pods in different namespaces for some internal debugging process.
But I've faced a problem: not all of my pods are using the same shell; some of them use /bin/bash, others /bin/sh, and so on.
I need the script to be able to guess the right shell, since this tool will run across several namespaces and different sets of pods; all the filtering, execution, and data retrieval is done by a custom script that I'm working on.
./custom_script.sh <namespace-name>
For now, I've been checking the exit status $? for each shell. For example, I run kubectl exec ... -- /bin/bash and, if it fails, I make another call with a different shell, kubectl exec .. -- /bin/sh, but this can get ugly really fast at the code level.
Is there a way to pass a set of possible shells to try via the CLI, i.e.:
kubectl exec .. -- bash,sh,zsh
I think there should be a better way, but I've not found it; any advice to improve this will be highly appreciated!
TIA!
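A minimal sketch of the fallback idea described in the question, looping over candidate shells instead of chaining $? checks ($namespace and $pod are placeholders):

for shell in bash sh zsh; do
  # try to start each candidate shell; the first one that succeeds wins
  if kubectl exec -n "$namespace" "$pod" -- "$shell" -c 'true' >/dev/null 2>&1; then
    echo "$pod: using $shell"
    break
  fi
done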

kubectl exec commands are not recorded in pod's bash history

Is there a way for the kubectl exec commands to be recorded by history command in the pod?
I want to collect all the commands executed on the container by monitoring history command.
Example:
kubectl exec podname -n namespace -c container -- bash -c "ls" ==> Can this be recorded by the history command?
A couple of things to clarify in order to get the full context of this behavior:
First, kubectl exec is a neat API-based wrapper for docker exec.
This is essential, as it means kubectl uses the Kubernetes API to relay your commands to Docker: whatever behavior you observe, shell-related in this case, is directly linked to how Docker implements command execution within containers, and has little to do with kubectl itself.
The second thing to keep in mind is the command itself: history is a shell feature, not a separate binary.
The implementation depends on the shell, but generally the command history is cached in memory and only written to the history file once the session is closed. Writing can be "forced" using the shell's history features, but the details vary between shell implementations, so this is not a uniform, reliable approach when working with docker exec.
Considering these things and that your request seems to be aiming to monitor actions performed in your containers, maybe a better approach would be to use Linux audit to record commands executed by users.
Not only does this avoid relying on the points above, it also writes logs to a file, which allows you to use your Kubernetes logging strategy to pick them up and export them to whatever you're using as a log sink, facilitating later inspection.
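As a rough sketch, assuming auditd is available on the node, a rule like the following would record every execve call (including those triggered via docker exec); the key name here is arbitrary:

auditctl -a always,exit -F arch=b64 -S execve -k container-cmds
ausearch -k container-cmds    # inspect the recorded commands later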

Run two executables in docker

I would like my container to launch two processes when it is run:
the generic "this process is running to keep the container awake" process (keep_awake.sh), and
node app.js
Is there any way to have both of these launch at the start, based on the Dockerfile?
I'm thinking of some sort of abuse of bash, but don't know specifically which one yet.
Further complicating things, keep_awake.sh is in a directory different than app.js.
You should never need an artificial “keep this container alive” process. This is doubly true in the situation you’re describing, where you have a single long-running application process.
Best practice is for a Docker container to run a single process, and run it as a foreground job. If that process ever exits, the container will exit too — and you want this. (It’d be kind of embarrassing for your Node app to die but for you to not notice, because Docker sees that tail -f /dev/null is still up and running.)
In short, end your Dockerfile with
CMD ["node", "app.js"]
and ignore the second do-nothing process.
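A minimal Dockerfile along those lines might look like this (base image and paths are assumptions, not from your project):

FROM node:18
WORKDIR /usr/src/app
COPY . .
CMD ["node", "app.js"]    # the single foreground process; no keep-alive needed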

How can I run a Shell when booting up?

I am configuring an app at work which is on an Amazon Web Services server.
To get the app running you have to run a shell script called "Start.sh".
I want this to be done automatically after the server boots up.
I have already tried with the following bash in the User Data section (Which runs on boot)
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help
Scripts provided through User Data are only executed the first time the instance is started. (Officially, they are executed once per instance ID.) This is done because the normal use case is to install software, which should only happen once.
If you wish something to run on every boot, you could probably use the cloud-init once-per-boot feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
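For example, assuming the standard cloud-init layout on Amazon Linux, you could save a small script such as the following as /var/lib/cloud/scripts/per-boot/start_app.sh (the filename is arbitrary) and mark it executable with chmod +x:

#!/bin/bash
# runs on every boot via cloud-init's per-boot mechanism
cd /home/ec2-user/app_name/
sh Start.sh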

start multiple docker containers with a single command line shell script (without docker-compose)

I've got 3 containers that will run on a single server; we'll call them A, B, and C.
Each container has a script on the host with the commands to start it:
A_start.sh
B_start.sh
C_start.sh
I'm trying to create a swarm script to start them all, but not sure how.
ABC_start.sh
UPDATE:
this seems to work, with the first script's output going to the terminal; Ctrl+C exits out of them all.
./A_start.sh & ./B_start.sh & ./C_start.sh
Swarm will not help you start them at all; it is used to distribute work amongst the Docker machines that are part of a cluster.
There is no good reason not to use docker-compose for this use case: its main purpose is to link containers properly and bring them up, so your collection of scripts could end up being a single docker-compose up command.
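A minimal docker-compose.yml along those lines (the image names are placeholders for whatever your three start scripts run):

version: "3"
services:
  a:
    image: a-image
  b:
    image: b-image
  c:
    image: c-image

With that in place, docker-compose up -d replaces all three scripts with a single command.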
In bash,
you can do this:
nohup A_start.sh &
nohup B_start.sh &
nohup C_start.sh &
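If you want this wrapped in the single ABC_start.sh from the question, one sketch (assuming the scripts keep their containers in the foreground) is:

#!/bin/bash
./A_start.sh &
./B_start.sh &
./C_start.sh &
wait    # block until all three exit; Ctrl+C reaches the whole process group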
