Executing two programs concurrently - bash

I have two programs I need to start in a specific order:
./server
./client
I want to write a shell script that runs those two programs inside a for loop (the purpose of the script is benchmarking the application). For that reason the client call needs to block while the server call runs asynchronously, and I also need to explicitly kill the server after the client has returned (so the server can be started fresh in the next iteration).
What is the easiest way to achieve this behavior?

server &
PID=$!
client
kill $PID

Perhaps start the server up and store its PID from $!.
#!/bin/sh
./server & storepid="$!"
./client
kill "$storepid"

Terminating multiple background processes in bash?

I'm trying to dump trade-data off binance for multiple symbol-pairs, e.g. doge/btc, ada/btc, etc.
I can background, thus:
wscat -c wss://stream.binance.com:9443/ws/dogebtc#trade > doge.txt &
wscat -c wss://stream.binance.com:9443/ws/adabtc#trade > ada.txt &
But how to terminate them all?
Is there some smart way, like terminating the parent process?
I think the right answer depends a lot on the way your current system is implemented / used.
At the most basic scripting level, you could simply run kill against all wscat processes (for example with pkill -f wscat); but that may be too indiscriminate, depending on the details.
Slightly better: in a Bash script, directly after creating each of these processes you have access to its PID as $!. You could stash those PIDs in a variable or a file and later use them to kill each individual process.
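For example, a minimal sketch of that PID-stashing approach, reusing the commands from the question (the array name pids is my own choice):
#!/bin/bash
# Hypothetical sketch: remember each background PID, then terminate them all at once.
pids=()
wscat -c wss://stream.binance.com:9443/ws/dogebtc#trade > doge.txt &
pids+=("$!")
wscat -c wss://stream.binance.com:9443/ws/adabtc#trade > ada.txt &
pids+=("$!")
# ... when you are done collecting data:
kill "${pids[@]}"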
If you're aiming for something slicker than that, you'd likely want to look into things like:
the SIGCHLD signal, becoming a subreaper (prctl PR_SET_CHILD_SUBREAPER), running as PID 1 in a PID-namespace (unshare --pid ...), things like that.

How do we avoid Cron jobs interruption?

I am new to using cron jobs on Google Cloud: I was wondering if it is possible to launch a job on an instance and have it run continuously, without interruption, even after I shut down my local machine (laptop). Is it possible to have a job running without any ssh connection?
Cron jobs are a possibility, but they are not meant for your scenario; they are for when you want to run a command at a certain frequency over time.
A Bash builtin command that better suits your needs is disown. First, run your process/script in the background (using &, or stopping it with ^Z and then restarting it with bg):
$ long_operation_command &
[1] 1156
Note that at this point the process is still attached to the session, and if the session is closed the process will be killed.
While the process is still attached to the session, you can check the jobs running in the background:
$ jobs
[1]+ Running long_operation_command
Then you can run disown in order to detach the process from the session:
$ disown
You can confirm this by logging in again and checking the output of your script or command, or by checking with top that the process is still running.
Also check the difference between nohup foo, foo & and foo & disown, because it could be interesting.
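To recap, a minimal sketch of the whole sequence (the log file name output.log is an assumption):
$ long_operation_command > output.log 2>&1 &   # start in the background, keep the output
[1] 1156
$ disown %1                                    # detach job 1 from this session
$ exit                                         # logging out no longer kills the process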
P.S.
The direct answer to your question is yes: cron jobs keep running even if you shut down your laptop or close the session.

Deploy a TCP Server written in Ruby

I've written a TCP Server in ruby running on port 2000 with event machine.
Right now, what I do is ssh to my server and run the command ruby lib/tcp_server.rb to turn on the server, but it shuts down when I log out.
I've tried nohup and using &, but nothing seems to keep the server up for long.
So my question is, how do I deploy this server on port 2000 and keep it running, like how we deploy Rails to nginx.
It's not a webserver, but a TCP server for a connected device, if that helps.
Thanks!
Solution 1: tmux or screen
This is the simplest approach: create a tmux or screen session, then start your server inside that session.
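For example, a minimal sketch assuming tmux is installed on the server (the session name tcp_server is an assumption):
tmux new-session -d -s tcp_server 'ruby lib/tcp_server.rb'   # start the server inside a detached session
tmux attach -t tcp_server                                     # reattach later to check on it
# detach again with Ctrl-b d; the server keeps running after you log out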
Solution 2: nohup
nohup ruby lib/tcp_server.rb > stdout.log 2> stderr.log &
You've tried nohup and &, so I suppose you already know how to do this.
Solution 3: daemonize
You can detach from the shell and daemonize the process by forking
it twice, setting the session ID and changing the current working directory.
def daemonize
  exit if fork        # parent exits; the first child keeps running
  Process.setsid      # the child becomes a session leader, detached from the terminal
  exit if fork        # the session leader exits; the grandchild can never reacquire a terminal
  Dir.chdir '/'       # stop holding the original working directory open
end
With this approach, you will have to redirect stdout and stderr to keep logs.
Another way to daemonize is to use gems like daemons.
update:
To restart the process automatically after being killed, you need a process manager like god or pm2.
To start the process automatically after booting, you need to compose an init script, but what it looks like depends on your service management system and operating system. One of the most well-known is System V. If you are using Ubuntu, you might want to take a look at Upstart or systemd.
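If you end up on a systemd-based system, a minimal sketch might look like the following; the unit name, paths, and Restart policy are assumptions, not part of the original setup:
# Hypothetical unit file; adjust WorkingDirectory and ExecStart to your deployment.
$ sudo tee /etc/systemd/system/tcp_server.service <<'EOF'
[Unit]
Description=Ruby TCP server on port 2000
After=network.target

[Service]
WorkingDirectory=/path/to/app
ExecStart=/usr/bin/ruby lib/tcp_server.rb
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl enable --now tcp_server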

Bash – How should I idle until I get a signal?

I have a script for launchd to run that starts a server, then tells it to exit gracefully when launchd kills it off (which should be at shutdown). My question: what is the appropriate, idiomatic way to tell the script to idle until it gets the signal? Should I just use a while-true-sleep-1 loop, or is there a better way to do this?
#!/bin/bash
cd "`dirname "$0"`"
trap "./serverctl stop" TERM
./serverctl start
# wait to receive TERM signal.
You can simply use "sleep infinity". If you want to perform more actions on shutdown and don't want to create a function for that, an alternative could be:
#!/bin/bash
sleep infinity & PID=$!
trap "kill $PID" INT TERM
echo starting
# commands to start your services go here
wait
# commands to shutdown your services go here
echo exited
Another alternative to "sleep infinity" (it seems busybox doesn't support it, for example) could be "tail -fn0 $0".
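For instance, a minimal sketch of the same pattern with that substitution, reusing the serverctl calls from the question (assuming a shell whose sleep does not accept infinity):
#!/bin/sh
# Hypothetical variant: tail -f on the script itself blocks forever, like sleep infinity.
tail -fn0 "$0" & PID=$!
trap "kill $PID" INT TERM
./serverctl start
wait
./serverctl stop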
A plain wait is significantly less resource-intensive than a polling loop, even one with a sleep in it.
Why would you like to keep your script running? Is there any reason? If you don't do anything after the signal arrives, then I do not see a reason for it.
When you get TERM from shutdown, your serverctl script and the server executable (if there is one) also get TERM at the same time.
To do this properly by design, you would install your serverctl script as an rc script and let init (start and) stop it. I have described elsewhere how to set up a server process that is not originally designed to work as a service.

Are forked processes (bash) subject to server timeout disconnection?

If I am working on a remote server (ssh) and I fork a process using bash & operator, will that process be killed if I am booted off the server due to server time-out? I'm pretty sure the answer is yes, but would love to know if there are any juicy details.
It might depend, but generally, when you log out with your "connection program" (e.g. ssh in your case, although it could have been rlogin or telnet as well), the shell and its children (I think?) will receive a SIGHUP (hangup) signal, which will make them terminate when you log out. There are two common ways to avoid this: running the program you want to keep alive through nohup, or using screen. If the server has some other time limit on running processes, you will have to look into that.
bash will send a HUP signal to all background jobs. You can stop this from happening by starting the job with nohup (which should have a man page). If it's too late for nohup, you can use disown to stop the shell from sending a HUP to a job. disown is a builtin, so help disown will tell you everything you need to know.
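A minimal sketch of both options, using a hypothetical command long_job:
$ nohup long_job > long_job.log 2>&1 &   # start a job that is immune to SIGHUP
[1] 3720
$ long_job &                             # or, for a job started without nohup...
[2] 3721
$ disown -h %2                           # ...tell bash not to send it SIGHUP on logout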

Resources