How to pass information to a background process in bash

I have created a bash script that runs in the background. Its PID is stored in a file, and I can use kill to send predefined signals to the process.
From time to time, however, I'd like to pass information to the process manually. Ideally I would like to be able to pass a string or an array of information, captured through trap, so that the forever loop inside the bash script can process it. Is there an easy way to pass information into a background process?
Thanks

You can create a fifo, have the main process write to it, and have the child read from it:
mkfifo link                # create the named pipe
run_sub < link &           # background process reads from the fifo
generate_output > link     # main process writes into the fifo
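Applied to the question's forever loop, that might look like the following sketch (the fifo path /tmp/myscript.fifo is an arbitrary choice):
mkfifo /tmp/myscript.fifo

# forever loop in the background script: each read blocks until
# a writer opens the fifo and sends a line (one line per open)
while true; do
    if read -r msg < /tmp/myscript.fifo; then
        echo "processing: $msg"
    fi
done &

# from any other shell, pass information in:
echo "new message" > /tmp/myscript.fifo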

Alternatively, have it listen on a socket and implement a protocol to achieve your communication aims, though that is probably a bit much for bash.
Or have it re-read a particular file on receipt of a particular signal. For example, it is common for programs to re-read their configuration files on receipt of SIGHUP.
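A minimal sketch of that signal-plus-file approach, assuming USR1 as the signal and /tmp/cmd.txt as the hand-off file (both arbitrary choices):
#!/bin/bash
got_signal=0
trap 'got_signal=1' USR1

# forever loop: re-read the hand-off file whenever USR1 arrives
while true; do
    if (( got_signal )); then
        got_signal=0
        msg=$(< /tmp/cmd.txt)
        echo "received: $msg"
    fi
    sleep 1
done
The sender then writes the file and signals the stored PID, e.g. echo hello > /tmp/cmd.txt && kill -USR1 $(cat pidfile), where pidfile is wherever the script stored its PID.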

Related

Limitations on file append when used in a multi-process environment

My process creates a log file and appends a new line to the end of the file by opening it in append mode ("a"), e.g.:
fopen("log.txt", "a");
The order of the writes is not critical, but I need to ensure that fopen always succeeds. My question is: can the call above be executed from multiple processes at the same time on Windows, Linux, and macOS without any race condition?
If not, what is the most common and easy way to ensure I can write to the log file? There is file locking, but a lock file (e.g. log.txt.lock) is also possible. Could anyone share some insights or resources that go into more detail?
If you do not use any synchronization between processes, you will very likely hit moments when several processes try to write to the file at once, and the best you can expect is an interleaved mess of strings.
To synchronize work across several processes (with Python's multiprocessing module), use a Lock. It will prevent several processes from doing the same work simultaneously.
It will look something like this:
import multiprocessing

# create the lock in the main process and "send" it to child processes
lock = multiprocessing.Lock()

# ...

# in the child process: only one process at a time enters this block
with lock:
    do_some_work()
If you need a more detailed example, feel free to ask.
You can also check the example in the official docs.

Automatically background interactive process using job control

I have a command line program that receives interactive input.
I want to have the program backgrounded after it has read the input.
After it has read the input, it execs another program. I want to be able to control that program using shell job control.
I am lazy, so I don't want to type Ctrl-Z and bg to achieve that.
I am in control of that program (I wrote it and can change it), and it 'knows' when it should be backgrounded.
I'm sure this is achievable (for example, I guess an expect script could start my program, which could then signal its parent (expect) when it's ready to be backgrounded).
What is the best (simplest, easiest, best-behaving) way to achieve this?
Have you tried the following (assuming bash)?
# Get the necessary input first, then replace this script's process
# with the follow-on program (exec never returns):
exec "/usr/bin/bash" ./AnotherScript.sh
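Note that exec replaces the current process but does not background anything. If the aim is simply to collect the input in the foreground and then run the rest as a background job, a minimal sketch (real_program is a placeholder) looks like this:
#!/bin/bash
# gather the interactive input while still in the foreground
read -r -p "input: " answer

# then hand the work off as a background job
real_program "$answer" &
echo "backgrounded with pid $!"
The caveat is that the job belongs to this wrapper's shell, not to your interactive shell, so fg and bg in your terminal will not see it; that is the gap the expect idea in the question is meant to close.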

clojure: spawn a process asynchronously

With clojure.java.shell/sh it's possible to execute a shell command. After the invoked process has finished, the function returns a map containing its exit code and its stdout and stderr strings.
How can I capture the stdout/stderr of a spawned process from the moment it starts? And how can I terminate the process from within a Clojure program/REPL?
As far as I know it is not possible with clojure.java.shell/sh. You might take a look at Raynes/conch, which provides the features you ask for (getting output right after the process starts, etc.).
You can also DIY with java.lang.ProcessBuilder and java.lang.Process, which give you full access to the process's streams and a destroy method to terminate it.

How to detect that foreground process is waiting for input in UNIX?

I have to create a script (ksh or Perl) that starts a certain number of parallel jobs (other scripts), each of which runs as a foreground process in a separate session. In addition, I start a monitoring job that has to determine whether any of those scripts is expecting input from the operator, and switch to the corresponding session if necessary.
My problem is that I have not found a good way to determine that a process is expecting input. For a background process it's pretty easy: the process state is "stopped", and this can easily be checked with the ps command. For a foreground process this does not work.
So far I have tried attaching to the process with dbx or truss to see if it's hanging on read, but this approach seems too heavyweight.
Could you suggest a better solution? Perl, shell, C, Java, etc. is OK as long as it's standard and does not require installing extra third-party or OS-specific tools.
Thank you.
What you're asking isn't possible, at least not reliably. The process may be using select or another polling method rather than blocking on a read call. You can't know whether it's waiting for operator input or busy doing other work, and in general it could be both (doing work in the background while staying responsive to operator input).
The normal way for a program to signal that it's waiting for operator input is to print a prompt. Thus you should consider a session to be active if it's displayed a prompt since the last time you fed it input.
If your programs don't behave this way, you'll need to find some other program-specific way to know that these processes are waiting for input.
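If your programs do print prompts, here is one crude sketch of that heuristic, assuming each session's output is captured to a per-session log file (the paths and the prompt pattern are illustrative):
# flag sessions whose last line of output looks like a prompt
for log in /tmp/session.*.log; do
    if tail -n 1 "$log" | grep -qE '[$>?:] ?$'; then
        echo "$log appears to be waiting for input"
    fi
done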

2-way communication with background process (I/O)

I have a program that runs in the command line (i.e. $ run program starts up a prompt) and performs mathematical calculations. It has its own prompt that takes text input and responds through standard out/error (or creates a separate X window if needed, but this can be disabled). Sometimes I would like to send it small inputs, and other times I send in a large text file with a series of inputs, one per line. This program takes a lot of resources and also has a long startup time, so it would be best to have only one instance of it running at a time. I could keep the program's prompt open and supply the input that way, or I can send the process a batch of input ending with an exit command (to leave the prompt), which just prints the output. The problem with sending the request with an exit command is that the program must start up each time (slow...). Furthermore, the output of this program is sometimes cryptic, and it would be helpful to filter it in some way (e.g. simplify the output, apply ANSI colors, etc.).
All this makes me want to put some two-way I/O filter (or is that a "pipe"? or "wrapper"?) around the program so that it can run in the background as a single process. I would then communicate with it without having to restart it, while also filtering the output to be more user friendly. I have been looking all over for ideas and I am stumped as to how to accomplish this in some simple, shell-accessible manner.
Some things I have tried: redirecting stdin and stdout to files, but the program hangs (doesn't quit) and only reads the file once, leaving me unable to continue the communication. I think this is because the prompt is waiting for more user input after the EOF. I thought this could be set up as a local server, but I am uncertain how to begin accomplishing that.
I would love to find some simple way to accomplish this. Additionally, if you can think of a way to do it, is there also a way to attach to or detach from the prompt on request? Any help and ideas would be greatly appreciated.
You could create two named pipes (man mkfifo) and redirect the program's input and output through them:
mkfifo fifoin fifoout      # create the two named pipes first
myprog < fifoin > fifoout
Then you could open new terminal windows and do this in one:
cat > fifoin
And this in the other:
cat < fifoout
(Or use tee to save the input/output as well.)
To dump a large input file into the program, use:
cat myfile > fifoin
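One gotcha from the question still applies: when a writer such as cat closes fifoin, myprog sees EOF and may quit or stop reading. A common workaround, sketched here, is to hold the fifo open with a long-lived writer in your shell:
# hold fifoin open on fd 3 so myprog never sees EOF between commands
exec 3> fifoin

# individual senders now come and go freely:
echo "2+2" > fifoin
cat myfile > fifoin

# when you're done, close fd 3 and let myprog see EOF:
exec 3>&-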
