How to Get Child Process IDs Without the jobs Command - shell

Okay, so I'm working with a shell script that needs to be as portable as possible, and I want to make sure I can cleanly tidy up any child processes using a trap command.
Now, on more recent platforms the jobs -p command can be used to get a list of child process ids, suitable for throwing straight into a kill command to tidy things up without any fuss.
However, some environments don't have this. To work around this I'm using a variable into which I throw process IDs, but it's messy, and a typo could result in some or all child processes not being killed when they should be.
So, in the absence of the jobs command, what alternatives are there? Or, put another way, what is the most compatible method to kill all child processes of a script?
To give you an idea of the potential limitations: the most basic system I need to work with has no pgrep, and only a basic version of ps that supports nothing more than the -w flag. It does have access to the special files under /proc/$$/, but I'm not sure what to do with those (do any of them even list child processes?). This has been a big part of the difficulty, as many similar questions list solutions using tools I don't have access to. I just love compatibility issues =)

You can get the PID of the most recently started background child process from the special parameter `$!`:
$!
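A minimal sketch of how that fits with a cleanup trap, assuming a POSIX-ish /bin/sh (the command and variable names below are just placeholders):

#!/bin/sh
# Collect each background child's PID via $! as it is started,
# then kill whatever is still running from an EXIT/INT/TERM trap.

CHILD_PIDS=""

cleanup() {
    for pid in $CHILD_PIDS; do
        # kill -0 only tests whether the process still exists
        kill -0 "$pid" 2>/dev/null && kill "$pid"
    done
}
trap cleanup EXIT INT TERM

some_long_running_command &
CHILD_PIDS="$CHILD_PIDS $!"

another_command &
CHILD_PIDS="$CHILD_PIDS $!"

wait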

Related

Bash Scripts (even trivial ones) stuck when invoked on the terminal

I have a server on which we execute multiple bash scripts to automate tasks (like copying files to other servers, kicking off backups, etc). It has been working for some months, but today it started to get erratic.
What is happening is that the script gets 'stuck' for a while, and after that, it runs with no problem. If I copy and paste the commands one by one on the terminal, it works, so it's not something in the script itself; it seems something is holding up the bash interpreter (if that makes sense).
Another weird behavior is that the same script will run with no issues eventually. However, as we use Jenkins for automation, the scripts are re-created every time a new job starts.
For example, I created a new script, tst.sh, which only contains an echo. If I try to run it directly, it gets stuck for a while. I tried to debug it with bash -xeav but it does not print my script code, which means that it is not reading it. After a while, the script ran, with no changes. However, creating one script, with the same content and a different name, resurfaces the issue.
My hypothesis is that something prevents the script from being read, and it just waits until whatever is blocking it finishes. However, I did not see any process holding the file, which means that may not be the case.
Is there anything else I should try? My knowledge of bash is pretty basic, so I don't know if there is a flag that may help me debug this internally.
I am working on RHEL 8.85, the bash version is GNU bash, version 4.4.20(1)-release (x86_64-redhat-linux-gnu)
UPDATES BASED ON THE COMMENTS
Server resources are OK; there is barely any usage on them.
Hardware for the server also works fine; at least, the ops team has not reached out with any known issue.
A reboot makes the issue disappear; however, it reappears after 5 minutes or so.
The issue does not seem to be related to bash profiles and such.
Issue solved, posting this as an answer so people can find it quicker.
Turns out, as multiple users suggested in the comments (thanks to all!!), the problem was caused by a security monitor, which analyzed each of the scripts that were executed. The team changed some settings on that end to prevent it from happening, and so far it is working.

Julia invoke script on existing REPL from command line

I want to run a Julia script from the Windows command line, but it seems that every time I run > julia code.jl, a new instance of Julia is created and the initiation time (package loading, compiling?) is quite long.
Is there a way for me to skip this initiation time by running the script on the current REPL/Julia instance? (which usually saves me 50% of running time).
I am using Julia 1.0.
Thank you,
You can use include:
julia> include("code.jl")
There are several possible solutions. All of them involve different ways of sending commands to a running Julia session. The first few that come to my mind are:
use sockets as explained in https://docs.julialang.org/en/v1/manual/networking-and-streams/#A-simple-TCP-example-1
set up an HTTP server, e.g. using https://github.com/JuliaWeb/HTTP.jl
use named pipes, as explained in Named pipe does not wait until completion in bash
communicate e.g. through the file system (e.g. make Julia scan some folder for .jl files and if it finds them there they get executed and moved to another folder or deleted) - this is probably simplest to implement correctly
In all the solutions you can send the command to Julia by executing some shell command.
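For instance, a very rough sketch of the named-pipe option from the shell side might look like this (the FIFO path and the Julia one-liner are illustrative assumptions, not a tested setup):

# Create the FIFO once (path is arbitrary):
mkfifo /tmp/julia_jobs

# Start one long-lived Julia session that includes every path written to the FIFO:
julia -e 'while true; for f in eachline("/tmp/julia_jobs"); include(f); end; end' &

# From then on, "running a script" is just writing its path to the FIFO,
# so the already-warmed-up session does the work:
echo "code.jl" > /tmp/julia_jobs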
No matter which approach you prefer, the key challenge is sanitizing the code to handle errors properly (i.e., the situation where you send some command to the Julia session and it crashes, or where you send requests faster than Julia can handle them). This is especially important if you want the Julia server to be detached from the terminal.
As a side note: when using the Distributed module from the stdlib in Julia for multiprocessing, you actually do a very similar thing (but the communication is Julia to Julia), so you can also have a look at how this module is implemented to get a feeling for how it can be done.

Custom shell script to kill uWSGI by pid

I'm trying to write a script which will grab all the text from a file (/tmp/pidfile.txt), which is just a pid number, then store that as a variable (say pidvar) or something and execute the following:
kill -2 pidvar
Seems simple enough, I just don't know how to grab the pid from the .txt file. I have Python installed, if this helps. Trying to make it easier to kill uWSGI; any suggestions on an alternative would be welcome.
Thanks in advance for any help.
The literal answer to your question (using a bash extension to be slightly more efficient) would be
kill -2 "$(</tmp/pidfile.txt)"
...or, to be compatible with POSIX sh but slightly less efficient...
kill -2 "$(cat /tmp/pidfile.txt)"
...but don't do it either of those ways.
pidfiles are prone to race conditions, whereas process-tree-based supervision systems can guarantee that they only ever deliver a signal to the correct process.
runit, daemontools, Upstart, systemd, and many other alternatives are available which will guarantee that there's no risk of sending a signal to the wrong process based on stale data. CentOS is probably the last major operating system that doesn't ship with one of these (though future versions will almost certainly use systemd), but they're available as third-party packages -- and if you want your system to be reliable (detecting unexpected failures and restarting services as soon as they go down, for instance, without having to do it with your own code), you should be using one of them.
For instance, with systemd:
systemctl kill -s SIGINT uwsgi.service
...or, with runit:
sv interrupt uwsgi
...whereas with upstart, you can configure a completely arbitrary restart command to be triggered on initctl reload uwsgi.
For general best-practices documentation on using shell scripts for process management, see the appropriate page on the wooledge.org wiki, maintained by irc.freenode.org's #bash channel.
It is generally easier to ask uwsgi to kill itself. You can do this by using the "master-fifo" option in your config, and then sending a "q" to the fifo. This is described here: http://uwsgi-docs.readthedocs.org/en/latest/MasterFIFO.html.
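For illustration (the FIFO path below is an arbitrary example; see the linked docs for the full list of commands the master FIFO accepts):

# In the uWSGI config (ini-style), point master-fifo at a path of your choice:
#   [uwsgi]
#   master-fifo = /tmp/uwsgi-master.fifo

# A graceful shutdown is then just a matter of writing 'q' to that FIFO:
echo q > /tmp/uwsgi-master.fifo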

run a process from script and kill it from another script using PID

I want to run a process from a script and kill it from another script using its PID. I can think of two ways to do this, but I think you can help me find a more effective one. The first is to store the PID in a temp file and read it from the other script. The other way is to save it as an environment variable in the first script and use it from the other script. I am sure there is a nicer way of doing this. I used killall and pkill to kill the process by its name, but this did not work well. I want to kill it using the PID.
I would suggest using the first method you talked about. Creating a temp .pid file with the information on the process is a very common way to track the PID of a running process. It is easy to do in most languages and pretty straightforward.
The second could create an issue because, depending on what shell you use, it may be difficult to ensure that the environment variable is actually visible to the other script and not touched.
Creating a PID or a .lck file is always a best practice. You're the owner of the identifier as well as the script, so you can plan to prevent other problems creeping in, such as another user attempting to start up the same application.
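As a concrete illustration of the pidfile approach (the file path and script names are just examples):

# start.sh (illustrative): launch the process in the background and record its PID
my_long_running_process &
echo $! > /tmp/myapp.pid

# stop.sh (illustrative): read the PID back, check the process still exists, then signal it
pid=$(cat /tmp/myapp.pid) || exit 1
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
    rm -f /tmp/myapp.pid
fi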

Run a process each time a new file is created in a directory in linux

I'm developing an app. The operating system I'm using is Linux. I need to run, if possible, a Ruby script on each file created in the directory. I need to keep this script always running. The first thing I thought about is inotify:
The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories.
It's exactly what I need. Then I found "rb-inotify", a wrapper for inotify.
Do you think there is a better way of doing what I need than using inotify? Also, I really don't understand the way that I have to use rb-inotify.
I just create, for example, an .rb file with:
notifier = INotify::Notifier.new
notifier.watch("directory/to/check", :create) do |event|
  # do task with event.name file
end
notifier.run
Then I just run ruby myRBNotifier.rb, and it will keep looping forever. How do I stop it? Any ideas? Is this a good approach?
I'd recommend looking at god. It's designed for this sort of task, and makes it pretty easy to build a monitoring system for background and daemon apps.
As for the main code itself, inotify isn't cross-platform, so if there's a possibility you'll need to run on Windows or macOS then you'll need a different solution. It's not too hard to write a little piece of code that checks your target directory periodically for a change. If you need to know what changed, read and cache the directory entries, then compare them the next time your code runs. Use sleep between runs to wait some period of time before looping.
The old-school method of doing similar things is to use cron to fire off a job at regular intervals. That job can be your script that checks whether the file list changed by comparing it to the cached version, then acting as needed if something is different.
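A rough sketch of that polling idea in plain shell (the directory, cache file, and Ruby script names are assumptions; a cron job could run the loop body instead of the while loop):

# Poll a directory, compare against the cached listing, and act on new files.
WATCH_DIR="/path/to/watch"
CACHE="/tmp/dir-listing.cache"

touch "$CACHE"
while true; do
    ls -1 "$WATCH_DIR" > "$CACHE.new"
    # comm -13 prints lines only in the new listing, i.e. newly created files
    # (ls output is already sorted, which comm requires)
    comm -13 "$CACHE" "$CACHE.new" | while read -r f; do
        ruby process_file.rb "$WATCH_DIR/$f"   # placeholder for the real task
    done
    mv "$CACHE.new" "$CACHE"
    sleep 5
done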
Just run your script in the background with
ruby myRBNotifier.rb &
When you need to stop it, find the process id and use kill on it:
ps ux
kill [whatever pid your process gets from the OS]
Does that answer your question?
If you're running on a Mac/Unix machine, look at the launchctl man page. You can set up a process to run and execute a Ruby script whenever a file changes. It's highly configurable.
