killall command older-than option - bash

I'd like to ask about experience with the killall program, specifically whether anyone has used the -o, --older-than CLI option.
We recently encountered a problem where processes were being killed under the hood by the command: "killall --older-than 1h -r chromedriver"
killall was simply killing everything that matched, regardless of age. Yet the killall man page is quite straightforward:
-o, --older-than
Match only processes that are older (started before) the time specified. The time is specified as a float then a unit. The units
are s,m,h,d,w,M,y for seconds, minutes, hours, days, weeks, Months and years respectively.
I wonder if this was the result of a false assumption on our side, a killall bug, or something else.
Other posts here suggest far more complicated commands involving sed, piping, etc., which do seem to work though.
Thanks,
Zdenek

I suppose you're referring to the Linux incarnation of killall, coming from the PSmisc package. Looking at the sources, it appears that some conditions for selecting PIDs to kill are AND-ed together, while others are OR-ed. -r is one of the conditions that is OR-ed with the others. I suspect the authors themselves can't really explain their intention there...

Related

Platform Agnostic Means to Detect Computer Went to Sleep?

Quite simply I have a shell script with some long-running operations that I run in the background and then wait for in a loop (so I can check if they're taking too long, report progress etc.).
However, one case that I'd also like to check for is when the system has been put to sleep manually (I've already taken steps to ensure it shouldn't auto-sleep while my script is running).
Currently I do this in a fairly horrible way: my script runs sleep in a loop for a few seconds at a time, checking each time whether the task is still running. To detect sleep I check whether the elapsed time was longer than expected, like so:
start=$(date +%s)
while sleep 5; do
    if [ $(($(date +%s) - $start)) -gt 6 ]; then
        echo 'System may have been asleep'
        start=$(date +%s)
    elif kill -0 "$PID" 2>/dev/null; then
        echo 'Task is still running'
        start=$(date +%s)
    else
        echo 'Task is complete'
        break
    fi
done
The above is very much simplified so please forgive any mistakes, it's just to give the basic idea; for example, on platforms where the wait command supports timeouts I already use that in place of sleep.
Now, while this mostly works, it's not especially pretty, and it's not really detecting sleep so much as guessing whether the system might have slept; for example, it can't differentiate a case where the system hangs long enough to confound the check. Making the check interval longer helps with this, but it's still guesswork.
On macOS I can more reliably check for sleep using pmset -g uuid which returns a new UUID if the system went to sleep. What I would like to know is, are there any alternatives for other platforms?
In essence all I need is a way to find out if the system has been asleep since the last time I checked, though if there's a way to receive a signal or such instead then that may be even better.
While I'm looking to hear of the best options available on various platforms, I crucially need a shell agnostic option as well that I can use as a reliable fallback, as I'd still like the shell script to be as portable as possible.
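For what it's worth, the pmset approach generalizes to a "token comparison" pattern: cache some value that changes across a sleep/wake cycle and compare it at each check. A sketch, where SLEEP_TOKEN_CMD is a hypothetical hook, not a standard variable; on macOS it would be pmset -g uuid, and on recent Linux kernels something like reading /sys/power/suspend_stats/success might stand in:

```shell
#!/bin/sh
# Generic "did the system sleep since my last check" test: compare a
# token that changes across sleep/wake cycles. SLEEP_TOKEN_CMD is an
# illustrative hook; pmset -g uuid is the macOS choice described above.
: "${SLEEP_TOKEN_CMD:=pmset -g uuid}"

last_token=$($SLEEP_TOKEN_CMD 2>/dev/null)

system_slept() {
    token=$($SLEEP_TOKEN_CMD 2>/dev/null)
    if [ "$token" != "$last_token" ]; then
        last_token=$token
        return 0        # token changed: system slept since last check
    fi
    return 1
}
```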

Limiting number of CPU cores used for indexing

Is there a way to limit how many CPU cores Xcode can use to index code in the background?
I write code in emacs but I run my app from Xcode, because the debugger is pretty great. The problem is that in emacs I use rtags for indexing, which already needs a lot of CPU, and then Xcode wants to do the same. Basically whenever I touch a common header file, my computer is in big trouble...
I like this question; it calls for some hacky problem-solving :)
Not sure if this will work (I'm not sure how to force Xcode to index), but here are some thoughts that might set you on the right track: there's a tool called cpulimit that you can use to slow down processes (it periodically pauses and resumes a given process; I used it when experimenting with crypto mining).
If you can figure out the process ID of the indexing daemon, maybe you can cpulimit it!
I'd start by running ps -A | grep -i xcode before and after indexing occurs to see what's changed (if anything), or using Activity Monitor to see what spikes (/Applications/Xcode10.1.app/Contents/SharedFrameworks/DVTSourceControl.framework/Versions/A/XPCServices/com.apple.dt.Xcode.sourcecontrol.WorkingCopyScanner.xpc/Contents/MacOS/com.apple.dt.Xcode.sourcecontrol.WorkingCopyScanner looks interesting)
There is a -i or --include-children param on cpulimit that should take care of this, but not sure how well it works in practice.
I made a script /usr/local/bin/xthrottle:
#!/bin/ksh
PID=$(pgrep -f Xcode | head -n 1)
sudo renice 10 "$PID"
You can play with the nice level: -20 is least nice, 20 is nicest to your neighbour processes.
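Building on that, a variant that renices every matching process rather than only the first; the function name and the default niceness of 10 are just illustrative, and note that raising your own processes' niceness needs no sudo (only lowering it does):

```shell
#!/bin/sh
# Renice all processes whose command line matches a pattern.
# Raising the nice value of processes you own requires no privileges.
throttle() {
    pattern=$1
    niceness=${2:-10}
    for pid in $(pgrep -f "$pattern"); do
        renice "$niceness" -p "$pid" >/dev/null 2>&1
    done
}

# example (pattern is illustrative): throttle Xcode 15
```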

Is there a way to prevent the creation of any processes matching a pattern?

I'd like to know if there's a way (maybe in .bash_profile?) to prevent the creation of any child processes with names matching a regex. In particular, examining the output of ps -ax, I'm finding extra processes started by tools that I use, which I do not want to be created. I'm on a Mac running macOS 10.14 Mojave.
I'm not looking for after-the-fact fixes such as killall or pkill -f my_pattern, which kill all processes matching the regex 'my_pattern' as described here. I also realize that the "right" way to do this is likely to fix it at the application level. Probably doable, but not what I'm asking about. Thanks!

Is it OK to check the PID of non-child processes for rare exceptions?

I read this interesting question, which basically says that I should always avoid touching the PIDs of processes that aren't child processes. It's well explained and makes perfect sense.
BUT, while the OP was trying to do something cron isn't meant for, I'm in a very different situation:
I want to run a process, say, every 5 minutes, but once in a hundred times it takes a little more than 5 minutes to run (and I can't have two instances running at once).
I don't want to kill or manipulate other processes; I just want my process to exit without doing anything if another instance of it is running.
Is it OK to fetch the PID of non-child processes in that case? If so, how would I do it?
I've tried things like if pgrep "myscript"; then ..., but the process finds its own PID. I'd need to detect whether it finds more than one.
(Initially, before being redirected here, I read this question, but the solution given doesn't work: it can return the PID of the process using it.)
EDIT: I should have mentioned it before, but if the script is already in use I still need to write something to a log file, at least: date >> script.log; echo "Script already in use" >> script.log. I may be wrong, but I think flock doesn't allow that.
Use lckdo or flock to avoid duplicated running.
DESCRIPTION
lckdo runs a program with a lock held, in order to prevent multiple
processes from running in parallel. Use just like nice or nohup.
Now that util-linux contains a similar command named flock, lckdo is
deprecated, and will be removed from some future version of moreutils.
Of course you can implement this primitive lockfile feature by yourself (note that this check-then-create is not atomic: two processes starting at nearly the same time can both get past the test):
if [ ! -f /tmp/my.lock ]; then
    touch /tmp/my.lock
    run prog
    rm -f /tmp/my.lock
fi
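Regarding the EDIT: flock does allow logging on contention. Take the lock non-blocking and write the log entry yourself when the lock is already held. A sketch (the lock and log paths are illustrative):

```shell
#!/bin/sh
# Non-blocking flock: if another instance holds the lock, log it and
# bail out instead of running a second copy.
exec 9> /tmp/myscript.lock
if ! flock -n 9; then
    date >> script.log
    echo "Script already in use" >> script.log
    exit 1
fi
# ... the real work goes here; the lock is released when fd 9 closes ...
```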

Bash piping output and input of a program

I'm running a minecraft server on my linux box in a detached screen session. I'm not very fond of screen and would like to be able to constantly pipe the output of the server to a file (like a pipe) and pipe some input from a file to the server (so that I can send input to and read output from the server with remote programs, like a python script). I'm not very experienced with bash, so could somebody tell me how to do this?
Thanks, NikitaUtiu.
It's not clear if you need screen at all. I don't know the minecraft server, but generally for server software, you can run it from a crontab entry and redirect output to log files.
Assuming your server kills itself at midnight on Sunday night (we can discuss changing this if restarting once per week is too little or too much, or if you require ad-hoc restarts), here, for a basic idea of what to do, is a crontab entry that starts the server each Monday at 1 minute after midnight:
01 00 * * 1 dtTm=`/bin/date +\%Y\%m\%d.\%H\%M\%S`; export dtTm; { /usr/bin/mineserver -o ..... your_options_to_run_mineserver_here ... ; } > /tmp/mineserver_trace_log.${dtTm} 2>&1
Consult your man page for crontab to confirm that day-of-week ranges are 0-6 (0 = Sunday), and change the day-of-week value if 0 != Sunday.
Normally I would break the code up so it is easier to read, but crontab entries have to be all on one line (with some weird exceptions), and there is usually a limit of 1024 bytes to 8 KB on how long the line can be. Note that the ';' just before the closing '}' is super-critical: if it is left out, you'll get undecipherable error messages, or no error messages at all.
Basically, you're redirecting any output into a file (including std-err output). Now you can do a lot of stuff with the output, use more or less to look at the file, grep ERR ${logFile}, write scripts that grep for error messages and then send you emails that errors have been found, etc, etc.
You may have some sys-admin work on your hands to get the mineserver user so it can run crontab entries. Also if you're not comfortable using the vi or emacs editors, creating a crontab file may require help from others. Post to superuser.com to get answers for problems you have with linux admin issues.
Finally, there are two points I'd like to make about dated logfiles.
Good:
a. If your app dies, you never have to rerun it to capture the output and figure out why something stopped working. For long-running programs this can save you a lot of time.
b. Keeping dated files gives you the ability to prove to yourself, your boss, or others that it used to work just fine: see, here are the log files.
c. Keeping the log files, assuming there is useful information in them, gives you the opportunity to mine those files for facts, e.g. the program used to take 1 sec for processing and now it is taking 1 hr, etc.
Bad:
a. You'll need to set up a mechanism to sweep old log files, otherwise at some point everything will have stopped, and when you finally figure out what the problem was, you'll discover that /tmp (or whatever dir you chose to use) is completely full.
There is a self-maintaining solution to using dates on the logfiles I can tell you about if you find this approach useful. It will take a little explaining, so I don't want to spend the time writing it up if you don't find the crontab solution useful.
I hope this helps!
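Coming back to the piping idea in the question itself: a named pipe (FIFO) covers both directions without screen. A minimal sketch, with cat standing in for the actual minecraft server command, and with the paths being illustrative; holding a write end open on fd 3 keeps the server from seeing EOF after each individual command is sent:

```shell
#!/bin/sh
# Drive a long-running server without screen: stdin from a named pipe,
# stdout/stderr appended to a log. `cat` is a stand-in for the real
# server binary.
SERVER_CMD=${SERVER_CMD:-cat}

rm -f /tmp/server_in /tmp/server_out.log
mkfifo /tmp/server_in

# start the server reading from the pipe (opening a FIFO for reading
# blocks until a writer appears, hence the background job)
$SERVER_CMD < /tmp/server_in >> /tmp/server_out.log 2>&1 &

# hold a write end open on fd 3 so the server never sees EOF between
# individual commands
exec 3> /tmp/server_in

# any other program (a python script, a remote ssh command, ...) can
# now send commands to the server:
echo "say hello" > /tmp/server_in
```

When you want the server to shut down cleanly, close the held write end (exec 3>&-) and it will see EOF on its stdin.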
