Platform-Agnostic Means to Detect That the Computer Went to Sleep? - shell

Quite simply, I have a shell script with some long-running operations that I run in the background and then wait for in a loop (so I can check whether they're taking too long, report progress, etc.).
However, one case I'd also like to check for is the system having been put to sleep manually (I've already taken steps to ensure it shouldn't auto-sleep while my script is running).
Currently I do this in a fairly horrible way: my script runs sleep in a loop for a few seconds at a time, checking each time whether the task is still running. To detect sleep, I check whether the elapsed time was longer than expected, like so:
start=$(date +%s)
while sleep 5; do
    if [ $(($(date +%s) - $start)) -gt 6 ]; then
        echo 'System may have been asleep'
        start=$(date +%s)
    elif kill -0 "$PID" 2>/dev/null; then
        echo 'Task is still running'
        start=$(date +%s)
    else
        echo 'Task is complete'
        break
    fi
done
The above is very much simplified, so please forgive any mistakes; it's just to give the basic idea (for example, on platforms where the wait command supports timeouts, I already use that in place of sleep).
Now, while this mostly works, it's not especially pretty, and it's not really detecting sleep so much as guessing whether the system might have slept; for example, it can't differentiate a system that merely hung long enough to confound the check. Making the check interval longer helps with this, but it's still guesswork.
On macOS I can check for sleep more reliably using pmset -g uuid, which returns a new UUID if the system went to sleep. What I would like to know is: are there any alternatives for other platforms?
In essence, all I need is a way to find out whether the system has been asleep since the last time I checked, though if there's a way to receive a signal or the like instead, that may be even better.
While I'm looking to hear of the best options available on various platforms, I crucially need a shell-agnostic option as well that I can use as a reliable fallback, as I'd still like the shell script to be as portable as possible.
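For reference, a minimal sketch of the macOS check described above, assuming pmset -g uuid prints a single UUID line that changes across a sleep/wake cycle:
last_uuid=$(pmset -g uuid)
while sleep 5; do
    uuid=$(pmset -g uuid)
    if [ "$uuid" != "$last_uuid" ]; then
        echo 'System has been asleep since the last check'
        last_uuid=$uuid
    fi
done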

Related

Is it OK to check the PID for rare exceptions?

I read this interesting question, which basically says that I should always avoid relying on the PIDs of processes that aren't child processes. It's well explained and makes perfect sense.
BUT, while the OP there was trying to do something cron isn't meant for, I'm in a very different situation:
I want to run a process say every 5 minutes, but once in a hundred times it takes a little more than 5 minutes to run (and I can't have two instances running at once).
I don't want to kill or manipulate other processes, I just want to end my process without doing anything if another instance of the process is running.
Is it ok to fetch the PID of "not-child processes" in that case? If so, how would I do it?
I've tried doing if pgrep "myscript"; then ... and the like, but the process finds its own PID. I need to detect whether it finds more than one.
(Initially, before being redirected here, I read this question, but the solution given doesn't work: it can return the PID of the very process doing the checking.)
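For illustration, a sketch of the "more than one" check described above, assuming the script's process name is myscript (pgrep -c prints the number of matches, and the script itself always accounts for one); note this is still racy, and the flock approach below is more robust:
if [ "$(pgrep -cx myscript)" -gt 1 ]; then
    echo "another instance is running"
    exit 1
fi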
EDIT: I should have mentioned it before, but if the script is already in use I still need to write something to a log file, at least: date >> script.log; echo "Script already in use" >> script.log. I may be wrong, but I think flock doesn't allow that.
Use lckdo or flock to avoid duplicate runs.
DESCRIPTION
    lckdo runs a program with a lock held, in order to prevent multiple
    processes from running in parallel. Use just like nice or nohup.
    Now that util-linux contains a similar command named flock, lckdo is
    deprecated, and will be removed from some future version of moreutils.
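Regarding the EDIT above: flock does allow logging before giving up, since it only takes the lock and leaves the rest of the script to you. A minimal sketch, holding the lock on file descriptor 9 (the lock file path and log name are illustrative):
exec 9> /tmp/myscript.lock
if ! flock -n 9; then
    date >> script.log
    echo "Script already in use" >> script.log
    exit 1
fi
# ... the rest of the script runs here with the lock held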
Of course you can implement this primitive lockfile feature yourself, but note that the test-and-touch below is not atomic: two instances starting at the same moment can both pass the check, which is exactly the race flock avoids.
if [ ! -f /tmp/my.lock ]; then
    touch /tmp/my.lock
    run prog
    rm -f /tmp/my.lock
fi
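A more race-resistant sketch of the same idea relies on mkdir, which is atomic and fails if the directory already exists; this also covers the logging requirement from the EDIT above (file names are illustrative):
if mkdir /tmp/my.lock 2>/dev/null; then
    run prog
    rmdir /tmp/my.lock
else
    date >> script.log
    echo "Script already in use" >> script.log
fi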

Check status of a forked process?

I'm running a process that will take, optimistically, several hours, and in the worst case, probably a couple of days.
I've tried a couple of times to run it, and it just never seems to complete (I should add that I didn't write the program; it's just a big dataset). I know my syntax for the command is correct, as I use it all the time on smaller data and it works properly (I'll spare you the details, as they're obscure for SO and, I think, not relevant to the question).
Consequently, I'd like to leave the program running unattended, forked into the background with &.
Now, I'm not totally sure whether the process is just grinding to a halt or is running but taking much longer than expected.
Is there any way to check the progress of the process other than ps and top + 1 (to check CPU use)?
My only other thought was to get the process to output a logfile and periodically check to see if the logfile has grown in size/content.
As a sidebar, is it necessary to also use nohup with a forked command?
I would use screen for this purpose. See the man page for more details.
A brief summary of how to use it:
screen -S some_session_name - starts a new screen session named some_session_name
Ctrl + a + d - detaches the session
screen -r some_session_name - returns you to your session
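Combining this with the logfile idea from the question, a sketch (program and file names are illustrative):
screen -S bigjob
# inside the screen session, run the job with its output captured:
./long_program big_dataset > progress.log 2>&1
# detach with Ctrl + a + d, then check progress from outside at any time:
tail -f progress.log
As for the sidebar: under screen, nohup shouldn't be necessary, since the process belongs to the screen session rather than to your login terminal.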

killall command older-than option

I'd like to ask about experience with the killall program, namely whether anyone has used the -o, --older-than CLI option.
We've recently encountered a problem where processes were killed under the hood by the command "killall --older-than 1h -r chromedriver".
killall was simply killing everything that matched, regardless of age, even though the killall man page is quite straightforward:
-o, --older-than
    Match only processes that are older (started before) the time specified. The time is specified as a float then a unit.
    The units are s,m,h,d,w,M,y for seconds, minutes, hours, days, weeks, Months and years respectively.
I wonder whether this was the result of some false assumption, a killall bug, or something else.
Other posts here suggest considerably more complicated commands involving sed, piping, etc., which do seem to work.
Thanks,
Zdenek
I suppose you're referring to the Linux incarnation of killall, which comes from the PSmisc package. Looking at the sources, it appears that some conditions for selecting PIDs to kill are AND-ed together, while others are OR-ed; -r is one of the conditions that is OR-ed with the others. I suspect the authors themselves can't really explain their intention there...
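As a workaround sketch that sidesteps killall's condition handling entirely, you can select matching PIDs with pgrep and filter by elapsed time yourself; etimes (elapsed seconds) is available in procps-ng ps, so on other systems you may need to parse etime instead:
# kill chromedriver processes older than one hour (3600 s)
for pid in $(pgrep -f chromedriver); do
    age=$(ps -o etimes= -p "$pid") || continue   # process may have exited already
    if [ "${age:-0}" -gt 3600 ]; then
        kill "$pid"
    fi
done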

bash shell script sleep or cronjob which is preferred?

I want to do a task every 5 minutes, and I want to control when it starts and when it ends.
One way is to use sleep in a while true loop; another is to use a cron job. Which one is preferred performance-wise?
Thanks!
cron is almost always the best solution.
If you try to do it yourself with a simple script running in a while loop:
while true; do
    task
    sleep 300
done
you eventually find that nothing is happening because your task failed due to a transient error. Or the system rebooted. Or some such. Making your script robust enough to deal with all these eventualities is hard work, and unnecessary. That's what cron is for, after all.
Also, if the task takes some non-trivial amount of time, the above simple-minded while loop will slowly shift out of sync with the clock. That could be fixed:
while true; do
    task
    sleep $((300 - $(date +%s) % 300))
done
Again, it's hardly worth it, since cron will do that for you too. (The arithmetic sleeps exactly long enough to reach the next multiple of 300 seconds of epoch time, so each iteration starts on a five-minute boundary.) However, cron will not save you from starting the task before the previous invocation has finished, if the previous invocation got stuck somehow. So it's not a completely free ride, but it still provides you with some additional robustness.
A simple approach to solving the stuck-task problem is to use the flock utility. For example, you could cron a script containing the following:
(
    flock -n 8 || {
        logger -p user.warning Task is taking too long
        # You might want to kill the stuck task here. See pkill
        exit 1
    }
    # Do the task here
) 8> /tmp/my_task.lck
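For completeness, the matching crontab entry for a five-minute schedule might look like this (the script path is illustrative):
*/5 * * * * /usr/local/bin/my_task.sh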
Use a cron job. Cron is made for this type of use case, and it frees you from having to code the while loop yourself.
However, cron may be unsuitable if the run time of the script is unpredictable and exceeds the timer schedule.
Performance-wise, it is hard to tell unless you share what the script does and how often it does it. But generally speaking, neither option should have a negative impact on performance.

Elegant and efficient way to start GUI programs from terminal without spamming it (Bash or any Posix shell)

Every once in a while I have to fire up a GUI program from my terminal session to do something. Usually it's Chrome to display some HTML file, or some similar task.
These programs, however, throw warnings all over the place, and it can become practically impossible to type anything, so I always wanted to redirect stderr/stdout to /dev/null.
While ($PROGRAM &) &>/dev/null seems okay, I decided to create a simple Bash function for it so I don't have to repeat myself every time.
So for now my solution is something like this:
#
# silly little function
#
gui ()
{
    if [ $# -gt 0 ] ; then
        ("$@" &) &>/dev/null
    else
        echo "missing argument"
    fi
}
#
# silly little example
#
alias google-chrome='gui google-chrome'
So what I'm wondering about is:
Is there a way without an endless list of aliases that's still snappy?
Are there different strategies to accomplish this?
Do other shells offer different solutions?
In asking these questions I want to point out that your strategies and solutions might deviate substantially from mine. Redirecting output to /dev/null and aliasing it was the only way I knew, but there might be entirely different approaches that are more efficient.
Hence this question :-)
As others have pointed out in the comments, I think the real problem is finding a way to distinguish between GUI and non-GUI apps on the command line. As such, the cleanest way I could think of is to put this part of your script:
#!/bin/bash
if [ $# -gt 0 ] ; then
    ("$@" &) &>/dev/null
else
    echo "missing argument"
fi
into a file called gui, then chmod +x it and put it in your ~/bin/ (make sure ~/bin is in your $PATH). Now you can launch gui apps with:
gui google-chrome
on the prompt.
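If ~/bin isn't already on your PATH, a line like this in your ~/.bashrc takes care of it:
export PATH="$HOME/bin:$PATH"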
Alternatively, you can do the above, then make use of bind:
bind 'RETURN: "\e[1~gui \e[4~\n"'
This will allow you to just do:
google-chrome
on the prompt, and it will automatically prepend gui to google-chrome.
Or, you can bind the above action to F12 instead of RETURN with
bind '"\e[24~": "\e[1~gui \e[4~\n"'
to separate what you want launched with gui from what you don't.
More discussion on binding here and here.
These alternatives offer you a way out of endless aliases. A mix of putting gui in your ~/bin/ and binding use of gui to F12 as shown above seems the most ideal (albeit hacky) solution.
Update - Enno Weichert's resulting solution:
Rounding this solution out ...
This takes care of aliases (in a somewhat wacky way, though) and of different escape encodings (in a more pragmatic than exhaustive way).
Put this in $HOME/bin/quiet:
#!/bin/bash -i
if [ $# -gt 0 ] ; then
    # Expand $1 if it is an alias
    if [ -n "$(alias -p | awk -F "[ =]" '{print $2}' | grep -x -- "$1")" ] ; then
        set -- $(alias "$1" | awk -F "'" '{print $2}') "${@:2}"
    fi
    ("$@" &) &>/dev/null
else
    echo "missing argument"
fi
And this in $HOME/.inputrc:
#
# Bind prepend `quiet ` to [ALT][RETURN]
#
# The condition is of limited use actually but serves to separate
# TTY instances from Gnome Terminal instances for me.
# There might very well be other VT emulators that ID as `xterm`
# but use totally different escape codes!
#
$if term=xterm
"\e\C-j": "\eOHquiet \eOF\n"
$else
"\e\C-m": "\e[1~quiet \e[4~\n"
$endif
Although it is an ugly hack, I sometimes use nohup for similar needs. It has the side effect of redirecting the command's output, and it makes the program independent of the terminal session.
For the case of running GUI programs in a desktop environment, there is only a small risk of resource leaks, as the program will end with the window manager session anyway. Nevertheless, it should be taken into account.
There is also the option of opening another terminal session. In GNOME, for instance, you can use gnome-terminal --command 'yourapp'.
But this will result in opening many useless terminal windows.
First, it's a great idea to launch GUI applications from the terminal, as this reduces mouse usage. It's faster, and more convenient in terms of options and arguments. For example, take the browser. Say you have the URL in the clipboard, ready to paste. Just type, say, ice (for Iceweasel), hit Shift-Insert (to paste), and Enter. Compare this to clicking an icon (possibly in a menu), waiting for the window to load (even worse if there is a start-up page), then clicking the URL bar (or hitting Ctrl-L), then Ctrl-V... So I understand your desire for this to work.
But, I don't see how this would require an "infinite" list of aliases and functions. Are you really using that many GUI applications? And even so, aliases are one line - functions, which may be more practical for handling arguments, are perhaps 1-5 lines of (sparse) code. And, don't feel you need to set them up once and for all - set them up one by one as you go along, when the need arises. Before long, you'll have them all.
Also, if you have a tabbed terminal, like urxvt (there is a Perl extension), you'd benefit from moving from GUIs to CLIs: for downloading, there's rtorrent; for IRC, irssi; instead of XEmacs (or emacs), emacs -nw; there is a CLI interface to vlc for streaming music; for mail, Emacs' rmail; etc., etc.! Go hunt :)
(The only sad exception I've run across that I think is a lost cause is the browser. Lynx, W3M, etc. may be great from a technical perspective, but that won't always matter, as modern web pages are simply not designed with those text-only browsers in mind. In all honesty, a lot of those pages look a lot less clear in such browsers.)
Hint: To get the most out of a tabbed terminal, you'll want the "change tab" shortcuts close at hand (e.g., Alt-J for previous tab and Alt-K for next tab, not the arrow keys, which make you reach).
Last, one solution that circumvents this problem altogether is to launch the GUIs as background processes (with &) in your ~/.xinitrc (including your terminal emulator). Not very flexible, but great for the stuff you always use, every time you use your computer.
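A sketch of that ~/.xinitrc approach (the programs and the window manager are illustrative):
# start the GUI programs you always use, silenced and in the background
google-chrome >/dev/null 2>&1 &
urxvt &
# hand control to the window manager; the X session ends when it exits
exec i3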
Ok, so I kept thinking that one function should be enough. No aliases, no bashisms, no nonsense. But it seemed to me that the only way to do that without possibly affecting regular use, such as expansions and completions, was to put the function at the end of the command. This is not as easy as one might first assume.
First I considered a function tacked onto the end in which I would call, say, printf "%b" "\u" in order to cut the current line, plug in another printf, paste it back in, and a quote or two at the beginning, then do what little is needed at the end. I don't know how to make this work though, I'm sorry to say. And even if I did, I couldn't hope for any real reliability/portability with this method due to the varying ways shells interpret escape sequences, not to mention the terminal emulators they run in. Perhaps stty could offer a way forward along these lines, but if so, you won't find it here... now, anyway.
I eventually instead resorted to actually copying the current command's /proc/{PID}/cmdline to a variable, then (shamefully) killing it entirely, and finally wrapping it as I pleased. On the plus side, this is very easily done, very quickly done (though I can imagine arguing its 'efficiency' either way), and seems mostly to work, regardless of the original input, whether that be an alias, a variable, a function, etc. I believe it is also POSIX portable (though I can't remember if I need to specify the kill SIGNALS by name for POSIX or not), and is definitely no nonsense.
On the other hand, its elegance certainly leaves much to be desired, and, though it's probably not worth worrying about, it entirely wastes a single PID. And it doesn't completely stop the shell spam; that is, in my shell I have enabled background-job reporting with set, and so, when first run, the shell kindly informs me in two lines that I've just opened and wasted a PID. Also, because I copy the ../cmdline instead of interfacing directly with the 0 file descriptor, I expect pipes, ;, and the like to be problematic. This I can, and likely will, fix myself very soon.
I will fix that, that is, if I cannot find a way to instead make use of, as I suspect can be done, SIGTSTP + SIGCONT, by first suspending the process outside a subshell and then, within one, continuing it after redirecting the subshell's output. It seems that this works unreliably for reasons I haven't yet discovered, but I think it's promising. Perhaps nohup and a trap (to effectively rehup it, as it were) is what is needed, but I'm not really sure how to put those together either...
Anyway, without further ado, my semi-aborted, backwards gui launcher:
% _G() { (
    _gui_cmd="$(tr '\0' ' ' </proc/"$!"/cmdline)" ;
    kill -9 "$!" ;
    exec eval "${_gui_cmd} &" )
    &>/dev/null 2>&1
}
% google-chrome-beta --disk-cache-dir="/tmp/cache" --disk-cache-size=100000000 &_G
[1] 2674
[1] + 2674 killed google-chrome-beta --disk-cache-dir="/tmp/cache" --disk-cache-
%
So another problem one might encounter when attempting something similar is the order of expansion the shell assumes. Without some trick such as mine, you're sure to have serious difficulty expanding your function before the prior command snags it as an argument. I believe I have guarded against this, without unnecessary redundancy, by simply tacking my _G function call onto the original command's & background instruction. Simply add &_G to the tail end of the command you wish to run and, well, good luck.
-Mike
P.S. Ok, so writing that last sentence makes me think of what might be done with tee.
