Check process in makefile

All,
I currently have a makefile with code like this:
.PHONY: start-server start-checker kill-server

kill-server:
	-pkill -f 'python3 start_server_host.py'

start-server:
	nohup python3 start_server_host.py >> server.out 2>&1 &

start-checker:
	nohup python3 checker.py ${ARGS} test >> checker.out 2>&1 &
start-checker needs start-server to be running first. I am trying to figure out how to do something like this:
start-checker:
	if not process start_server_host.py
	then start-server
	nohup python3 checker.py ${ARGS} test >> checker.out 2>&1 &
The goal is to have start-server run automatically when the server is not already up, so this manual step can be avoided.

I strongly advise against using makefiles for system administration tasks. I understand that the apparent advantage (minimal boilerplate code for argument handling) is appealing, but I assure you it will come back to bite you sooner or later.
To answer your question: modify start-checker to depend on start-server, modify start-server to create a file named start-server, and modify kill-server to delete the file start-server.
Now, if you run make start-server, the server will be started and a file named start-server will be created. If you run make start-checker and the file start-server exists, start-server will not be invoked. If you run make kill-server, the file start-server will be deleted. If you then run make start-checker again, the file start-server will be missing and start-server will be invoked first.
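A minimal sketch of that idea (the recipes mirror the question; the marker-file handling is only an illustration):

.PHONY: start-checker kill-server

# start-server is no longer .PHONY: the target name doubles as the marker file.
start-server:
	nohup python3 start_server_host.py >> server.out 2>&1 &
	touch start-server

# Depending on start-server makes make run it only when the marker file is absent.
start-checker: start-server
	nohup python3 checker.py ${ARGS} test >> checker.out 2>&1 &

kill-server:
	-pkill -f 'python3 start_server_host.py'
	rm -f start-server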
What happens when the server crashes and the file start-server is not deleted? That is when this makefile sysadmin hackery will come back to bite you.
Again, I strongly urge you not to use makefiles for sysadmin tasks. I wrote about this in more detail in an answer to a similar question: https://unix.stackexchange.com/questions/496793/script-or-makefile-to-automate-new-user-creation/497601#497601

Related

Run Script on AWS @reboot?

I'm currently trying to run a script in the background when my AWS instance boots, for the duration of the instance's life. I'm testing it with a simple script to see if it works, before trying my more complicated one:
#!/bin/bash
while true; do
sleep 1
echo "Hello World" >> "tempStorage.json"
done
And my sudo crontab -l returns:
# All the comment stuff
@reboot sh /home/ubuntu/test/testScript/test.sh
That is the path to the script. I've also obviously run chmod +x test.sh to make sure it's executable.
The problem is when I stop and then start the AWS instance there's nothing in the tempStorage.json file. I've checked other threads and they all suggest this is what I should be doing, so I'm very confused and advice would be appreciated. Thanks.
As Mark B mentioned, the issue is the execution directory of the cron script. There are two solutions then.
A) Change the path to the file, as Mark B recommended, so the script looks something like this:
#!/bin/bash
while true; do
sleep 1
echo "Hello World" >> "/home/ubuntu/test/testScript/tempStorage.json"
done
B) Change the directory of the cron execution and keep the script as it was. This works better if you need to put the script in any directory. It would look like this for the crontab:
# All the comment stuff
@reboot cd /home/ubuntu/test/testScript && sh test.sh
That should work fine. I think the issue is that you aren't giving the full path to the tempStorage.json file within your script, so it is being written in a different folder than the one you are looking in, specifically whatever folder cron starts processes in by default. Try changing it to something like /tmp/tempStorage.json and then rebooting the server again.
Note that if you are wanting something that starts on boot and runs forever, this probably isn't the best method. In that case I would look into running your process as a service.
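For instance, a rough sketch of the service approach using systemd (the unit name and paths below are assumptions, not taken from the question):

# Illustrative only: register the script as a systemd service so it starts
# on boot and is restarted if it dies.
sudo tee /etc/systemd/system/tempstorage.service > /dev/null <<'EOF'
[Unit]
Description=Append Hello World to tempStorage.json

[Service]
WorkingDirectory=/home/ubuntu/test/testScript
ExecStart=/bin/bash /home/ubuntu/test/testScript/test.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now tempstorage.service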

Hot reload on save

I'm currently using a terminal and vim on OSX as a development environment for Flutter. Things are going pretty well except that the app does not reload when I save any Dart files. Is there a way to trigger that behavior? Currently I have to go to the terminal and hit "r" to see my changes.
Sorry for the plug, but I wrote a very simple plugin to handle this.
It makes use of Flutter's --pid-file command line flag to send it a SIGUSR1 signal.
You can achieve the same result as my two-line plugin by adding this to an autocmd
silent execute '!kill -SIGUSR1 "$(cat /tmp/flutter.pid)"'
And launching Flutter with the --pid-file flag.
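As a rough sketch of that setup (the pid-file path is an assumption):

# Launch Flutter so it records its PID in a file
flutter run --pid-file /tmp/flutter.pid

# From a vim autocmd or another terminal, trigger a hot reload
kill -USR1 "$(cat /tmp/flutter.pid)"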
I made a vim plugin hankchiutw/flutter-reload.vim based on killing with SIGUSR1.
You don't have to use --pid-file flag with this plugin. (Thanks to the pgrep :))
Simply execute flutter run, modify your *.dart file and see the reloading.
I did it with an excellent little tool called entr. On OS X you can install it from brew: brew install entr. The home page of the tool is at http://eradman.com/entrproject/
Then you start flutter run with the pid file, as @nobody_nowhere suggests.
How you run entr depends on the level of service you need. In the simplest case you just do find lib/ -name '*.dart' | entr -p kill -USR1 $(cat /tmp/flutter.pid)
But such an invocation will not detect new files in the source tree (because find builds the list of files to watch only once, at the start). You can get around that with a slightly more complex one-liner:
while true
do
  find lib/ -name '*.dart' | \
    entr -d -p kill -USR1 $(cat /tmp/flutter.pid)
done
The -d option makes entr exit when it detects a new file in one of the watched directories, so the loop runs find again.
I personally use an even more complex approach. I use Redux, and changes to middleware or other state files don't work with hot reload: it doesn't pick them up, so you have to resort to a hot restart.
I have a script hotrestarter.sh:
#!/bin/bash
set -euo pipefail
PIDFILE="/tmp/flutter.pid"
if [[ "${1-}" != "" && -e $PIDFILE ]]; then
if [[ "$1" =~ \/state\/ ]]; then
kill -USR2 $(cat $PIDFILE)
else
kill -USR1 $(cat $PIDFILE)
fi
fi
It checks whether the modified file lives in a /state subdirectory and, if so, does a hot restart; otherwise it does a hot reload. I call the script like this:
while true
do
  find lib/ -name '*.dart' | entr -d -p ./hotrestarter.sh /_
done
The /_ parameter makes entr pass the name of the changed file to the program being invoked.
You don't say what platform, but all platforms have a "watcher" app that can run a command when any file in a tree changes. You'll need to run one of those.
vscode has this feature. If you don't mind moving to vscode you can get it out of the box. You could also reach out to the author and see if they have any suggestions on how you could do it in vim or check the source directly. Most likely vim will have a mechanism to do so.

Any reason not to exec in shell script?

I have a bunch of wrapper shell scripts which manipulate command line arguments and do some stuff before invoking another binary at the end. Is there any reason to not always exec the binary at the end? It seems like this would be simpler and more efficient, but I never see it done.
If you check /usr/bin, you will likely find many many shell scripts that end with an exec command. Just as an example, here is /usr/bin/ps2pdf (debian):
#!/bin/sh
# Convert PostScript to PDF.
# Currently, we produce PDF 1.4 by default, but this is not guaranteed
# not to change in the future.
version=14
ps2pdf="`dirname \"$0\"`/ps2pdf$version"
if test ! -x "$ps2pdf"; then
	ps2pdf="ps2pdf$version"
fi
exec "$ps2pdf" "$@"
exec is used because it eliminates the need for keeping the shell process active after it is no longer needed.
My /usr/bin directory has over 150 shell scripts that use exec. So, the use of exec is common.
A reason not to use exec would be if there was some processing to be done after the binary finished executing.
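For example, a small hypothetical wrapper (the binary name and log path are made up) that still needs the shell after the binary returns, so exec is not an option:

#!/bin/sh
# Run the real binary, then do some bookkeeping afterwards;
# exec would replace the shell and the lines below would never run.
mybinary "$@"
status=$?
echo "mybinary exited with status $status" >> /tmp/wrapper.log
exit "$status"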
I disagree with your assessment that this is not a common practice. That said, it's not always the right thing.
The most common scenario where I end a script with the execution of another command, but can't reasonably use exec, is if I need a cleanup hook to be run after the command at the end finishes. For instance:
#!/bin/sh
# create a temporary directory
tempdir=$(mktemp -t -d myprog.XXXXXX)
cleanup() { rm -rf "$tempdir"; }
trap cleanup 0
# use that temporary directory for our program
exec myprog --workdir="$tempdir" "$@"
...won't actually clean up tempdir after execution! Changing that exec myprog to merely myprog has some disadvantages -- continued memory usage from the shell, an extra process-table entry, signals being potentially delivered to the shell rather than to the program that it's executing -- but it also ensures that the shell is still around on myprog's exit to run any traps required.
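A sketch of that non-exec variant, using the same hypothetical myprog wrapper, where the shell stays alive so the trap can fire:

#!/bin/sh
# create a temporary directory
tempdir=$(mktemp -t -d myprog.XXXXXX)
cleanup() { rm -rf "$tempdir"; }
trap cleanup 0

# no exec: the shell waits for myprog and then runs the EXIT trap
myprog --workdir="$tempdir" "$@"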

Whether a shell script can be executed if another instance of the same script is already running

I have a shell script which usually takes nearly 10 minutes for a single run. If another request to run the script comes in while an instance is already running, does the new request have to wait for the existing instance to complete, or is a new instance started?
I need a new instance to be started whenever a request arrives for the same script.
How do I do that?
The shell script is a polling script which looks for a file in a directory and executes it. The execution takes nearly 10 minutes or more, but if a new file arrives during execution, it also has to be processed simultaneously.
The shell script is below; how do I modify it to handle multiple requests?
#!/bin/bash
while [ 1 ]; do
newfiles=`find /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -newer /afs/rch/usr$
touch /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/.my_marker
if [ -n "$newfiles" ]; then
echo "found files $newfiles"
name2=`ls /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -Art |tail -n 2 |head $
echo " $name2 "
mkdir -p -m 0755 /afs/rch/usr8/fsptools/WWW/dumpspace/$name2
name1="/afs/rch/usr8/fsptools/WWW/dumpspace/fipsdumputils/fipsdumputil -e -$
$name1
touch /afs/rch/usr8/fsptools/WWW/dumpspace/tempfiles/$name2
fi
sleep 5
done
When writing scripts like the one you describe, I take one of two approaches.
First, you can use a pid file to indicate that a second copy should not run. For example:
#!/bin/sh
pidfile=/var/run/${0##*/}.pid

# remove pid if we exit normally or are terminated
trap "rm -f $pidfile" 0 1 3 15

# Write the pid as a symlink
if ! ln -s "pid=$$" "$pidfile"; then
  echo "Already running. Exiting." >&2
  exit 0
fi

# Do your stuff
I like using symlinks to store the pid because creating a symlink is an atomic operation; two processes can't conflict with each other. You don't even need to check for the existence of the pid symlink beforehand, because a failure of ln clearly indicates that the pid cannot be set: either it is a permission or path problem, or the symlink is already there.
The second option is to make it possible .. nay, preferable .. not to block additional instances, and instead configure whatever it is this script does to permit multiple servers to run at the same time on different queue entries. "Single-queue-single-server" is never as good as "single-queue-multi-server". Since you haven't included code in your question, I have no way to know whether this approach would be useful for you, but here's some explanatory meta bash:
#!/usr/bin/env bash

workdir=/var/tmp                 # Set a better $workdir than this.
a=( $(get_list_of_queue_ids) )   # A command? A function? Up to you.

for qid in "${a[@]}"; do
  # Set a "lock" for this item .. or don't, and move on.
  if ! ln -s "pid=$$" "$workdir/$qid.working"; then
    continue
  fi
  # Do your stuff with just this $qid.
  ...
  # And finally, clean up after ourselves
  remove_qid_from_queue "$qid"
  rm "$workdir/$qid.working"
done
The effect of this is to transfer the idea of "one at a time" from the handler to the data. If you have a multi-CPU system, you probably have enough capacity to handle multiple queue entries at the same time.
ghoti's answer shows some helpful techniques, if modifying the script is an option.
Generally speaking, for an existing script:
Unless you know with certainty that:
the script has no side effects other than output to the terminal or writes to files with shell-instance-specific names (such as incorporating $$, the current shell's PID, into filenames; see the sketch after this answer) or some other instance-specific location,
OR that the script was explicitly designed for parallel execution,
I would assume that you cannot safely run multiple copies of the script simultaneously.
It is not reasonable to expect the average shell script to be designed for concurrent use.
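To illustrate the "instance-specific names" point above, a small made-up sketch:

#!/bin/sh
# Each running copy writes to its own files, keyed by its PID ($$),
# so simultaneous instances don't clobber each other.
workfile="/tmp/myscript.$$.work"
logfile="/tmp/myscript.$$.log"

echo "instance $$ starting" > "$logfile"
date > "$workfile"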
From the viewpoint of the operating system, several processes may of course execute the same program in parallel. No need to worry about this.
However, it is conceivable that a (careless) programmer wrote the program in such a way that it produces incorrect results when two copies are executed in parallel.

Starting a remote script via ssh containing nohup

I want to start a script remotely via ssh like this:
ssh user@remote.org -t 'cd my/dir && ./myscript data my@email.com'
The script does various things which work fine until it comes to a line with nohup:
nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &
It is supposed to start the program myprog, pipe its output to my.log, and send an email with some data files created by myprog as attachments and the log as the body. However, when the script reaches this line, ssh outputs:
Connection to remote.org closed.
What is the problem here?
Thanks for any help
Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:
nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &
so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with file descriptors - for instance mutt may have other issues with the terminal not being around - or the quoting may need tweaking depending on the parameters - but give that a try for now...)
This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:
Redirect all I/O on the remote nohup'ed command
Tell your local SSH command to exit as soon as it's done starting the remote process(es).
Quoting the answer I already mentioned, in turn quoting wikipedia:
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
UPDATE
I've just had success with this pattern:
ssh -f user@host 'sh -c "( (nohup command-to-nohup 2>&1 >output.file </dev/null) & )"'
Managed to solve this for a use case where I needed to start backgrounded scripts remotely via ssh, using a technique similar to the other answers here, but in a way I feel is simpler and cleaner (at least, it makes my code shorter and, I believe, better-looking): explicitly closing all three streams using the stream-close redirection syntax, as discussed at the following locations:
https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs
https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21
http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
Rather than the more widely used but (IMHO) hackier "redirect to/from /dev/null", this results in the deceptively simple:
nohup script.sh >&- 2>&- <&-&
2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly clearer. ;) Most people put a space before the final "background job" ampersand, but since it is not required (the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
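Combined with ssh, a sketch of the same idea (the host and script name are placeholders):

# Start the script remotely and detach; with all three streams closed,
# the ssh session can end without hanging on the backgrounded job.
ssh user@host 'nohup ./script.sh >&- 2>&- <&- &'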
