Last run time of shell script?

I need to create some sort of fail-safe in one of my scripts to prevent it from being re-executed immediately after failure. Typically when a script fails, our support team reruns it using a 3rd-party tool, which is usually fine, but it should not happen for this particular script.
I was going to echo a time-stamp into the log, then add a condition to check whether the current time-stamp is at least 2 hrs later than the one in the log; if it isn't, the script will exit. I'm sure this idea will work. However, it got me curious: is there a way to pull the last run time of the script from the system itself? Or is there an alternate method of preventing the script from being immediately rerun?
It's a SunOS Unix system, using the Ksh Shell.

Just do it as you proposed: save the date to some file and check it at the start of the script (see the sketch below). You can:
check the last line (the date string itself)
or the last modification time of the file (i.e. when the last date command modified it)
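For example, a minimal ksh sketch of the time-stamp variant; the state-file path and the 2-hour window are placeholders, and date +%s may not exist on older SunOS releases (perl -e 'print time' is a common substitute):
last_run_file=/var/tmp/myscript.lastrun      # illustrative path
now=$(date +%s)                              # seconds since epoch; see note above for SunOS
if [ -f "$last_run_file" ]; then
    last=$(cat "$last_run_file")
    if [ $(( now - last )) -lt 7200 ]; then  # 7200 s = 2 hours
        echo "Last run was less than 2 hours ago, exiting." >&2
        exit 1
    fi
fi
echo "$now" > "$last_run_file"
# ... rest of the script ...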
Another common method is to create a dedicated lock file or pid file, such as /var/run/script.pid. Its content is usually the PID (and hostname, if needed) of the process that created it. The file's modification time tells you when it was created, and from its content you can check whether that PID is still running. If the PID no longer exists (e.g. the previous process died) and the file's modification time is older than X minutes, you can start the script again.
This method is useful mainly because you can then combine cron with some script_starter.sh that periodically checks whether the script is running and restarts it when needed.
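A hedged sketch of the pid-file variant (the path is taken from the example above; the staleness check is left as a comment because the available find/stat options differ between systems):
pidfile=/var/run/script.pid
if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "Already running as PID $(cat "$pidfile"), exiting." >&2
    exit 1
fi
# pid file missing or stale: optionally also require it to be older than X minutes
# before restarting (e.g. compare its modification time with the current time).
echo $$ > "$pidfile"
trap 'rm -f "$pidfile"' EXIT
# ... rest of the script ...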
If you want to use system facilities (and have root access), you can use accton + lastcomm.
I don't know SunOS, but it probably has those programs. accton starts system-wide accounting of all programs (needs to be root), and lastcomm command_name | tail -n 1 shows when command_name was last executed.
Check man lastcomm for the command-line switches.
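A rough sketch of that approach; the accounting-file path is system-specific and whether the newest record comes first or last in lastcomm output varies, so verify both against man accton and man lastcomm on your box:
# as root, once, to enable process accounting (path is an assumption):
accton /var/adm/pacct
# later, from the script or a checker, show the most recent record for it:
lastcomm myscript.sh | tail -n 1    # or head -n 1, depending on your lastcomm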


Call UniVerse Command from Shell

I have a UniVerse (Rocket U2) system and want to be able to call certain UniVerse/TCL commands from a shell script. However, whenever I run the uv binary, it seems to stop the execution of the rest of the shell script.
For Example if I run:
/u2/uv/bin/uv
It starts a UniVerse session. The next line of the script (RUNPY run_tests.py) is meant to be executed in the TCL environment, but is never input to TCL. I have tried passing string parameters to the uv binary to be executed, but that doesn't appear to do anything.
Is there a way to call UniVerse/TCL commands from a UNIX/Shell environment?
You can type this manually or put it into a shell script. I have not run into any issues with this paradigm, but your choice of shell could theoretically affect it. You certainly want either to be in the directory of the account you want to execute it in, or to cd to it in the script.
/u2/uv/bin/uv <<start
RUNPY run_tests.py
start
Good Luck.
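Put together as a complete wrapper script, with the cd mentioned above (the account path here is purely illustrative):
#!/bin/sh
# change to the UniVerse account directory the command should run in (path is an assumption)
cd /u2/accounts/myaccount || exit 1
/u2/uv/bin/uv <<start
RUNPY run_tests.py
start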
One thing to watch out for is if you have a LOGIN paragraph or something else that runs automatically to start your application (which is really common), then you need to find a way to bypass this for non-interactive users.
https://groups.google.com/forum/#!topic/comp.databases.pick/B2hzuXq3X9A mentions
IF OCONV(#TTY,'MCU')='PHANTOM' THEN ABORT
In UD, I kick off scripts from unix as a phantom to a) capture the log output in PH and b) end the process if extra input is requested, rather than hanging around. In UD that's
$echo "PHANTOM COUNT VOC" | udt
UniData Release 8.1 Build: (2008)
Current UniData home is /unidata/ud81/.
Current working directory is /usr/ud81/demo
:PHANTOM COUNT VOC
PHANTOM process 18743448 started.
COMO file is '_PH_/dsiroot45172_18743448'.
:
Critical abort condition found.
$cat _PH_/dsiroot45172_18743448
COUNT VOC
14670 record(s) counted.
PHANTOM process 18743448 has completed.
Van Amburg's answer is the most correct for handling multiple lines of input. The variant I used: instead of the << here-document for multi-line input, I just put quotes around a single command (single and double quotes both work):
/u2/uv/bin/uv "RUNPY run_tests.py"

When data is piped from one program via | is there a way to detect what that program was from the second program?

Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside the my_script command, to detect which command produced its piped input (in the example above, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variable set in the second command's process that records the full command line. The command data that my_script sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to provide any data, since the first process often seems to exit before the second one gets a chance to look.
The best information I've been able to find suggests that in bash you can, in some cases, get the exit codes of the processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research seems to say it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force 3rd-party pipe-replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present as the command is run.
(Update: adding later information following on from the comments below.)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the pipe:[PIPEID] value it symlinks to. That can then be used to search through the /proc/<pid>/fd/ data of the other running processes to find another process with the same PIPEID open, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the /proc data, preventing it from being read). So I can't rely on this method returning any information.
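A sketch of that /proc approach (Linux only, and, as noted above, it only works while the writer is still alive; the variable names are mine):
pipe_id=$(readlink /proc/$$/fd/0)                      # e.g. "pipe:[123456]"
case "$pipe_id" in
pipe:*)
    for fd in /proc/[0-9]*/fd/*; do
        [ "$fd" = "/proc/$$/fd/0" ] && continue        # skip our own stdin
        if [ "$(readlink "$fd" 2>/dev/null)" = "$pipe_id" ]; then
            writer_pid=${fd#/proc/}; writer_pid=${writer_pid%%/*}
            echo "possible upstream command: $(tr '\0' ' ' < /proc/$writer_pid/cmdline)"
        fi
    done
    ;;
esac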

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed command by command, whether the commands are separated by newlines or by ;, with the exception of compound commands such as if ... fi blocks, which are read as one chunk:
A shell script is a text file containing shell commands. When such a
file is used as the first non-option argument when invoking Bash, and
neither the -c nor -s option is supplied (see Invoking Bash), Bash
reads and executes commands from the file, then exits. This mode of
operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
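A quick demonstration (the file name is arbitrary) that the file is read as it runs rather than being slurped up front; a line appended during the sleep is usually still picked up and executed:
cat > /tmp/demo.sh <<'EOF'
echo "first line"
sleep 5
EOF
sh /tmp/demo.sh &
sleep 1
echo 'echo "line added while the script was running"' >> /tmp/demo.sh
wait    # the appended line is typically printed as well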
It's funny that most OSes I know do NOT read the entire content of a script into memory, but run it from disk instead. Reading it in completely first would allow making changes to the script while it is running without affecting that run. I don't understand why it's done this way, given that:
scripts are usually very small (and wouldn't take much memory anyway)
at some point, as shown in this thread, people will make changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you decide that a script is not running OK (because you are writing/changing/debugging it), do you care about the rest of that run? You can go ahead and make the changes, save them, and ignore all output and actions done by the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, because the current/previous run behaves abnormally: it will typically skip some steps, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports the feature, it's best to let the current run finish and THEN save the updated script. You can already edit it, just don't save it yet.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS screen), so you can't claim you have to wait for the run to complete before you can open the file again.
No, they are not, and there are good reasons for that.
One thing to keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether it comes from the TTY, a pipe, a FIFO, or even a socket.
The shell reads from its source line by line until EOF is returned by the kernel.
Most shells have no special support for interpreting files; they work with a file just as they would work with a terminal.
In fact, this is considered a nice feature, because you can do interesting things with it, like the trick described in "How do Linux binary installers (.bin, .sh) work?".
You can take a binary file and prepend a shell script to it. You can't do this with an interpreter, because it parses the whole file, or at least it would try to and fail. A shell just interprets the file line by line and doesn't care about the garbage at the end of the file. You only have to make sure the execution of the script terminates before it reaches the binary part.
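A minimal sketch of that trick (a self-extracting installer); the marker name, extraction directory, and tar options are assumptions:
#!/bin/sh
# everything below the __ARCHIVE__ marker is an appended tar.gz payload, not shell code
payload_line=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$payload_line" "$0" | tar xzf -
exit 0                      # must exit before the shell reaches the binary part
__ARCHIVE__
(binary tar.gz data is appended after this marker, e.g. with: cat payload.tar.gz >> installer.sh)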

How can I capture output of background process

What is the best way to run a process in the background and receive its output only when needed?
Intended usage: have a prompt-generating script with heavy initialization be initialized once per session rather than on each prompt run. Note: two-way communication is needed: the shell needs to tell it when a new prompt is needed and what the last command's status was.
Known solutions:
explicitly created files on the filesystem (FIFOs, UNIX sockets): it would be better to avoid this, as it means I need to choose a file name, make sure it is garbage-collected on exit, and add something to clean up stale files after a crash.
the zsh zpty module: it is overkill for this job and does not work in bash.
coprocesses: they do not work in bash and, AFAIK, only one coprocess per session is allowed.
Bash supports coprocesses since 4.0, but support for multiple coprocesses is still experimental.
I would have gone with some explicitly created files, naming them ~/.myThing-$HOSTNAME/fifo if they're per user and host. You can use flock to relatively easily determine if the command is still running and optionally start it:
(
flock -n 123 || exit 1                                     # another instance already holds the lock
rm -f ~/".myThing-$HOSTNAME/fifo"                          # illustrative paths, per the naming above
mkfifo ~/".myThing-$HOSTNAME/fifo"
exec yourServer < ~/".myThing-$HOSTNAME/fifo" > ~/".myThing-$HOSTNAME/out"
) 123> ~/".myThing-$HOSTNAME/lockfile"
If the command or server dies, the lock is automatically released and you only have a few zero length files lying around. The next time the server starts, it deletes and sets them up again.
Querying the server would be similar, but exiting if the lock is not in use (and optionally using a wait lock to avoid contention).
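A sketch of the querying side under the same assumptions (the fifo/out paths are the illustrative ones above, and the request/reply lines are whatever protocol your server implements):
last_status=$?                                                    # status of the user's last command
(
flock -n 123 && { echo "server is not running" >&2; exit 1; }     # lock is free, so no server holds it
echo "new-prompt $last_status" > ~/".myThing-$HOSTNAME/fifo"      # tell the server a prompt is needed
IFS= read -r prompt < ~/".myThing-$HOSTNAME/out"                  # read its reply
printf '%s\n' "$prompt"
) 123> ~/".myThing-$HOSTNAME/lockfile"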

Get last bash command including pipes

I wrote a script that retrieves the currently run command using $BASH_COMMAND. The script basically does some logic to figure out the current command and the file being opened for each tmux session. Everything works great, except when the user runs a piped command (i.e. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this alternative as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and will process it anyway), and adding extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands which I still want the script to be aware of; using history will just make the script ignore the last command unless it's tracked by history.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it seems to always only be aware of 1 command as well.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think you will need to change your implementation a bit and use the history command to get this to work. Also, use the alias command to check all of the configured aliases, and the which command to check whether the command is actually stored in any PATH directory. Good luck.
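A sketch along those lines, run in the interactive shell (e.g. from PROMPT_COMMAND) so that history 1 still holds the full pipeline; the alias handling below only covers a simple leading alias and is an assumption about what the script needs:
last=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')    # full command line, pipes included
first_word=${last%%[[:space:]]*}
expansion=$(alias "$first_word" 2>/dev/null | sed "s/^alias $first_word='//; s/'\$//")
if [ -n "$expansion" ]; then
    last="$expansion${last#"$first_word"}"                   # crude expansion of a leading alias
fi
printf 'last pipeline: %s\n' "$last"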
