Telling nohup to write output in real-time - bash

When using nohup, the output of a script is buffered and only gets dumped to the log file (nohup.out) after the script has finished executing. It would be really useful to see the script's output in something close to real time, to know how it is progressing.
Is there a way to make nohup write the output whenever it is produced by the script? Or, since such frequent file access operations are slow, to dump the output periodically during execution?

There's a special program for this: unbuffer! See http://linux.die.net/man/1/unbuffer
The idea is that your program's output routines recognize that stdout is not a terminal (isatty(STDOUT_FILENO) returns 0), so they buffer output up to some maximum size. Using the unbuffer program as a wrapper for your program will "trick" it into writing the output one line at a time, as it would if you ran the program directly in an interactive terminal.
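For example, a sketch of wrapping a program with unbuffer under nohup (assuming unbuffer from the expect package is installed; ./myprog is a placeholder for your program):
nohup unbuffer ./myprog > nohup.out 2>&1 &
tail -f nohup.out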

What's the command you are executing? You could create a .bash script that redirects its output to a file after each command (echo "blabh" >> your-output.txt), so you can check that file during the nohup execution (you should run: nohup script.bash &).
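A minimal sketch of such a script (the step commands and the file name are placeholders):
#!/bin/bash
echo "starting step 1" >> your-output.txt
./long_step_1
echo "step 1 done, starting step 2" >> your-output.txt
./long_step_2
echo "all done" >> your-output.txt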
Cheers

Use stdbuf. It has been part of GNU coreutils since 7.5.
stdbuf -i0 -o0 -e0 cmd
Combined with nohup, that looks like:
nohup stdbuf -i0 -o0 -e0 ./server >> log_server.log 2>&1

Related

nohup bash myscript.sh > log.log yields empty file until the process is stopped manually

I have a python file that I run using a .sh file providing some args. When I run this file in the terminal using bash myscript.sh, it runs fine and prints progress to the console. When I use nohup, as in nohup bash myscript.sh > log.log, nothing gets saved to the log.log file, but the processes are running (it's a 4-GPU process and I can see the GPU usage in top and nvidia-smi). As soon as I kill the process using either Ctrl+C or the kill command, all the output gets printed to the file at once, along with the keyboard interrupt or process-killed message.
I have tried nohup myscript.sh &> log.log and nohup myscript.sh &>> log.log, but the issue remains the same. What's the reason for such behaviour?
myscript.sh runs a python file somewhat like
python main.py --arg1 val1 --arg2 val2
I tried
python -u main.py
but it doesn't help. I know the script is working fine, as it occupies exactly the same amount of memory as it should.
Use stdbuf:
nohup stdbuf -oL bash myscript.sh > log.log
Your problem is related to output buffering. In general, non-interactive output to files tends to be block-buffered. The -oL changes output buffering to line mode. There is also the less efficient -o0 to change it to unbuffered.
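One caveat: stdbuf works by preloading a library that adjusts C stdio buffering, while Python 3 buffers its output itself, so for the python case above it may have no effect. A possible fallback (a sketch, assuming myscript.sh ultimately invokes python) is to force unbuffered output at the Python level with PYTHONUNBUFFERED, which is equivalent to python -u:
nohup env PYTHONUNBUFFERED=1 bash myscript.sh > log.log 2>&1 &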

make nohup write to a file other than nohup.out

I've been using the command below to make nohup write to nohup.out while tail also prints the output on the terminal.
nohup train.py & tail -f nohup.out
However, I need nohup to use different file names.
When I try
nohup python train.py & tail -F vanila_v1.out
I'm getting the following error messages.
tail: cannot open 'vanila_v1.out' for reading: No such file or directory
nohup: ignoring input and appending output to 'nohup.out'
I also tried
nohup python train.py & tail -F nohup.out > vanila_v1.txt
Then it doesn't write any output to stdout.
How do I make nohup write to a file other than nohup.out? I don't mind it simultaneously writing two different files. But to keep track of different processes, I need the names to be different.
Thanks.
You need to redirect STDOUT and STDERR for the nohup command, like:
$ nohup python train.py > vanila_v1.out 2>&1 & tail -F vanila_v1.out
At this point, the process will go into the background and you can use tail -f vanila_v1.out. That's one way to do it.
A little more information about STDOUT and STDERR redirection is available here. There is also another question that uses the tee command rather than > to achieve the same in one go.
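A sketch of that tee variant in one go (the file name is a placeholder; tee both writes the file and echoes to the terminal):
nohup python train.py 2>&1 | tee vanila_v1.out &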

writing sysdig output to a file

I want to check my users' commands on my server for a period of time, so I use the sysdig command with nohup. I want to write the output to a file like so:
# nohup sysdig -c spy_users 1>>/path/to/true 2>>/path/to/false &
But the result is not written to the file in real time. Any idea?
It is probably being buffered. Try setting the buffering to line-buffered or deactivate it completely:
nohup stdbuf -oL -eL sysdig -c spy_users 1>>/path/to/true 2>>/path/to/false &

Capture historical process history on UNIX?

I'm wondering if there is a way of capturing a list of the processes executed on a non-interactive shell?
Basically I have a script which pulls some variable values from other sources, and I want to see what the values of said variables are. However, the script executes and finishes very quickly, so I can't capture the values using ps.
Is there a way to log processes and what arguments were used?
TIA
Huskie
EDIT:
I'm using Solaris in this instance. I even thought about having a quick looping script to capture the values being passed, but this doesn't seem very accurate and I'm sure some executions aren't being captured.
I tried this:
#!/bin/ksh
while true
do
    ps -ef | grep $SCRIPT_NAME | egrep -v 'shl|lis|grep' >> grep_out.txt
done
I'd add a sleep, but I can't specify any precision, as all my sleep executables want an integer value rather than a fractional one.
On Solaris:
truss -s!all -daDf -t exec yourCommand 2>&1 | grep -v ENOENT
On AIX and possibly other System V based OSes:
truss -s!all -daDf -t execve yourCommand 2>&1 | grep -v ENOENT
On Linux and other OSes supporting strace, you can use this command:
strace -ff -etrace=execve yourCommand 2>&1 >/dev/tty | grep -v ENOENT
In case the command you want to trace is already running, you can replace yourCommand with -p pid, where pid is the process id of the process to be traced.
EDIT:
Here is a way to trace your running script(s) under Solaris:
for pid in $(pgrep -f $SCRIPT_NAME); do
    truss -s!all -daDf -t exec -p $pid 2>&1 | grep -v ENOENT > log.$pid.out &
done
Note that with Solaris, you might also use dtrace to get the same (and more).
Most shells can be invoked in debug mode, where each statement being executed is printed to stdout (or stderr) after variable substitution and expansion.
For Bourne-like shells (sh, bash), debug is enabled with the -x option (as in bash -x myscript) or using the set -x statement within the script itself.
However, debugging only works for the 'current' script. If the script calls other scripts, these other scripts will not execute in debug mode. Furthermore, the code inside functions may not be executed in debug mode either (this depends on the specific shell), although you can use set -x within a function to enable debug explicitly.
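A minimal illustration (the script body and its variable are hypothetical):
#!/bin/sh
set -x
GREETING=$(echo hello)
echo "$GREETING world"
Running this prints each statement, after substitution, prefixed with + before it executes, so you can see exactly what values the variables took.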
A much more verbose (at least by default) option is to use something like strace for this.
strace -f -o trace.out script.sh
will give you huge amounts of information about what the script is doing. For your specific usage you will likely want to limit the output a bit with the -e trace=... option to control which system calls are traced.
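For instance, to record only process creation events to a file (a sketch; script.sh stands for your script):
strace -f -e trace=execve -o trace.out ./script.sh
trace.out will then contain the execve calls of each spawned process, including the full argument vector.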
Use truss instead of strace on Solaris, and dtruss on OS X (I believe), with the appropriate command line argument changes.

Reading realtime output from airodump-ng

When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I am using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
# run airodump-ng in the background, appending its STDERR to output.txt
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
# let it collect data for 5 seconds, then terminate it
sleep 5
kill -TERM $PID
cat output.txt
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a csv file whose name is prefixed by [OUTPUT-PREFIX]. The file is updated every 30 seconds. If you give a prefix like /var/log/test, the file will go in /var/log/ and will look like test-XX.csv.
You should then be able to access the output file(s) by any other tool while airodump is running.
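For example, to follow the growing file from another shell while airodump-ng runs (prefix from above; the -01 suffix is what airodump-ng typically adds for the first run):
tail -f /var/log/test-01.csv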
As of airodump-ng 1.2 rc4, you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed, you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout! The following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better if you want to parse the output with other programs/applications.
