Version 1 fails to write to my output.txt file:
nohup python test.py &> output.txt &
But Version 2 works:
nohup python test.py &> output.txt
I need the trailing & because I want to do other tasks on the command line!
Solved: ... python -u ... was the answer. It seems this option disables Python's output buffering.
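Putting it together, a minimal sketch of the working invocation (same test.py as above, with -u added to disable buffering):
nohup python -u test.py &> output.txt &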
Related
I have a Python file that I run using a .sh file that provides some args. When I run this file in the terminal using bash myscript.sh, it runs fine and prints progress to the console. When I use nohup, as in nohup bash myscript.sh > log.log, nothing gets saved to the log.log file, but the processes are running (it's a 4-GPU process and I can see the GPU usage in top and nvidia-smi). As soon as I kill the process, using either Ctrl+C or the kill command, all the output gets printed to the file at once, along with the keyboard-interrupt or process-killed message.
I have tried
nohup myscript.sh &> log.log
nohup myscript.sh &>> log.log
but the issue remains the same. What's the reason for this behaviour?
myscript.sh runs a python file somewhat like
python main.py --arg1 val1 --arg2 val2
I tried
python -u main.py
but it doesn't help. I know the script is working fine, as it occupies exactly the same amount of memory as it should.
Use stdbuf:
nohup stdbuf -oL bash myscript.sh > log.log
Your problem is related to output buffering. In general, non-interactive output to files tends to be block-buffered. The -oL changes output buffering to line mode. There is also the less efficient -o0, which changes it to unbuffered.
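For instance, the fully unbuffered variant of the same command (less efficient, as noted) would be:
nohup stdbuf -o0 bash myscript.sh > log.log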
I've been using the command below so that nohup writes to nohup.out while tail also prints the output on the terminal.
nohup train.py & tail -f nohup.out
However, I need nohup to use different file names.
When I try
nohup python train.py & tail -F vanila_v1.out
I'm getting following error message.
tail: cannot open 'vanila_v1.out' for reading: No such file or directory
nohup: ignoring input and appending output to 'nohup.out'
I also tried
nohup python train.py & tail -F nohup.out > vanila_v1.txt
Then it doesn't write any output to stdout.
How do I make nohup write to a file other than nohup.out? I don't mind simultaneously writing to two different files, but to keep track of different processes, I need the names to be different.
Thanks.
You need to redirect STDOUT and STDERR for the nohup command, like:
$ nohup python train.py > vanila_v1.out 2>&1 & tail -F vanila_v1.out
At this point, the process will go into the background and you can use tail -f vanila_v1.out. That's one way to do it.
A little more information about STDOUT and STDERR is available at the link. Here is another question that uses the tee command rather than > to achieve the same in one go.
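A sketch of that tee variant (it writes to the file and the terminal in one go; note that only python itself is protected by nohup here):
nohup python train.py 2>&1 | tee vanila_v1.out &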
I have a Bash script that runs a program with parameters. That program outputs some status (doing this, doing that...). There isn't any option for this program to be quiet. How can I prevent the script from displaying anything?
I am looking for something like Windows' "echo off".
The following sends standard output to the null device (bit bucket).
scriptname >/dev/null
And if you also want error messages to be sent there, use one of (the first may not work in all shells):
scriptname &>/dev/null
scriptname >/dev/null 2>&1
scriptname >/dev/null 2>/dev/null
And, if you want to record the messages, but not see them, replace /dev/null with an actual file, such as:
scriptname &>scriptname.out
For completeness, under Windows cmd.exe (where "nul" is the equivalent of "/dev/null"), it is:
scriptname >nul 2>nul
Something like
script > /dev/null 2>&1
This will prevent standard output and error output, redirecting them both to /dev/null.
An alternative that may fit in some situations is to assign the result of a command to a variable:
$ DUMMY=$( grep root /etc/passwd 2>&1 )
$ echo $?
0
$ DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
1
Since Bash and other POSIX command-line interpreters do not consider a plain variable assignment to be a command, the return code of the command executed in the substitution is respected.
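For example, this makes the technique usable directly in a conditional (a small sketch reusing the grep example above):
if DUMMY=$(grep root /etc/passwd 2>&1); then
    echo "root found"
fi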
Note: assignment with the typeset or declare keyword is considered a command, so the return code evaluated in that case is that of the assignment itself, not that of the command executed in the sub-shell:
$ declare DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
0
Try
: $(yourcommand)
: is the shell's null command; it does nothing and ignores its arguments.
$() substitutes the output of your command, which : then discards.
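For example, to silence standard error as well, put the redirection inside the substitution:
: $(yourcommand 2>&1)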
Like andynormancx's post, use this (if you're working in a Unix environment):
scriptname > /dev/null
Or you can use this (if you're working in a Windows environment):
scriptname > nul
This is another option:
scriptname |& :
In Bash, |& is shorthand for 2>&1 |, so both stdout and stderr are piped into the null command :.
Take a look at this example from The Linux Documentation Project:
3.6 Sample: stderr and stdout 2 file
This will place all output of a program into a file. This is sometimes suitable for cron entries, if you want a command to run in absolute silence.
rm -f $(find / -name core) &> /dev/null
That said, you can use this simple redirection:
/path/to/command &>/dev/null
In your script, you can add the following to the lines that you know are going to produce output:
some_code 2>>/dev/null
Alternatively, you can try
some_code >>/dev/null
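As a minimal sketch (step1 and step2 are hypothetical stand-ins for commands in your script):
#!/bin/bash
step1 2>>/dev/null   # discard only this command's error output
step2 >>/dev/null    # discard only this command's standard output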
I scheduled a script using at scheduler in linux.
The job ran fine, but the echo statements which I had redirected to a file are nowhere to be found.
The at scheduling command is as follows:
at -f /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1 -v 09:50
Can anyone point out what the issue with the above command is?
I cannot see any echo statements from the script in the log.txt file
To include shell syntax like I/O redirection, you'll need to either fold it into your script, or pass the input to at via standard input, like so:
at -v 09:50 <<EOF
sh /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1
EOF
If func_test.sh is already executable, you can omit the sh from the beginning of the command; it's there to ensure that you are passing a valid command line to at.
You can also simply ensure that your script itself redirects all its output to a specific log file. As an example,
#!/bin/bash
echo foo
echo bar
becomes
#!/bin/bash
{
    echo foo
    echo bar
} >> /app/data/log/log.txt 2>&1
Then you can simply run your script with at using
at -f /app/data/scripts/func_test.sh -v 09:50
with no output redirection, because the script itself already redirects all its output to that file.
While running a bash shell script which contains this command:
iperf -c $server_ip -p $iperf_port -t $iperf_duration >> outputfile
the output is often displayed on the console rather than being appended to the output file. Any solutions? Am I doing something wrong?
I am using Ubuntu 12.04
I'm not familiar with iperf, but presumably it's writing some or all of its output to standard-error rather than to standard-output. To merge standard-error with standard-output and send them both to your file, you can add 2>&1 to the end:
iperf -c $server_ip -p $iperf_port -t $iperf_duration >> outputfile 2>&1