How to completely hide nohup messages? - bash

I'm trying to completely hide nohup messages from my terminal. My nohup usage looks like this:
if eval 'nohup grep "'$arg1' '$arg2' [A-Za-z]\+" /tmp/dict.txt >& /dev/null &'; then
but in console I get one nohup message in Polish:
nohup: zignorowanie wejścia i przekierowanie standardowego błędu do standardowego wyjścia
which means "ignoring input and redirecting standard error to standard output"
Is there any chance to hide every nohup message?

The trick I use is to run nohup in a sub-shell and redirect the standard error output of the sub-shell to /dev/null:
if (nohup grep "'$arg1' '$arg2' [A-Za-z]\+" /tmp/dict.txt & ) 2>/dev/null
then : …whatever…
else : …never executed…
fi
However, the exit status of that command is 0 even if the grep fails to do anything (mine wrote grep: /tmp/dict.txt: No such file or directory to nohup.out, for example, yet the exit status was still 0, success). The one disadvantage of this notation is that the job is run by the sub-shell, so the main process cannot wait for it to complete. You can work around that with:
(exec nohup grep "'$arg1' '$arg2' [A-Za-z]\+" /tmp/dict.txt ) 2>/dev/null &
This gives you the job control information at an interactive terminal; it won't in a non-interactive script, of course. You could also place the & outside the sub-shell, after the error redirection in the first command line, and drop the exec; the result is essentially the same, as the sketch below shows.
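A minimal sketch of that variant (same placeholder arguments as above):
(nohup grep "'$arg1' '$arg2' [A-Za-z]\+" /tmp/dict.txt) 2>/dev/null &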

Related

Logs are not being written to 'nohup.out' file

I am executing script1.sh through crontab.
script1.sh:
echo "Crontab is executing this script."
echo "Some code is running here."
cd cron
./script2.sh
script1.sh invokes script2.sh.
script2.sh:
echo "Executing script2."
echo "Log of script3.sh should appear in the nohup.out file."
nohup sh script3.sh &
The nohup line above does not log anything to nohup.out, but the variant sh script3.sh > nohup.out & does log to the file. Why is that?
script3.sh contains the following line:
echo "Executing script3. This is just a test example to simulate things."
Why are the logs not being written to the nohup.out file?
From info nohup:
23.4 ‘nohup’: Run a command immune to hangups
=============================================
‘nohup’ runs the given COMMAND with hangup signals ignored, so that
the command can continue running in the background after you log out.
...
If standard output is a terminal, the command’s standard output is appended to the file ‘nohup.out’; if that cannot be written to (e.g., for lack of write permission or space in the current directory), it is appended to the file ‘$HOME/nohup.out’; and if that cannot be written to, the command is not run.
...
However, if standard output is closed, standard error terminal output is instead appended to the file ‘nohup.out’ or ‘$HOME/nohup.out’ as above.
When cron executes the script, stdout is neither a terminal nor closed, so nohup has no reason to redirect any output.
Another demonstration of nohup without a terminal is when you start your script2.sh, which itself contains a nohup command, under another nohup:
nohup ./script2.sh > masternohup.out
The output of script3.sh will be written to masternohup.out.
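If you do want script3.sh's output captured when running from cron, a simple sketch is to redirect explicitly instead of relying on nohup's terminal check (the log path is illustrative):
nohup sh script3.sh > nohup.out 2>&1 &  # explicit redirection works whether or not stdout is a terminal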
From man nohup
...
If standard input is a terminal, redirect it from an unreadable file.
If standard output is a terminal, append output to 'nohup.out' if possible, '$HOME/nohup.out' otherwise.
If standard error is a terminal, redirect it to standard output.
To save output to FILE, use 'nohup COMMAND > FILE'.
...
So the output could also end up in $HOME/nohup.out. Either way, it's best to control the output explicitly:
nohup COMMAND &> FILE
You don't need nohup these days. The sub-shell escape method lets a process leave your shell's job control entirely, so it never receives SIGHUP or any other signal directed at the process group.
In bash:
(command &>log.txt &)
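For instance, here is a sketch of the technique with sleep standing in for a long-running command; after the sub-shell exits, the background process no longer belongs to your shell's job control:
(sleep 300 &>log.txt &)  # the sub-shell exits immediately, orphaning sleep
ps -o pid,ppid,pgid,cmd -C sleep  # on Linux: its parent is no longer your shell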

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile
and the terminal. That part works fine, but after I execute the script,
a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee running in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange.com. Fragile workarounds aside, the easiest way I see to solve this is to put your whole script inside a group command and pipe it to tee:
{
your-code-here
} | tee logfile
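As a complete minimal example of this pattern (the script body is illustrative): the shell now waits for the whole pipeline, including tee, so the prompt cannot race ahead of the output.
#!/bin/bash
{
echo "output"  # goes to both the terminal and logfile
echo "more output"
} | tee logfile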
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2  # back up the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$!  # track the PID of tee after starting it
cleanup() {  # define a function we'll call during shutdown
  retval=$?
  exec >&$orig_stdout   # copy the original stdout back to FD 1, overwriting the pipe to tee
  exec 2>&$orig_stderr  # if something pointed stderr at tee as well, fix that too
  wait "$tee_pid"       # now, wait until tee exits
  exit "$retval"        # and complete the exit with our original exit status
}
trap cleanup EXIT  # arrange for the function above to be called on exit
echo "Writing something to stdout here"

Want to redirect the output of the nohup command [duplicate]

I have a problem with the nohup command.
When I run my job, it produces a lot of data. The output file nohup.out becomes too large and my process slows down. How can I run this command without generating nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
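A quick sketch of that combination (command is a placeholder):
nohup command >/dev/null 2>&1 &  # detached from the terminal, no nohup.out
disown  # remove the job from the shell's job table; the shell forwards it no signals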
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
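Note that the order of the two redirections matters; a quick sketch of the difference (command is a placeholder):
command >/dev/null 2>&1  # stdout to /dev/null first, then stderr follows it: both discarded
command 2>&1 >/dev/null  # wrong order: stderr is pointed at stdout's old target (the terminal) before stdout moves, so errors still reach the screen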
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor at another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process whose input file descriptor is open on an output stream, so instead of just hitting end-of-file, any read attempt fails with a "Bad file descriptor" error.
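You can see the end-of-file behavior for yourself with a sketch like this:
read -r line </dev/null || echo "immediate EOF"  # the read fails right away: /dev/null yields EOF
cat </dev/null  # prints nothing and exits successfully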
nohup some_command > /dev/null 2>&1&
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The special option -- is used to signal that the rest of the arguments are the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
The following command lets you run something in the background without generating nohup.out:
nohup command | tee &
This works because nohup's standard output is then a pipe rather than a terminal, so nohup does not create nohup.out, while tee still copies the output to your terminal.
This way, you can still get console output while running a script on a remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo directly causes sudo to re-ask for the password, hence the somewhat awkward bash -c wrapper.
If you have a bash shell on your Mac/Linux machine in front of you, try out the following steps to understand redirection in practice:
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen:
./zz.sh
Now start with the standard redirection:
./zz.sh > zfile.txt
In the above, "echo" (STDOUT) goes into zfile.txt, whereas "error" (STDERR) is displayed on the screen.
The above is the same as:
./zz.sh 1> zfile.txt
Now you can try the opposite and redirect "error" (STDERR) into the file; the STDOUT from the echo command goes to the screen:
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT 1 to zfile.txt
THEN, send STDERR 2 to STDOUT 1 itself (by using the &1 pointer).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can wrap the whole thing in nohup and append & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
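To verify, wait for the background job and then inspect the file; both streams should be interleaved there (assuming the zz.sh from above):
wait  # wait for the background nohup job to finish
cat zfile.txt  # both the echo line and the error line end up here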
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g., I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

Hudson : "yes: standard output: Broken pipe"

I need to run a shell script in Hudson. That script needs an answer from the user. To give an automatic answer I use the following command line:
yes | ./MyScript.sh
This works well in an Ubuntu terminal. But when I use the same command in the Hudson job, the script runs and does all the needed work, but at the end I get these two lines of error:
yes: standard output: Broken pipe
yes: write error
And this causes the failure to my Hudson job.
How should I change my command line to work well in Hudson?
But how would you explain that I don't get this error while running the script locally, yet I do get it when running it remotely from a Hudson job?
When you run it in a terminal (locally), yes is killed by the SIGPIPE signal that is generated when it tries to write to the pipe after MyScript.sh has already exited.
Whatever runs the command (remotely) in Hudson traps that signal (sets its handler to SIG_IGN; you can test this by running the trap command and searching for SIGPIPE in the output) and does not restore the signal for new child processes (yes and whatever runs MyScript.sh, e.g., sh in your case). That leads to a write error (EPIPE) instead of the signal; yes detects the write error and reports it.
You can simply ignore the error message:
yes 2>/dev/null | ./MyScript.sh
You could also report a bug against the component that runs the pipeline: the bug is in not restoring SIGPIPE to the default handler after the child is forked. That is what programs expect when they are run in a terminal on POSIX systems. I don't know whether there is a standard way to do it for a Java-based program, though; the JVM probably raises an exception on every write error, so not dying on SIGPIPE is not a problem for a Java program.
It is common for daemons such as the hudson process to ignore the SIGPIPE signal: you don't want your daemon to die only because the process it is communicating with dies, and you would check for write errors anyway.
Ordinary programs that are written to be run in a terminal do not check the status of every printf() for errors, but you do want them to die if a program further down the pipeline dies; e.g., in a source | sink pipeline, you usually want the source process to exit as soon as possible once sink exits.
The EPIPE write error is returned if the SIGPIPE signal is disabled (as appears to be the case under Hudson) or if a program does not die on receiving it (the yes program does not define any handler for SIGPIPE, so it should die on receiving the signal).
I don't want to ignore the error; I want to find the right command or fix to get rid of it.
The only ways the yes process stops are if it is killed or if it encounters a write error. If the SIGPIPE signal is set to be ignored (by the parent) and no other signal kills the process, then yes receives a write error when ./MyScript.sh exits. There are no other options if you use the yes program.
The SIGPIPE signal and the EPIPE error communicate the exact same information: the pipe is broken. If SIGPIPE were enabled for the yes process, you simply wouldn't see the error message; seeing it changes nothing. It just means that ./MyScript.sh exited (successfully or unsuccessfully; it doesn't matter).
I had this error too, and my problem with it is not that it outputs yes: standard output: Broken pipe but rather that it returns a non-zero exit code.
Because I run my script with Bash strict mode, including -o pipefail, when yes "errors" it causes my script to fail.
How to avoid an error
The way I avoided this is like so:
bash -c "yes || true" | my-script.sh
Are you trying to use the yes program to pipe into the script, or to echo yes to the script? If the process runs through Jenkins, add ; true to the end of your shell command.
Since yes and ./MyScript.sh can each be run in an explicit subshell, it is possible to background the yes command, send its PID to the ./MyScript.sh subshell, and then implement a trap on EXIT there to terminate the yes command manually. (The trap on EXIT should always be implemented in the subshell of the last command of a piped command sequence.)
# avoid hangup or "broken pipe" error message when the parent process has set SIGPIPE to be ignored
# sleep 0 or cat /dev/null: do nothing, but with an external command (for a shell builtin, see: help :)
(
  trap "" PIPE
  ( (sleep 0; exec yes) & echo ${!}; wait ${!} ) |
  (
    trap 'trap - EXIT; kill "$yespid"; exit 0' EXIT
    yespid="$(head -n 1)"
    head -n 10  # replacement for ./MyScript.sh
  )
  echo ${PIPESTATUS[*]}
)
If you want to exit the yes subshell with exit code 0 you can do this as well:
# avoid hangup or "broken pipe" error message when the parent process has set SIGPIPE to be ignored
# set exit code of the yes subshell to 0
(
  trap "" PIPE
  (
    trap 'trap - TERM; echo "kill from yes subshell ..." 1>&2; kill "${!}"; exit 0' TERM
    subshell_pid="$(bash -c 'echo "$PPID"')"
    (sleep 0; exec yes) & echo "${subshell_pid}"; wait ${!}
  ) |
  (
    trap 'trap - EXIT; kill -s TERM "$subshell_pid"; exit' EXIT
    subshell_pid="$(head -n 1)"
    head -n 10  # replacement for ./MyScript.sh
  )
  echo ${PIPESTATUS[*]}
)
Since the command yes runs in an infinite loop, I supposed this might be the solution:
yes | head -1 | ./MyScript.sh  # only one line of yes's output reaches the script
But I got the same error.
We can redirect the error to /dev/null, as suggested by @J.F. Sebastian, or mask the failure like this:
yes | head -1 | ./MyScript.sh || yes
But these suggestions were less appreciated. So I had to create my own named pipe, as follows:
mkfifo /tmp/my_fifo  # create the named pipe
exec 3<>/tmp/my_fifo  # open the named pipe for both reading and writing via a file descriptor
echo "Y" >/tmp/my_fifo  # write the answer into the named pipe (note: yes actually outputs "y" by default)
./MyScript.sh </tmp/my_fifo  # read from the named pipe
rm /tmp/my_fifo  # remove the named pipe
I'm expecting more valuable solutions with better explanations.
Here is an explanation of file descriptors in Linux.
Thanks

Can I change the name of `nohup.out`?

When I run nohup some_command &, the output goes to nohup.out; man nohup says to look at info nohup, which in turn says:
If standard output is a terminal, the command's standard output is appended to the file 'nohup.out'; if that cannot be written to, it is appended to the file '$HOME/nohup.out'; and if that cannot be written to, the command is not run.
But if I already have one command using nohup with its output going to nohup.out and I want to run another nohup command, can I redirect the output to nohup2.out?
nohup some_command &> nohup2.out &
and voila.
Older syntax for Bash version < 4:
nohup some_command > nohup2.out 2>&1 &
For some reason, the above answer did not work for me; I did not return to the command prompt after running it with the trailing & as I expected. Instead, I simply tried
nohup some_command > nohup2.out &
and it works just as I want it to. Leaving this here in case someone else is in the same situation. I'm running Bash 4.3.8, for reference.
The methods above truncate your output file each time you run the nohup command.
To append output to a user-defined file, use >> with nohup:
nohup your_command >> filename.out &
This command appends all output to your file without removing the old data.
Since file handles point to inodes (which are stored independently of file names) on Linux/Unix systems, you can rename the default nohup.out to any other filename at any time after starting nohup something &. So one could do the following:
$ nohup something&
$ mv nohup.out nohup2.out
$ nohup something2&
Now something adds lines to nohup2.out and something2 to nohup.out.
My start.sh file:
#!/bin/bash
nohup forever -c php artisan your:command >> storage/logs/yourcommand.log 2>&1 &
There is only one important thing: the FIRST command MUST BE nohup, the second command must be forever, -c is forever's parameter, and the 2>&1 & part belongs to nohup. After running this line you can log out from your terminal, log back in, and run forever restartall. Voilà! You can restart, and you can be sure that if the script halts, forever will restart it.
I <3 forever
