What's the best way to switch user, run multiple commands, and send the output from all of the commands to a file?
Needs to be done in one line so something like:
sudo -u user (command1; command2; command3; command4; command5) > out.log 2>&1
Thanks in advance.
Running multiple commands separated by semicolons requires a shell, so you must invoke one explicitly; in this case, bash:
sudo -u user bash -c 'command1; command2; command3; command4; command5' > out.log 2>&1
Here, sudo -u user bash runs bash under user. The -c option tells bash which commands to run.
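For example, a minimal sketch (the user name deploy and the three commands are placeholders, not taken from the question):
sudo -u deploy bash -c 'whoami; hostname; date' > out.log 2>&1
Afterwards, out.log contains the output of all three commands, including anything they wrote to stderr.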
sudo -u user command1 > outFile; command2 >> outFile; command3 >> outFile; command4 >> outFile; command5 >> outFile
(Note that sudo applies only to command1 here; the remaining commands run as the invoking user.)
If you want subsequent commands not to run when a previous one fails, then use && instead:
command1 > outFile && command2 >> outFile && command3 >> outFile && command4 >> outFile && command5 >> outFile
Note that command1's output is redirected to outFile with '>' (which creates the file, truncating it if it already exists), whereas all subsequent commands redirect with '>>' (which appends to the file).
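A quick demonstration of the difference (file name and contents are just for illustration):
$ echo first > outFile     # '>' creates or truncates the file
$ echo second >> outFile   # '>>' appends
$ cat outFile
first
second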
In an effort to create more manageable scripts that each write their own output to a single location (via 'exec > file'), is there a better solution than the one below for combining stdout redirection with zenity (which, in this usage, relies on piped stdout)?
parent.sh:
#!/bin/bash
exec >> /var/log/parent.out
( true; sh child.sh ) | zenity --progress --pulsate --auto-close --text='Executing child.sh'
[[ "$?" != "0" ]] && exit 1
...
child.sh:
#!/bin/bash
exec >> /var/log/child.out
echo 'Now doing child.sh things..'
...
When doing something like this:
sh child.sh | zenity --progress --pulsate --auto-close --text='Executing child.sh'
zenity never receives stdout from child.sh since it is being redirected from within child.sh. Even though it seems to be a bit of a hack, is using a subshell containing a 'true' + execution of child.sh acceptable? Or is there a better way to manage stdout?
I get that 'tee' is acceptable to use in this scenario, though I would rather not have to write out child.sh's logfile location each time I want to execute child.sh.
Your redirection exec > stdout.txt will lead to an error as soon as you try to read the file back:
$ exec > stdout.txt
$ echo hello
$ cat stdout.txt
cat: stdout.txt: input file is output file
You need an intermediary file descriptor.
$ exec 3> stdout.txt
$ echo hello >&3
$ cat stdout.txt
hello
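When you no longer need it, you can close the extra descriptor (standard Bash syntax, not part of the original answer):
$ exec 3>&-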
My Bash script calls a lot of commands, most of them output something. I want to silence them all. Right now, I'm adding &>/dev/null at the end of most command invocations, like this:
some_command &>/dev/null
another_command &>/dev/null
command3 &>/dev/null
Some of the commands have flags like --quiet or similar; still, I'd need to work that out for each of them, and I'd rather silence all of them by default and only allow output explicitly, e.g., via echo.
You can use the exec command to redirect everything for the rest of the script.
You can use 3>&1 to save the old stdout stream on FD 3, so you can redirect output to that if you want to see the output.
exec 3>&1 &>/dev/null
some_command
another_command
command_you_want_to_see >&3
command3
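To undo the silencing later in the same script, you can point stdout back at the saved descriptor and close it (a small addition to the answer above; note that stderr stays on /dev/null unless you also saved and restored it):
exec 1>&3 3>&-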
You can create a function:
run_cmd_silent () {
    # echo "Running: $*"
    "$@" > /dev/null 2>&1
}
You can uncomment the echo line to print the actual command being run.
Now you can run your commands like this, passing the command and its arguments as separate words, e.g.:
run_cmd_silent git clone git@github.com:prestodb/presto.git
I have a series of commands that I want to use nohup with.
Each command can take a day or two, and sometimes I get disconnected from the terminal.
What is the right way to achieve this?
Method1:
nohup command1 && command2 && command3 ...
or
Method2:
nohup command1 && nohup command2 && nohup command3 ...
or
Method3:
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I can see method 3 might work, but it forces me to create a temp file. If I can choose from only method 1 or 2, what is the right way?
nohup command1 && command2 && command3 ...
The nohup will apply only to command1. When it finishes (assuming it doesn't fail), command2 will be executed without nohup, and will be vulnerable to a hangup signal.
nohup command1 && nohup command2 && nohup command3 ...
I don't think this would work. Each of the three commands will be protected by nohup, but the shell that handles the && operators will not. If you log out before command2 begins, I don't think it will be started; likewise for command3.
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I think this should work -- but there's another way that doesn't require creating a script:
nohup sh -c 'command1 && command2 && command3'
The shell is then protected from hangup signals, and I believe the three sub-commands are as well. If I'm mistaken on that last point, you could do:
nohup sh -c 'nohup command1 && nohup command2 && nohup command3'
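In practice you would probably also background the whole thing and redirect its output explicitly; otherwise nohup appends stdout to a file named nohup.out when stdout is a terminal. A variant of the same idea (log file name is a placeholder):
nohup sh -c 'command1 && command2 && command3' > commands.log 2>&1 &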
I have this
script -q -c "continuously_running_command" /dev/null > out
When the above command line is running, I can stop it with Ctrl+C.
However, I'd like to run the above command line in the background so that I can stop it with kill -9 %1.
But when I try to run this
script -q -c "continuously_running_command" /dev/null > out &
I get
[2]+ Stopped (tty output) script -q -c "continuously_running_command" /dev/null 1>out
Question:
How can I run the above command line in the background?
In order to background a process with redirection to a file, you must also redirect stderr. With stdout and stderr redirected, you can then background the process:
script -q -c "continuously_running_command" /dev/null > out 2>&1 &
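With the job backgrounded, you can stop it the way the question intended (assuming it is job 1):
kill %1        # or kill -9 %1 if it will not die otherwise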
Fully working example:
#!/bin/bash
i=0
while test "$i" -lt 100; do
    ((i+=1))
    echo "$i"
    sleep 1
done
Running the script in the background and tailing the output file:
alchemy:~/scr/tmp/stack> ./back.sh > outfile 2>&1 &
[1] 31779
alchemy:~/scr/tmp/stack> tailf outfile
10
11
12
13
14
...
100
[1]+ Done ./back.sh > outfile 2>&1
In the case of:
script -q -c "continuously_running_command" /dev/null
The problem in this case is that script itself redirects the entire session transcript to FILE, here /dev/null. So either give script a real output file instead of /dev/null, or redirect stderr to out:
script -q -c "continuously_running_command" out 2>&1 &
or
script -q -c "continuously_running_command" /dev/null 2>out &
I believe I have everything set up correctly for my if/else statement; however, it keeps outputting content into my shell terminal as if I ran the command myself. Is there any way I can escape this, so I can run these commands without populating my terminal with text from the results?
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
echo "Process is running." &
echo $!
else
echo "Process is not running... Starting..."
python likebot.py &
echo $!
fi
Here is what the output looks like a few minutes after running my bash script:
[~]# sh check.sh
Process is not running... Starting...
12359
[~]# Your account has been rated. Sleeping on kranze for 1 minute(s). Liked 0 photo(s)...
Your account has been rated. Sleeping on kranze for 2 minute(s). Liked 0 photo(s)...
If you want to redirect output from within the shell script, you use exec:
exec 1>/dev/null 2>&1
This will redirect everything from now on. If you want to output to a log:
exec 1>/tmp/logfile 2>&1
To append a log:
exec 1>>/tmp/logfile 2>&1
To backup your handles so you can restore them:
exec 3>&1 4>&2
exec 1>/dev/null 2>&1
# Do some stuff
# Restore descriptors
exec 1>&3 2>&4
# Close the descriptors.
exec 3>&- 4>&-
If there is a particular section of a script you want to silence:
#!/bin/bash
echo Hey, check me out, I can make noise!
{
echo Thats not fair, I am being silenced!
mv -v /tmp/a /tmp/b
echo Me too.
} 1>/dev/null 2>&1
If you want to redirect the "normal" (stdout) output, use >/dev/null. If you also want to redirect the error output, add 2>&1 after it. The order matters: 2>&1 points stderr at whatever stdout points to at that moment, so it must come after the stdout redirection.
eg
$ command >/dev/null 2>&1
I think you have to redirect stdout (and maybe stderr) of the Python interpreter:
...
echo "Process is not running... Starting..."
python likebot.py >/dev/null 2>&1 &
...
For further details, please have a look at Bash IO-Redirection.
You have two options:
You can redirect standard output to a log file using > /path/to/file
You can redirect standard output to /dev/null to get rid of it completely using > /dev/null
If you want the error output redirected as well, use &> in place of >
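For example (some_command is a placeholder):
$ some_command > /path/to/file    # stdout to a file
$ some_command &> /dev/null       # stdout and stderr discarded (Bash)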
Also, not relevant to this particular example, but some bash commands support a 'quiet' or 'silent' flag.
Append >> /path/to/outputfile/outputfile.txt to the end of every echo statement
echo "Process is running." >> /path/to/outputfile/outputfile.txt
Alternatively, send the output to the file when you run the script from the shell
[~]# sh check.sh >> /path/to/outputfile/outputfile.txt