Always execute 2 (or more) commands in one line, but return the exit status of 1st one - bash

Is it possible to execute 2 or more commands in one line but return the status of the 1st command in bash?
I have the following step in Docker build:
RUN bin/myserver && cat tmp/log && rm -rf tmp
It is essential to run that process inside my Docker build, and it makes sense to clean up afterwards so that unneeded files are not stored in the Docker layer.
myserver logs to a file rather than to the console, so I need to cat the log to see what happened, especially in case of failure.
How can I put those things together?

How about:
RUN sh -c 'bin/myserver; status=$?; cat tmp/log; rm -rf tmp; exit $status'
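The same pattern can be checked outside Docker; in this sketch, false stands in for a failing bin/myserver and an echo fakes the log file:

```shell
#!/bin/sh
# Same idea as the RUN line, written out: run the main command, save its
# exit status, do the cleanup, then propagate the saved status.
run_and_cleanup() {
    mkdir -p tmp
    echo "server started" > tmp/log   # stand-in for myserver's log file
    false                             # stand-in for bin/myserver failing
    status=$?                         # capture the status immediately
    cat tmp/log                       # always show the log, even on failure
    rm -rf tmp                        # cleanup runs regardless
    return $status                    # propagate the original status (1 here)
}

run_and_cleanup
echo "exited with: $?"                # prints "exited with: 1"
```

Note that the cleanup runs whether or not the main command failed, and the log is shown precisely in the failure case that && chains would skip.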

You can redirect a command's stderr using 2>.
To ignore the errors, redirect it to /dev/null: 2> /dev/null
So your command line becomes:
RUN bin/myserver && cat tmp/log 2> /dev/null && rm -rf tmp 2> /dev/null

How to ignore output of diff in bash

I am trying to compare two files and output a customized string. The following is my script.
#!/bin/bash
./${1} > tmp
if ! diff -q tmp ans.txt &>/dev/null; then
>&2 echo "different"
else
>&2 echo "same"
fi
When I execute script, I get:
sh cmp.sh ans.txt
different
Files tmp and ans.txt differ
The weird part is that when I run diff -q tmp ans.txt &>/dev/null directly, no output shows up.
How can I fix it (I don't want the line "Files tmp and ans.txt differ")? Thanks!
Most probably the version of sh you are using doesn't understand &>, the bash extension that redirects both stdout and stderr at once. A POSIX shell parses command &>/dev/null as { command & } followed by > /dev/null: the command is run in the background, and the stray > /dev/null redirects the output of an empty command. That is valid syntax, but it executes nothing. Because launching a background job always succeeds, the if condition always evaluates the same way, and diff's own output still reaches the terminal.
Prefer >/dev/null 2>&1 over &>. Use diff when you want a human-readable comparison; in batch scripts, use cmp to test whether files are identical:
if cmp -s tmp ans.txt; then
    >&2 echo "same"
else
    >&2 echo "different"
fi
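cmp -s prints nothing and signals the result only through its exit status; a quick check with throwaway files (f1, f2, f3 are made up for the demo):

```shell
#!/bin/sh
# cmp -s is silent: identical files give status 0, differing files give 1.
printf 'a\n' > f1
printf 'a\n' > f2
printf 'b\n' > f3

cmp -s f1 f2; echo "f1 vs f2: $?"   # prints "f1 vs f2: 0"
cmp -s f1 f3; echo "f1 vs f3: $?"   # prints "f1 vs f3: 1"
rm -f f1 f2 f3
```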

process does not log when run as background

I want to run this command inside a docker container ( ubuntu:18.04 image ):
(cd inse/; sh start.sh > log.txt 2>&1 ;) &
but when I run it, it does not log to log.txt. When I run it this way:
(cd inse/; sh start.sh > log.txt 2>&1 ;)
It blocks the foreground (as it should) and when I kill it I see that log.txt is filled with log output, which means it works correctly.
Why is this behaviour happening?
The contents of start.sh is:
#!/usr/bin/env sh
. venv/bin/activate;
python3 main.py;
UPDATE:
Actually this command is not the entry point of the container; I run it from another shell, inside a long-running (testing) container.
Using nohup, no success:
(cd inse/; nohup sh start.sh | tee log.txt;) &
I think this problem is related to using () (a subshell) in sh. It seems the output goes nowhere when the subshell is run in the background.
UPDATE 2:
Even this does not work:
sh -c "cd inse/; sh start.sh > log.txt 2>&1 &"
UPDATE 3:
Not even this:
sh -c "cd inse/; sh start.sh > log.txt 2>&1;" &
I found what was causing the problem: buffered Python output. Python buffers stdout when it is not attached to a terminal, so nothing reached the log until the buffer flushed.
I should have used Python's unbuffered output:
python -u blahblah
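For what it's worth, the (...) & construct itself redirects fine; in this sketch, a plain echo stands in for the Python process and the log is written as expected:

```shell
#!/bin/sh
# A background subshell redirects output normally; the real culprit above
# was Python buffering stdout when it is not attached to a terminal.
dir=$(mktemp -d)
(cd "$dir"; echo "from background" > log.txt 2>&1) &
wait                          # wait for the background job to finish
cat "$dir/log.txt"            # prints "from background"
rm -rf "$dir"
```

Besides python -u, setting PYTHONUNBUFFERED=1 in the environment has the same effect.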
Try the following, and please check that you have full access to the folder where log.txt is created. Use a CMD or RUN step in the Dockerfile to run start.sh:
CMD /inse/start.sh > log.txt 2>&1 ;
OR
RUN /inse/start.sh > log.txt 2>&1 ;

eval commands with STDERR/STDOUT redirection causing problems

I'm trying to write a bash script where every command is passed through a function that evaluates the command using this line:
eval $1 2>&1 >>~/max.log | tee --append ~/max.log
An example of a case where it does not work is when trying to evaluate a cd command:
eval cd /usr/local/src 2>&1 >>~/max.log | tee --append ~/max.log
The part that causes the issue is the | tee --append ~/max.log part. Any idea why I'm experiencing issues?
From the bash(1) man page:
Each command in a pipeline is executed as a separate process (i.e., in a subshell).
Therefore, cd can not change the working directory of the current shell when used in a pipeline. To work around this restriction, the usual approach would be to group cd with other commands and redirect the output of the group command:
{
cd /usr/local/src
command1
command2
} | tee --append ~/max.log
Without breaking your existing design, you could instead handle cd specially in your filter function:
# eval all commands (will catch any cd output, but will not change directory):
eval $1 2>&1 >>~/max.log | tee --append ~/max.log
# if command starts with "cd ", execute it once more, but in the current shell:
[[ "$1" == cd\ * ]] && $1
Depending on your situation, this may not be enough: You may have to handle other commands that modify the environment or shell variables like set, history, ulimit, read, pushd, and popd as well. In that case it would probably be a good idea to re-think the program's design.
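Putting that together, here is one possible sketch of such a filter function (the function name run and the mktemp log path are illustrative, not from the original script):

```shell
#!/bin/sh
# Log every command's output, but re-run "cd" commands in the current
# shell so the directory change is not lost in the pipeline subshell.
logfile=$(mktemp)

run() {
    # As in the question: stdout goes to the log only, while stderr
    # goes through tee (shown on screen and appended to the log).
    eval "$1" 2>&1 >>"$logfile" | tee --append "$logfile"
    # The eval ran in a subshell; if the command is a cd, repeat it here:
    case "$1" in
        "cd "*) $1 ;;
    esac
}

run "cd /tmp"
run "pwd"            # the log now contains /tmp: the cd took effect
rm -f "$logfile"
```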

Can I make bash report only errors at the end of a script?

I have a bash script which sequentially executes many tasks.
However, because I do not want to see simple status messages (such as the long output of yum -y update), I ignored all those messages using:
#!/bin/bash
(
yum -y update
cd /foo/bar
cp ~/bar /usr/bin/foo
...
...
) > /dev/null
This does the job just fine, but what if something goes wrong, like cp failing to copy some file? If this happens, I would like to catch the error and exit immediately, before the process continues.
How can I exit the process and show the related error? Normally,
an if/else clause would have to be used, like this:
#!/bin/bash
yum -y update
if [ $? -ne 0 ]; then
echo "error "
exit 1
fi
But the problem with this approach is that it would show the output of the process; I would therefore have to use > /dev/null on each line. More importantly, if I had more than 100 things to do, I would have to write many if/else statements.
Is there a convenient solution for this?
Rather than running your commands in a bare (...), make the shell exit on the first error, either with bash -ec:
bash -ec '
yum -y update
cd /foo/bar
...
...
cp ~/bar /usr/bin/foo' > /dev/null 2> errlog
OR using set -e:
(
set -e
yum -y update
cd /foo/bar
...
...
cp ~/bar /usr/bin/foo
) > /dev/null 2> errlog
The -e option makes the subshell exit as soon as a command fails.

How does one suppress the output of a command?

I would like to run some commands in my shell script, but I want a way to make them produce no output.
example:
#! / bin / bash]
rm / home / user
which returns: rm: can not lstat `/ home / user ': No such file or directory
I want the command to run invisibly, with no output!
To suppress the standard output of a command, the conventional way is to send the output to the null device, with somecommand arg1 arg2 > /dev/null. To also suppress error output, standard error can be redirected to the same place: somecommand arg1 arg2 > /dev/null 2>&1.
Your immediate error comes from incorrect spacing in the path: it should be rm /home/whatever, without spaces, assuming you don't have actual spaces in the directory names (in which case you will need to quote or escape them properly).
As for suppressing output: redirecting stdout here is a bit strange.
$ touch test.txt
$ rm test.txt > /dev/null 2>&1
^ an interactive rm is actually asking whether you really want to delete the file here, but the prompt itself is not printed
If you just want to not get error messages, just redirect stderr (file descriptor 2) to /dev/null
$ rm test.txt 2> /dev/null
or
$ rm test.txt 2>&-
If you don't want it to prompt with "do you really want to delete"-type messages, use the force flag -f:
$ rm -f test.txt 2> /dev/null
or
$ rm -f test.txt 2>&-
To delete a directory, you either want rmdir if it's empty, or the recursive -r flag; however, that will wipe away everything under /home/user, so you really need to be careful here.
Unless you run it in --verbose mode, rm writes nothing to stdout, so there is rarely a reason to redirect stdout at all.
Also, in bash, if you want to redirect both stdout and stderr to the same location, you can just use rm whatever &> /dev/null or something similar.
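To sum up the redirection forms mentioned here, a small sketch showing which stream each one silences:

```shell
#!/bin/sh
# echo writes to stdout; ls of a missing path complains on stderr.
echo "hello"    >  /dev/null           # stdout silenced: prints nothing
ls /no/such/dir 2> /dev/null           # stderr silenced: no error message
ls /no/such/dir >  /dev/null 2>&1      # both silenced (portable form)
# In bash you can also write the last one as: ls /no/such/dir &> /dev/null
echo "done"                            # prints "done": the only visible output
```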