Why use OR TRUE operator in script with set -e? [duplicate] - shell

This question already has an answer here:
What is the use of "echo || true"?
(1 answer)
Closed 5 years ago.
I'm looking at a simple shell script that I found on github to install CouchDB 2.0 on Ubuntu 16.04. It has these lines:
#!/bin/sh
...
sudo apt-get update || true
...
What is the || pipe component doing? I.e. what is being piped to true and why? As far as I can tell, when I run it on my server I get the same result as running the apt-get update command without piping.
Previously, if I wanted to update/install packages I would do:
sudo apt-get update
sudo apt-get upgrade
Does piping to true result in the upgrade command being run? Also, can I assume that everything in a shell/bash script happens synchronously?

|| is not a pipe operator. It is a shell operator meaning "or". It only executes the following command if the preceding command fails. Since true always succeeds, and otherwise does nothing, the only point of || true is to ensure that the compound command succeeds.
Normally this is unnecessary, but you can put the shell into terminate-on-failure mode with set -e. In that case, any script command which fails will cause the script to immediately terminate. (This is sometimes done in order to avoid having to check the status of every command, but it is not generally recommended as best practice.)
With set -e, it is sometimes desirable to ignore failure for certain commands (such as apt-get update); appending || true to such a command will guarantee success and allow the script to continue even if the update fails.
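A minimal sketch of the difference, using the commands from the question (the failure mode described in the comments is just an example):
#!/bin/sh
set -e                         # terminate-on-failure mode

sudo apt-get update || true    # a transient repository error here is ignored; the script continues
sudo apt-get upgrade -y        # a failure here still terminates the script immediately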

Related

Shell shows a bunch of errors in Azure DevOps pipeline although commands succeed

I am using Azure DevOps to build and deploy my git repo to a third-party VPS. I do this by logging into the server from Azure DevOps over SSH and executing a shell script that pulls the git repo and builds it with, e.g., vue-cli and Laravel.
When the bash script is executed I receive a lot of errors on nearly all commands although everything is succeeding. Can anyone tell me how to get rid of these unless something is really failing (it would be nice to fail if npm build exits with code 1, for instance)?
See the screenshot below (not reproduced here).
Screenshots are only really helpful for visual issues. You can use Pastebin or a similar service to share long logs if necessary.
According to this issue Azure just follows the lead of whatever shell it's running code in. So, in Bash it continues unless explicitly told to stop.
To easily change this behavior you can add set -e (or set -o errexit) at the start of your script. The errexit option causes Bash to exit as soon as a command/etc returns a non-zero exit code.
Another worthy addition is the set -o pipefail option. If you've got any pipes like command1 | command2 this will return the first non-zero exit code from a chain of pipes of any length as the result. So, if command1 fails above but command2 succeeds it would return the failure code from command1 instead of overwriting it.
Finally, set -u (or -o nounset) causes an error when unset variables are encountered during parameter expansion. If running in a non-interactive shell, it will also exit.
Many scripts combine these by running set -euo pipefail at the beginning to stop them from running after the first problem is encountered.
If you want to explicitly force a bash script to exit you can use || and && accordingly. The expression command || exit will exit if the command fails and command && exit will exit if the command succeeds.
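A minimal sketch of a pipeline script using these options together (the build steps are hypothetical placeholders):
#!/bin/bash
set -euo pipefail              # errexit + nounset + pipefail, as described above

git pull                       # any non-zero exit status aborts the script here
npm ci                         # hypothetical build steps; a failing npm build now fails the job
npm run build
./optional-cleanup.sh || true  # a failure that is explicitly tolerated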
This seems to be a bug starting from npm 3.10.8. You can check this discussion.
As a workaround you can add this snippet to package.json and run the command with the --no-optional switch:
"optionalDependencies": {
"fsevents": "*"
},
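The pipeline's build step would then be invoked with that switch, for example (assuming an npm-based install and build):
npm install --no-optional   # skip optional dependencies such as fsevents
npm run build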
Also, there's a possibility that your npm version is too old. You can use the Node.js tool installer task with version spec = 12.x to install newer Node.js and npm versions.

bash call is creating a new process. I want to run the next command in the same process

I am logging into a remote server using an SSH client. I have written a script that will execute two commands on the server. But the first command executes a bash script that calls the bash command at the end. This results in the execution of only one command, not the other.
I cannot edit the first script to comment or remove the bash call.
I have written the following script:
abc.sh
#!/bin/bash
command1="sudo -u user_abc -H /abc/xyz/start_shell.sh"
command2="./try1.sh"
$command1 && $command2
Only command 1 is getting executed, not the second: the "bash" call creates a new process, so the second command never runs.
Solution 1
Since you can execute start_shell.sh you must have read permissions. Therefore, you could copy the script, modify it such that it doesn't call bash anymore, and execute the modified version.
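For example, assuming the interactive session is started by a bare bash line at the end of start_shell.sh (an assumption about its contents), something along these lines could work:
# make a private copy and strip the trailing interactive "bash" call
cp /abc/xyz/start_shell.sh /tmp/start_shell_nobash.sh
sed -i '/^[[:space:]]*bash[[:space:]]*$/d' /tmp/start_shell_nobash.sh
chmod +x /tmp/start_shell_nobash.sh

sudo -u user_abc -H /tmp/start_shell_nobash.sh && ./try1.sh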
I think this would be the best solution. If you really really really have to use start_shell.sh as is, then you could try one of the following solutions.
Solution 2
Try closing stdin using <&-. An interactive bash session will exit immediately if there is no stdin.
sudo -u user_abc -H /abc/xyz/start_shell.sh <&-; ./try1.sh
Solution 3
Change the order if both commands are independent.
./try1.sh; sudo -u user_abc -H /abc/xyz/start_shell.sh

Shell script: unexpected `(' [duplicate]

I have written the following code:
#!/bin/bash
#Simple array
array=(1 2 3 4 5)
echo ${array[*]}
And I am getting error:
array.sh: 3: array.sh: Syntax error: "(" unexpected
From what I came to know from Google, this might be due to the fact that Ubuntu is now not taking "#!/bin/bash" by default... but then again, I added the line and the error still appears.
Also, I have tried executing bash array.sh, but no luck! It just prints a blank line.
My Ubuntu version is: Ubuntu 14.04
Given that script:
#!/bin/bash
#Simple array
array=(1 2 3 4 5)
echo ${array[*]}
and assuming:
It's in a file in your current directory named array.sh;
You've done chmod +x array.sh;
You have a sufficiently new version of bash installed in /bin/bash (you report that you have 4.3.8, which is certainly new enough); and
You execute it correctly
then that should work without any problem.
If you execute the script by typing
./array.sh
the system will pay attention to the #!/bin/bash line and execute the script using /bin/bash.
If you execute it by typing something like:
sh ./array.sh
then it will execute it using /bin/sh. On Ubuntu, /bin/sh is typically a symbolic link to /bin/dash, a Bourne-like shell that doesn't support arrays. That will give you exactly the error message that you report.
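You can check what /bin/sh points to on your system with something like this (exact output will vary):
ls -l /bin/sh
# lrwxrwxrwx 1 root root 4 ... /bin/sh -> dash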
The shell used to execute a script is not affected by which shell you're currently using or by which shell is configured as your login shell in /etc/passwd or equivalent (unless you use the source or . command).
In your own answer, you say you fixed the problem by using chsh to change your default login shell to /bin/bash. That by itself should not have any effect. (And /bin/bash is the default login shell on Ubuntu anyway; had you changed it to something else previously?)
What must have happened is that you changed the command you use from sh ./array.sh to ./array.sh without realizing it.
Try running sh ./array.sh and see if you get the same error.
Instead of using sh to run the script, try the following command:
bash ./array.sh
I solved the problem miraculously. I found a link which described making the error go away with the following commands; after executing them, the issue got resolved.
chsh -s /bin/bash adhikarisubir
grep ^adhikarisubir /etc/passwd
FYI, "adhikarisubir" is my username.
After executing these commands, bash array.sh produced the desired result.

Check if possible to run command as sudo in Bourne shell?

I'm writing a Bourne shell deployment script, which runs some commands as root and some as the current user. I want to not run all commands as root, and check upfront if the commands I'll need are available to root (to prevent aborted half-done deployments).
In order to do this, I want to make a function that checks if a command can be run as root. My idea was to do this:
sudo_command() {
sudo sh -c 'type "$1"'
}
And then to use it like so:
required_sudo_commands="cp rm apt"
for command in $required_sudo_commands; do
sudo_command "$command" || (
echo "missing required command: $command;
exit 1;
)
done
As you might guess by my question here: it doesn't work. Does any of you see what I'm doing wrong here?
I tried running the command inside sudo_command by itself, but that miraculously (to me) did work. But when I put the command into a separate file, it didn't work.
There are two immediate problems:
The $1 not expanding in single quotes.
You can semi-fix this by expanding it in double quotes instead: sudo sh -c "type '$1'"
Your command not exiting. That's easily fixed by replacing your || (..) with || {..}.
(..) creates a subshell that limits the scope of everything inside it including exit. To group commands, use {..}
However, there is also the fundamental problem of trying to use sh -c 'type "$1"' to do anything.
One of the major points of sudo is the ability to limit what a user can and can't do. You're assuming that a user has complete, unrestricted access to run arbitrary commands as root, and that any problems are due to root not having these commands available.
That may be a valid assumption for you, but you may want to instead run e.g. sudo apt --version to get a better (but still incomplete) picture of whether you're allowed and able to run apt with sudo without requiring complete and unrestricted access.
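Putting the two fixes together (and keeping the caveat above in mind), one way the script from the question could be rewritten is roughly:
#!/bin/sh

sudo_command() {
    sudo sh -c "type '$1'"      # double quotes, so $1 is expanded before sudo runs
}

required_sudo_commands="cp rm apt"
for command in $required_sudo_commands; do
    sudo_command "$command" || {
        echo "missing required command: $command"
        exit 1                  # { ...; } runs in the current shell, so this exit really exits
    }
done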

Running a delayed command with 'sudo'

I want to run a Bash script as root, but delayed. How can I achieve this?
sudo "sleep 3600; command" , or
sudo (sleep 3600; command)
does not work.
You can use at:
sudo at next hour
And then you have to enter the command and close the file with Ctrl+D. Alternatively you can specify commands to be run in a file:
sudo at -f commands next hour
If you really must avoid using cron:
sudo sh -c "(sleep 3600; command)&"
The simplest answer is:
sudo bash -c 'sleep 3600; command' &
Because the semicolon is shell syntax (and sleep may well be a shell builtin rather than a separate executable), the whole thing is really a tiny shell script and hence needs a shell to run it; sudo by itself does not start one. bash -c tells sudo to run bash and pass it a script to execute as a string.
Of course this will “hang” until command has actually finished running, or be killed if you exit the surrounding shell. I haven’t found a simple way to use nohup to prevent that here, and at that point, you’re basically reimplementing the at command anyway. I have found the above solution useful in many simple cases though. ;)
For anything more complex… of course you can always make a real shell script file, with a shebang (#! …) at the start, and run that. But I assume the whole point is that you wanted to avoid this for something that simple.
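A minimal sketch of that approach (delayed.sh is a hypothetical file name, and command stands for whatever you actually want to run):
cat > delayed.sh <<'EOF'
#!/bin/sh
sleep 3600
command
EOF
chmod +x delayed.sh
sudo ./delayed.sh &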
You could theoretically pass a string as a file using Bash’s … <( … ) syntax, but sudo expects it to be a real file, and marked as executable too, so that won’t work.
Use:
sleep 3600; sudo <command>
Anyway, I would consider using cron in your case…
