I need help with some scripts I'm writing.
Scenario:
Script A is executed by a scheduling process. It takes the arguments passed to it, parses them in some way, and runs script B, feeding it those arguments;
Script B does sudo -u user ssh user@REMOTEMACHINE, runs some commands (on the remote machine) and finally runs script C (also on the remote machine). I am passing those commands using a HERE DOCUMENT, and I'm passing the previous arguments on to this script too.
This "flow" runs correctly and the job completes successfully.
My problems are:
Since this "flow" is run by a scheduling process, I need to tell it whether the job completed successfully or not. I'm doing this via exit codes, so what I want is a chain of exit codes, propagating back from the last script to the first in case of errors. I'm not able to achieve this: the exit codes work correctly for the individual scripts (I tried executing them individually and checking their exit codes), but they are not sent back to the parent script. In my opinion, the problem is that ssh is reporting the exit code of the child script, which did end successfully, because there was no error executing it: it's a command inside of it that went wrong.
While the process works correctly, I still get this line:
ssh: Could not resolve hostname : Name or service not known
But actually the script completes successfully.
I hope you understand what I wrote; I can post my scripts here if needed.
Thanks
O.
EDIT:
These are the scripts. There could be some problems with variable names because I renamed them quickly to upload the files.
Since I can't upload 3 files because of my low reputation, I merged them into a single file:
SCRIPT FILE
I managed to solve the problem.
I followed olivier's advice and used the escape character so the variable is expanded by the remote machine.
I also implemented different exit codes based on where the error occurred.
Finally, I modified the first script as follows, right after it launches the second script via sudo -u:
EXITCODEOFTHESECONDSCRIPT=$?   # capture $? immediately, before any other command overwrites it
if [ "$EXITCODEOFTHESECONDSCRIPT" -eq 0 ]
then
    echo ""
    echo "Export job took $SECONDS seconds."
    echo ""
    exit 0
else
    exit "$EXITCODEOFTHESECONDSCRIPT"
fi
This way I am able to exit the main script while MAINTAINING the exit code provided by the second script.
In fact, I found that the problem was that the process worked well even in case of errors, but because I was running more commands after the second script failed (the echo command was enough), those commands set new exit codes that overwrote the one I wanted.
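For reference, here is a minimal sketch of the whole pattern (paths and hostnames are made up). ssh exits with the status of the remote shell, so as long as nothing runs between the ssh call and the capture of $?, the code travels back intact:

# script B (sketch): run the remote commands and surface their exit status
sudo -u user ssh user@REMOTEMACHINE /bin/bash <<'EOF'
set -e                  # make the remote shell stop at the first failing command
echo "doing remote work"
/path/to/scriptC        # hypothetical path to script C
EOF
exit $?                 # ssh's exit status is the remote shell's exit status

Script A then captures $? immediately after launching script B, exactly as in the snippet above.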
Thanks to all!
I have a Bash script that gets executed inside CodeBuild. It currently reads the last line in a file and, depending on what's in it, kicks off some AWS CLI commands to deploy a new Lambda. I'm trying to update the script to read multiple lines and deploy multiple Lambdas, but my current attempt at just putting the original code in a loop over each line in the file failed with the error: An error occurred (ResourceConflictException) when calling the UpdateFunctionCode operation: The operation cannot be performed at this time. An update is in progress for resource blahblahblah.
I think putting the CLI command inside a loop and sleeping until it succeeds should work. However, I'm pretty inexperienced with Bash, not quite sure of the syntax to do it, and it's not the easiest thing to test locally. Something like:
mins=0
# retry until the update succeeds or we have waited 5 minutes
until aws lambda update-function-code blahblahblah || [ "$mins" -ge 5 ]
do
    echo "Resource conflict...sleeping for 1 min"
    sleep 1m
    mins=$(( mins + 1 ))
done
There are actual args instead of blahblahblah, and I'm not sure how long I actually want to wait for the resource conflict to resolve. Also, earlier in the script, set -eou pipefail has been called, so I don't know whether I'd need to unset it or do something special to capture the exit status of the AWS command.
I'd most like to know how to actually write the code using this logic, but if there are any problems with doing it this way that I should account for, or entirely different and better ways of doing it, that would be helpful, too.
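One possible sketch of that logic (the blahblahblah placeholder kept from the question). The key point is that set -e does not apply to a command tested by if, while, or until, so the failing aws call can be retried without killing the script:

mins=0
until [ "$mins" -ge 5 ]
do
    # set -e ignores the failure of a command used as an 'if' condition,
    # so this is safe even with 'set -eou pipefail' in effect
    if aws lambda update-function-code blahblahblah; then
        break                        # the update went through
    fi
    echo "Resource conflict...sleeping for 1 min"
    sleep 1m
    mins=$(( mins + 1 ))
done

# if we never succeeded, fail the build explicitly
[ "$mins" -lt 5 ] || exit 1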
My requirement is to run multiple shell scripts at a time.
After searching on Google, I concluded that I can append "&" to the command when triggering the run, like:
sh file.sh &
The thing is, I have a for loop which generates the values and provides runtime parameters to the shell script.
sample code:
declare -a arr=("1" "2")
for ((i=0;i<${#arr[@]};++i));
do
    sh fileto_run.sh "${arr[i]}" &
done
This successfully triggers fileto_run.sh in parallel, but then it just hangs there. Imagine I have an echo statement in the script; then the following is how the session looks:
-bash-x.x$ 1
2
Until I press ctrl+c, the code execution won't exit.
I thought of using a break statement but that breaks the loop.
Am I doing something wrong anywhere?
Please do correct me.
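A minimal sketch of one way to handle this (script name taken from the question): background each run with &, then wait for all of them so the parent script returns to the prompt cleanly once every job is done:

declare -a arr=("1" "2")
for ((i=0;i<${#arr[@]};++i));
do
    sh fileto_run.sh "${arr[i]}" &   # run each instance in the background
done

wait                                 # block until every background job finishes
echo "all runs finished"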
Basically, I have written a shell script for a homework assignment that works fine; however, I am having issues with exiting. Essentially, the script reads numbers from the user until it reads a negative number, and then does some output. I have the script set to exit with an error code when it receives anything but a number, and that's where the issue is.
The code is as follows:
if test "$number" -eq "$number" >/dev/null 2>&1
then
    : # "do stuff"
else
    echo "There was an error"
    exit 1
fi
The problem is that we have to turn in our programs as text files produced with the script command, and whenever I record my program and test the error cases, it exits out of script as well. Is there a better way to do this?
The script is being run with the following command in the terminal:
script "insert name of program here"
Thanks
If the program you're testing is invoked as a subprocess, then any exit command will only exit that subprocess. The fact that you're seeing contrary behavior means you must be invoking it differently.
When invoking your script from the parent testing program, use:
# this runs "yourscript" as its own, external process.
./yourscript
...to invoke it as a subprocess, not
# this is POSIX-compliant syntax to run the commands in "yourscript" in the current shell.
. ./yourscript
...or...
# this is bash-extended syntax to run the commands in "yourscript" in the current shell.
source yourscript
...as either of the latter will run all the commands -- including exit -- inside your current shell, modifying its state or, in the case of exit, exec or similar, telling it to cease execution.
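A quick illustration of the difference (hypothetical script name):

$ cat demo.sh
echo "before exit"
exit 3
$ ./demo.sh       # subprocess: only demo.sh terminates
before exit
$ echo $?
3
$ . ./demo.sh     # current shell: this line ends your shell session itself
before exit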
When using Net::SSH to run commands on a remote connection, it adds the following script to the end of each and every command:
DONTEVERUSETHIS=$?; echo #{manager.separator} $DONTEVERUSETHIS; echo \"exit $DONTEVERUSETHIS\"|sh
the output produced looks like:
DONTEVERUSETHIS=$?; echo 10e75e2821012645fa3a3cc08ec5de527a392af68db4c3cac63dac22d4de2a8708fcc176190817fe $DONTEVERUSETHIS; echo "exit $DONTEVERUSETHIS"|sh
Here's a link to the source code of Net::SSH::Shell::Process; look at the 'run' method.
Can anyone explain why this is always added?
It doesn't appear in the console output, but it plays hell with parsing ~/.bash_history.
A quick look into the source repository reveals this commit:
keep the exitcode 1 available for the next command
In effect, this allows you to inspect the value of $? (i.e. the exit code of the previous command) in the next command.
TL;DR: It's the machine readable equivalent of a colored shell prompt. It's there to tell the library when the issued command has finished, and whether it was successful.
When running a command with Net::SSH (not ::Shell), here's what happens:
Connection is established
Command is sent
Output is received
The command exits, sshd returns the exit code and ends the connection.
This means that it's easy to:
Get the output: just read until sshd closes the connection.
Get the exit code: sshd returns it.
However, it means that each command runs in a separate session, so cd /tmp followed by pwd will print /home/youruser: these are two different sessions, and the former doesn't affect the latter.
The purpose of Net::SSH::Shell is instead to run multiple, individual commands in the same shell session:
Connection is established.
Commands are sent as a single, infinite, concatenated stream
Output is received as a single, infinite, concatenated stream
This leaves two open questions:
How do you know whether the command has finished or whether it's still processing?
How do you get the exit code now that sshd doesn't return it?
The way Net::SSH::Shell solves this is by modifying the command in the way you saw, to make it print a unique ID and exit code when done:
To get the command's output, read until a line with the unique ID is printed.
To get the exit code, read it from the same line.
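A rough shell-level sketch of the same trick (the marker scheme here is made up, not Net::SSH's exact format): append a unique marker plus $? after every command, and have the reader scan for it.

# generate a unique separator for this session
SEP=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')

# for each command, send "command; echo SEP $?" to the remote shell's stdin
printf '%s; echo %s $?\n' 'cd /tmp && pwd' "$SEP"

# the reading side collects output lines until one starts with $SEP:
# everything before it is the command's output, and the number after it
# is the command's exit code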
I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text, but when I try to cron it there are problems. It seems to run (I see an entry in the cron log showing it was kicked off), but nothing happens: it doesn't produce any output and doesn't do any of its file operations. It also doesn't appear anywhere in the running processes, so it appears to be exiting immediately.
After some troubleshooting I found that removing "set -e" resolved the issue; it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit?
Thanks for the help,
Ryan
With set -e, the script will stop at the first command which gives a non-zero exit status. This does not necessarily mean that you will see an error message.
Here is an example, using the false command which does nothing but exit with an error status.
Without set -e:
$ cat test.sh
#!/bin/sh
false
echo Hello
$ ./test.sh
Hello
$
But the same script with set -e exits without printing anything:
$ cat test2.sh
#!/bin/sh
set -e
false
echo Hello
$ ./test2.sh
$
Based on your observations, it sounds like your script is failing for some reason (presumably related to the different environment, as Jim Lewis suggested) before it generates any output.
To debug, add set -x to the top of the script (as well as set -e) to show commands as they are executed.
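For example, a sketch continuing the test scripts above (the file name is mine): with set -x every command is echoed with a leading + before it runs, so the last traced line in the cron output is the command that killed the script.

$ cat test3.sh
#!/bin/sh
set -ex
false
echo Hello
$ ./test3.sh
+ false
$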
When your script runs under cron, the environment variables and path may be set differently than when the script is run directly by a user. Perhaps that's why it behaves differently?
To test this: create a new script that does nothing but printenv and echo $PATH.
Run this script manually, saving the output, then run it as a cron job, saving that output.
Compare the two environments. I am sure you will find differences... an interactive login shell will have had its environment set up by sourcing a ".login", ".bash_profile", or similar script (depending on the user's shell). This generally will not happen in a cron job, which is usually the reason a cron job behaves differently from running the same script in a login shell.
To fix this: at the top of the script, either explicitly set the environment variables and PATH to match the interactive environment, or source the user's ".bash_profile", ".login", or other setup script, depending on which shell they're using.