Preventing a program from breaking my Bash script - bash

I'm using S3cmd in a bash script that runs at startup. If it returns an error code, the script is set up to do something about it. However, s3cmd seems to (sometimes) break everything when an error occurs, and prints information on screen. It just exits my script.
How do I prevent a program from breaking my Bash script? If something is wrong, I just want the bash script to keep on doing the next thing in line.
EDIT: It seems this only happens with /etc/rc.local. If I run the script as something else (/home/whateverscript) it does what I want it to.

Maybe you can wrap your
s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/"
in the script as below,
outputText=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" 2>&1;echo ,$?)
This redirects stderr to stdout (2>&1), and the variable outputText will contain the command's output (in the form *output,exit_status*) in case it's needed later in the script.
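If you later need the two pieces separately, parameter expansion can split them back apart (a small sketch, assuming you keep the trailing ,$? convention shown above):
exit_status=${outputText##*,}   # everything after the last comma, i.e. the exit status
cmd_output=${outputText%,*}     # everything before the last comma, i.e. the combined output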
If you don't want stdout but only the status of the command in outputText, you can use the following
status=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" > /dev/null 2>&1;echo $?)
The status variable would contain the status of the command that was run.
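To tie it back to the original goal of not letting a failure stop the script, here is a minimal sketch (my assumption about how the surrounding startup script might use it):
# Because the command substitution ends with echo, its own exit status is 0,
# so even under set -e the capture line above will not abort the script.
if [ "$status" -ne 0 ]; then
    echo "s3cmd sync failed with status $status, continuing anyway" >&2
fi
# ...carry on with the next thing in line...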
I hope it makes sense. Please comment, if it hasn't solved your problem.

Related

bash script fails to delete s3 object inside a while loop

I have a text file with a list of s3 objects, in the form of:
prefix_x/prefix_y/file_name_1
prefix_w/prefix_z/file_name_88
etc...
I wrote a bash script to delete all of these objects, as follows:
#! /bin/bash
LIST_OF_PATHS=$1
while read FILE_PATH; do
aws s3 rm s3://bucket-name/$FILE_PATH
done < $LIST_OF_PATHS
The script doesn't seem to delete the objects (they still appear in the UI and in the terminal when listing them with the CLI).
Further details and things I've already tried:
deleting the objects manually with a similar command in the CLI - it works.
adding an ls command to the loop provides no output, whereas typing the same command manually on the very same files does give output.
adding sleep 0.1 to each iteration of the loop didn't help either.
of course the script runs - I see the output: delete: s3://bucket-name/prefix_x/prefix_y/file_name_1, but the file doesn't actually get deleted.
running a simpler bash script with the same command and a specific file name (not inside a loop) does delete successfully.
What might be the problem?
Solved!
The issue was that in a script, bash expects the newline char to be '\n', but my input file contained '\r' at the end of each line. More on this can be found here:
https://superuser.com/questions/489180/remove-r-from-echoing-out-in-bash-script/489191
Many thanks to @Barmar, whose comment helped me see this and debug the issue.
I simply changed the input file itself, and the script ran perfectly as it was.
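For anyone hitting the same thing, converting the line endings of the list file first is enough; a couple of equivalent one-liners (the file name here is just a placeholder):
tr -d '\r' < objects_list.txt > objects_list.unix.txt   # write a cleaned copy
sed -i 's/\r$//' objects_list.txt                       # or strip the \r in place (GNU sed)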
Check your AWS permissions on the S3 service.
Check your AWS command line tool configuration.
Check your s3 bucket configuration.
Run your AWS s3 command without a loop and see if it runs successfully before using a loop.

Script piped into bash fails to expand globs during rm command

I am writing a script with the intention of being able to download and run it from anywhere, like:
bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)
The command above allows me to download the script, run interactive commands (e.g. read), and - for the most part - Just Works. I have run into an issue during the cleanup portion of my script, however, and haven't been able to discern a fix.
During cleanup I need to remove several .bkp files created by the script's execution. To do so I run rm -f **/*.bkp inside the script. When a local copy of the script is run, this works great! When run via bash/curl, however, it removes nothing. I believe this has something to do with a failure to expand the glob as a result of the way I've connected the I/O of bash and curl, but I have been unable to find a way to get everything to play nice.
How can I meet all of the following requirements?
Download and run a script from a remote resource
Ensure that the user's keyboard input is connected for use in e.g. read calls within the script
Correctly expand the glob passed to rm
Bonus points: colorize input with e.g. echo -e "\x1b[31mSome error text here\x1b[0m" (also not working, suspected to be related to the same bash/curl I/O issues)
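One thing worth checking here (an assumption on my part, not part of the original question): in bash, ** only recurses when the globstar option is enabled, so the script can turn it on explicitly rather than relying on whatever shell options the invoking environment happens to have:
shopt -s globstar nullglob   # bash 4+: ** matches recursively; no matches expand to nothing
rm -f -- **/*.bkp            # hypothetical cleanup line from the script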

Output of complete script to variable

I have a rather complex bash script which is normally run manually and thus needs live output on the console (stdout and stderr).
However, since the outcome and output of this script is rather important I'd like to save its output at the end into a database.
I have already a trap function for this and the database query as such is also not a problem. The problem is: How do I get the output of the entire script till that point into a variable?
The live console output should be preserved. The output after the database query (if any) does not matter.
Is this possible at all? Might it be necessary to wrap the script into another script (file)?
I'm doing a similar task like this:
exec 5>&1                          # duplicate the original stdout onto fd 5
message=$(check | tee /dev/fd/5)   # print live via fd 5 while capturing into $message
mutt -s "$subjct" "$mailto" <<< "$message"
Put your script in place of the check function and replace the mailing step with your database query.
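Applied to a whole script rather than a single function, a minimal sketch might look like this (my adaptation, assuming the real work can be wrapped in a function; the database insert is only a placeholder):
#!/usr/bin/env bash
main() {
    echo "doing the actual work..."
    echo "an error message" >&2
}

exec 5>&1                              # fd 5 points at the original console stdout
output=$(main 2>&1 | tee /dev/fd/5)    # live output is preserved, a copy lands in $output

# placeholder for the database query mentioned in the question
printf 'would store %d bytes of script output\n' "${#output}"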

Problems running bash script from incron

I have a simple incron task setup to run a command whenever a particular .json file is written-to, then closed.
/var/www/html/api/private/resources/myfile.json IN_CLOSE_WRITE,IN_NO_LOOP /var/www/html/api/private/resources/run_service.sh
I can see that whenever the file is written to, there is a syslog entry for the event and the command that was triggered - along the lines of <date> - incrond: CMD (/var/www/html/api/private/resources/run_service.sh).
But nothing seems to happen...
Initially I thought this was caused by an issue with the script, but replacing the script's command with something simple such as echo "hello world" > /tmp/mylog.log still yields no output or results. I seem to have hit a brick wall with this one!
Update
Changing the incron command to read "/bin/bash /var/www/html/api/private/resources/run_service.sh" now seems to trigger the script correctly, as I can now get output from the script.
A simple mistake on my part: despite all the examples online showing that using the script itself as the command should run it, for me it only works if I explicitly call bash to execute it:
<my directory/file to watch> <trigger condition> /bin/bash /var/www/html/api/private/resources/run_service.sh

How to get error output and store it in a variable or file

I'm having a little trouble figuring out how to get error output and store it in a variable or file in ksh. So in my script I have cp -p source.file destination inside a while loop.
When I get the below error
cp: source.file: The file access permissions do not allow the specified action.
I want to grab it and store it in a variable or file.
Thanks
You can redirect the error output of the command like so:
cp -p source.file destination 2>> my_log.txt
It will append the error message to the my_log.txt file.
In case you want a variable you can redirect stderr to stdout and assign the command output to a variable:
my_error_var=$(cp -p source.file destination 2>&1)
In ksh (as asked), as in bash and other sh derivatives, you can get all or just the stderr output from cp using redirection, then grab it in a variable (using $(), which is preferable to backticks in any reasonably recent version):
output=$(cp -p source.file destination 2>&1)
cp doesn't normally output anything on stdout, though this would capture both stdout and stderr; to capture just stderr this way, add 1>/dev/null as well. The other solutions that redirect to a file could use cat or various other commands to output or process the logfile.
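For example, a small sketch of the just-stderr variant described above, reusing the cp command from the question (the order of the redirections matters):
err_output=$(cp -p source.file destination 2>&1 1>/dev/null)   # stderr is captured, stdout is discarded
echo "cp reported: $err_output"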
Reasons why I don't suggest outputting to temporary files:
Redirecting to a file and then reading it back in (via the read command, or more inefficiently via $(cat file)), particularly for just a single line, is slower and less efficient; though it is not so bad if you want to append to it across multiple operations before displaying the errors. You'll also leave the temporary file around unless you ALWAYS clean it up - don't forget the cases where someone interrupts (i.e. Ctrl-C) or kills the script.
Using temporary files can also be a problem if the script is run multiple times at once (e.g. via cron if filesystem or other delays cause massive overruns, or simply from multiple users), unless the temporary filename is unique.
Generating temporary files is also a security risk unless done very carefully, especially if the file data is processed again or the contents could be rewritten before display by something else to confuse or phish the user, or to break the script. Don't get into the habit of doing it too casually; read up on temporary files (e.g. mktemp) first via other questions here or Google.
You can do STDERR redirects by doing:
command 2> /path/to/file.txt
