Cron stops the script if the file is not found - shell

I have the following simple script:
#!/bin/sh
a() {
    echo 1
}
a
b() {
    for file in "${DOWNLOADS}"123_*; do
        mv "${file}" "${DOWNLOADS}321"
    done
}
b
c() {
    echo 2
}
c
It is executable, and if I call it from the terminal it works exactly right: a, b, c. But if I try to execute it via cron and there is no "123_{something}" file in the "${DOWNLOADS}" directory, then only function a is executed, plus the beginning of the for loop. Function c is not called because the script stops.
crontab -l
=>
10 20 * * * zsh /user/file
Debugging showed the following:
10 20 * * * zsh /user/file >> ~/tmp/cron.txt 2>&1
=>
+/user/file:47> a
+a:1> echo 1
1
+/user/file:67> b
file:12: no matches found: /Users/ivan/Downloads/123_*
As can be seen, the execution of the script stopped immediately after the file was not found.
I don't understand why the execution of this script via cron stops if the file is not found, and how this can be avoided; can anyone explain this?
Or maybe it's just the limitations of my environment?

I don't understand why the execution of this script via cron stops if the file is not found, and how this can be avoided; can anyone explain this?
In zsh, using a glob that doesn't match anything is an error:
% print nope*
zsh: no matches found: nope*
You can fix this by setting the null_glob option:
% setopt null_glob
% print nope*
[no output, as nothing was found]
You can set this for a single pattern by adding (N) to the end:
% print nope*(N)
So in your example, you end up with something like:
b() {
    for file in ${DOWNLOADS}123_*(N); do
        mv $file ${DOWNLOADS}321
    done
}
NOTE: this only applies to zsh; in your script you have #!/bin/sh at the top, but you're running it with zsh. It's best to change that to #!/usr/bin/env zsh if you plan on using zsh.
In standard /bin/sh, the behaviour is different: a pattern that doesn't match gets replaced by the pattern itself:
% sh
$ echo nope*
nope*
In that case, you need to check whether $file in the loop still matches the literal pattern nope* (or simply whether the file exists) and continue if it does, as in the sketch below. But you're running zsh, so it's not really applicable: just be aware that /bin/sh and zsh behave quite differently with the default settings (you can also get the sh behaviour in zsh if you want, with setopt no_nomatch).
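For reference, here is a minimal sketch of that portable /bin/sh approach, keeping the question's function but guarding against the unexpanded pattern:
b() {
    for file in "${DOWNLOADS}"123_*; do
        # With no match, sh keeps the literal pattern; skip that case.
        [ -e "$file" ] || continue
        mv "$file" "${DOWNLOADS}321"
    done
}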

Related

Using diff in a dedicated function causes the calling script to hang on first file comparison

For some reason after running the main script:
sudo bash main.sh
-> Execution stops at the first diff whose output is redirected to a file.
However, when I comment out the function name and parentheses and call patching.sh directly, it works.
What is wrong with my script that it stops when called as a function from another file, but works when called directly?
main.sh:
set -e
source $(dirname $0)/Scripts/patching.sh
# Overwrite files
update_files
patching.sh:
#!/bin/bash
function update_files() {
    declare -r SW_DIR='Source/packages/'
    CMP_FILE='file1.c'
    diff -u ./$SW_DIR/examples/$CMP_FILE ./Source/$CMP_FILE > file.diff
    cp -v ./Source/$CMP_FILE ./$SW_DIR/examples/$CMP_FILE
}
During my debugging, I added the -x option to set. This is what I see now:
+ declare -r SW_DIR=Source/packages
+ CMP_FILE=file1.c
+ diff -u ./Source/packages/examples/file1.c ./Source/file1.c
And that's the last line. If I omit the redirection operator, the diff is simply shown in the console, and that's it. It does not proceed further, and there is no error message.
See What does set -e mean in a bash script? and BashFAQ/105 (Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?). Execution stops after the diff when set -e is in effect because diff exits with a non-zero status when the files it is comparing differ. This kind of behaviour is one of the downsides of using set -e; a common workaround is sketched below. Follow the links for more useful information.
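If set -e has to stay, a minimal sketch of one common workaround is to mask the expected non-zero status from diff (appending || true; testing the status explicitly works just as well):
function update_files() {
    declare -r SW_DIR='Source/packages/'
    CMP_FILE='file1.c'
    # diff exits with 1 when the files differ; "|| true" keeps
    # set -e from treating that expected status as a failure.
    diff -u ./$SW_DIR/examples/$CMP_FILE ./Source/$CMP_FILE > file.diff || true
    cp -v ./Source/$CMP_FILE ./$SW_DIR/examples/$CMP_FILE
}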

Using printf to print out the time before running a program in Crontab giving an "invalid directive" error

I am running Ubuntu 21.10 and trying to setup a crontab task.
I want the date and time to be printed to the console before the task starts, all of which gets logged to a log file.
This is what I have currently.
{ printf "\%(\%Y-\%m-\%d \%H:\%M:\%S)T Started Task\n"; <run task> } >> /path/to/log/file
When I type printf "%(%Y-%m-%d %H:%M:%S)T Started Task\n" I get the expected output:
2021-10-20 15:22:03 Started Task
However, in the crontab I know you have to escape the "%" with \. Except even after escaping all the %, I end up with an error:
/bin/sh: 1: printf: %(: invalid directive
No matter how I try to format the crontab, I still end up with the same error.
If anyone knows what I have done wrong, please help me. Thank you.
My specific cronjob is as follows:
0 * * * * { printf "\%(\%Y-\%m-\%d \%H:\%M:\%S)T Updating Certs\n"; sudo certbot renew --post-hook "systemctl reload nginx.service postfix.service dovecot.service"; } >> /home/ubuntu/logs/certbot.log
cron can be surprising because its working directory is not $HOME and its default shell is /bin/sh instead of the user shell.
While writing your crontab, you have to keep in mind that:
- Using full paths is almost mandatory.
- When grouping instructions with curly brackets { }, you must put a semicolon at the end of the last instruction.
- With the default /bin/sh, printf may be limited to POSIX features (which means it will not understand the %()T construct).
That said, here's how you can make do with /bin/sh for your purpose:
0 * * * * { date '+\%Y-\%m-\%d \%H:\%M:\%S Started Task'; <command>; } >> /full/path/to/log/file
A completely different solution is to change your crontab's shell, as Cyrus's answer suggests.
Your cron uses sh as its shell. Your code (printf with the T format specifier) expects bash (version >= 4.2). I suggest adding this on a separate line before your cronjob to force bash as the shell:
SHELL=/bin/bash
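Putting both answers together, a sketch of the complete crontab: the explicit -1 argument tells printf to use the current time, and the \% escaping stays (cron treats a bare % as a line separator regardless of the shell, following the same convention as the date example above):
SHELL=/bin/bash
0 * * * * { printf "\%(\%Y-\%m-\%d \%H:\%M:\%S)T Updating Certs\n" -1; sudo certbot renew --post-hook "systemctl reload nginx.service postfix.service dovecot.service"; } >> /home/ubuntu/logs/certbot.log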

Ubuntu function works when sourced, but not with the bash command

I'm trying to learn how to write some basic functions in Ubuntu, and I've found that some of them work, and some do not, and I can't figure out why.
Specifically, the following function addseq2.sh will work when I source it, but when I just try to run it with bash addseq2.sh it doesn't work. When I check with $? I get a 0: command not found. Does anyone have an idea why this might be the case? Thanks for any suggestions!
Here's the code for addseq2.sh:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
Thanks everyone for all the useful advice and help!
To expand on my original question, I have two simple functions already written. The first one, hello.sh looks like this:
#!/usr/bin/env bash
# File: hello.sh
function hello {
    echo "Hello"
}
hello
hello
hello
When I call this function, without having done anything else, I would type:
$ bash hello.sh
Which seems to work fine. After I source it with $ source hello.sh, I'm then able to just type hello and it also runs as expected.
So what has been driving me crazy is the first function I mentioned here, addseq2.sh. If I try to repeat the same steps, calling it just with $ bash addseq2.sh 1 2 3, I don't see any result. I can see after checking, as you suggested, with $ echo $? that I get a 0 and it executed correctly, but nothing prints to the screen.
After I source it with $ source addseq2.sh and then call it by typing $ addseq2 1 2 3, it returns 6 as expected.
I don't understand why the two functions are behaving differently.
When you do bash foo.sh, it spawns a new instance of bash, which then reads and executes every command in foo.sh.
In the case of hello.sh, the commands are:
function hello {
    echo "Hello"
}
This command has no visible effects, but it defines a function named hello.
hello
hello
hello
These commands call the hello function three times, each printing Hello to stdout.
Upon reaching the end of the script, bash exits with a status of 0. The hello function is gone (it was only defined within the bash process that just stopped running).
In the case of addseq2.sh, the commands are:
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
This command has no visible effects, but it defines a function named addseq2.
Upon reaching the end of the script, bash exits with a status of 0. The addseq2 function is gone (it was only defined within the bash process that just stopped running).
That's why bash addseq2.sh does nothing: It simply defines (and immediately forgets) a function without ever calling it.
The source command is different. It tells the currently running shell to execute commands from a file as if you had typed them on the command line. The commands themselves still execute as before, but now the functions persist because the bash process they were defined in is still alive.
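A short terminal transcript illustrates this (assuming addseq2.sh defines the function but never calls it):
$ bash addseq2.sh 1 2 3   # a child process defines addseq2, then exits
$ addseq2 1 2 3           # the definition did not survive
bash: addseq2: command not found
$ source addseq2.sh       # runs in the current shell instead
$ addseq2 1 2 3           # now the function persists
6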
If you want bash addseq2.sh 1 2 3 to automatically call the addseq2 function and pass it the list of command line arguments, you have to say so explicitly: Add
addseq2 "$#"
at the end of addseq2.sh.
When I check with $? I get a 0: command not found
This is because of the way you are checking it, for example:
(the leading $ is the convention for showing the command-line prompt)
$ $?
-bash: 0: command not found
Instead you could do this:
$ echo $?
0
By convention, 0 indicates success. A better way to test in a script is something like this:
if bash addseq2.sh
then
    echo 'script worked'
else
    # Redirect error message to stderr
    echo 'script failed' >&2
fi
Now, why might your script not "work" even though it returned 0? You have a function but you are not calling it. With your code I appended a call:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
addseq2 1 2 3 4 # <<<<<<<
and I got:
10
By the way, an alternative way of saying:
let sum=sum+$element
is:
sum=$((sum + element))
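Putting both fixes together, here is a minimal sketch of the corrected script; the final line forwards the script's own command-line arguments to the function:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in "$@"; do
        sum=$((sum + element))
    done
    echo "$sum"
}

addseq2 "$@"   # forward the script's arguments to the function
Now bash addseq2.sh 1 2 3 prints 6 directly, and source addseq2.sh still makes addseq2 available in the current shell.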

How do I prevent a continuous loop from ending

I have a simple bash script which calls a php script every 10 minutes that performs some maintenance. Every once in a while this php script terminates while it's running, and when this happens the bash script exits.
I'd like to make it so the bash script keeps on looping even if the php script falters. Can anyone point me in the right direction? I've been searching for a while but I can't seem to find the answer; maybe I'm not using the right search terms.
#!/bin/sh
set -e
while :
do
    /usr/bin/php /path/to/maintenance/script.php
    sleep 600
done
Rjz's comment is correct: you should use cron. To do that, run crontab -e and add this line:
*/10 * * * * /usr/bin/php /path/to/maintenance/script.php
If it's set up properly, cron will email you any output (including error messages).
The:
set -e
line sets the shell's "exit on error" flag, which tells it that if a program it runs exits with a non-zero status, the shell should also exit:
set -e
false
echo if this prints, your shell is not honoring "set -e"
There are exceptions for programs whose status is being tested, of course, so that:
set -e
if prog; then
    echo program succeeded
else
    echo program failed
fi
echo this will still print
will work correctly (one or the other echo will occur, and then the last one will as well).
Back in the Dim Time, when /bin/sh was non-POSIX and was written in Bournegol, there was a bug in some versions of sh that broke || expressions:
set -e
false || true
echo if this prints, your shell is OK
(The logic bug applied to && expressions internally as well, but was harmless there, since false && anything is itself false, which means the whole expression fails anyway!) Ever since then, I've been wary of "-e".
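That said, if you would rather keep the loop than switch to cron, the minimal fix is to drop set -e (or mask the failing command with || true) so one failed PHP run cannot end the script. A sketch:
#!/bin/sh
while :
do
    # "|| true" means a failing PHP run does not stop the loop,
    # even if "set -e" is ever reintroduced.
    /usr/bin/php /path/to/maintenance/script.php || true
    sleep 600
done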

Shell: How to call one shell script from another shell script?

I have two shell scripts, a.sh and b.sh.
How can I call b.sh from within the shell script a.sh?
There are a couple of different ways you can do this:
Make the other script executable with chmod a+x /path/to/file (Nathan Lilienthal's comment), add the #!/bin/bash line (called a shebang) at the top, and add the directory where the file lives to the $PATH environment variable. Then you can call it as a normal command;
Or call it with the source command (a bash synonym for .), like this:
source /path/to/script
Or use the bash command to execute it, like:
/bin/bash /path/to/script
The first and third approaches execute the script as another process, so variables and functions in the other script will not be accessible.
The second approach executes the script in the first script's process, and pulls in variables and functions from the other script (so they are usable from the calling script).
In the second method, if you use exit in the second script, it will exit the first script as well; this does not happen with the first and third methods.
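Here is a small sketch of that difference, assuming a hypothetical helper b.sh that contains only the line GREETING="hello from b":
#!/bin/bash
# a.sh
bash ./b.sh                 # child process: its GREETING is lost on exit
echo "${GREETING:-unset}"   # prints: unset
source ./b.sh               # same process: the assignment takes effect here
echo "$GREETING"            # prints: hello from b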
Check this out.
#!/bin/bash
echo "This script is about to run another script."
sh ./script.sh
echo "This script has just run another script."
There are a number of ways you can execute the script from a terminal or from another script:
#!/bin/bash
SCRIPT_PATH="/path/to/script.sh"
# Here you execute your script
"$SCRIPT_PATH"
# or
. "$SCRIPT_PATH"
# or
source "$SCRIPT_PATH"
# or
bash "$SCRIPT_PATH"
# or
eval '"$SCRIPT_PATH"'
# or
OUTPUT=$("$SCRIPT_PATH")
echo $OUTPUT
# or
OUTPUT=`"$SCRIPT_PATH"`
echo $OUTPUT
# or
("$SCRIPT_PATH")
# or
(exec "$SCRIPT_PATH")
All of these work correctly for paths with spaces!
The answer which I was looking for:
( exec "path/to/script" )
As mentioned, exec replaces the shell without creating a new process. However, we can put it in a subshell, which is done using the parentheses.
EDIT:
Actually ( "path/to/script" ) is enough.
If you have another file in same directory, you can either do:
bash another_script.sh
or
source another_script.sh
or
. another_script.sh
When you use bash instead of source, the script cannot alter the environment of the parent script. The . command is POSIX standard, while the source command is a more readable bash synonym for . (I prefer source over .). If your script resides elsewhere, just provide the path to that script. Both relative and full paths should work.
It depends.
Briefly...
If you want to load variables into the current shell and also execute the file, you can use source myshellfile.sh in your code. Example:
#!/bin/bash
set -x
echo "This is an example of run another INTO this session."
source my_lib_of_variables_and_functions.sh
echo "The function internal_function() is defined into my lib."
returned_value=internal_function()
echo $this_is_an_internal_variable
set +x
If you just want to execute a file and the only thing interesting to you is the result, you can do:
#!/bin/bash
set -x
./executing_only.sh
bash i_can_execute_this_way_too.sh
bash or_this_way.sh
set +x
You can use /bin/sh to call or execute another script (via your actual script):
# cat showdate.sh
#!/bin/bash
echo "Date is: `date`"
# cat mainscript.sh
#!/bin/bash
echo "You are login as: `whoami`"
echo "`/bin/sh ./showdate.sh`" # exact path for the script file
The output would be:
# ./mainscript.sh
You are login as: root
Date is: Thu Oct 17 02:56:36 EDT 2013
First you have to include the file you call:
#!/bin/bash
. includes/included_file.sh
then, in the same script, you call your function like this:
my_called_function
A simple source will help you.
For example:
#!/bin/bash
echo "My shell_1"
source my_script1.sh
echo "Back in shell_1"
Just add in a line whatever you would have typed in a terminal to execute the script!
e.g.:
#!/bin/bash
./myscript.sh &
If the script to be executed is not in the same directory, just use the complete path of the script.
e.g.: /home/user/script-directory/myscript.sh &
This is what worked for me; it is the content of the main sh script that executes the other one.
#!/bin/bash
source /path/to/other.sh
The top answer suggests adding the #!/bin/bash line to the first line of the sub-script being called. But even if you add the shebang, it is much faster* to run a script in a sub-shell and capture the output:
$(source SCRIPT_NAME)
This works when you want to keep running the same interpreter (e.g. from bash to another bash script) and ensures that the shebang line of the sub-script is not executed.
For example:
#!/bin/bash
SUB_SCRIPT=$(mktemp)
echo "#!/bin/bash" > $SUB_SCRIPT
echo 'echo $1' >> $SUB_SCRIPT
chmod +x $SUB_SCRIPT
if [[ $1 == "--source" ]]; then
for X in $(seq 100); do
MODE=$(source $SUB_SCRIPT "source on")
done
else
for X in $(seq 100); do
MODE=$($SUB_SCRIPT "source off")
done
fi
echo $MODE
rm $SUB_SCRIPT
Output:
~ ❯❯❯ time ./test.sh
source off
./test.sh 0.15s user 0.16s system 87% cpu 0.360 total
~ ❯❯❯ time ./test.sh --source
source on
./test.sh --source 0.05s user 0.06s system 95% cpu 0.114 total
* For example, when antivirus or security tools are running on a device, it might take an extra 100 ms to exec a new process.
pathToShell="/home/praveen/"
chmod a+x $pathToShell"myShell.sh"
sh $pathToShell"myShell.sh"
#!/bin/bash
# Here you define the absolute path of your script
scriptPath="/home/user/pathScript/"
# Name of your script
scriptName="myscript.sh"
# Here you execute your script
$scriptPath/$scriptName
# Result of script execution
result=$?
chmod a+x /path/to/file-to-be-executed
That was the only thing I needed. Once the script to be executed is made executable like this, you (at least in my case) don't need any other extra operation like sh or ./ while calling the script.
Thanks to the comment of @Nathan Lilienthal.
Assume the new file is "/home/satya/app/app_specific_env" and the file contents are as follows:
#!/bin/bash
export FAV_NUMBER="2211"
Append this file reference to the ~/.bashrc file:
source /home/satya/app/app_specific_env
Whenever you restart the machine or log in again, try echo $FAV_NUMBER in the terminal. It will output the value.
If you want to see the effect right away, run source ~/.bashrc in the command line.
There are some problems with importing functions from another file.
First: you don't need to make that file executable. Better not to! Just add
. file
to import all of its functions, and all of them will behave as if they were defined in your file.
Second: you may already have a function with the same name; it will be overwritten, which is bad. You can save the current definition under a new name first, by capturing the output of declare -f and redefining it (a sketch follows below), and only after that do the import. Then you can call the old function by its new name.
Third: you can only import the complete list of functions defined in the file. If some are not needed, you can unset them. But if your own functions were overwritten by the import and you then unset them, they are lost; if you saved a reference as described above, you can restore them after the unset under the same name.
Finally: the usual import procedure is dangerous and not so simple, so be careful! You can write a script to make it easier and safer. If you use only some of the functions (not all), it is better to split them into different files. Unfortunately this technique is not well supported in bash. In Python, for example, and some other scripting languages, it is easy and safe: it is possible to partially import only the functions you need, under names of your choosing. We would all like future bash versions to provide the same functionality, but for now we must write a lot of additional code to do what you want.
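To make the save-and-restore idea concrete, a sketch (copy_function and lib.sh are made-up names for illustration):
#!/bin/bash
# Redefine function $1 under the new name $2 by rewriting
# the first line of its printed definition.
copy_function() {
    eval "$(declare -f "$1" | sed "1s/^$1 /$2 /")"
}

greet() { echo "original greet"; }
copy_function greet old_greet   # save the current definition
. ./lib.sh                      # the import may overwrite greet
unset -f greet                  # drop the imported version
copy_function old_greet greet   # restore under the old name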
Use backticks.
$ ./script-that-consumes-argument.sh `sh script-that-produces-argument.sh`
The consumer script then receives the output of the producer script as an argument.
