I have a crontab where I issue two commands, and I would like to add a time sleep in between the two commands like so: (command1 ; sleep ; command2).
Is this possible? How is it formatted?
Help please!!
The "sixth" field (the rest of the line) specifies the command to be
run. The entire command portion of the line, up to a newline or %
character, will be executed by /bin/sh or by the shell specified in the
SHELL variable of the crontab file.
- the crontab(5) man page
Essentially you already have the right base form, like: cmd1 ; sleep 60 ; cmd2
Any command, even complicated commands with loops and other logic, should work, although you should be careful about which environment variables you might be relying on.
It's useful to schedule a job in the near future that emails the output of "env" to yourself, just to check ;-)
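For example, a throwaway entry like this relies on cron's default behaviour of mailing each job's output to the crontab owner (assuming mail delivery works on your system):

```shell
# Temporary crontab entry: every minute, mail the job environment to
# yourself; delete this line once the first mail arrives.
* * * * * env
```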
For more complex stuff, create a shell script and have the crontab refer to it, like:
42 0 * * * $HOME/bin/daemon/cron-tmp-preen
Any succession of valid shell commands will do. Keep it all on one line.
I have a bash script that I am modifying. The script now also executes a binary, say something like this:
mybin arg1 arg1
The binary takes about 5 minutes to execute, and when I execute it from bash directly, it does show the intermediate output. When I add it to my script as
`mybin arg1 arg1`
I only get the output at the end, and bash thinks the output is a command and tries to execute it. So I want to solve 2 things:
Show the intermediate output on the screen when I execute the binary from the bash script.
The output must not be treated as a command for processing, just regular output.
Remove the backticks.
`prog` means "collect the output of prog and interpolate it into the current command", so if `prog` is the only thing on the command line, its output will be executed as another command. This is known as command substitution.
In other words, the two things you don't want to happen are exactly what backticks are designed to do.
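A small demonstration of the difference (using echo so it is safe to run anywhere):

```shell
# With backticks, the output of the inner command is substituted back
# into the command line, so the shell then executes that output:
`echo echo hello`    # the shell ends up running: echo hello

# Without backticks, the command just runs and its output goes to the
# terminal as it is produced, without being re-executed:
echo hello
```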
Consider this little shell script.
# Save the first command line argument
cmd="$1"
# Execute the command specified in the first command line argument
out=$($cmd)
# Do something with the output of the specified command
# Here we do a silly thing, like make the output all uppercase
echo "$out" | tr -s "a-z" "A-Z"
The script executes the command specified as the first argument, transforms the output obtained from that command and prints it to standard output. This script may be executed in this manner.
sh foo.sh "echo select * from table"
This does not do what I want. It may print something like the following,
$ sh foo.sh "echo select * from table"
SELECT FILEA FILEB FILEC FROM TABLE
if files named fileA, fileB and fileC are present in the current directory.
From a user perspective, this command is reasonable. The user has quoted the * in the command line argument, so the user doesn't expect the * to be globbed. But my script astonishes the user by using this argument in a command substitution which causes globbing of * as seen in the above output.
I want the output to be the following instead.
SELECT * FROM TABLE
The entire text in cmd actually comes from command line arguments to the script so I would like to preserve any * symbol present in the argument without globbing them.
I am looking for a solution that works for any POSIX shell.
One solution I have come up with is to disable globbing with set -o noglob just before the command substitution. Here is the complete code.
# Save the first command line argument
cmd="$1"
# Execute the command specified in the first command line argument
set -o noglob
out=$($cmd)
# Do something with the output of the specified command
# Here we do a silly thing, like make the output all uppercase
echo "$out" | tr -s "a-z" "A-Z"
This does what I expect.
$ sh foo.sh "echo select * from table"
SELECT * FROM TABLE
Apart from this, is there any other concept or trick (such as a quoting mechanism) I need to be aware of to disable globbing only within a command substitution, without having to use set -o noglob?
I am not against set -o noglob. I just want to know if there is another way. You know, globbing can be disabled for normal command line arguments just by quoting them, so I was wondering if there is anything similar for command substitution.
If I understand correctly, you want the user to provide a shell command as a command-line argument, which will be executed by the script, and is expected to produce an SQL string, which will be processed (upper-cased) and echoed to stdout.
The first thing to say is that there is no point in having the user provide a shell command that the script just blindly executes. If the script applied some kind of modification/preprocessing of the command before it executed it then perhaps it could make sense, but if not, then the user might as well execute the command himself and pass the output to the script as a command-line argument, or via stdin.
But that being said, if you really want to do it this way, then there are two things that need to be said. Firstly, this is the proper form to use:
out=$(eval "$cmd");
A fairly advanced understanding of the shell grammar and expansion rules would be required to fully understand the rationale for using the above syntax, but basically executing $cmd and executing eval "$cmd" have subtle differences that render the $cmd form inappropriate for executing a given shell command string.
Just to give some detail that will hopefully clarify the above point, there are seven kinds of expansion that are performed by the shell in the following order when processing input: (1) brace expansion, (2) tilde expansion, (3) parameter and variable expansion, (4) arithmetic expansion, (5) command substitution, (6) word splitting, and (7) pathname expansion. Notice that variable expansion happens somewhat in the middle of that sequence, and thus the variable-expanded shell command (which was provided by the user) will not receive the benefit of the prior expansion types. Other issues are that leading variable assignments, pipelines, and command list tokens will not be executed correctly under the $cmd form, because they are parsed and processed prior to variable expansion (actually prior to all expansions) as well.
By running the command through eval, properly double-quoted, you ensure that the full shell parsing/processing/execution algorithm will be applied to the shell command string that was given by the user of your script.
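A small illustration of the difference, using a made-up command string:

```shell
# A shell command string containing an assignment, a list separator,
# and a variable reference:
cmd='x=5; echo $x'

# Plain expansion: the string is only word-split after expansion, so the
# semicolon stays literal and "x=5;" is looked up as a command name,
# which fails:
$cmd 2>/dev/null || echo "plain expansion failed"

# eval re-applies the full shell parsing to the string, so the
# assignment, the ";" separator, and the "$x" expansion all work:
eval "$cmd"    # prints 5
```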
The second thing to say is this: If you try the above proper form in your script, you will find that it has not solved your problem. You will still get SELECT FILEA FILEB FILEC FROM TABLE as output.
The reason is this: Since you've decided you want to accept an arbitrary shell command from the user of your script, it is now the user's responsibility to properly quote all metacharacters that may be embedded in that piece of code. It does not make sense for you to accept a shell command as a command-line argument, but somehow change the processing rules for shell commands so that certain metacharacters will no longer be metacharacters when the given shell command is executed. Actually, you could do something like that, perhaps using set -o noglob as you discovered, but then that must become a contract between the script and the user of the script; the user must be made aware of exactly what the precise processing rules will be when the command is executed so that he can properly use the script.
Under this design, the user could call the script as follows (notice the extra layer of quoting for the shell command string evaluation; could alternatively backslash-escape just the asterisk):
$ sh foo.sh "echo 'select * from table'";
I'd like to return to my earlier comment about the overall design; it doesn't really make sense to do it this way. It makes more sense to take the text-to-process itself, not a shell command that is expected to produce the text-to-process.
Here is how that could be done:
## take the text-to-process via a command-line argument
sql="$1";
## process and echo it
echo "$sql" | tr a-z A-Z;
(I also removed the -s option of tr, which really doesn't make sense here.)
Notice that the script is simpler now, and usage is also simpler:
$ sh foo.sh 'select * from table';
I meant to run crontab -l | grep sh, but accidentally ran crontab -l | sh. How likely is it that this actually ran any commands? I saw a lot of shell errors about command not found (because lines began with numbers), but only saw the tail end of the output. What did it likely do? How likely was it to have run a command?
I think that any redirections in the crontab actually created or truncated files, but I'm wondering if any of the commands might have run. The crontab contained comments (#), regular crontab formatted jobs, and blank lines.
It depends on what was in your crontab.
If you set any environment variables in it, you should probably check and fix them.
Apart from that, you should be okay. Your shell should have stopped attempting to execute each line at the first word it could not run, unless the glob expansion of a * itself produced a valid command name.
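You can reproduce the redirection effect safely in a scratch directory; this sketch assumes the usual sh behaviour of setting up redirections before the command word is resolved:

```shell
# Work in an empty scratch directory so globs have nothing to match.
tmpdir=$(mktemp -d)
cd "$tmpdir"

# Feed one crontab-style line to sh, as the accidental crontab -l | sh did.
# "0" is not a valid command, so the line fails with "command not found" --
# but the redirection target is still created (empty) beforehand.
echo '0 5 * * * echo hi > created-by-redirect' | sh 2>/dev/null

ls created-by-redirect    # the file exists, zero bytes long
```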
I want to run a shell script from cron, and have it answered automatically. How do I do that?
I tried to put on crontab a script. The system is Linux.
One command in the script prompts a question that must be answered (Y/N).
Ex:
When the script is executed manually, the script runs the command.
The command prompts the question: Do you intend to delete? [Y/N]
and the system waits for the response.
I answer "Y" and press ENTER, then the script executes the deletion.
I intend to put this script in crontab.
I wish the command to be answered automatically with Yes.
How can I do this in crontab?
You have two options:
The command may have a switch to not prompt the question. E.g. rm has the switch -f (force), so that using rm -f $file won't prompt anything. I'd prefer this option if possible.
Use the yes command, which exists exactly for this case. For example, yes | rm foo will automatically answer y to the prompt by rm if any.
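For example, yes endlessly repeats "y" (or any string you give it), and the consuming command simply reads as many answers as it needs from the pipe:

```shell
# yes prints an endless stream of "y" lines; head stands in for a
# command that reads three answers from its standard input.
yes | head -n 3     # prints: y y y (one per line)

# A different answer can be supplied as an argument:
yes no | head -n 1  # prints: no
```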
Generic Cron with Pipes
Most systems provide a yes command. You can pipe it into your script in the cron job.
* * * * * yes | myscript.sh
Vixie Cron and Standard Input
In addition to the above, if you are using vixie-cron, you can also use the percent sign to feed standard input into your commands. crontab(5) says:
Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. There is no way to split a single command line onto multiple lines, like the shell's trailing "\".
With vixie-cron, the following example would pass the letter "y" on standard input to your script:
* * * * * myscript.sh % y
You can add
--interactive=0
at the end of the command, if the command supports that option. This will automatically answer yes for any yes/no question the command asks.
I want to run a command, for example
echo "foobar";
after each command entered by the user.
Two scenarios:
When the user enters a command, my global command should be executed, and later his command should be executed
When the user enters a command, his command should be executed, and later my global command should be executed
How to accomplish the above two scenarios?
NB: I don't want to use the prompt for this purpose, (leave the PS1 variable as is).
As l0b0 suggests, you can use PROMPT_COMMAND to do your second request and you won't have to touch PS1.
To do your first request, you can trap the DEBUG pseudo-signal:
trap 'echo "foobar"' DEBUG
For the second part you could use declare -r PROMPT_COMMAND="echo 'foobar'": it is executed just before the prompt is displayed. Beware that it will not be run for each command in, for example, a pipe or command group.
Beware that any solution to this has the potential to mess things up for the user, so you should ideally only call commands which do not output anything (otherwise any output handling is virtually impossible) and which are not available to the user (to avoid them faking or corrupting the output).
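A minimal sketch combining both hooks; this is meant for an interactive bash session (e.g. in ~/.bashrc), and the "echo ..." hook bodies are just placeholders:

```shell
# Post-command hook: PROMPT_COMMAND runs just before each new prompt is
# printed, i.e. after the user's previous command has finished.
# (Interactive shells only.)
PROMPT_COMMAND='echo "after: command finished"'

# Pre-command hook: the DEBUG trap fires before every simple command,
# in scripts as well as interactively.
trap 'echo "before: about to run a command"' DEBUG

echo "user command"   # the DEBUG trap output appears before this line's
```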