How to automatically answer a script running from cron - shell

I want to run a shell script from cron and have its prompt answered automatically. How do I do that?
I put the script in the crontab. The system is Linux.
One command in the script prompts a question that must be answered (Y/N).
Example:
When the script is executed manually, it runs the command.
The command prompts the question: Do you intend to delete ? [Y/N]
and the system waits for the response.
I answer "Y" and press ENTER, and the script then executes the deletion.
I intend to put this script in the crontab,
and I would like the prompt to be answered automatically with Yes.
How can I do this in the crontab?
Many thanks
/Regards

You have two options:
The command may have a switch to suppress the prompt. E.g. rm has the switch -f (force), so that rm -f $file won't prompt anything. I'd prefer this option if possible.
Use the yes command, which exists exactly for this case. For example, yes | rm foo will automatically answer y to rm's prompt, if any.

Generic Cron with Pipes
Most systems provide a yes command. You can pipe it into your script in the cron job.
* * * * * yes | myscript.sh
Vixie Cron and Standard Input
In addition to the above, if you are using vixie-cron, you can also use the percent sign to feed standard input into your commands. crontab(5) says:
Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. There is no way to split a single command line onto multiple lines, like the shell's trailing "\".
With vixie-cron, the following example would pass the letter "y" on standard input to your script:
* * * * * myscript.sh % y

Some commands accept a flag such as
--interactive=0
at the end of the command line to suppress their yes/no prompts. Whether this works, and the exact flag name, depends on the individual command, not on cron; check the command's man page.
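If your command does support such a flag, the crontab entry is straightforward (a sketch; mycommand and its --interactive=0 flag are hypothetical placeholders):
# hypothetical: mycommand is assumed to accept --interactive=0
* * * * * mycommand --interactive=0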

Related

Command substitution in shell script without globbing

Consider this little shell script.
# Save the first command line argument
cmd="$1"
# Execute the command specified in the first command line argument
out=$($cmd)
# Do something with the output of the specified command
# Here we do a silly thing, like make the output all uppercase
echo "$out" | tr -s "a-z" "A-Z"
The script executes the command specified as the first argument, transforms the output obtained from that command and prints it to standard output. This script may be executed in this manner.
sh foo.sh "echo select * from table"
This does not do what I want. It may print something like the following,
$ sh foo.sh "echo select * from table"
SELECT FILEA FILEB FILEC FROM TABLE
if files fileA, fileB and fileC are present in the current directory.
From a user perspective, this command is reasonable. The user has quoted the * in the command line argument, so the user doesn't expect the * to be globbed. But my script astonishes the user by using this argument in a command substitution, which causes globbing of *, as seen in the above output.
I want the output to be the following instead.
SELECT * FROM TABLE
The entire text in cmd actually comes from command line arguments to the script, so I would like to preserve any * symbols present in the argument without globbing them.
I am looking for a solution that works for any POSIX shell.
One solution I have come up with is to disable globbing with set -o noglob just before the command substitution. Here is the complete code.
# Save the first command line argument
cmd="$1"
# Disable globbing so that a * coming from the command line is not expanded
set -o noglob
# Execute the command specified in the first command line argument
out=$($cmd)
# Do something with the output of the specified command
# Here we do a silly thing, like make the output all uppercase
echo "$out" | tr -s "a-z" "A-Z"
This does what I expect.
$ sh foo.sh "echo select * from table"
SELECT * FROM TABLE
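A variant of this I have considered (a sketch): since a command substitution runs in a subshell, the noglob option can be scoped to just that subshell, leaving globbing enabled in the rest of the script.
# noglob is set only inside the subshell created by $( ... )
out=$( set -o noglob; $cmd )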
Apart from this, is there any other concept or trick (such as a quoting mechanism) I need to be aware of to disable globbing only within a command substitution, without having to use set -o noglob?
I am not against set -o noglob. I just want to know if there is another way. You know, globbing can be disabled for normal command line arguments just by quoting them, so I was wondering if there is anything similar for command substitution.
If I understand correctly, you want the user to provide a shell command as a command-line argument, which will be executed by the script, and is expected to produce an SQL string, which will be processed (upper-cased) and echoed to stdout.
The first thing to say is that there is no point in having the user provide a shell command that the script just blindly executes. If the script applied some kind of modification/preprocessing of the command before it executed it then perhaps it could make sense, but if not, then the user might as well execute the command himself and pass the output to the script as a command-line argument, or via stdin.
But that being said, if you really want to do it this way, then there are two things that need to be said. Firstly, this is the proper form to use:
out=$(eval "$cmd");
A fairly advanced understanding of the shell grammar and expansion rules is required to fully understand the rationale for using the above syntax, but basically executing $cmd and executing eval "$cmd" have subtle differences that render the $cmd form inappropriate for executing a given shell command string.
Just to give some detail that will hopefully clarify the above point, there are seven kinds of expansion that are performed by the shell in the following order when processing input: (1) brace expansion, (2) tilde expansion, (3) parameter and variable expansion, (4) arithmetic expansion, (5) command substitution, (6) word splitting, and (7) pathname expansion. Notice that variable expansion happens somewhat in the middle of that sequence, and thus the variable-expanded shell command (which was provided by the user) will not receive the benefit of the prior expansion types. Other issues are that leading variable assignments, pipelines, and command list tokens will not be executed correctly under the $cmd form, because they are parsed and processed prior to variable expansion (actually prior to all expansions) as well.
By running the command through eval, properly double-quoted, you ensure that the full shell parsing/processing/execution algorithm will be applied to the shell command string that was given by the user of your script.
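A quick illustration of the difference (a sketch; the command string is arbitrary):
cmd='echo hello | tr a-z A-Z'
$cmd          # prints: hello | tr a-z A-Z   (the pipe is just another argument to echo)
eval "$cmd"   # prints: HELLO                (the pipe is parsed as a real pipeline)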
The second thing to say is this: If you try the above proper form in your script, you will find that it has not solved your problem. You will still get SELECT FILEA FILEB FILEC FROM TABLE as output.
The reason is this: Since you've decided you want to accept an arbitrary shell command from the user of your script, it is now the user's responsibility to properly quote all metacharacters that may be embedded in that piece of code. It does not make sense for you to accept a shell command as a command-line argument, but somehow change the processing rules for shell commands so that certain metacharacters will no longer be metacharacters when the given shell command is executed. Actually, you could do something like that, perhaps using set -o noglob as you discovered, but then that must become a contract between the script and the user of the script; the user must be made aware of exactly what the precise processing rules will be when the command is executed so that he can properly use the script.
Under this design, the user could call the script as follows (notice the extra layer of quoting for the shell command string evaluation; could alternatively backslash-escape just the asterisk):
$ sh foo.sh "echo 'select * from table'";
I'd like to return to my earlier comment about the overall design; it doesn't really make sense to do it this way. It makes more sense to take the text-to-process itself, not a shell command that is expected to produce the text-to-process.
Here is how that could be done:
## take the text-to-process via a command-line argument
sql="$1";
## process and echo it
echo "$sql"| tr a-z A-Z;
(I also removed the -s option of tr, which really doesn't make sense here.)
Notice that the script is simpler now, and usage is also simpler:
$ sh foo.sh 'select * from table';

Cron job not "seeing" a file

I pass the file path, containing variables to be sourced, as an argument to my Bash script.
The file is created on Windows, in case that makes any difference.
The following check is performed:
CONFIG_FILE=$1
if [[ -f ${CONFIG_FILE} ]]; then
    echo "Is a file"
    . "${CONFIG_FILE}"
else
    echo "Not a file"
fi
When I run the script manually, from the command line, the check is fine and the variables get sourced.
However, when I set up a Cron job using
*/1 * * * * /full/path/to/script.sh /full/path/to/configfile
I get "Not a file" printed out.
I attempted every single setup I found online to solve this:
setting environment variables both in the crontab and in the script itself (PATH & SHELL)
sourcing the profile (both . /etc/profile and . /home/user/.bash_profile) both in the crontab (before executing the script) and in the script itself
trying to run crontab with the -u user parameter, but I don't have permissions for this (and it doesn't make sense, as I am already logged in as the user who should set up the crontab)
I am setting up the crontab with the proper user under whom the script should be run. The user has access rights to the location of the files (as can be observed through running the script from the command line).
Looking for further advice on what can be attempted next.
Found another attempt and it worked.
I added the -q flag in the cronjob line.
*/1 * * * * /path/script.sh /path/config-file -q
Source: Cron Job error "Could not open input file"
Can someone please explain to me what it does? I am not so literate in Bash.

What you're doing here is (I think) making sure that there is a separate argument behind your /path/config-file. Your original problem seems to be that on Unix your config file was seen as /path/config-file\r (note the trailing \r, a carriage return left over from the Windows line endings). By adding an argument -q\r, the config file argument itself is "clean" of the carriage return; you could add blabla\r instead of -q\r for that matter. Your script never interprets that extra argument, but if you put it on the cron line then your config file argument is "protected", because there's stuff following it, that's all.
What you could also do is make sure that your cron definition is Unix-styled (\n-terminated lines) instead of DOS-styled (\r\n-terminated lines). There's probably a dos2unix utility on your Unix box to accomplish that.
Or you could remove the crontab on Unix using crontab -r and then re-create it using crontab -e. Just don't upload files that were created on MS-DOS (or derived systems).
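A more robust alternative (a sketch, assuming the stray carriage return is indeed the culprit) is to strip it inside the script itself, before the file test:
CONFIG_FILE=$1
# strip a trailing carriage return left over from Windows line endings
CONFIG_FILE=${CONFIG_FILE%$'\r'}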

How to add a time wait between commands in a crontab

I have a crontab where I issue two commands, and I would like to add a sleep between the two commands, like so: (command1 ; sleep ; command2).
Is this possible? How is it formatted?
Help please!!
The "sixth" field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab file.
— the crontab(5) man page
Essentially you already have the right base form, like: cmd1 ; sleep 60 ; cmd2
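For example, a complete crontab line (a sketch; command1 and command2 are placeholders for your real commands):
# at 02:00, run command1, wait five minutes, then run command2
0 2 * * * command1 ; sleep 300 ; command2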
Any command, even complicated commands with loops and other logic should work, although you should be careful about which environment variables you might be relying on.
It's useful to schedule a job in the near future that emails the output of env to yourself, just to check ;-)
For more complex stuff, create a shell script and have the crontab refer to it, like:
42 0 * * * $HOME/bin/daemon/cron-tmp-preen
Any succession of valid shell commands will do. Keep it all on one line.

How to run multiple Unix commands at one time?

I'm still new to Unix. Is it possible to run multiple Unix commands at one time? For example, could I write all the commands I want to run in a file, so that after I call that file, it runs all the commands inside it? Or is there any (better) way I don't know about?
Thanks for giving all the comments and suggestions, I will appreciate it.
Short answer is, yes. The concept is known as shell scripting, or bash scripts (bash being a common shell). In order to create a simple bash script, create a text file with this at the top:
#!/bin/bash
Then paste your commands inside of it, one to a line.
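For example, a trivial script (the commands are just placeholders):
#!/bin/bash
# print the date, then list the current directory in long format
date
ls -l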
Save your file, usually with the .sh extension (not required), and you can run it like:
sh foo.sh
Or you could change the permissions to make it executable:
chmod u+x foo.sh
Then run it like:
./foo.sh
Lots of resources available on this site and the web for more info, if needed.
Just separate your commands with &&:
echo 'hello' && echo 'world'
You can run multiple commands in the shell by using ; as a separator between them.
For example,
ant clean; ant
If you use && as the separator instead, the next command runs only if the previous command succeeded.
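For example, ant clean && ant runs the second build only if ant clean succeeded.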
You can also use a semicolon (;) to run multiple commands, like:
$ ls ; who
Yep, just put all your commands in one file and then
bash filename
This will run the commands in sequence. If you want them all to run in parallel (i.e. not wait for each command to finish), then add an & to the end of each line in the file.
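For example (a sketch; the sleep commands stand in for real work):
#!/bin/bash
# both jobs start immediately and run in parallel
sleep 5 &
sleep 5 &
wait    # optional: block until both background jobs have finished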
If you want to combine multiple commands at the command line, you can use pipes to chain the operations.
grep "Hello" <file-name> | wc -l
This prints the number of lines containing "Hello" in that file.
Sure. It's called a "shell script". In bash, put all the commands in a file with the suffix .sh. Then make it executable:
chmod +x myfile.sh
then type
. ./myfile.sh
or
source ./myfile.sh
or just
./myfile.sh
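(Note: the . and source forms run the commands in your current shell, while ./myfile.sh runs them in a new shell process; for a simple sequence of commands the visible result is usually the same, but variables set by the script persist only with the first two forms.)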
To have the commands actually run at the same time, you can use zsh's job control:
$ zsh -c "[command1] [command1 arguments] & ; [command2] [command2 arguments]"
Or if you are running zsh as your current shell:
$ ping google.com & ; ping 127.0.0.1
The ; is a token that lets you put another command on the same line that is run directly after the first command.
The & is a token placed after a command to run it in the background.

Input from within shell script

I have a script that calls an application that requires user input, e.g. run app that requires user to type in 'Y' or 'N'.
How can I get the shell script not to ask the user for the input but rather use the value from a predefined variable in the script?
In my case there will be two questions that require input.
You can pipe in whatever text you'd like on stdin and it will be just the same as having the user type it themselves. For example, to simulate typing "Y", just use:
echo "Y" | myapp
or using a shell variable:
echo "$ANSWER" | myapp
There is also a Unix command called yes that outputs a continuous stream of y, for apps that ask lots of questions you just want to answer in the affirmative.
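Since there are two questions in your case, you can feed both answers in order with printf (a sketch; myapp and the answers Y and N are placeholders):
printf 'Y\nN\n' | myapp
Each \n simulates pressing ENTER after an answer.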
If the app reads from stdin (as opposed to from /dev/tty, as e.g. the passwd program does), then multiline input is the perfect candidate for a here-document.
#!/bin/sh
the_app [app options here] <<EOF
Yes
No
Maybe
Do it with $SHELL
Quit
EOF
As you can see, here-documents even allow parameter substitution. If you don't want this, use <<'EOF'.
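A minimal sketch of the quoted form (same hypothetical the_app as above):
the_app <<'EOF'
Do it with $SHELL
EOF
Here the line is passed through literally; $SHELL reaches the application unexpanded.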
For more complicated situations there is the expect command; your system should have it. I haven't used it much myself, but I suspect it's what you're looking for.
$ man expect
http://oreilly.com/catalog/expect/chapter/ch03.html
I prefer this way: if you want multiple inputs, you put in multiple echo statements, like so:
{ echo Y; echo Y; } | sh install.sh >> install.out
In the example above I am feeding two inputs into the install.sh script. Then, at the end, I am piping the script output to a log file to be archived and viewed later.
