How bad is running `crontab -l | sh` as root? - shell

I meant to run crontab -l | grep sh, but accidentally ran crontab -l | sh. How likely is it that this actually ran any commands? I saw a lot of shell errors about "command not found" (because lines began with numbers), but I only saw the tail end of the output. What did it likely do? How likely was it to have run a command?
I think that any redirections in the crontab actually created or truncated files, but I'm wondering whether any of the commands might have run. The crontab contained comments (#), regular crontab-formatted jobs, and blank lines.

It depends on what was in your crontab.
If you set any environment variables in it, you should probably check and fix them.
Apart from that, you should be okay. For each job line, the shell tried to run the first field as a command: lines beginning with a number failed with "command not found" (the errors you saw), and lines beginning with * could only have run something if the glob expansion of that * happened to produce a valid command name. Note that redirections on a line are still performed even when the command lookup fails, which is why files named in the crontab may have been created or truncated.
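If you want to see the mechanics for yourself, you can replay a representative crontab line through sh in a scratch directory (the job path here is made up for illustration):
mkdir -p /tmp/crontest && cd /tmp/crontest
# Feed one typical job line to sh, as the accident did:
echo '33 20 * * * /usr/bin/somejob > job.log' | sh
# The shell tries to run "33" as a command and fails with
# "33: command not found" (exact wording varies by shell),
# but the redirection is still performed:
ls    # job.log now exists, empty
This matches what you observed: lots of "command not found" noise, plus created or truncated files for any lines that contained redirections.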

Related

Properly formatting a crontab executable on a bash script

I’m having some serious issues trying to get the proper format for my bash script so that it runs successfully from crontab. The bash script runs successfully when invoked manually from the command line.
Here is the bash script in question (the actual parameters themselves [$1 & $2] have been manually placed in the script):
#!/bin/bash
# Usage: ./s3DeleteByDateVirginia "bucketname" "file type"
past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket $1 --query 'Contents[?LastModified<=`'$past'`][].{Key:Key}' | grep $2 | while read -r line
do
fileName=`echo $line`
aws s3api delete-object --bucket $1 --key "$fileName"
done;
The script is in this bash file: /home/ubuntu/s3DeleteByDateVirginiaSoco1
To set up the script I use: sudo crontab -e
Now I see people online saying you need to give it the proper PATH, which doesn't make much sense to me, especially when it comes to putting it in the right location. I'm seeing a number of variations of this online, but it consists of this format: SHELL=/bin/sh PATH=/bin:/sbin:/usr/bin:/usr/sbin. I don't know where to put it.
According to the syslog, the cron side of things is working, but the script itself doesn't execute.
In addition to this, the script has all of the proper permissions to run.
All in all, I'm more confused than when I started, and I'm not finding much documentation on how crontab works.
Additional edits based on users' suggestions. Here's the crontab line:
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin:/root/.local/bin/aws
33 20 * * * /home/ubuntu/s3DeleteByDateSoco1
Updated syslog:
Ok, I see several problems here. First, you need to put this in the crontab file for the user you want the script to run as. If you want to run it under your own user account, use plain crontab -e instead of sudo crontab -e (with sudo, it edits the root user's crontab file).
Second, you need to use the correct path & name for the script; it looks like it's /home/ubuntu/s3DeleteByDateVirginiaSoco1, so that's what should be in the crontab entry. Don't add ".sh" if it's not actually part of the filename. It also looks like you tried adding "root" in front of the path; don't do that either, since crontab will try to execute "root" as a command, and it'll fail. bash -c doesn't hurt, but it doesn't help at all either, so don't use it.
Third, the PATH needs to be set appropriately for the executables you use in the script. By default, cron jobs execute with a PATH of just "/usr/bin:/bin", so when you use a command like aws, the shell will look for it as /usr/bin/aws, not find it, look for it as /bin/aws, not find it, and give the error "aws: command not found" that you see in the last log entry. First, you need to find out where aws (and any other programs your script depends on) is installed; you can use which aws in your regular shell to find this out. Suppose it's /usr/local/bin/aws. Then you can either:
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to the crontab file, before the line that says to run your script.
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to your script file, before the lines that use aws.
In your script, use an explicit path every time you want to run aws (something like /usr/local/bin/aws s3api list-objects ...)
You can use any (or all) of the above, but you must use at least one or it won't be able to find the aws command (or anything else that isn't in the set of core commands that come with the OS).
Fourth, I don't see where $1 and $2 are supplied. You say they've been manually placed in the script, but I don't know what you mean by that. Since the script expects them as parameters, you need to specify them in the crontab file (i.e. the command in crontab should be something like /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern).
Fifth, the script itself doesn't follow good quoting conventions. In general, all variable references should be in double-quotes. For example, use grep "$2" instead of grep $2. Without the double-quotes, variables that contain spaces or certain shell metacharacters can cause weird parsing problems.
Finally, why do you use fileName=`echo $line`? This mostly just copies the value of $line into the variable fileName, but it can have the weird parsing problems I mentioned in the last point. If you want to copy a variable reliably, just use fileName="$line" (or fileName=$line -- this is one of the few cases where it's safe to leave the double-quotes off).
BTW, shellcheck.net is good at spotting common problems like bad quoting; I recommend running your scripts through it to see what it finds.
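Putting those suggestions together, the revised script might look something like this. This is only a sketch that applies the quoting, PATH, and read fixes described above; the /usr/local/bin location for aws is an assumption, so check yours with which aws first:
#!/bin/bash
# Usage: s3DeleteByDateVirginia <bucketname> <file type>
# Cron provides a minimal PATH; prepend the directory that holds aws.
PATH=/usr/local/bin:/usr/bin:/bin

past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket "$1" \
    --query 'Contents[?LastModified<=`'"$past"'`][].{Key:Key}' |
  grep "$2" |
  while IFS= read -r fileName; do
    aws s3api delete-object --bucket "$1" --key "$fileName"
  done
And the matching crontab entry, edited with plain crontab -e and passing the two parameters:
33 20 * * * /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern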

CMake's execute_process and arbitrary shell scripts

CMake's execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell. The thing is, I want to use pipes, file descriptor redirection, etc. - and that does not seem to be possible. The alternative would be very painful for me (I think)...
What should I do?
PS: Answers for both CMake 2.8 and 3.x are of interest.
You can execute any shell script, using your shell's support for taking in a script within a string argument.
Example:
execute_process(
  COMMAND bash "-c" "echo -n hello | sed 's/hello/world/;'"
  OUTPUT_VARIABLE FOO
)
will result in FOO containing world.
Of course, you would need to escape quotes and backslashes with care. Also remember that running bash would only work on platforms which have bash - i.e. it won't work on Windows.
execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell.
Yes, exactly this is written in the documentation for that command:
All arguments are passed VERBATIM to the child process. No intermediate shell is used, so shell operators such as > are treated as normal arguments.
I want to use pipes
Different COMMANDs within the same execute_process invocation are actually piped:
Runs the given sequence of one or more commands with the standard output of each process piped to the standard input of the next.
file descriptor redirection, etc. - and that does not seem to be possible.
For complex things, just prepare a separate shell script and run it using execute_process. You can pass variables from CMake to this script using its parameters, or with a preliminary configure_file.
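For example, a minimal sketch of the parameter-passing approach (the script name and variable are placeholders):
execute_process(
  COMMAND bash "${CMAKE_CURRENT_SOURCE_DIR}/postprocess.sh" "${MY_INPUT}"
  RESULT_VARIABLE ret
)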
I needed to pipe two commands one after the other and actually learned that each COMMAND of the execute_process is piped already. So at least that much is resolved by simply adding commands one after the other:
execute_process(
  COMMAND echo "Hello"
  COMMAND sed -e 's/H/h/'
  OUTPUT_VARIABLE GREETINGS
  OUTPUT_STRIP_TRAILING_WHITESPACE)
Now the variable GREETINGS is set to hello.
If you indeed need a lot of file redirection (as you stated), you probably want to write an external script and then execute that script from CMakeLists.txt. It's really difficult to get all the escaping right in CMake.
If you can simplify your scripts to one command generating a file, then another handling that file, etc. then you can always use the INPUT_FILE and OUTPUT_FILE options. Or pass a filename to your command for the input.
It's often much cleaner to handle one file at a time. Although I understand that some commands may need multiple sources and destinations.
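For instance, a redirection-free equivalent of grep pattern < in.txt > out.txt would be something like this (filenames are placeholders):
execute_process(
  COMMAND grep "pattern"
  INPUT_FILE in.txt
  OUTPUT_FILE out.txt
)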

getting last executed command from script

I'm trying to get the last command executed at the command line from within a script, so it can be saved for later reference:
Example:
# echo "Hello World!!!"
> Hello World!!!
# my_script.sh
> echo "Hello World!!!"
and the content of the script would be :
#!/usr/bin/ksh
fc -nl -1 | sed -n 1p
As you notice, I'm using ksh here; fc is a built-in command which, if I understand correctly, should be implemented by any POSIX-compatible shell. [I understand that this feature is interactive and that calling fc again will give a different result, but that is not the concern here.]
The above works well so far, but only if my_script.sh is called from a shell which is also ksh; it also works if I call it from bash and change the first line of the script to #!/bin/bash. It doesn't work if the shells differ.
I would like to know if there is any good way to achieve the above without being constrained by the shell the script is called from. I understand that fc is a built-in command and works per shell, so my approach is probably not good at all for what I want to achieve. Any better suggestions?
I actually attempted this, but it cannot be done between different shells consistently.
While fc -l is the POSIX standard command for showing shell history, implementation details may be different.
At least bash and ksh93 both will report the last command with
fc -n -l -1 -1
However, POSIX does not guarantee that shell history will be carried over to a new instance of the shell, as this requires the presence of a $HISTFILE. If none is present, the shell may default to $HOME/.sh_history.
However, this history file or Command History List is not portable between different shells.
The POSIX Shell description of the
Command History List says:
When the sh utility is being used interactively, it shall maintain a list of commands previously entered from the terminal in the file named by the HISTFILE environment variable. The type, size, and internal format of this file are *unspecified*.
(Emphasis mine.)
What this means is that your script needs to know which shell wrote that history.
I tried to use $SHELL -c 'fc -nl -1 -1', but this did not appear to work when $SHELL refers to bash. Calling ksh -c ... from bash actually worked.
The only way I could get this to work is by creating a function.
last_command() { (fc -n -l -1 -1); }
In both ksh and bash, this will give the expected result. Variations of this function can be used to write the result elsewhere. However, it will break whenever it's called from a process other than the current interactive shell.
The best you can do is to create these functions and source them into your
interactive shell.
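For example (the rc file names are the usual defaults; adjust for your setup):
# in ~/.bashrc (bash) or ~/.kshrc (ksh):
last_command() { (fc -n -l -1 -1); }
# then, interactively:
#   $ echo "Hello World!!!"
#   Hello World!!!
#   $ last_command
#           echo "Hello World!!!"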
fc is designed to be used interactively. I tested your example on cygwin/bash and the result was different. Even with bash everywhere, the fc command didn't work in my case.
I think fc displays the last command of the current shell (by "shell" here I don't mean the interpreter, but the shell as a "process box"). So the question is rather why it works for you.
I don't think there is a clean way to achieve what you want, because (maybe I'm missing something) you have two different processes (bash and your magic command [my_script.sh]), and by default the OS ensures isolation between them.
You can rely on what you observe (not portable, depends on the shell interpreter, etc.).
You cannot rely on bash's history, because it's kept in memory (the file is updated only on exit).
You can use an alias or a function (edited: @Charles Duffy is right). In this case you won't be able to use your "magic command" from another terminal, but for interactive use it does the job.
Edited:
Or you can provide two commands: one to save and another to look up. In this case you manage your own history, but you have to explicitly save each command, which is painful...
So I looked for a hook, and I found this other thread: https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command
# At the beginning of the Shell (.bashrc for example)
save(){ history 1 >>"$HOME"/myHistory ; }
trap 'save' DEBUG
# An example of use
rm -f "$HOME"/myHistory
echo "1 2 3"
cat "$HOME"/myHistory
14 echo "1 2 3"
15 cat "$HOME"/myHistory
But I observe that it slows down the interpreter...
A little convoluted, but I was able to use this command to get the most recent command in zsh, bash, ksh, and tcsh on Linux:
history | tail -2 | head -1 | sed -r 's/^[ \t]*[0-9]*[ \t]+([^ \t].*$)/\1/'
Caveats: this uses GNU sed, so you'll need to install that if you're using BSD, OS X, etc.; also, tcsh will display the time of the command before the command itself. Regular csh doesn't seem to have a functioning history command when I tried it.

Bash: What is the effect of "#!/bin/sh" in a bash script with curl

I made a long, complex command line that successfully logs in to a site. If I execute it in the console, it works. But if I copy and paste the same line into a bash script, it does not work.
I tried a lot of things, but accidentally discovered that if I do NOT use the line
#!/bin/sh
it works! Why does this happen on my Mac OS X Lion? What does this line do in a bash script?
A bash script that is run via /bin/sh runs in sh compatibility mode, which means that many bash-specific features (herestrings, process substitution, etc.) will not work.
sh-4.2$ cat < <(echo 123)
sh: syntax error near unexpected token `<'
If you want to be able to use full bash syntax, use #!/bin/bash as your shebang line.
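You can see the difference without writing a script at all (on many Linux systems /bin/sh is dash, so the exact error text will vary):
$ bash -c 'cat < <(echo 123)'
123
$ sh -c 'cat < <(echo 123)'
sh: 1: Syntax error: redirection unexpected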
"#!/bin/sh" is a common idiom to insure that the correct interpreter is used to run the script. Here, "sh" is the "Bourne Shell". A good, standard "least common denominator" for shell scripts.
In your case, however, "#!/bin/sh" seems to be the wrong interpreter.
Here's a bit more info:
http://www.unix.com/answers-frequently-asked-questions/7077-what-does-usr-bin-ksh-mean.html
Originally, we only had one shell on unix. When you asked to run a command, the shell would attempt to invoke one of the exec() system calls on it. If the command was an executable, the exec would succeed and the command would run. If the exec() failed, the shell would not give up; instead it would try to interpret the command file as if it were a shell script.
Then unix got more shells and the situation became confused. Most
folks would write scripts in one shell and type commands in another.
And each shell had differing rules for feeding scripts to an
interpreter.
This is when the "#! /" trick was invented. The idea was to let the
kernel's exec() system calls succeed with shell scripts. When the
kernel tries to exec() a file, it looks at the first 4 bytes which
represent an integer called a magic number. This tells the kernel if
it should try to run the file or not. So "#! /" was added to magic
numbers that the kernel knows and it was extended to actually be able
to run shell scripts by itself. But some people could not type "#! /",
they kept leaving the space out. So the kernel was extended a bit again
to allow "#!/" to work as a special 3 byte magic number.
So #! /usr/bin/ksh and
#!/usr/bin/ksh now mean the same thing. I always use the former since at least some kernels might still exist that don't understand the
latter.
And note that the first line is a signal to the kernel, and not to the
shell. What happens now is that when shells try to run scripts via
exec() they just succeed. And we never stumble on their various
fallback schemes.
The very first line of the script can be used to select which script interpreter to use.
With
#!/bin/bash
You are telling the shell to invoke /bin/bash interpreter to execute your script.
Make sure that there are no spaces or empty lines before #!/bin/bash, or it will not work.
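A quick way to check that nothing precedes the shebang (the script name is a placeholder):
$ head -n 1 myscript.sh | od -c | head -n 1
0000000   #   !   /   b   i   n   /   b   a   s   h  \n
If anything shows up before the # (a space, a blank line, or a UTF-8 BOM, which od prints as 357 273 277), the kernel will not recognize the interpreter line.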

run the output of a script as a standalone bash command

suppose you have a perl script "foobar.pl" that prints the following to stdout
date -R
and you want to run whatever that perl script outputs as a standalone bash command (don't worry about security problems as this is running in a trusted environment).
How do you get bash to recognize this as a standalone command?
I've tried using xargs, but that seems to want to pass arguments only to a pre-defined command.
I want the perl script to be able to output any arbitrary command.
$command = 'date -R';
system($command); ## in the perl script
the above does not work because I want it to run in an existing cygwin environment ...
foobar.pl | xargs bash -i {}
the above does not work because bash seems to be running a new process and thus the initialization and settings from bash_profile don't get instantiated.
`foobar.pl`
Bad:
`perl foo.pl`
$(perl foo.pl)
Why is this bad? For many reasons, most notably:
Wordsplitting: What you're doing here is taking the output of the perl script, splitting it into chunks wherever there are spaces, tabs or newlines, and taking those chunks as arguments to the first chunk, which is the command to run. In really simplistic cases like $(echo 'date +%s') it might work, but that's a really bad representation of what you're REALLY doing here.
You cannot do quoting or use any other bash shell features like parameter expansion, bash keywords, etc.
Good, but inconvenient:
perl foo.pl > mytmpfile; bash mytmpfile
Creating a temporary file to put your perl script's output into and then running that with bash works, but it's inconvenient as you need to create (and clean up!) your temporary file and have it in a portably writable (and secure!) location.
Also remember not to use . or source to execute the temporary file unless you really intend to run it all in the active shell. Moreover, when you use . or source, you won't be able to reliably clean up your temporary file afterward.
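If you do take the temporary-file route, mktemp plus an exit trap addresses the creation and cleanup concerns. A sketch, using foo.pl as in the examples above:
tmpfile=$(mktemp) || exit 1      # secure, writable temporary file
trap 'rm -f "$tmpfile"' EXIT     # clean up even if something fails
perl foo.pl > "$tmpfile"
bash "$tmpfile"                  # run it; note: bash, not . or source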
Probably the best solution:
perl foo.pl | bash
This is pretty safe all-round ("safe" in the sense of being the least bug-prone), assuming your perl script outputs correct bash syntax, of course.
Alternatives that do pretty much the same thing:
bash < <(perl foo.pl)
bash <(perl foo.pl)
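With the question's foobar.pl, that looks like this (the date shown is illustrative):
$ ./foobar.pl
date -R
$ ./foobar.pl | bash
Mon, 06 Apr 2009 11:02:07 +0800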
Given the perl file:
print "date";
the following bash command will do it.
> $(perl qq.pl)
Mon Apr 6 11:02:07 WAST 2009
But that is run in a separate shell. If you really want to invoke it in the context of the current shell, do this:
$ perl qq.pl >/tmp/qq.$$ ; . /tmp/qq.$$ ; rm -f /tmp/qq.$$
Mon Apr 6 11:04:59 WAST 2009
Try:
foobar.pl | bash
I don't think this is exactly what you're looking for, but it's what I've got :-)
perl foo.pl > /tmp/$$.script; bash /tmp/$$.script; rm /tmp/$$.script
Good luck!
Try opening a pipe from the command with open($fh, "-|", $arg1, $arg2) in the perl script; this runs the command and gives you a filehandle from which to read its output.
