IO redirection of string-assembled command - bash

I'm writing a bash script that is intended to execute some command, and depending on some flag, this command should be either executed locally or remotely. This command's output should be redirected to some file, and this file should be on the box that executes the command, that is, on the remote box if the command is executed remotely.
I'm trying things like
#!/bin/bash
REMOTE=1
function f
{
CMD="$#"
if [ "${REMOTE}" == "1" ]
then
ssh some_host "$CMD"
else
$CMD
fi
}
# This executes "echo huhu" remotely and redirects the output into "out" on the remote box.
REMOTE=1 f echo huhu \> out
# This executes "echo haha > out" locally, without redirection: ">" and "out" are passed to echo as ordinary arguments.
REMOTE=0 f echo haha \> out
When I don't escape the > sign, any output of f is redirected to "out" on the local box, of course.
How could I avoid this behavior?

Don't use eval; use arrays instead. The SSH case needs extra care, because the remote shell re-parses whatever string it receives.
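A minimal sketch of that approach, assuming the redirection target is handled explicitly inside the function rather than passed as an escaped >; printf '%q ' re-quotes each word so it survives the remote shell's re-parsing:

#!/bin/bash
# Run "$@" locally or remotely; "out" is created on whichever box runs it.
# REMOTE and some_host are taken from the question.
f() {
    if [ "$REMOTE" = "1" ]; then
        # %q quotes each argument so the remote shell sees the same words;
        # the redirection is appended to the remote command string
        ssh some_host "$(printf '%q ' "$@") > out"
    else
        "$@" > out
    fi
}

REMOTE=1 f echo huhu # "out" appears on some_host
REMOTE=0 f echo haha # "out" appears locally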

Write eval $CMD instead of $CMD. By the time $CMD is expanded, redirection parsing has already happened, so redirection operators are simply passed along as ordinary arguments; eval makes the shell parse the line a second time, redirections included.
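Applied to the function from the question, a sketch (with the usual caveat that eval re-executes whatever ends up in $CMD, so it is unsafe with untrusted input):

f() {
    CMD="$*"
    if [ "$REMOTE" = "1" ]; then
        ssh some_host "$CMD" # the remote shell parses the redirection
    else
        eval "$CMD" # eval re-parses '>' locally
    fi
}

REMOTE=0 f echo haha \> out # now redirects into "out" locally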

Related

Passing argument to script invoked by exec producing undesired result

I'm trying to pass an argument to a shell script via exec, within another shell script. However, I get an error that the script does not exist in the path - but that is not the case.
$ ./run_script.sh
$ blob has just been executed.
$ ./run_script.sh: line 8: /home/s37syed/blob.sh test: No such file or directory
For some reason it's treating the entire execution as one whole absolute path to a script - it isn't reading the string as an argument for blob.sh.
Here is the script that is being executed.
#!/bin/bash
#run_script.sh
blobPID="$(pgrep "blob.sh")"
if [[ -z "$blobPID" ]]
then
echo "blob has just been executed."
#execs as absolute path - carg not read at all
( exec "/home/s37syed/blob.sh test" )
#this works fine, as expected
#( exec "/home/s37syed/blob.sh" )
else
echo "blob is currently running with pid $blobPID"
ps $blobPID
fi
And the script being invoked by run_script.sh, not doing much, just emulating a long process/task:
#!/bin/bash
#blob.sh
i=0
carg="$1"
if [[ -z "$carg" ]]
then
echo "nothing entered"
else
echo "command line arg entered: $carg"
fi
while [ $i -lt 100000 ];
do
echo "blob is currently running" >> test.txt
let i=i+1
done
Here is the version of Bash I'm using:
$ bash --version
GNU bash, version 4.2.37(1)-release (x86_64-pc-linux-gnu)
Any advice/comments/help on why this is happening would be much appreciated!
Thanks in advance,
s37syed
Replace
exec "/home/s37syed/blob.sh test"
(which tries to execute a command named "/home/s37syed/blob.sh test" with no arguments)
by
exec /home/s37syed/blob.sh test
(which executes "/home/s37syed/blob.sh" with a single argument "test").
Aside from the quoting problem Cyrus pointed out, I'm pretty sure you don't want to use exec. What exec does is replace the current shell with the command being executed (rather than running the command as a subprocess, as it would without exec). Putting parentheses around it makes it execute that section in a subshell, thus effectively cancelling out the effect of exec.
As chepner said, you might be thinking of the eval command, which performs an extra parsing pass before executing the command. But eval is a huge bug magnet. It's incredibly easy to use eval in unsafe ways (see BashFAQ #48). If you need to construct a command, see BashFAQ #50 for better ways to do it.
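The BashFAQ #50 advice amounts to building the command as an array instead of a string; a sketch:

#!/bin/bash
# build the command word by word in an array ...
cmd=(/home/s37syed/blob.sh test)
# ... then expand it: each element becomes exactly one argument
"${cmd[@]}"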

How can I easily log some specific command line commands into a file?

I often perform configuration changes using single line commands on Mac OS, Linux or even Windows and I want to easily log them in a file, so I can replay if I have to reconfigure the machine again.
Please note that I want to do this only for some commands, so the shell history is of no use.
Ideally I would like to be able to use some kind of shell extension that logs some of the commands.
As you know, if you start a bash command with a space, that command is not logged into the history.
What if I could have another prefix that does the opposite? Is there something that can be used for this? A solution for bash would be more than enough, and an already existing solution would be much better than me writing a new one.
You could do your logging in PROMPT_COMMAND, extracting the specific commands from shell history and writing them to a file.
Something like:
log () {
last_command="$(history -p \!\!)"
if [[ $last_command == "  "* ]] # save commands starting with *two* spaces
then
printf "%s\n" "$last_command" >> ~/special.log
fi
}
PROMPT_COMMAND="log; $PROMPT_COMMAND"
This has problems:
PROMPT_COMMAND is run each time the prompt is printed. Just pressing Enter multiple times could cause a command to be logged multiple times (see the sketch after this list for one way around that).
Marking with two spaces would, of course, need you to remove ignorespace or ignoreboth from HISTCONTROL so that commands starting with spaces are logged at all.
AFAICT, history is updated when the next command is read, so the command is logged after the next command returns to the prompt, since that's when the correct history is available in PROMPT_COMMAND.
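One way around the duplicate logging is to remember the history number of the last command seen and skip logging when it has not advanced; a sketch, assuming ignorespace has been removed from HISTCONTROL as described above:

log () {
    local num last_command
    # "history 1" prints e.g. "  123  some command"; grab the number
    num=$(HISTTIMEFORMAT='' history 1 | awk '{print $1}')
    # a bare Enter re-runs PROMPT_COMMAND without adding a history entry,
    # so skip when the number has not changed
    [[ $num == "${__log_last_num:-}" ]] && return
    __log_last_num=$num
    last_command="$(history -p \!\!)"
    if [[ $last_command == "  "* ]] # two-space prefix marks loggable commands
    then
        printf "%s\n" "$last_command" >> ~/special.log
    fi
}
PROMPT_COMMAND="log; $PROMPT_COMMAND"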
All this would be easier in zsh, with a preexec hook:
preexec () {
if [[ $1 == " "* ]]
then
printf "%s\n" "$1" >> ~/special.log
fi
}
The preexec function automatically gets the command as the first argument if history is enabled, saving us a good deal of trouble. It is run when the command has been read, but before it begins execution, so the timing is perfect. From the documentation:
preexec
Executed just after a command has been read and is about to be
executed. If the history mechanism is active (regardless of whether
the line was discarded from the history buffer), the string that the
user typed is passed as the first argument, otherwise it is an empty
string. The actual command that will be executed (including expanded
aliases) is passed in two different forms: the second argument is a
single-line, size-limited version of the command (with things like
function bodies elided); the third argument contains the full text
that is being executed.
$ ls
$ echo foo | echo bar
bar
$ cat ~/special.log
ls
echo foo | echo bar
A function in .bashrc can be used like a prefix:
log_this_command () {
echo "$#" >> ~/a_log_file # log the command to file
"$#" # and run the command itself
}
Caveat: this only logs expanded arguments, rather than the raw input.
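For example (the values are illustrative), the expansion happens before logging:

$ log_this_command echo "$HOME"
/home/user
$ tail -n 1 ~/a_log_file
echo /home/user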
You can shadow a command with a function of the same name that logs and then runs the real thing via the command builtin:
function screencapture { echo "used parms: $@"; command screencapture "$@"; }
Appending to a log file instead:
function screencapture { echo "$(date) screencapture $@" >> ~/log.txt; command screencapture "$@"; }
As you run the screencapture command, a log entry is created and the command executes as if uninterfered with.
You could automate creating these functions if the list of commands to wrap is long, or even... all of them. A sketch follows.
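A sketch of that automation, assuming a hypothetical list of commands to wrap; eval defines one logging wrapper per name:

# illustrative list of commands to wrap with logging
for name in screencapture tar rsync; do
    eval "
    $name () {
        echo \"\$(date) $name \$*\" >> ~/log.txt
        command $name \"\$@\"
    }"
done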

bash invoked via ssh does not store variables

There is a problem with bash invoked via ssh: although I have read the man pages about it, I still can't explain the following.
Here is a script, very simple
#!/bin/bash
theUser=$1
theHost=$2
ssh -tt $theUser@$theHost 'bash' << EOF
a=1
echo 'dat '$a
exit
EOF
and here is the result:
victor@moria:~$ bash thelast.sh victor 10.0.0.8
victor@10.0.0.8's password:
a=1
echo 'dat '
exit
victor@mordor:~$ a=1
victor@mordor:~$ echo 'dat '
dat
victor@mordor:~$ exit
exit
Connection to 10.0.0.8 closed.
As you may see, the environment doesn't store the value of the variable "a", so it can't be echoed, but any other commands like ls or date return their results.
So the question is: what am I doing wrong, and how do I avoid this behavior?
P.S. I can't replace ssh -tt, but any other command may be freely replaced.
Thanks in advance
Inside the here document, the $a is expanded locally before feeding the input to the ssh command. You can prevent that by quoting the terminator after the << operator as in
ssh -tt $theUser#$theHost 'bash' << 'EOF'
$a is being expanded in the local shell, where it is undefined. In order to prevent this from happening, you should escape it:
echo "dat \$a"
Escaping the $ causes it to be passed literally to the remote shell, rather than being interpreted as an expansion locally. I have also added some double quotes, as it is good practice to enclose parameter expansions inside them.
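Putting either fix into the script from the question, a sketch:

#!/bin/bash
theUser=$1
theHost=$2
# quoting the here-document delimiter stops local expansion,
# so $a reaches the remote bash intact
ssh -tt "$theUser@$theHost" 'bash' << 'EOF'
a=1
echo "dat $a"
exit
EOF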

How to save the command you are about to execute in bash?

Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously, updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not in the second, so I cannot do the update by simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
echo "$@" >> /tmp/cmd.log
"$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
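If you want && handled in one call, a variant is to pass the whole pipeline as a single string and eval it; this inherits all of eval's quoting hazards, so treat it as a sketch:

log_str() {
    printf '%s\n' "$1" >> /tmp/cmd.log # log the raw, unexpanded string
    eval "$1" # then let the shell re-parse and run it
}

log_str 'cd "$NEW_PLACE" && command.py --flag "$FOO"'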
Are you looking for set -x (or bash -x)? This makes the shell write every command to standard error just before executing it.
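For example, a hypothetical session might trace like this (the + prefix is bash's default PS4):

$ set -x
$ cd "$NEW_PLACE" && command.py --flag "$FOO"
+ cd /new/place
+ command.py --flag value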
use script and everything will be archived.
use -x for tracing your script, e.g. run it as bash -x script_name args...
use set -x in your current bash (your commands will be echoed with globs and variables substituted).
combine 2 and 3 with 1.
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
. saved.txt
fun123
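After the subshell runs, saved.txt contains an ordinary function definition:

fun123 () {
echo something important
}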
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
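For instance, GNU xargs can both assemble the invocation and echo it before running it; a sketch with a hypothetical args.txt holding one argument per line:

# -a reads arguments from a file, -t prints each command to stderr first
xargs -a args.txt -t command.py --flag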

BASH Variables with multiple commands and reentrant

I have a bash script that sources contents from another file. The contents of the other file are commands I would like to execute and compare the return value. Some of the entries consist of multiple commands separated by either a semicolon (;) or by ampersands (&&), and I can't seem to make this work. To work on this, I created some test scripts as shown:
test.conf is the file being sourced by test.sh
Example-1 (this works): the two outputs are 2 seconds apart
test.conf
CMD[1]="date"
test.sh
. test.conf
i=2
echo "$(${CMD[$i]})"
sleep 2
echo "$(${CMD[$i]})"
Example-2 (this does not work)
test.conf (same script as above)
CMD[1]="date;date"
Example-3 (tried this, it does not work either)
test.conf (same script as above)
CMD[1]="date && date"
I don't want my variable, CMD, to be inside tick marks, because then the commands would be executed when the file is sourced, and I see no way of re-evaluating the variable.
This script essentially evaluates CMD on pass-1 to check something; if pass-1 gives a false reading, I do some work in the script to correct it, then re-execute and re-evaluate the output of CMD on pass-2.
Here is an example where I'm checking to see if SSHD is running. If it's not running when I evaluate CMD[1] on pass-1, I will start it and re-evaluate CMD[1] again.
test.conf
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
So if I modify this for my test script, then test.conf becomes:
NOTE: By tick marks I mean backticks (the key below the ~ mark on my keyboard).
CMD[1]=`date;date` or `date && date`
My script looks like this (to handle the tick marks)
. test.conf
i=2
echo "${CMD[$i]}"
sleep 2
echo "${CMD[$i]}"
I get the same date/time printed twice despite the 2-second delay. As such, CMD is not getting re-evaluated.
First of all, you should never use backticks unless you need to be compatible with an old shell that doesn't support $() - and only then.
Secondly, I don't understand why you're setting CMD[1] but then calling CMD[$i] with i set to 2.
Anyway, this is one way (and it's similar to part of Barry's answer):
CMD[1]='$(date;date)' # no backticks (remember - they carry Lyme disease)
eval echo "${CMD[1]}" # or $i instead of 1
From the couple of lines of your question, I would have expected some approach like this:
#!/bin/bash
while read -r line; do
# munge $line
if eval "$line"; then
# success
else
# fail
fi
done
Where you have backticks in the source, you'll have to escape them to avoid evaluating them too early. Also, backticks aren't the only way to evaluate code - there is eval, as shown above. Maybe it's eval that you were looking for?
For example, this line:
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
Ought probably look more like this:
CMD[1]='`pgrep -u root -d , sshd 1>/dev/null; echo $?`'
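Putting it together for the date example, a sketch of a conf file and script where the command text is stored unevaluated and re-run on each pass:

# test.conf: single quotes keep the text from being executed at source time
CMD[1]='date && date'

# test.sh
. test.conf
i=1
eval "${CMD[$i]}" # pass-1
sleep 2
eval "${CMD[$i]}" # pass-2, two seconds later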
