Load output from bash time command to a variable - bash

Let's examine my problem in layers:
sleep 3; echo hello
sleeps for three seconds and echoes "hello"
ssh localhost "sleep 3; echo hello"
does the same on a remote host over ssh
var=$(ssh localhost "sleep 3; echo hello")
does the same but stores "hello" in var
(time var=$(ssh localhost "sleep 3; echo hello")) 2>&1
does the same and times it, but since time outputs to stderr, I redirected stderr to stdout, so that I can do this
timer=$((time var=$(ssh localhost "sleep 3; echo hello")) 2>&1)
which should make the remote machine wait for 3 seconds, echo "hello" which should be stored on a local machine's variable var, and time the entire process after which the timed value is stored in variable timer.
But unfortunately, var is EMPTY (it works fine in example #3)! For the purposes of my code, it is necessary to store the output of ssh in one variable, and the ssh execution time in another, without the use of "more advanced" commands such as /bin/time, date, etc. I am sure there is a clever (i.e. shell-only) way of doing this, but I'm not clever enough to figure it out by myself. Any help is greatly appreciated.
Here's a thread that might be useful: How to store standard error in a variable in a Bash script

Commands run within parentheses, or when doing command substitution, are run in a subshell. Variables in the subshell don't persist after the subshell completes. The simplest solution is to use a file:
timer=$(time ssh localhost "sleep 3; echo hello" 2>&1 >/tmp/var)
var=$(</tmp/var)

Barmar's answer is good, but it's missing parentheses: without them, the timing output from the time keyword leaks to the terminal instead of being captured. The correct answer would be:
timer=$( (time ssh localhost "sleep 3; echo hello" &>/tmp/var) 2>&1 ); var=$(</tmp/var)
which makes the remote machine wait for 3 seconds, echo "hello" (which is stored in the local machine's variable var), and times the entire process, after which the timed value is stored in the variable timer, while doing all that in complete silence.
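Here is a sketch of the same pattern using a unique temp file from mktemp instead of a fixed /tmp path; a local `bash -c 'sleep 1; echo hello'` stands in for the ssh command here so the idea can be tried without a remote host:

```shell
# Same pattern, sketched with mktemp instead of a fixed /tmp path.
# `bash -c 'sleep 1; echo hello'` stands in for the ssh command.
out_file=$(mktemp)
timer=$( { time bash -c 'sleep 1; echo hello' >"$out_file" 2>&1; } 2>&1 )
var=$(<"$out_file")
rm -f "$out_file"
echo "var: $var"      # hello
echo "timer: $timer"  # real/user/sys lines from the time keyword
```

The `{ time ...; } 2>&1` group plays the same role as the parentheses above: bash's time keyword writes to the group's stderr, which is then redirected into the command substitution.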

Related

Logging into server (ssh) with bash script

I want to log into server based on user's choice so I wrote bash script. I am totally newbie - it is my first bash script:
#!/bin/bash
echo -e "Where to log?\n 1. Server A\n 2. Server B"
read to_log
if [ $to_log -eq 1 ] ; then
echo `ssh user@ip -p 33`
fi
After executing this script I am able to put a password but after nothing happens.
If someone could help me solve this problem, I would be grateful.
Thank you.
The problem with this script is the contents of the if statement. Replace:
echo `ssh user@ip -p 33`
with
ssh user@ip -p 33
and you should be good. Here is why:
Firstly, the use of back ticks is called "command substitution". Back ticks have been deprecated in favor of $().
Command substitution tells the shell to create a sub-shell, execute the enclosed command, and capture the output for assignment/use elsewhere in the script. For example:
name=$(whoami)
will run the command whoami, and assign the output to the variable name.
The enclosed command has to run to completion before the assignment can take place, and during that time the shell is capturing the output, so nothing is displayed on the screen.
In your script, the echo command will not display anything until the ssh command has completed (i.e. the sub-shell has exited). That never happens here, because the ssh session's output is being captured: the user is left at a blank screen with no idea that they are already logged in.
You have no need to capture the output of the ssh command, so there is no need to use command substitution. Just run the command as you would any other command in the script.
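A minimal demonstration of that capture behavior (whoami here is just a harmless stand-in for the ssh session):

```shell
# Command substitution captures the output instead of displaying it:
# nothing appears on screen while the enclosed command runs.
name=$(whoami)          # runs to completion first; output is captured
echo "name is: $name"   # only now does anything show up
```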

Capturing ssh output in bash script while backgrounding connection

I have a loop that will connect to a server via ssh to execute a command. I want to save the output of that command.
o=$(ssh $s "$@")
This works fine. I can then do what I need with the output. However I have a lot of servers to run this against and I'm trying to speed up the process by backgrounding the ssh connection, basically to do all of the requests at once. If I wasn't saving the output I could do something like
ssh $s "$@" &
and this works fine
I haven't been able to get the correct combination to do both.
o=$(ssh $s "$@") &
This doesn't give me any output. Other combinations I've tried appear to try to execute the output. Suggestions?
Thanks!
A process going to the background gets its own copies of the file descriptors. The stdout (the o=... assignment) will not be available in the calling process. However, you can bind stdout to a file and read the file afterwards.
ssh $s "$@" >outfile &
wait
o=$(cat outfile)
If you don't like files, you could also use named pipes. This way the 'wait' is done by the 'cat' command. The pipe can be reused and consumes no space on the disk.
mkfifo testpipe
ssh $s "$@" >testpipe &
o=$(cat testpipe)
I would just use a temporary file. You can't set a variable in a background process and access it from the shell that started it.
ssh "$s" "$@" > output.txt & ssh_pid=$!
...
wait "$ssh_pid"
o=$(<output.txt)
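Putting the pieces together, the fan-out over many servers might look like the following sketch, with one output file per server. The server names are hypothetical, and `bash -c` stands in for `ssh "$s" "$@"` so the pattern can be tried locally:

```shell
# One temp file per "server"; all jobs run in parallel, then we wait
# and collect the results.
tmpdir=$(mktemp -d)
for s in alpha beta gamma; do        # hypothetical server names
    bash -c "echo hello from $s" >"$tmpdir/$s.out" &
done
wait                                 # wait for all background jobs
for s in alpha beta gamma; do
    o=$(<"$tmpdir/$s.out")
    echo "$s: $o"
done
rm -rf "$tmpdir"
```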

Reuse variable in EOF bash script

I have a script doing something like this:
var1=""
ssh xxx@yyy <<'EOF'
[...]
var2=`result of bash command`
echo $var2 #print what I need
var1=$var2 #is there a way to pass var2 into global var1 variable ?
EOF
echo $var1 # the need is to display the value of var2 created in EOF block
Is there a way to do this?
In general, an executed command has three paths of delivering information:
By stating an exit code.
By making output.
By creating files.
It is not possible for a child process to change a (environment) variable of the parent process. This is true for all child processes, and your ssh process is no exception.
I would not rely on ssh to pass the exit code of the remote process, though (because even if it works in current implementations, this is brittle; ssh could also want to state its own success or failure with its exit code, not the remote process's).
Using files also seems inappropriate because the remote process will probably have a different file system (but if the remote and the local machine share an NFS for instance, this could be an option).
So I suggest using the output of the remote process for delivering information. You could achieve this like this:
var1=$(ssh xxx@yyy <<'EOF'
[...]
var2=$(result of bash command)
echo "$var2" 1>&2 # to stderr, so it's not part of the captured output
# and instead shown on the terminal
echo "$var2" # to stdout, so it's part of the captured output
EOF
)
echo "$var1"
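The same trick extends to several values: print them one per line on stdout, then split the captured output locally with read. In this sketch `bash -s` stands in for `ssh xxx@yyy`, and `uname -n` / `date` are placeholders for the remote commands:

```shell
# The "remote" side prints the values, one per line; the local side
# splits the captured output back into separate variables.
output=$(bash -s <<'EOF'
var2=$(uname -n)
var3=$(date +%Y)
printf '%s\n%s\n' "$var2" "$var3"
EOF
)
{ read -r var2; read -r var3; } <<<"$output"
echo "host=$var2 year=$var3"
```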

How to run a time-limited background command and read its output (without timeout command)

I'm looking at https://stackoverflow.com/a/10225050/1737158
And in the same question there is an answer using the timeout command, but it's not available on all OSes, so I want to avoid it.
What I try to do is:
demo="$(top)" &
TASK_PID=$!
sleep 3
echo "TASK_PID: $TASK_PID"
echo "demo: $demo"
And I expect to have nothing in the $demo variable, since the top command never ends.
Now I get an empty result, which is "acceptable", but when I re-use the same thing with a command which should return a value, I still get an empty result, which is not OK. E.g.:
demo="$(uptime)" &
TASK_PID=$!
sleep 3
echo "TASK_PID: $TASK_PID"
echo "demo: $demo"
This should return the uptime result but it doesn't. I also tried to kill the process by TASK_PID, but I always get an error. If a command fails, I expect to have stderr captured somehow. It can be in a different variable, but it has to be captured and not leaked out.
What happens when you execute var=$(cmd) &
Let's start by noting that the simple command in bash has the form:
[variable assignments] [command] [redirections]
for example
$ demo=$(echo 313) declare -p demo
declare -x demo="313"
According to the manual:
[..] the text after the = in each variable assignment undergoes tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal before being assigned to the variable.
Also, after the [command] above is expanded, the first word is taken to be the name of the command, but:
If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.
So, as expected, when demo=$(cmd) is run, the result of $(..) command substitution is assigned to the demo variable in the current shell.
Another point to note is related to the background operator &. It operates on so-called lists, which are sequences of one or more pipelines. Also:
If a command is terminated by the control operator &, the shell executes the command asynchronously in a subshell. This is known as executing the command in the background.
Finally, when you say:
$ demo=$(top) &
# ^^^^^^^^^^^ simple command, consisting ONLY of variable assignment
that simple command is executed in a subshell (call it s1), inside which $(top) is executed in another subshell (call it s2). The result of this command substitution is assigned to the variable demo inside the shell s1. Since no commands are given, s1 terminates after the variable assignment, but the parent shell never receives the variables set in the child (s1).
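This is easy to verify: the assignment below happens inside the background subshell s1 and is lost when it exits:

```shell
# The background assignment runs in a subshell; the parent's copy of
# the variable is untouched.
demo=initial
demo=$(echo changed) &        # assignment happens in subshell s1
wait                          # s1 has terminated by now
echo "demo is still: $demo"   # prints "demo is still: initial"
```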
Communicating with a background process
If you're looking for a reliable way to communicate with the process run asynchronously, you might consider coprocesses in bash, or named pipes (FIFO) in other POSIX environments.
Coprocess setup is simpler, since coproc will set up the pipes for you, but note you might not be able to read them reliably if the process is terminated before writing any output.
#!/bin/bash
coproc top -b -n3
cat <&${COPROC[0]}
FIFO setup would look something like this:
#!/bin/bash
# fifo setup/clean-up
tmp=$(mktemp -td)
mkfifo "$tmp/out"
trap 'rm -rf "$tmp"' EXIT
# bg job, terminates after 3s
top -b -n3 >"$tmp/out" &
# read the output
cat "$tmp/out"
but note, if a FIFO is opened in blocking mode, the writer won't be able to write to it until someone opens it for reading (and starts reading).
Killing after timeout
How you'll kill the background process depends on what setup you've used, but for a simple coproc case above:
#!/bin/bash
coproc top -b
sleep 3
kill -INT "$COPROC_PID"
cat <&${COPROC[0]}
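If coprocesses are not available, the same timeout idea works with a plain background job and a temp file; in this sketch an endless `while` loop stands in for `top -b`, and 3 seconds is an arbitrary cut-off:

```shell
# Manual timeout: background the command with its output in a temp file,
# kill it after 3 seconds, then read whatever it managed to write.
out=$(mktemp)
( while :; do echo tick; sleep 1; done ) >"$out" &   # stands in for `top -b`
pid=$!
sleep 3
kill "$pid" 2>/dev/null
demo=$(<"$out")
rm -f "$out"
echo "$demo"
```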

How to use ping -f until file = false

I would like to execute the ping -f command until cat ~/.test = false, but
until [[ `cat ~/.test` == false ]]; do sudo ping -f 10.0.1.1; done
only checks one time. How can I kill the command automatically when the file changes?
This approach will not work for two reasons:
The ping command runs until it is interrupted. In other words: There will only be one loop iteration ever, because you will be stuck in the loop.
cat ~/.test will always be "true" (i.e. successful), as long as the file exists. It will only be "false" (i.e. exit with a non-zero error code), if the file does not exist (any more). cat is not suited for checking file changes - unless that change is creating or deleting the file.
With that in mind, you should probably try something along these lines:
#!/bin/bash
# launch the ping process and leave it running in the background
ping -f 10.0.1.1 &
# get the process ID of the previous command's process
PING_PID=$!
# until the file ~/.test does not exist any more,
# do the stuff in the loop ...
until ! test -f ~/.test; do
# sleep for one second
sleep 1
done
# kill the ping process with the previously stored process ID
kill $PING_PID
The script is untested and may not work completely, but it should give you an idea how to solve your problem.
Edit:
If it does not need to be a flood ping, you can use this simpler script:
#!/bin/bash
# As long as the file ~/.test exists,
# send one ping only to the target.
while test -f ~/.test; do
ping -c 1 10.0.1.1
done
This approach was suggested by twalberg.
Another advantage of this approach (besides the simpler script) is that you do not need to sudo the ping command any more, because unlike flood pings the "normal" pings do not need root privileges.
