Reuse variable in EOF bash script - bash

I have a script doing something like this:
var1=""
ssh xxx@yyy <<'EOF'
[...]
var2=`result of bash command`
echo $var2 #print what I need
var1=$var2 #is there a way to pass var2 into global var1 variable ?
EOF
echo $var1 # the goal is to display the value of var2 created inside the EOF block
Is there a way to do this?

In general, an executed command has three ways of delivering information:
By returning an exit code.
By producing output.
By creating files.
It is not possible for a child process to change an (environment) variable of its parent process. This is true for all child processes, and your ssh process is no exception.
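A quick local demonstration of that point (no ssh involved; the child bash here stands in for the remote side):
var1="original"
bash -c 'var1="changed"'   # the assignment happens in a child process
echo "$var1"               # still prints: original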
I would not rely on ssh to pass on the exit code of the remote process, though (even if it works in current implementations, this is brittle; ssh may also want to report its own success or failure via its exit code, rather than the remote process's).
Using files also seems inappropriate because the remote process will probably see a different file system (though if the remote and the local machine share an NFS mount, for instance, this could be an option).
So I suggest using the output of the remote process to deliver the information. You could achieve this like this:
var1=$(ssh xxx@yyy <<'EOF'
[...]
var2=$(result of bash command)
echo "$var2" 1>&2 # to stderr, so it's not part of the captured output
# and instead shown on the terminal
echo "$var2" # to stdout, so it's part of the captured output
EOF
)
echo "$var1"

Related

Capturing ssh output in bash script while backgrounding connection

I have a loop that will connect to a server via ssh to execute a command. I want to save the output of that command.
o=$(ssh $s "$@")
This works fine. I can then do what I need with the output. However, I have a lot of servers to run this against, and I'm trying to speed up the process by backgrounding the ssh connection, basically to do all of the requests at once. If I wasn't saving the output, I could do something like
ssh $s "$@" &
and this works fine.
I haven't been able to find the correct combination to do both.
o=$(ssh $s "$@") &
This doesn't give me any output. Other combinations I've tried appear to try to execute the output. Suggestions?
Thanks!
A process sent to the background gets its own copies of the file descriptors, so the stdout captured by the assignment (o=...) will not be available in the calling process. However, you can redirect stdout to a file and read the file afterwards.
ssh $s "$@" >outfile &
wait
o=$(cat outfile)
If you don't like files, you could also use named pipes. This way the 'wait' is done by the 'cat' command. The pipe can be reused and consumes no space on the disk.
mkfifo testpipe
ssh $s "$@" >testpipe &
o=$(cat testpipe)
I would just use a temporary file. You can't set a variable in a background process and access it from the shell that started it.
ssh "$s" "$#" > output.txt & ssh_pid=$!
...
wait "$ssh_pid"
o=$(<output.txt)
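To run this against many servers at once, as the question describes, the same temp-file idea extends to one file per server. A hedged sketch (the servers array and the temp directory layout are placeholders, not part of the original answers):
tmpdir=$(mktemp -d)
for s in "${servers[@]}"; do
    ssh "$s" "$@" > "$tmpdir/$s.out" &    # background each connection
done
wait                                      # wait for all of them to finish
for s in "${servers[@]}"; do
    o=$(<"$tmpdir/$s.out")                # read each server's output
    printf '%s: %s\n' "$s" "$o"
done
rm -rf "$tmpdir"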

Declare variable on unix server

I am trying to log in to one remote server (Box1) and read a file on that remote server (Box1).
That file contains the details of another server (Box2); based on those details I have to come back to the local server and ssh to the other server (Box2) for some data crunching, and so on...
ssh box1.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node1= `cat /home/rakesh/tomar.log`
fi
EOF
ssh box2.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node2= `cat /home/rakesh/tomar.log`
fi
EOF
But I am not getting the values of "server_node1" and "server_node2" on the local machine.
Any help would be appreciated.
Just like bash -c 'export foo=bar' cannot declare a variable in the calling shell where you typed this, an ssh command cannot declare a variable in the calling shell. You will have to refactor so that the calling shell receives the information and knows what to do with it.
I agree with the comment that storing a log file in a variable is probably not a sane, or at least elegant, thing to do, but the easy way to do what you are attempting is to put the ssh inside the assignment.
server_node1=$(ssh box1.com cat tomar.log)
server_node2=$(ssh box2.com cat tomar.log)
A few notes and amplifications:
The remote shell will run in your home directory, so I took the path out (on the assumption that /home/rakesh is your home directory, obviously).
In case of an error in the cat command, the exit code of ssh will be the error code from cat, and the error message on standard error will be visible on your standard error, so the echo seemed quite superfluous. (If you want a custom message, variable=$(ssh whatever) || echo "Custom message" >&2 would do that. Note the redirection to standard error; it doesn't seem to matter here, but it's good form.)
If you really wanted to, you could run an arbitrarily complex command in the ssh; as outlined above, it didn't seem necessary here, but you could do assignment=$(ssh remote 'if [[ things ]]; then for variable in $(complex commands to drive a loop); do : etc etc; done; fi; more </dev/null; exit "$variable"') or whatever.
As further comments on your original attempt,
The backticks in the here document in your attempt would be evaluated by your local shell before the ssh command even ran. There are separate questions about how to fix that; see e.g. How have both local and remote variable inside an SSH command; but in short, unless you absolutely require the local shell to be able to modify the commands you send, put them in single quotes, like I did in the silly complex ssh example above.
The function of export is to make variables visible to child processes. There is no way to affect the environment of a parent process (short of having it cooperate and/or coordinate the change, as in the code above). As an example to illustrate the difference, if you set PERL5LIB to a directory with Perl libraries, but fail to export it, the Perl process you start will not see the variable; it is only visible to the current shell. When you export it, any Perl process you start as a child of this shell will also see this variable and the value you assigned. In other words, you export variables which are not private to the current shell (and don't export private ones; aside from making sure they are private, this saves the amount of memory which needs to be copied between processes), but that still only makes them visible to children, by the design of the U*x process architecture.
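A small, hedged illustration of that export behavior, using a child bash instead of Perl so it is self-contained (it assumes PERL5LIB is not already exported in your environment; the path is made up):
PERL5LIB=/opt/perl/lib                              # assigned, but not exported
bash -c 'echo "child sees: ${PERL5LIB:-nothing}"'   # prints: child sees: nothing
export PERL5LIB                                     # now exported to child processes
bash -c 'echo "child sees: ${PERL5LIB:-nothing}"'   # prints: child sees: /opt/perl/lib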
You could get the file back from box1 and box2 with scp:
scp box1.com:/home/rakesh/tomar.log ~/tomar1.log
#then you can cat!
export server_node1=`cat ~/tomar1.log`
The same with box2:
scp box2.com:/home/rakesh/tomar.log ~/tomar2.log
#then you can cat!
export server_node2=`cat ~/tomar2.log`
There are several possibilities. In your case, you could create a file on the remote system (in bash syntax) containing the assignments of these variables, for example
echo "export server_node2='$(</home/rakesh/tomar.log)'" >>export_settings
(which makes me wonder why you want the whole content of your logfile stored in a variable, but that is another question), then transfer this file to your host (for example with scp) and source it from within your bash script.
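An end-to-end sketch of that approach, hedged (the /tmp path and file name are just examples, and it assumes the log content contains no double quotes or newlines):
# 1. On box1, write the assignment into a settings file (this runs remotely)
ssh box1.com 'echo "export server_node1=\"$(cat /home/rakesh/tomar.log)\"" > /tmp/export_settings'
# 2. Copy the settings file back to the local machine
scp box1.com:/tmp/export_settings ./export_settings
# 3. Source it locally so the variable exists in this shell
. ./export_settings
echo "$server_node1"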

Capture output of remote command in variable inside of a shell script

I have a script I want to run on a remote host via ssh. It checks whether a certain process is running and should try to kill it, if it exists. Now, my code looks like this:
ssh my_prod_env << ENDSSH
...
pid=$(pgrep -f "node my_app.js")
echo $pid
# kill process with $pid
...
exit
ENDSSH
The problem lies here: I cannot capture the output of the pgrep command in a variable. I tried $(), backticks, piping into read, and maybe other approaches, but all without success.
I would like to do it all in one ssh session.
Now I am thinking the output of the command goes to an output stream I cannot access from my script. I might be wrong, though.
Either way, help will be appreciated.
OK, after you provided more info in the comments about what you want, I believe this is the correct answer to your question:
ssh my_prod_env -t 'pgrep -f "node my_app.js"'
This will run the command and leave you logged in on the server.
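If the goal is instead to have the pid in a local variable, the same single command can be wrapped in a command substitution (a hedged variant; -t is dropped here because a pseudo-terminal can add carriage returns to the captured output):
pid=$(ssh my_prod_env 'pgrep -f "node my_app.js"')
echo "remote pid: $pid"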
This is what fixes the thing - "escaping" the ENDSSH tag, which stops the local shell from expanding anything inside the here-document.
ssh my_prod_env << \ENDSSH
...
# capture output of remote commands in remote variables
...
ENDSSH
The problem was that my variables were being expanded locally while I was trying to capture the output of remote commands in them.
This question/answer helped me realize what is going on: How to assign local variable with a remote command result in bash script?
So, my question could be marked as duplicate or something similar, I guess.
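Putting it together, a hedged sketch of the original kill-the-process flow with the delimiter escaped, so everything inside runs on the remote side:
ssh my_prod_env << \ENDSSH
pid=$(pgrep -f "node my_app.js")
if [ -n "$pid" ]; then
    echo "killing $pid"
    kill $pid          # unquoted on purpose: pgrep may return more than one pid
else
    echo "no matching process found"
fi
ENDSSH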

Load output from bash time command to a variable

Let's examine my problem in layers:
sleep 3; echo hello
sleeps for three seconds and echoes "hello"
ssh localhost "sleep3; echo hello"
does the same on a remote host over ssh
var=$(ssh localhost "sleep 3; echo hello")
does the same but stores "hello" in var
(time var=$(ssh localhost "sleep 3; echo hello")) 2>&1
does the same and times it, but since time outputs to stderr, I redirected stderr to stdout, so that I can do this
timer=$((time var=$(ssh localhost "sleep 3; echo hello")) 2>&1)
which should make the remote machine wait for 3 seconds and echo "hello", which should be stored in the local machine's variable var, and time the entire process, after which the timed value is stored in the variable timer.
But unfortunately, var is EMPTY (it works fine in example #3)! For the purposes of my code, it is necessary to store the output of ssh in one variable and the ssh execution time in another, without the use of "more advanced" commands such as /bin/time, date, etc. I am sure there is a clever (i.e. shell-only) way of doing this, but I'm not clever enough to figure it out by myself. Any help is greatly appreciated.
Here's a thread that might be useful: How to store standard error in a variable in a Bash script
Commands run within parentheses, or when doing command substitution, are run in a subshell. Variables in the subshell don't persist after the subshell completes. The simplest solution is to use a file:
timer=$(time ssh localhost "sleep 3; echo hello" 2>&1 >/tmp/var)
var=$(</tmp/var)
Barmar's answer is good, but it's missing a set of parentheses (the timing output leaks out instead of being captured). The correct answer would be:
timer=$((time ssh localhost "sleep 3; echo hello" &>/tmp/var) 2>&1); var=$(</tmp/var)
This makes the remote machine wait for 3 seconds and echo "hello", which is stored in the local machine's variable var, and times the entire process, after which the timing output is stored in the variable timer, all in complete silence.
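A trivial usage sketch of the two variables afterwards, just to show what ends up where (variable names follow the snippet above):
echo "remote output: $var"     # hello
echo "timing output: $timer"   # the real/user/sys lines from time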

invoke variable declared in child script to parent shell script

In my case, script A is calling script B.
Now I am declaring a variable in my child script B and would like to do an if/else condition check in the parent script.
Variable name in the child script:
logFileName=stop_log$current_date'.log'
This is how I am trying to invoke it:
logFileName = os.environ["logFileName"]
export logfilename
echo $logFileName
and then doing a condition check like:
if
logerr=`grep 'ConnectException' $logFileName`
if [ -z "$logerr" ]; then
echo " No error "
else
exit 1
fi
I am not able to export that variable to the parent script. Could someone please help?
A child process, for all practical purposes, cannot set a variable in the parent process.
Therefore, you have a few options available to get the log file name from the child to the parent:
Use the . command (aka source in C shell and Bash) to read script B and execute it as part of the current shell (see the sketch after this list).
Have script B echo the name of the logfile. Script A can capture it using:
logfilename=$(script-b …)
The major downside of this is that it is hard to do if script B is supposed to generate other output too.
Have script B save the name of the logfile in another file. Usually, script A should tell script B where to save it. Occasionally, you can agree on a location, but remember that there could be multiple copies of the scripts running at the same time, so a fixed name (/tmp/tmp.file for example) is dangerous on multiple counts (security and concurrency are both issues).
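A minimal sketch of option 1, assuming Script-B only sets variables you are happy to have executed in the parent shell (it reuses the logFileName assignment from the question and assumes current_date is already set):
# Script-B: just sets the variable; no export needed when it is sourced
logFileName="stop_log${current_date}.log"

# Script-A: run Script-B in the current shell rather than in a child process
. ./Script-B
echo "Log file is: $logFileName"   # the variable is now visible here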
Illustrating option 3
Script-A
logfilename=$(mktemp ${TMPDIR:-/tmp}/Script-A.log.XXXXXX)
trap "rm -f $logfilename; exit 1" 0 1 2 3 13 15
echo "Message from Script-A" > $logfilename
Script-B $logfilename
echo "End message from Script-A" >> $logfilename
echo Log file name: $logfilename
cat $logfilename
rm -f $logfilename
trap 0
Script-B
logfilename=${1:?}
echo "Script-B busy at work"
echo "Message for the log file" >> $logfilename # NB: >> each time
echo "Script-B wrapping up"
echo "Script-B complete" >> $logfilename
In the code of Script-A, the command mktemp creates a temporary file name at random based on the template given. Normally, the template will be /tmp/script-A.log.XXXXXX, where the 6 X's will be replaced by random letters or digits. The trap command means that if the script is signalled (SIGHUP 1, SIGINT 2, SIGQUIT 3, SIGPIPE 13 or SIGTERM 15) or exits (0), the temporary file will be removed. If it is meant to outlive the run of Script-A, you would omit the trap but echo the name. It writes a message to the log file; it runs Script-B, passing the log file name; it writes another message. It then wraps up: reports the file name, shows its contents; removes the file; and cancels the trap so that it can exit with a status of 0, success.
The Script-B code checks that it was given an argument (${1:?}) and saves it as the variable logfilename. You could have had Script-A export the variable and Script-B could have tested that the environment variable was set instead of requiring an argument, but arguments are generally better. Then Script-B echoes a message to its output and another to the log file (note that you need to append to the log file). It does its work (nothing here); writes another message to output and another message to the logfile; and exits.
There are lots of other stunts you can pull in Script-B to get the messages to the log file, but this should get you going.
If you don't have the mktemp command, either get its source (GNU or BSD), or use:
logfilename=${TMPDIR:-/tmp}/Script-A.log.$$
This uses the process ID to give you moderate assurance that the name won't be used by another process, but it is more easily determined and so is less secure than the random name generated by mktemp.
