Redirection inside quotes after ssh - bash

Still fairly new to bash, I have a script that uses an ssh command. Following the ssh call, I put the code in single quotes. The example I am following did this, and I have found that it prevents the script from being interrupted by a login to the server I am trying to access once I run it (I only want to read information from the server, not interact with it). All of the code works except for the redirection: nothing is being written to the text file. I have added the code below...
while read -r h; do
    ssh "$h" \
    'for i in path/some/file; do
        temp=$()
        if [ $? -eq 1 ]; then
            echo "$i" >> somefile.txt
            echo "here"
        fi
    done'
done < serverlist.txt
"here" gets echoed; however, somefile.txt never gets populated.
Any help would be appreciated.
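Note that everything inside the single quotes runs on the remote host, so the redirection creates somefile.txt on each server rather than on the machine running the script. A minimal sketch that collects the output locally instead, assuming the same serverlist.txt, moves the redirection outside the quoted remote command:
while read -r h; do
    # the >> now happens locally, on ssh's output
    ssh "$h" 'for i in path/some/file; do
        ...
    done' >> somefile.txt
done < serverlist.txt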

Logging into server (ssh) with bash script

I want to log into a server based on the user's choice, so I wrote a bash script. I am a total newbie - it is my first bash script:
#!/bin/bash
echo -e "Where to log?\n 1. Server A\n 2. Server B"
read to_log
if [ $to_log -eq 1 ] ; then
echo `ssh user@ip -p 33`
fi
After executing this script I am able to enter a password, but after that nothing happens.
If someone could help me solve this problem, I would be grateful.
Thank you.
The problem with this script is the contents of the if statement. Replace:
echo `ssh user@ip -p 33`
with
ssh user@ip -p 33
and you should be good. Here is why:
Firstly, the use of back ticks is called "command substitution". Back ticks have been deprecated in favor of $().
Command substitution tells the shell to create a sub-shell, execute the enclosed command, and capture the output for assignment/use elsewhere in the script. For example:
name=$(whoami)
will run the command whoami, and assign the output to the variable name.
The enclosed command has to run to completion before the assignment can take place, and during that time the shell is capturing the output, so nothing will display on the screen.
In your script, the echo command will not display anything until the ssh command has completed (i.e. the sub-shell has exited), which never happens here: the ssh session is interactive, but the user sees no output and has no idea the remote shell is waiting for input.
You have no need to capture the output of the ssh command, so there is no need to use command substitution. Just run the command as you would any other command in the script.
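Putting it together, a minimal corrected version of the script (keeping the original port; user@ip stands in for the real user and host):
#!/bin/bash
echo -e "Where to log?\n 1. Server A\n 2. Server B"
read to_log
if [ "$to_log" -eq 1 ] ; then
    # no command substitution: let ssh talk to your terminal directly
    ssh user@ip -p 33
fi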

SSH - terminal misbehaving when SSH invoked from a `while read` loop

I'm coding a Bash script to automate tasks across multiple servers.
I am logging in to a CentOS 7 machine over SSH to run some editor (nano, vi, ...):
ssh -tt centos@... '/bb/Conf edit'
The /bb/Conf edit is basically just vi /bb/conf.yaml.
When I run the SSH command from my shell, it works fine. However, when the same SSH command is run from a Bash script inside a while read ...; do loop, the editor has the wrong size (80x40, I guess) and seems to ignore the keys I press - i.e. in nano, Ctrl+x doesn't do anything. The only key that works is Ctrl+c, which closes the connection.
I thought this was something related to the TERM variable, so I tried to add export TERM=xterm or TERM=rxvt to /bb/Conf or to the place calling the SSH. The variable is in fact set in the target environment (I've tried echo $TERM right before vi). But the terminal still misbehaved.
Then I have tried to put just that single command ssh ... to a new script. When running that, the editor worked fine.
After a while I found out that it works outside a while read loop, but not inside. I assume that the editors do some stdin/stdout magic and then read somehow breaks that.
Is there a way to run an editor like vi or nano from within a loop?
(The purpose in my case is to allow the users to edit files on multiple servers.)
That's because both read and ssh are reading from the same input stream. The solution is to use a different file descriptor for the while read loop:
while IFS= read -r -u3 line; do
ssh ...
done 3< file
Here, we're using file descriptor 3 instead of stdin.
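Applied to the editor case from the question, the loop might look like this sketch (serverlist.txt is a hypothetical file of hostnames; -tt keeps a real terminal attached for vi/nano):
while IFS= read -r -u3 host; do
    # stdin (fd 0) is still your terminal, so the editor receives your keystrokes
    ssh -tt "centos@$host" '/bb/Conf edit'
done 3< serverlist.txt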
Lengthy pipelines can be hard to read and maintain, but you can use whitespace constructively: newlines are allowed following | and && and ||. Also, parentheses introduce a subshell which contains an arbitrary script, so indentation helps.
while read -u3 line; do
: do stuff here that needs to read from stdin
done 3< <(
command 1 of the pipeline |
command 2 |
command 3
)
That's clean and readable. The downside is that it puts the last part of the pipeline (the while loop) first, so the code kind of flows backwards.

IFS read not getting executed completely when using commands over remote in Linux

I am reading a file through a script using the below method and storing it in myArray
while IFS=$'\t' read -r -a myArray
do
"do something"
done < file.txt
echo "ALL DONE"
Now in the "do something" area I am using some commands over ssh
ssh user@$SERVER "some command"
But the issue is that after executing this for the 1st line of file.txt, the script stops reading the file any further and skips to the next step; that is, I get the output
ALL DONE
but if, instead of the commands over ssh, I use local commands, the script runs fine. I am not sure why this is happening. Can someone please suggest what I need to do?
You'll have to try giving the -n flag to ssh, from the manpage:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin). This must be used when ssh is run in the background. A
common trick is to use this to run X11 programs on a remote
machine. For example, ssh -n shadows.cs.hut.fi emacs & will
start an emacs on shadows.cs.hut.fi, and the X11 connection will
be automatically forwarded over an encrypted channel. The ssh
program will be put in the background. (This does not work if
ssh needs to ask for a password or passphrase; see also the -f
option.)
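Applied to the loop above, that would be something like this sketch (the placeholders are unchanged from the question):
while IFS=$'\t' read -r -a myArray
do
    # -n stops ssh from consuming the rest of file.txt through stdin
    ssh -n "user@$SERVER" "some command"
done < file.txt
echo "ALL DONE"
Redirecting stdin with </dev/null on the ssh command has the same effect if you cannot use -n.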

Passing variables to sftp batch

I'm writing a script that needs to pass a variable to the sftp batch. I have been able to get some commands working based on other documentation I've searched out, but can't quite get to what I need.
The end goal is to work similarly to a file test operator on a remote server:
( if [ -f $a ]; then :; else exit 0; fi )
Ultimately, I want the script to continue running if the file exists (:), or exit 0 if it does NOT exist (not exit 1). The remote machine is a Windows server, not Linux.
Here's what I have:
NOTE - the variable I'm trying to pass, $source_dir, changes based on the input parameter of the script that calls this function. This and the ls wildcard are the tricky parts. I have been able to make it work when looking for a specific file, but not just "any" file.
source_dir=/this/directory/changes
RemoteCheck () {
    /bin/echo "cd $source_dir" > someBatch.txt
    /bin/echo "ls *" >> someBatch.txt
    /usr/bin/sftp -b someBatch.txt -oPort=${sftp_port} ${sftp_ip}
    exit_code=$?
    if [ $exit_code -eq 0 ]; then
        :
    else
        exit 0
    fi
}
There may be a better way to do this, but I have searched multiple forums and have not yet found a way to manipulate this.
Any help is appreciated, you gurus have always been very helpful!
You cannot test for the existence of any file using just the exit code of the OpenSSH sftp.
You can, however, redirect the sftp output to a file and parse it to see if there are any files.
You can use a shell echo command to delimit the listing from the rest of the output, like:
!echo listing-start
ls
!echo listing-end
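A hypothetical sketch of the whole round trip, building on the batch file from the question (the sed/grep filtering is an assumption about the output format: the delimiter lines and the echoed sftp> commands are stripped so that only file names remain):
printf '%s\n' "cd $source_dir" '!echo listing-start' 'ls *' '!echo listing-end' > someBatch.txt
output=$(/usr/bin/sftp -b someBatch.txt -oPort=${sftp_port} ${sftp_ip} 2>/dev/null)
# keep only the lines between the delimiters, minus the command echoes
files=$(printf '%s\n' "$output" | sed -n '/^listing-start$/,/^listing-end$/p' | grep -v '^sftp>' | sed '1d;$d')
[ -n "$files" ] || exit 0   # empty listing means no files: exit quietly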

Declare variable on unix server

I am trying to log in on one of the remote servers (Box1) and read one file on that remote server (Box1).
That file contains the details of another server (Box2); based upon those details I have to come back to the local server and ssh to the other server (Box2) for some data crunching, and so on.....
ssh box1.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node1= `cat /home/rakesh/tomar.log`
fi
EOF
ssh box2.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node2= `cat /home/rakesh/tomar.log`
fi
EOF
but I am not getting the values of "server_node1" and "server_node2" on the local machine.
Any help would be appreciated.
Just like bash -c 'export foo=bar' cannot declare a variable in the calling shell where you typed this, an ssh command cannot declare a variable in the calling shell. You will have to refactor so that the calling shell receives the information and knows what to do with it.
I agree with the comment that storing a log file in a variable is probably not a sane, or at least not an elegant, thing to do, but the easy way to do what you are attempting is to put the ssh inside the assignment.
server_node1=$(ssh box1.com cat tomar.log)
server_node2=$(ssh box2.com cat tomar.log)
A few notes and amplifications:
The remote shell will run in your home directory, so I took it out (on the assumption that /home/rakesh is your home directory, obviously).
In case of an error in the cat command, the exit code of ssh will be the error code from cat, and the error message on standard error will be visible on your standard error, so the echo seemed quite superfluous. (If you want a custom message, variable=$(ssh whatever) || echo "Custom message" >&2 would do that. Note the redirection to standard error; it doesn't seem to matter here, but it's good form.)
If you really wanted to, you could run an arbitrarily complex command in the ssh; as outlined above, it didn't seem necessary here, but you could do assignment=$(ssh remote 'if [[ things ]]; then for variable in $(complex commands to drive a loop); do : etc etc; done; fi; more </dev/null; exit "$variable"') or whatever.
As further comments on your original attempt,
The backticks in the here document in your attempt would be evaluated by your local shell before the ssh command even ran. There are separate questions about how to fix that; see e.g. How have both local and remote variable inside an SSH command. But in short, unless you absolutely require the local shell to be able to modify the commands you send, probably put them in single quotes, like I did in the silly complex ssh example above.
The function of export is to make variables visible to child processes. There is no way to affect the environment of a parent process (short of having it cooperate and/or coordinate the change, as in the code above).
As an example to illustrate the difference, if you set PERL5LIB to a directory with Perl libraries, but fail to export it, the Perl process you start will not see the variable; it is only visible to the current shell. When you export it, any Perl process you start as a child of this shell will also see this variable and the value you assigned.
In other words, you export variables which are not private to the current shell (and don't export private ones; aside from making sure they are private, this saves the amount of memory which needs to be copied between processes), but that still only makes them visible to children, by the design of the U*x process architecture.
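A quick illustration of the difference (the names are made up for the demo):
foo=local_only       # not exported: visible only in the current shell
export BAR=exported  # exported: copied into every child's environment
bash -c 'echo "foo=${foo:-unset} BAR=${BAR:-unset}"'
# prints: foo=unset BAR=exported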
You should get the files back from box1 and box2 with an scp:
scp box1.com:/home/rakesh/tomar.log ~/tomar1.log
#then you can cat!
export server_node1=`cat ~/tomar1.log`
The same with box2:
scp box2.com:/home/rakesh/tomar.log ~/tomar2.log
#then you can cat!
export server_node2=`cat ~/tomar2.log`
There are several possibilities. In your case, you could create a file on the remote system (in bash syntax) containing the assignments of these variables, for example
echo "export server_node2='$(</home/rakesh/tomar.log)'" >>export_settings
(which makes me wonder why you want the whole content of your logfile stored in a variable, but this is another question), then transfer this file to your host (for example with scp) and source it from within your bash script.
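For example (the file names are assumptions; the sourcing must happen in the shell that needs the variable):
ssh box2.com "echo \"export server_node2='\$(</home/rakesh/tomar.log)'\" > export_settings"
scp box2.com:export_settings /tmp/export_settings
. /tmp/export_settings   # now $server_node2 is set in the current shell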
