Bash script to pass commands remotely via SSH

I'm just starting out with bash and am trying to write a script to search specific files on a server remotely, based on (a) device name and (b) a string. My goal is to get all output containing the string for the specified device. The script below just hangs when I try it. However, when I run the command directly on the server (grep -i "router1" /var/log/router.log | grep -i "UPDOWN"), it works. Any ideas?
#!/bin/bash
#
read -p "Enter username: " user
read -p "Enter device name: " dev
read -p "Enter string: " str
while read /home/user1/syslogs
do
ssh "$user"#server1234 'grep -i "$dev" /var/log/"$syslogs" 2> /dev/null | grep -i "$str"'
done

You seem to be misusing the read command. You don't specify the file to read from as an argument; read always reads from standard input. It's not clear what you want to do with the value you read from the file, but you want something like this:
read -p "Enter username: " user
read -p "Enter device name: " dev
read -p "Enter string: " str
while read -r fileName; do
    # Also: I'm borrowing sputnick's solution to the nested quote problem.
    ssh "$user"@server1234 <<EOF
grep -i "$dev" /var/log/$fileName 2>/dev/null | grep -i "$str"
EOF
done < /home/user1/syslogs

The message Pseudo-terminal will not be allocated because stdin is not a terminal appears because the stdin of the remote host's shell is redirected from a here document and no command is specified for the remote host to execute. That is, the remote host first assumes it will need to allocate a pseudo-terminal for an interactive login session because no command was given (see the synopsis in the ssh man page: ssh ... [user@]hostname [command]), but then realizes that the stdin of its shell is not a terminal, since it is redirected from a here document. The result is that the remote host refuses to allocate a pseudo-terminal.
The solution in the given case would be to just specify a shell as a command for the remote host to execute the commands given in the here document.
As an alternative to specifying a shell as a command, the remote host can be told in advance that there is no need to allocate a pseudo-terminal, using the -T switch.
The -t switch, on the other hand, would be necessary only if a specified command expects an interactive login shell session on the remote host (such as top or vim).
- ssh $user@server1234 <<EOF ...
+ ssh $user@server1234 /bin/sh <<EOF ...
+ ssh -T $user@server1234 <<EOF ...
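Putting the two fixes together, a minimal corrected version of the whole script might look like this (server1234 and /home/user1/syslogs are the names from the question, assumed to be a reachable host and a file listing one log filename per line):
#!/bin/bash
read -p "Enter username: " user
read -p "Enter device name: " dev
read -p "Enter string: " str
while read -r syslog; do
    # -T tells ssh not to allocate a pseudo-terminal; the commands arrive via the here document
    ssh -T "$user"@server1234 <<EOF
grep -i "$dev" /var/log/$syslog 2>/dev/null | grep -i "$str"
EOF
done < /home/user1/syslogs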

how to run bash script interactively from url? [duplicate]

I have a simple Bash script that takes input and prints a few lines using that input.
fortinetTest.sh
read -p "Enter SSC IP: $ip " ip && ip=${ip:-1.1.1.1}
printf "\n"
#check IP validation
if [[ $ip =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "SSC IP: $ip"
printf "\n"
else
echo "Enter a valid SSC IP address. Ex. 1.1.1.1"
exit
fi
I uploaded it to my server, then tried to run it via curl.
I am not sure why the input prompt never kicks in when I use cURL/wget.
Am I missing anything?
With the curl ... | bash form, bash's stdin is reading the script, so stdin is not available for the read command.
Try using a Process Substitution to invoke the remote script like a local file:
bash <( curl -s ... )
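For example, with the script hosted at a hypothetical URL:
bash <( curl -s https://example.com/fortinetTest.sh )
Here bash reads the script from the process substitution's file descriptor, so stdin stays free for the read prompt.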
Your issue can be reproduced simply by running the script like this:
$ cat test.sh | bash
Enter a valid SSC IP address. Ex. 1.1.1.1
This is because the bash you launch with a pipe does not get a TTY; when you do a read -p, it reads from stdin, which in this case is the content of test.sh. So the issue is not with curl; the issue is that read is not reading from the tty.
So the fix is to make sure you read from the tty:
read < /dev/tty -p "Enter SSC IP: $ip " ip && ip=${ip:-1.1.1.1}
printf "\n"
#check IP validation
if [[ $ip =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "SSC IP: $ip"
printf "\n"
else
echo "Enter a valid SSC IP address. Ex. 1.1.1.1"
exit
fi
Once you do that, even curl will start working:
vagrant@vagrant:/var/www/html$ curl -s localhost/test.sh | bash
Enter SSC IP: 2.2.2.2
SSC IP: 2.2.2.2
I personally prefer the source <(curl -s localhost/test.sh) option. While it is similar to bash ..., one significant difference is how the processes are handled.
bash will spin up a new process, and that process will execute the commands from the script.
source, on the other hand, uses the current process to execute the commands from the script.
In some cases that can play a key role, though I admit that is not very often.
To demonstrate, do the following:
### Open Two Terminals
# In the first terminal run:
echo "sleep 5" > ./myTest.sh
bash ./myTest.sh
# Switch to the second terminal and run:
ps -efjh
## Repeat the same with the _source_ command
# In the first terminal run:
source ./myTest.sh
# Switch to the second terminal and run:
ps -efjh
Results should look similar to this (the original answer showed ps output as screenshots): before execution, just the main shell; running via bash, the main shell plus two subprocesses; running via source, the main shell plus one subprocess.
UPDATE:
Difference in variable handling between bash and source:
source will use your current environment, meaning that upon execution, all changes and variable declarations made by the script will be available in your prompt.
bash, on the other hand, runs as a separate process, so all of its variables are discarded when the process exits.
I think everyone will agree that there are benefits and drawbacks to each method. You just have to decide which one is better for your use case.
## Test for variables declared by the script:
echo "test_var3='Some Other Value'" > ./myTest3.sh
bash ./myTest3.sh
echo $test_var3
source ./myTest3.sh
echo $test_var3
## Test for usability of current environment variables:
test_var="Some Value" # Setting a variable
echo "echo $test_var" > myTest2.sh # Creating a test script
chmod +x ./myTest2.sh # Adding execute permission
## Executing:
. myTest2.sh
bash ./myTest2.sh
source ./myTest2.sh
./myTest2.sh
## All of the above results should print the variable.
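One caveat about that second test: because the script is created with double quotes, $test_var is expanded at file-creation time, so the literal value is baked into myTest2.sh and all four invocations print it regardless of the environment. To see the child-process difference directly, defer expansion with single quotes and compare exported vs. non-exported runs (a sketch; myTest2b.sh is a hypothetical name):
test_var="Some Value"
echo 'echo "$test_var"' > myTest2b.sh # single quotes: expansion deferred to run time
bash ./myTest2b.sh # prints an empty line: the child process doesn't see the unexported variable
source ./myTest2b.sh # prints 'Some Value': source runs in the current shell
export test_var
bash ./myTest2b.sh # now prints 'Some Value': exported variables reach child processes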
I hope this helps.

Authenticating with user/password *once* for multiple commands? (session multiplexing)

I got this trick from Solaris documentation, for copying ssh public keys to remote hosts.
note: ssh-copy-id isn't available on Solaris
$ cat some_data_file | ssh user@host "cat >/tmp/some_data_file; some_shell_cmd"
I wanted to adapt it to do more involved things.
Specifically, I wanted some_shell_command to be a script sent from the local host to execute on the remote host, a script that would interact with the local keyboard (e.g. prompt the user while running on the remote host).
I experimented with ways of sending multiple things over stdin from multiple sources. But certain things that work in a local shell don't work over ssh, and some things, such as the following, didn't do what I wanted at all:
$ echo "abc" | cat <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat < <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat <<-EOF
> echo $(</dev/stdin) #echoes: echo abc (I wanted: abc)
> EOF
# messed with eval for the above but that was a problem too.
@chepner concluded it's not feasible to do all of that in a single ssh command. He suggested a theoretical alternative that didn't work as hoped, but I got it working after some research and tweaking, and documented the results in an answer to this question.
Without that solution, having to run multiple ssh and scp commands by default entails being prompted for a password multiple times, which is a major drag.
I can't expect all the users of a script I write in a multi-user environment to configure public key authorization, nor expect them to put up with entering a password over and over.
OpenSSH Session Multiplexing
This solution works even when using earlier versions of OpenSSH, where the ControlPersist option isn't available. (There is a working bash example at the end of this answer.)
Note: OpenSSH 3.9 introduced Session Multiplexing over a "control master connection" (in 2005); however, the ControlPersist option wasn't introduced until OpenSSH 5.6 (released in 2010).
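You can check which version a system has with ssh -V (the exact output format varies by build; this example is typical of an older Red Hat system):
$ ssh -V
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
Anything before 5.6 lacks ControlPersist.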
ssh session multiplexing allows a script to authenticate once and perform multiple ssh transactions over that authenticated connection. For example, if you have a script that runs several distinct tasks using ssh, scp, or sftp, each transaction can be carried out over an OpenSSH 'control master' session, referenced by the filesystem location of its named socket.
This one-time-password approach is useful when a script has to perform multiple ssh operations and you want to avoid making users authenticate more than once. It is especially useful where public key authentication isn't viable, e.g. not permitted, or at least not configured.
Most solutions I've seen use ControlPersist to tell ssh to keep the control master connection open, either indefinitely or for some specific number of seconds.
Unfortunately, systems with OpenSSH prior to 5.6 don't have that option (and upgrading them might not be feasible), and there doesn't seem to be much documentation or discussion of that limitation online.
Reading through old release docs, I discovered that ControlPersist arrived late to the ssh session multiplexing scene, implying there may have been a way to configure session multiplexing without relying on the ControlPersist option before it existed.
Initially, trying to configure persistent sessions from command-line options rather than config parameters, I ran into problems: either the ssh session terminated prematurely, closing the control connection's client sessions with it, or the connection was held open (the ssh control master stayed alive) but terminal I/O was blocked and the script would hang.
The following clarifies how to accomplish it.
OpenSSH option          ssh flag    Purpose
---------------------   ---------   --------------------------------------------
-o ControlMaster=yes    -M          Establishes sharable connection
-o ControlPath=path     -S path     Specifies path of connection's named socket
-o ControlPersist=600               Keep shareable connection open 10 min.
-o ControlPersist=yes               Keep shareable connection open indefinitely
                        -N          Don't create shell or run a command
                        -f          Go into background after authenticating
                        -O exit     Closes persistent connection

ControlPersist form     Equivalent          Purpose
---------------------   -----------------   ------------------------------------------
-o ControlPersist=yes   ssh -Nf             Keep control connection open indefinitely
-o ControlPersist=300   ssh -f sleep 300    Keep control connection open 5 min.
Note: scp and sftp implement the -S flag differently, and don't implement the -M flag at all, so for those commands the -o option form is always required.
Sketchy Overview of Operations:
Note: This incomplete example doesn't execute as shown.
ctl=<path of the connection's named socket>
ssh -fNMS $ctl user@host     # open control master connection
ssh -S $ctl …                # example of ssh over connection
scp -o ControlPath=$ctl …    # example of scp over connection
sftp -o ControlPath=$ctl …   # example of sftp over connection
ssh -S $ctl -O exit          # close control master connection
Session Multiplexing Demo
(Try it. You'll like it. Working example - authenticates only once):
Running this script will probably help you understand it more quickly than reading it, and it is fascinating.
Note: If you lack access to a remote host, just enter localhost at the "Host...?" prompt to try this demo script.
#!/bin/bash
# This script demonstrates ssh session multiplexing
trap '[ -z "$ctl" ] || ssh -S "$ctl" -O exit "$user@$host"' EXIT # closes conn, removes socket
read -p "Host to connect to? " host
read -p "User to login with? " user
BOLD="\n$(tput bold)"; NORMAL="$(tput sgr0)"
echo -e "${BOLD}Create authenticated persistent control master connection:${NORMAL}"
sshfifos=~/.ssh/controlmasters
[ -d $sshfifos ] || mkdir -p $sshfifos; chmod 755 $sshfifos
ctl=$sshfifos/$user@$host:22 # ssh stores named socket ctrl conn here
ssh -fNMS $ctl $user@$host # Control Master: Prompts passwd then persists in background
lcldir=$(mktemp -d /tmp/XXXX)
echo -e "\nLocal dir: $lcldir"
rmtdir=$(ssh -S $ctl $user@$host "mktemp -d /tmp/XXXX")
echo "Remote dir: $rmtdir"
echo -e "${BOLD}Copy self to remote with scp:${NORMAL}"
scp -o ControlPath=$ctl ${BASH_SOURCE[0]} $user@$host:$rmtdir
echo -e "${BOLD}Display 4 lines of remote script, with ssh:${NORMAL}"
echo "====================================================================="
echo $rmtdir | ssh -S $ctl $user@$host "dir=$(</dev/stdin); head -4 \$dir/*"
echo "====================================================================="
echo -e "${BOLD}Do some pointless things with sftp:${NORMAL}"
sftp -o ControlPath=$ctl $user@$host:$rmtdir <<EOF
pwd
ls
lcd $lcldir
get *
quit
EOF
Using a master control socket, you can use multiple processes without having to authenticate more than once. This is just a simple example; see man ssh_config under ControlPath for advice on using a more secure socket.
It's not quite clear what you mean by sourcing somecommand locally; I'm going to assume it's a local script that you want copied over to the remote host. The simplest thing to do is just copy it over and run it.
# Copy the first file, and tell ssh to keep the connection open
# in the background after scp completes
$ scp -o ControlMaster=yes -o ControlPersist=yes -o ControlPath=%C somefile user@host:/tmp/somefile
# Copy the script on the same connection
$ scp -o ControlPath=%C somecommand user@host:
# Run the script on the same connection
$ ssh -o ControlPath=%C user@host somecommand
# Close the connection
$ ssh -o ControlPath=%C -O exit user@host
Of course, the user could use public key authentication to avoid entering their credentials at all, but ssh would still go through the authentication process each time. Here, the authentication process is only done once, by the command using ControlMaster=yes. The other two processes reuse that connection. The last command, with -O exit, doesn't actually connect; it just tells the local connection to close itself.
$ echo "abc" | cat <(echo "def")
The expression <(echo "def") expands to a file name, typically something like /dev/fd/63, that names a (virtual) file containing the text "def". So let's simplify it a bit:
$ echo "def" > def.txt
$ echo "abc" | cat def.txt
This also prints just def.
The pipe does feed the line abc to the standard input of the cat command. But because cat is given a file name on its command line, it doesn't read from its standard input. The abc is just quietly ignored, and the cat command prints the contents of the named file -- which is exactly what you told it to do.
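Incidentally, if you do want cat to read the pipe as well, you can name standard input explicitly with -, which standard cat treats as "read from stdin"; this gives exactly the output the question wanted:
$ echo "abc" | cat - def.txt
abc
def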
The problem with echo abc | cat <(echo def) is that the <() wins the "providing the input" race. Luckily, bash will allow you to supply many inputs using multiple <() constructs. So the trick is: how do you get the output of your echo abc into the <()?
How about:
$ echo abc | cat <(echo def) <(cat)
def
abc
If you need to handle the input from the pipe first, just switch the order:
$ echo abc | cat <(cat) <(echo def)
abc
def
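Applied to the original goal of sending a script followed by its data over a single ssh stdin, the same idea works because bash reads a script from stdin incrementally, so a read inside the script consumes whatever follows it on the same stream (a sketch; user@host is a placeholder):
cat <(echo 'read x; echo "got: $x"') <(echo "hello") | ssh user@host bash
# prints: got: hello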

how to add ssh key to host via bash script

I've been trying to automate the creation of a user and the configuration of ssh access.
So far I've created a script that accesses the host and creates the new user via expect, as follows:
expect -c '
spawn ssh '$user'@'$ip';
expect "assword: ";
send "'$passwd'\r";
expect "prompt\n";
send "adduser '$new_user'\r";
...
send "mkdir /home/'$new_user'/.ssh\r";
expect "prompt\n";
send "exit\r";
'
This works fine. After that, I need to add the .pub key file to the authorized keys file on the host, and that is where the hell started.
I tried:
ssh_key='/home/.../key.pub'
content=$(cat $ssh_key)
expect -c '
spawn ssh '$user'#'$ip' "echo '$content' >> /home/'$new_user'/.ssh/authorized_keys;
expect "password:";
...
'
and got:
missing "
while executing
"spawn ssh root#000.00.00.00 ""
couldn't read file "<ssh .pub key content> ...
I tried also:
cat $ssh_key | ssh $user@$ip "cat >> /home/$new_user/.ssh/authorized_keys"
Without success; I only get the password prompt blinking, and I can't make expect work with this last method.
I'm going to ignore the larger problems here and focus specifically on your question. (There are larger problems: Don't use expect here -- if you rely on sshpass instead you can simplify this script immensely).
Right now, when you close your single quotes, you aren't starting any other kind of quotes. That means that when you substitute a variable with whitespace, you end the -c argument passed to expect.
Instead of doing this:
'foo'$bar'baz'
do this:
'foo'"$bar"'baz'
...so your script will look more like:
ssh_key='/home/.../key.pub'
content=$(<"$ssh_key")
expect -c '
spawn ssh '"$user"'#'"$ip"' "echo '"$content"' >> /home/'"$new_user"'/.ssh/authorized_keys;
expect "password:";
...
'
In terms of avoiding this altogether, though, consider something more like the following:
#!/bin/bash
# ^^^^- NOT /bin/sh
content=$(<"$ssh_key") # more efficient alternative to $(cat ...)
# generate shell-quoted versions of your variables
# these are safe to substitute into a script
# ...even if the original content contains evil things like $(rm -rf /*)
printf -v content_q '%q' "$content"
printf -v new_user_q '%q' "$new_user"
# use those shell-quoted versions remotely
sshpass -f"$password_file" ssh "$host" bash -s <<EOF
adduser ${new_user_q}
printf '%s\n' ${content_q} >>/home/${new_user_q}/.ssh/authorized_keys
EOF
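To see what the %q quoting buys you, try it on a string containing a command substitution; the quoted result is safe to embed in a shell command:
$ s='$(rm -rf /tmp/x)'
$ printf -v q '%q' "$s"
$ echo "$q"
\$\(rm\ -rf\ /tmp/x\)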

Failed to run scripts on multiple remote hosts via ssh

I wrote a deployAll.sh, which reads ip_host.list line by line and then adds a group on each of the remote hosts.
When I run sh deployAll.sh, the result is:
Group is added to 172.25.30.11
but the expected result is:
Group is added to 172.25.30.11
Group is added to 172.25.30.12
Group is added to 172.25.30.13
Why does it execute only the first one? Please help, thanks a lot!
deployAll.sh
#!/bin/bash
function deployAll()
{
    while read line; do
        IFS=';' read -ra ipandhost <<< "$line"
        ssh "${ipandhost[0]}" "groupadd -g 1011 test"
        printf "Group is added to ${ipandhost[0]}\n"
    done < ip_host.list
}
deployAll
ip_host.list
172.25.30.11;test-30-11
172.25.30.12;test-30-12
172.25.30.13;test-30-13
That's a frequent problem, caused by the special behavior of ssh: it sucks up stdin, starving the loop (i.e. while read line; do ...; done).
Please see Bash FAQ 89, which discusses this subject in detail.
I also just answered (and solved) a similar question regarding ffmpeg, which shows the same behavior as ssh does here: When reading a file line by line, I only get to execute ffmpeg on the first line.
TL;DR:
There are three main options:
Using ssh's -n option. Quoted from man ssh:
-n  Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)
Adding a </dev/null at the end of ssh's line (i.e. ssh ... </dev/null) will fix the issue and make ssh behave as expected.
Let read read from a File Descriptor which is unlikely to be used by a random program:
while IFS= read -r line <&3; do
# Here read is reading from FD 3, to which 'ip_host.list' is redirected.
done 3<ip_host.list
Without the ssh command (which wouldn't make sense on my network), I get the expected output so I suspect that the ssh command is swallowing the remaining standard input. You should use the -n flag to prevent ssh from reading from stdin (equivalent to redirecting stdin from /dev/null):
ssh -n "${ipandhost[0]}" "groupadd -g 1011 test"
or
ssh "${ipandhost[0]}" "groupadd -g 1011 test" < /dev/null
See also How to keep script from swallowing all of stdin?
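Putting the -n fix into the script from the question, a corrected deployAll.sh looks like this:
#!/bin/bash
function deployAll()
{
    while read line; do
        IFS=';' read -ra ipandhost <<< "$line"
        # -n keeps ssh from consuming the rest of ip_host.list from stdin
        ssh -n "${ipandhost[0]}" "groupadd -g 1011 test"
        printf "Group is added to ${ipandhost[0]}\n"
    done < ip_host.list
}
deployAll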
My solution was to generate new ssh keys with the ssh-keygen command and replace the existing public key file (if any), after which the installation resumed.

One password prompt for bash script including SCP and SSH

Printing documents from a printer connected over the internet is really slow at my university. Therefore I'm writing a script that sends a file to a remote computer with SCP, sends a series of commands over SSH to print the document from the remote computer (which has a better connection to the printer), and then deletes the file on the remote computer.
It works like a charm, but the annoying part is that it prompts for the password twice: once when it sends the file with SCP, and once when it sends commands over SSH. How can this be solved? I read that you can use an identity file? The thing is, multiple users will use this, and many have very limited experience with bash programming, so the script must do everything, including creating the file.
Users will mostly use Mac, and the remote computer runs Red Hat. Here's the code so far:
#!/bin/sh
FILENAME="$1"
PRINTER="$2"
# checks if second argument is set, else prompt for it
if [ -z ${PRINTER:+x} ]; then
printf "Printer: ";
read PRINTER;
fi
# prompt for username
printf "CID: "
read CID
scp $FILENAME $CID@adress:$FILENAME
ssh -t $CID@adress bash -c "'
lpr -P $PRINTER $FILENAME
rm $FILENAME
exit
'"
You don't need to copy the file at all; you can simply send it to lpr via standard input.
ssh $CID@adress lpr -P "$PRINTER" < "$FILENAME"
(ssh reads from $FILENAME and forwards it to the remote command.)
start an ssh-agent and add your key to it:
eval $(ssh-agent -s)
ssh-add # here you will be prompted
scp "$FILENAME" "$CID#adress:$FILENAME"
ssh -t "$CID#adress" bash -c <<END
lpr -P "$PRINTER" "$FILENAME"
rm "$FILENAME"
END
ssh-agent -k # kill the agent
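Another option that avoids keys and agents entirely is the session multiplexing approach described earlier on this page: authenticate once over a control master connection and run scp and ssh through it (a sketch; adress is the placeholder host from the question, and the socket path is hypothetical):
ctl="$HOME/.ssh/print-ctl.socket" # hypothetical path for the connection's named socket
ssh -fNM -S "$ctl" "$CID@adress" # prompts for the password once, then persists in background
scp -o ControlPath="$ctl" "$FILENAME" "$CID@adress:$FILENAME"
ssh -S "$ctl" "$CID@adress" "lpr -P $PRINTER $FILENAME; rm $FILENAME"
ssh -S "$ctl" -O exit "$CID@adress" # close the shared connection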
