Execute simultaneous scripts on remote machines and wait until the process completes - bash

The original idea was to copy a script out to each IP address; the script would yum-install some RPMs and perform some configuration steps on each machine. Since the yum install takes about 20 minutes, the hope was to run the install simultaneously on each machine, then wait for all the spawned processes to finish before continuing.
#!/bin/bash
PEM=$1
IPS=$2
for IP in $IPS; do
scp -i $PEM /tmp/A.sh ec2-user@$IP:/tmp
ssh -i $PEM ec2-user@$IP chmod 777 /tmp/A.sh
done
for IP in $IPS; do
ssh -t -i $PEM ec2-user@$IP sudo /tmp/A.sh &
done
wait
echo "IPS have been configured."
exit 0
Executing a remote sudo command in the background on three IP addresses yields three error messages. Obviously, there's a flaw in my logic.
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
All machines are CentOS 6.5

You need to tell ssh not to read from standard input:
ssh -n -t root@host "sleep 100" &
Here's an example
drao@darkstar:/tmp$ cat a
date
ssh -n -t me@host1 "sleep 100" &
ssh -n -t me@host2 "sleep 100" &
wait
date
darkstar:/tmp$ . ./a
Mon May 16 15:32:16 CEST 2016
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
[1]- Done ssh -n -t me@host1 "sleep 100"
[2]+ Done ssh -n -t me@host2 "sleep 100"
Mon May 16 15:33:57 CEST 2016
darkstar:/tmp
That waited 101 seconds in all. Obviously I have the ssh keys set up, so I did not get prompted for the password.
But looking at your output it looks like sudo on the remote machine is failing ... you might not even need -n.
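Putting that together, here is a minimal sketch of how the original loop could be rewritten (an untested sketch: note -tt doubled, which forces remote tty allocation even when stdin is not a terminal, something the requiretty default in CentOS 6's sudoers demands):
#!/bin/bash
PEM=$1
IPS=$2
for IP in $IPS; do
scp -i "$PEM" /tmp/A.sh "ec2-user@$IP:/tmp"
ssh -i "$PEM" "ec2-user@$IP" chmod 777 /tmp/A.sh
done
for IP in $IPS; do
# -n detaches stdin, -tt forces a pseudo-terminal so remote sudo can run
ssh -n -tt -i "$PEM" "ec2-user@$IP" sudo /tmp/A.sh &
done
wait
echo "IPS have been configured."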

Just to push some devopsy doctrine on you: Ansible does this amazingly well.
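For instance, a single ad-hoc invocation (a sketch; the inventory file hosts and the group name webservers are made up here) copies a local script to every host and runs it in parallel with privilege escalation:
ansible webservers -i hosts -b -m script -a /tmp/A.sh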

Related

SSH into a box, immediately background the process, then continue as the original user in a bash script

I need to script a way to do the following (note all is done on the local machine as root):
runuser -l user1 -c 'ssh localhost' &
runuser -l user1 -c 'systemctl --user list-units'
The first command should be run as root. The end goal is to log in as "user1" so that if any user runs who, "user1" will appear in the list. Notice how the first command is backgrounded before the next command is run.
The next command should be run as root as well, NOT user1.
Problem: these commands run fine when run separately, but when run in a script, "user1" never shows up in the output of who. Here is my script:
#!/bin/bash
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
echo
sleep 1
echo "[+] Running systemctl --user commands as root."
runuser -l user 1 -c 'systemctl --user list-units'
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
echo "[+] Done."
When running the script it looks like it is able to ssh into the system, but who does not show the user logged in, nor does any ps aux output show an ssh session. Note: I commented out the kill line to check whether the process stays around, and I do not see it at all.
How do I make the bash script fork two processes, where process 1's goal is to log in as "user1" and wait, and process 2 performs commands as root while user1 is logged in?
My goal is to run systemctl --user commands as root via script. If you're familiar with the systemctl --user domain, there is no way to manage systemctl --user units without the user being logged in via traditional methods (ssh, direct terminal, or gui). I cannot "su - user1" as root either. So I want to force an ssh session as root to the vdns11 user via runuser commands. Once the user is authenticated and shows up via who, I can run systemctl --user commands. How can I keep the ssh session active in my code?
With this additional info, the question essentially boils down to 'How can I start and background an interactive ssh session?'.
You could use script for that. It can be used to trick applications into thinking they are being run interactively:
echo "[+] Starting SSH session in background"
runuser -l user1 -c "script -c 'ssh localhost'" &>/dev/null &
pid=$!
...
echo "[+] Killing active SSH session"
kill ${pid}
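To check that the backgrounded session really registers as a login, something like this should work (a sketch, using user1 from the question):
runuser -l user1 -c "script -c 'ssh localhost'" &>/dev/null &
pid=$!
sleep 2
# the pseudo-terminal created by script(1) should make the login visible
who | grep user1
kill ${pid}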
Original answer before OP provided additional details (for future reference):
Let's dissect what is going on here.
I assume you start your script as root:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
So root runs runuser -l user1 -c '...', which itself runs ssh -q localhost 2>/dev/null as user1. All this takes place in the background due to &.
ssh will print Pseudo-terminal will not be allocated because stdin is not a terminal. (hidden due to 2>/dev/null) and immediately exit. That's why you don't see anything when running who or when running ps.
Your echo says [+] Becoming user1, which is quite different from what's happening.
sleep 1
The script sleeps for a second. Nothing wrong with that.
echo "[+] Running systemctl --user commands as root."
#runuser -l user 1 -c 'systemctl --user list-units'
# ^ typo!
runuser -l user1 -c 'systemctl --user list-units'
Ignoring the typo, root again runs runuser, which itself runs systemctl --user list-units as user1 this time.
Your echo says [+] Running systemctl --user commands as root., but actually you are running systemctl --user list-units as user1 as explained above.
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
This would kill the ssh process that had been started at the beginning of the script, but it already exited, so this does nothing. As a side note, this could be accomplished a lot easier:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
pid=$!
...
echo "[+] Killing active ssh sessions."
kill $(pgrep -P $pid)
So this should give you a better understanding of what the script actually does, but between the goals you described and the conflicting echoes within the script, it's really hard to figure out where this is supposed to be going.

How to sudo su; then run command

Can anyone help me solve the following issue? I need to ssh to another server as, e.g., the ubuntu user, which definitely has permission to run sudo su, and then execute a pm2 restart command. The full command looks like this:
#!/bin/sh
CMD="sudo su; pm2 restart 0; pm2 restart 1; exit;"
ssh -i somepemfile.pem ubuntu@1.1.1.1 $CMD
For example, I can normally run any command with sudo:
CMD="sudo /etc/init.d/apache2 restart"
but in the sudo su case it somehow hangs and does not respond.
Unless you have an unusual setup, you can't normally string su together with other commands like that. I would imagine it runs sudo su, then hangs in the root environment/session because it's waiting for you to exit before proceeding to the pm2 commands. Instead, I would consider something along these lines using the -c option:
CMD="sudo su -c 'pm2 restart 0; pm2 restart 1'"
ssh -i somepemfile.pem ubuntu@1.1.1.1 "$CMD"
As suggested in another answer, it would also probably be useful to encapsulate the $CMD variable in quotes in the ssh call.
su normally puts you in a subshell, which you can see by echoing the current PID (process ID):
$ echo $$
94260
$ sudo echo $$
94260
$ sudo su
$ echo $$
94271
But to get around this you can pipe the commands you want to run to su like this
$ echo "whoami" | sudo su
root
And we can run multiple commands:
$ echo "uptime;whoami" | sudo su
11:29 up 8 days, 19:20, 4 users, load averages: 4.55 2.96 2.65
root
Now to make this work with ssh
$ ssh wderezin@localhost 'echo "uptime;whoami" | sudo su'
sudo: no tty present and no askpass program specified
Darn it, we need to allocate a tty for the su command. Add the -t option, which allocates a TTY during the remote execution.
$ ssh -t wderezin@localhost 'echo "uptime;whoami" | sudo su'
11:36 up 8 days, 19:26, 5 users, load averages: 2.97 2.97 2.76
root
Your command would look like this:
ssh -i somepemfile.pem ubuntu@1.1.1.1 'echo "pm2 restart 0; pm2 restart 1" | sudo su'
Use the -c option of su to specify the command.
From man su:
In particular, an argument of -c will cause the next argument to be treated as a command by most command interpreters. The command will be executed by the shell specified in /etc/passwd for the target user.
CMD="sudo su -c \"pm2 restart 0; pm2 restart 1;\""
You need to quote the expansion so that the entire string is parsed on the remote end.
ssh -i somepemfile.pem ubuntu@1.1.1.1 "$CMD"
Otherwise, the expansion is subject to word splitting, and ssh is passed the separate arguments sudo, su;, pm2, restart, 0;, pm2, restart, 1;, and exit;, which it joins back into a single string for the remote shell.
However, that doesn't solve the problem of running pm2 in the shell started by sudo. That is addressed in ramki's answer.

How to copy echo 'x' to file during an ssh connection

I have a script which starts an ssh connection. The variable $SSH starts the ssh connection, so $SSH hostname gives the hostname of the host I ssh to. Now I try to echo something and copy the output of the echo to a file.
SSH="ssh -tt -i key.pem user#ec2-instance"
When I perform a manual ssh to the host and perform:
sudo sh -c "echo 'DEVS=/dev/xvdbb' >> /etc/sysconfig/docker-storage-setup"
it works.
But when I perform
${SSH} sudo sh -c "echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup"
it does not seem to work.
EDIT:
Using tee also works fine after an ssh performed manually, but does not seem to work after the ssh in the script.
The echo command after the script's ssh is happening on my real host (the one I run the script from, not the host I ssh to), so the file on my real host is being changed rather than the file on the host I've sshed to.
The command passed to ssh will be executed by the remote shell, so you need to add one level of quoting:
${SSH} "sudo sh -c \"echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup\""
The only thing you really need to run on the server is the file write itself, so if you don't have password prompts and such, you can get rid of some of this nesting:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee /etc/sysconfig/docker-storage-setup'
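If you need the append behavior of the original >> redirection rather than an overwrite, tee -a is the analogue (same assumptions as above):
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee -a /etc/sysconfig/docker-storage-setup'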

ssh login without welcome banner

I am using ssh from a program which sends commands to ssh and parses answers. However, each time I log in, I get the welcome banner like:
Linux mymachine 3.2.0-4-686-pae #1 SMP Debian 3.2.54-2 i686
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
...
I do not want this banner, because my parser would need to deal with it. Is it possible to login with ssh and not to get this banner at the beginning?
You should be able to silence this banner, and other diagnostic messages, by passing -q to SSH:
ssh -q user@remote_host
If you want to make -q permanent for all your SSH sessions, do:
echo "LogLevel QUIET" >> ~/.ssh/config
What works here seems to depend on the operating system, SSH version, and the server-side configuration of sshd.
For connecting to a stock Ubuntu 18 server ssh -q didn't work for me, and neither did ssh -o LogLevel=error that is suggested elsewhere.
What did work is the comment posted under the question about creating a .hushlogin file in the remote user's home directory:
$ ssh myuser@myhost
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)
<snip>
Last login: Thu Aug 1 14:04:26 2019 from 1.2.3.4
myuser@myhost$ touch .hushlogin
myuser@myhost$ exit
Then:
$ ssh myuser@myhost echo 'Test'
Test
This will run command1, command2, and command3 on remote_host.
ssh user@remote_host 'command1; command2; command3'
No banners are displayed.
Try ssh -q to suppress the banner message.
If you expect more than 1000 lines in the server's answer, replace 1000 with a correspondingly larger number, or the server's answer will be truncated.
# Demo script file creation
DIVIDER="___"; echo "echo $DIVIDER; echo 100; echo 200; echo 300;" > "./test.sh"

# Getting the answer without the banner
ssh -q login@server.name < "./test.sh" | grep -A1000 -e "^$DIVIDER" | tail -n +2
Success:
100
200
300
The same command without
| grep -A1000 -e "^$DIVIDER" | tail -n +2
gives
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux...
[...]
Run 'do-release-upgrade' to upgrade to it.
___
100
200
300
You can replace "___" (three underscores) with any exotic sign(s) or even password (which can't be found in the beginnings of lines of the banner).
To avoid having to replace 1000 with a matching number (and the possible truncation of large server answers), search for "how to grep all lines after a match" and modify the code accordingly.
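For example, this awk variant (a sketch, assuming the same "___" divider) prints every line after the divider with no line limit:
ssh -q login@server.name < "./test.sh" | awk 'found; $0 == "___" { found = 1 }'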
For running commands remotely:
#!/bin/bash
SCRIPT='
#Your commands
'
sshpass -p<pass> ssh -o 'StrictHostKeyChecking no' -p <port> user@host "$SCRIPT"
I answer my own question with a solution based on Keith Reynolds' answer. I am using:
ssh my_host bash
allowing bash interaction without a banner and without a prompt.
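A sketch of driving that remote bash from a script (my_host as above; the quoted heredoc keeps the commands from being expanded locally):
ssh my_host bash <<'EOF'
hostname
uptime
EOF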

running multiple commands through ssh and storing the outputs in different files

I've set up my public and private keys and have automated ssh login. I want to execute two commands, say command1 and command2, in one login session and store their outputs in files command1.txt and command2.txt on the local machine.
I'm using this code:
ssh -i my_key user@ip 'command1; command2'
The two commands get executed in one login, but I have no clue how to store their outputs in two different files. I want to do this because I don't want to repeatedly ssh into my remote host.
Unless you can parse the actual outputs of the two commands and distinguish which is which, you can't. You will need two separate ssh sessions:
ssh -i my_key user@ip command1 > command1.txt
ssh -i my_key user@ip command2 > command2.txt
You could also redirect the outputs to files on the remote machine and then copy them to your local machine:
ssh -i my_key user@ip 'command1 > command1.txt; command2 > command2.txt'
scp -i my_key user@ip:'command*.txt' .
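If the real concern is just the cost of repeated logins, OpenSSH connection multiplexing lets several ssh/scp invocations share one authenticated connection (a sketch using the stock ControlMaster/ControlPath options):
# open a master connection once; -f backgrounds it, -N runs no remote command
ssh -i my_key -o ControlMaster=yes -o ControlPath=/tmp/mux_%h_%p -fN user@ip
ssh -i my_key -o ControlPath=/tmp/mux_%h_%p user@ip command1 > command1.txt
ssh -i my_key -o ControlPath=/tmp/mux_%h_%p user@ip command2 > command2.txt
# tear the master down when finished
ssh -o ControlPath=/tmp/mux_%h_%p -O exit user@ip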
No, you will have to do it separately in separate commands (multiple logins), as already mentioned by @lanzz. To save the output locally, do something like:
ssh -i my_key user@ip "command1" > ./file_on_local_host.txt
If you want to run multiple commands in a single login, put all your commands in a script and run that script through SSH, instead of running multiple commands.
It's possible, but probably more trouble than it's worth. If you can generate a unique string that is guaranteed not to be in the output of command1, you can do:
$ ssh remote 'cmd1; echo unique string; cmd2' |
awk '/^unique string$/ { output="cmd2"; next } { print > output }' output=cmd1
This simply starts printing to the file cmd1, and then changes output to the file cmd2 when it sees the unique string. You'll probably want to handle stderr as well. That's left as an exercise for the reader.
Option 1: tell your boss he's being silly. Unless, of course, he isn't, and there is a critical reason for needing it all in one session. For some reason such a case escapes my imagination.
Option 2: why not tar?
ssh -i my_key user@ip 'command1 > out1; command2 > out2; tar cf - out*' | tar xf -
You can do this. Assuming you can set up authentication from the remote machine back to the local machine, you can use ssh to pipe the output of the commands back. The trick is getting the backslashes right.
ssh remotehost command1 \| ssh localhost cat \\\> command1.txt \; command2 \| ssh localhost cat \\\> command2.txt
Or if you aren't so into backslashes...
ssh remotehost 'command1 | ssh localhost cat \> command1.txt ; command2 | ssh localhost cat \> command2.txt'
Join them using && so you can have it like this:
ssh -i my_key user#ip "command1 > command1.txt && command2 > command2.txt && command3 > command3.txt"
Hope this helps
I was able to, here's exactly what I did:
ssh root@your_host "netstat -an;hostname;uname -a"
This performs the commands in order and prints their output to my screen perfectly.
Make sure you start and finish with the quotation marks, or else it'll run the first command remotely and then run the remainder of the commands on your local machine.
I have an RSA key pair set up for my server, so if you want to avoid the credential check you will obviously have to create such a pair.
I think this is what you need:
First you need to install sshpass on your machine.
Then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1 > file1
.
.
.
COMMAND n > file2
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
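A concrete single-host instance of that loop, where every value is a made-up placeholder (192.0.2.10 is a documentation-only address):
while read pass port user ip; do
# quoting the delimiter stops the heredoc from expanding locally
sshpass -p"$pass" ssh -p "$port" "$user@$ip" <<'ENDSSH1'
uptime > /tmp/out1
whoami > /tmp/out2
ENDSSH1
done <<____HERE
secret 22 ec2-user 192.0.2.10
____HERE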
How to run multiple commands on a remote server using a single ssh connection:
[root@nismaster ~]# ssh 192.168.122.169 "uname -a;hostname"
root@192.168.122.169's password:
Linux nisclient2 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
nisclient2
OR
[root@nismaster ~]# ssh 192.168.122.169 "uname -a && hostname"
root@192.168.122.169's password:
Linux nisclient2 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
nisclient2
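The difference between the two forms is general shell behavior rather than anything ssh-specific: with ; the second command always runs, while with && it runs only if the first one succeeds. For example:
ssh 192.168.122.169 "false; hostname"     # hostname still runs
ssh 192.168.122.169 "false && hostname"   # hostname is skipped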
