I'm writing a script whose purpose is to connect to a number of servers and create an account. The core is:
ssh user@ip
sudo su -
useradd -m -p 123 $1
if [ $? -eq 0 ]; then
    echo "$1 successfully created on ip."
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
I have established a private-public key relationship between the servers so that I can ssh without being prompted for a password. However, when I run the script it performs the ssh but then doesn't run the next commands on the target machine. Instead, after I manually exit from the target server, I see that those commands were executed (or rather, attempted) on the local machine.
So there should be no password prompt when running either the ssh or the sudo command.
ssh user@ip bash -c "'
sudo su -
useradd -m -p 123 $1
if [ $? -eq 0 ]; then
    echo "$1 successfully created on ip."
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
'"
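A variant of the same idea, as a hedged sketch (it assumes the remote user can run sudo on the target without a password prompt): feed the commands to a remote root shell through a here-document instead of sudo su -. Because the EOF delimiter is unquoted, $1 is expanded by the local script, which is what you want here.
ssh user@ip "sudo bash -s" <<EOF
useradd -m -p 123 "$1" && echo "$1 successfully created on ip."
chage -d 0 "$1"
chown -R "$1" "/home/$1"
EOF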
If you are planning to sudo, why don't you just ssh as root (root@ip)? Just do:
ssh root@ip 'command1; command2; command3'
In your case, if you want to be sure they are all successful before proceeding:
ssh root@ip 'USER=someUser; useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER'
EDIT:
If root access is not allowed, I would do the following:
Create the script with the commands you want to execute on the remote machine, for instance script.sh:
#!/bin/bash
USER=someUser
useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER
Copy the script to the remote machine:
scp script.sh user@ip:/destination/dir
Invoke it remotely:
ssh user@ip 'sudo /destination/dir/script.sh'
EDIT2:
Another option, without creating any files (the $USER references are escaped so they expand on the remote side, not locally):
ssh user@ip "sudo bash -c 'USER=someUser && useradd -m -p 123 \$USER && chage -d 0 \$USER && chown -R \$USER /home/\$USER'"
It won't work this way. You should do it like this:
ssh user@ip 'yourcommands ; listed ; etc.' or
copy the script you want to execute on the servers via scp /your/scriptname user@ip:/tmp/ and then execute it with ssh user@ip 'sh /tmp/yourscriptname'
But you are starting another shell when you start sudo.
Now you have (at least) two options:
ssh user@ip 'sudo -s -- "yourcommands ; listed ; etc."' or
copy the part after the sudo to a different script, then:
ssh user@ip 'sudo -s -- "sh differentscript"'
Related
I want to copy files from one server to another server, and I have more than one path for the files.
I want to enter the SSH username and password only once when I run the script.
And how can I repeat the script for more than one path/directory?
This is the script
#!/bin/bash
sudo apt-get install sshpass -y
read -p "enter ssh source server : " src_server
read -p "enter ssh username for $src_server : " src_ssh_user
echo
mkdir -p /directory/folder1/ 2>/dev/null
echo "syncing directory /directory/folder1/"
sudo rsync -av --rsh=ssh $src_ssh_user@$src_server:/directory/folder1/ /directory/folder1/
mkdir -p /directory/folder2/ 2>/dev/null
echo "syncing directory /directory/folder2/"
sudo rsync -av --rsh=ssh $src_ssh_user@$src_server:/directory/folder2/ /directory/folder2/
A way to achieve this is with a for loop
#!/bin/bash
PATHS=$1
sudo apt-get install sshpass -y
read -p "enter ssh source server : " src_server
read -p "enter ssh username for $src_server : " src_ssh_user
echo
for path in $PATHS
do
    mkdir -p "$path" 2>/dev/null
    echo "syncing directory $path"
    sudo rsync -av --rsh=ssh "$src_ssh_user@$src_server:$path" "$path"
done
And execute the script
./myscript "/directory/folder1/ /directory/folder2/"
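The script above installs sshpass but never uses it, so here is a hedged sketch (the prompt text and the src_ssh_pass variable are my own additions) of how the loop could reuse a password that is read only once:
read -s -p "enter ssh password for $src_ssh_user@$src_server : " src_ssh_pass
echo
for path in $PATHS
do
    mkdir -p "$path" 2>/dev/null
    echo "syncing directory $path"
    # sshpass feeds the stored password to the ssh that rsync starts
    sudo sshpass -p "$src_ssh_pass" rsync -av --rsh=ssh "$src_ssh_user@$src_server:$path" "$path"
done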
I want to execute cd and scp commands on a remote server, where I have to log in as a different sudo user. The code snippet below asks for the password for my user (echoed on screen) but then hangs there. It doesn't execute cd.
#!/bin/bash
server=myserver.com
ssh $server 'sudo -S -u <user> -i; cd dir1/dir2/; scp file1 user@local-server'
The issue is that you have a semicolon before cd, so sudo has no command to execute; it just starts an interactive login shell and hangs there. Remove the ; and it should work:
ssh $server 'sudo -S -u <user> -i scp dir1/dir2/file1 user@local-server'
There are several ways to address this, but most boil down to wrapping up the commands into a set of instructions. Raman's solution is good since it handles the issue by using full paths, but sometimes that isn't an option. Here's another take -
Assuming your command list can afford the quotes, I like here-strings.
ssh -t sa-nextgen-jenkins.eng.rr.com <<< "
echo 'set -x; cd /tmp; whoami; touch foo; ls -l foo; rm -f foo;'|sudo -iSu user
"
If you need the quotes, try a here-doc.
ssh -t sa-nextgen-jenkins.eng.rr.com <<END
echo 'set -x; echo "$RANDOM"; cd /tmp; whoami; touch foo; ls -l foo; rm -f foo;'|sudo -iSu $user
END
You can also write a small script that has arbitrarily complex commands and scp it over, then use a remote ssh call to execute it as the relevant user.
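For example, a minimal sketch of that last approach (the script name remote_task.sh and the account someUser are placeholders):
# copy the script over, then run it as the relevant user
scp remote_task.sh user@host:/tmp/remote_task.sh
# -t allocates a tty so sudo can prompt for a password if it needs one
ssh -t user@host 'sudo -u someUser bash /tmp/remote_task.sh'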
I need to check from a bash script (running with root privileges) whether another user in question can execute sudo via a dedicated 'username ALL=(ALL) NOPASSWD: ALL' entry in sudoers.
A simple command run as the user in question easily returns 1 or 0:
sudo -n uptime 2>&1|grep 'load'|wc -l
but it always comes back empty if I switch to that user within the script:
sudo -i -u username bash <<EOF
CAN_I_RUN_SUDO="$(sudo -n uptime 2>&1|grep 'load'|wc -l)"
echo "$CAN_I_RUN_SUDO"
whoami
EOF
Here is my full script:
sudo -i -u username bash <<EOF
whoami
CAN_I_RUN_SUDO="$(sudo -n uptime 2>&1|grep 'load'|wc -l)"
echo "$CAN_I_RUN_SUDO"
EOF
if [ ${CAN_I_RUN_SUDO} -gt 0 ]
then
echo "I can run the Sudo command. No need to change sudoers"
else
echo "I can't run the Sudo command. Added to Sudoers."
sh -c "echo \"username ALL=(ALL) NOPASSWD: ALL\" >> /etc/sudoers"
fi
However, $CAN_I_RUN_SUDO always comes back empty (rather than 0 or 1) when I run it as a script, :-( so the condition always fails.
I'm obviously missing something, but can't see it. Could you please help me?
(In your script the variable is assigned inside the shell started by the here-document, so it never exists in the parent script.) Instead of grepping for output, you may be able to just check the return value of sudo:
if sudo -i -u username sudo -n uptime 2>&1; then
echo "I can run the Sudo command. No need to change sudoers"
else
echo "I can't run the Sudo command. Added to Sudoers."
echo "username ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
fi
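If you still want the result in a variable (the original goal), here is a minimal sketch that keeps the assignment in the parent script rather than inside the here-document, so it is still set afterwards:
if sudo -i -u username sudo -n true 2>/dev/null; then
    CAN_I_RUN_SUDO=1
else
    CAN_I_RUN_SUDO=0
fi
echo "CAN_I_RUN_SUDO=$CAN_I_RUN_SUDO"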
If I may, my unrelated security advice would be to avoid automatically adding users to the sudoers file.
I would like to know how I can run shell commands on a remote machine.
I tried this:
ssh prdcrm1@${server} "grep -l 'Something' *"
It is working, but I want to run more commands.
Does someone have an idea?
You can run multiple commands on a remote machine like this.
Run date and hostname commands:
$ ssh user@host "date && hostname"
Run a script called /scripts/backup.sh
ssh user@host '/scripts/backup.sh'
Run a sudo or su command using the following syntax:
ssh user@host su --session-command="/sbin/service httpd restart" ## su syntax ##
ssh -t user@host 'sudo command1 arg1 arg2' ## sudo syntax ##
Multi-line command with variable expansion:
VAR1="Variable 1"
ssh $HOST bash -c "'
ls
pwd
if true; then
    echo $VAR1
else
    echo "False"
fi
'"
Hope this helps you.
I'm using reverse ssh to connect to a remote client. The operator runs the reverse connection once and then leaves the client system.
How can I write a bash script so that, when the reverse ssh is disconnected from the server, it retries the connection to the server (ssh)?
Use autossh. Autossh "automatically restart[s] SSH sessions and tunnels"
sudo apt-get install autossh
I use autossh to keep open a reverse tunnel that I depend on. It works very well, even with long periods of lost connection.
Here is the script I use to create the tunnel:
#!/bin/bash
AUTOSSH_GATETIME=0
export AUTOSSH_GATETIME
autossh -f -N -R 8022:localhost:22 username@host -o "ServerAliveInterval 45" -o "ServerAliveCountMax 2"
I execute this script at boot with this cronjob:
@reboot /home/scripts/./persistent-tunnel.sh
If you simply want to retry a command until it succeeds, you can use this pattern:
while ! ssh [...]
do
echo "Command failed, retrying..." >&2
done
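A small variation on the same pattern (just a sketch; the tunnel options here are placeholders): adding a pause between attempts avoids hammering the server while the network is down.
while ! ssh -N -R 8022:localhost:22 username@host
do
    echo "Connection lost, retrying in 10 seconds..." >&2
    sleep 10
done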
I have a slightly different method.
My method always tries to reconnect you if you have a dirty disconnection ('~.' or 'Connection closed by remote host.').
But if you disconnect with 'CTRL+D' or with 'exit' it just disconnects and shows you some info about the connections.
#!/bin/bash
if [ -z "$1" ]
then
echo '''
Please also provide ssh connection details.
'''
exit 1
fi
retries=0
repeat=true
today=$(date)
while "$repeat"
do
((retries+=1)) &&
echo "Try number $retries..." &&
today=$(date) &&
ssh "$@" &&
repeat=false
sleep 5
done
echo """
Disconnected sshx after a successful login.
Total number of tries = $retries
Connected at:
$today
"""
You might want to take a look at the ssh options ServerAliveInterval, ServerAliveCountMax and TCPKeepAlive, because sometimes your line dies without this being obvious. Let me demonstrate:
#!/bin/sh
while true; do
ssh -T user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
pkill -f "^sshd:\ user\ \ \ \ $" # needs to be edited for nearly every case
sleep 2
ssh -T -N user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
-o Batchmode=yes \
-o ExitOnForwardFailure=yes \
-o ServerAliveCountMax=1 \
-o ServerAliveInterval=60 \
-o LocalForward=127.0.0.1:2501=127.0.0.1:25 \
-o RemoteForward=127.0.0.1:2501=127.0.0.1:25
sleep 60
done
You can use netstat -ntp | grep ":22" or ss -ntp | grep ":22" to see established connections to the ssh port, then use grep to filter for the IP address you're looking for. If you don't find a connection, reconnect the tunnel.
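As a rough sketch of that check (the address 203.0.113.10 and port 8022 are placeholders): if no established connection to the server's ssh port is found, bring the tunnel back up.
if ! ss -ntp 2>/dev/null | grep -q "203.0.113.10:22"; then
    ssh -fN -R 8022:localhost:22 username@203.0.113.10
fi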
Use autossh if it works on your version of Linux. It did not on mine as it was an outdated Linux distribution for a custom NAS box.
The alternative is a simple bash script in crontab like this:
maintain_reverse_ssh_tunnel.sh
if ! netstat -planet |grep myserver_ip_or_name |grep ESTABLISHED > /dev/null; then
echo "REVERSE SSH DOWN - Restarting the tunnels"
ssh -fN -R 32999:localhost:22 -R 28080:localhost:80 myusername@myserver_ip_or_name
fi
Replace myusername and myserver_ip_or_name with those of your user and server.
Then add an entry to crontab by typing crontab -e and adding the following line:
1 * * * * /path_to_my_script/maintain_reverse_ssh_tunnel.sh
Make sure the script has execute permissions:
chmod 755 maintain_reverse_ssh_tunnel.sh