Shell Script to run on Local and Remote machine - bash

I am new to shell scripting.
I am trying to write a script that will run on my local machine.
A few of its commands are to run on my local machine, and then a few on the remote server.
Below is a sample script -
The first two commands will run on my local system;
the rest of them are to run on the remote server.
e.g. -
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp
ssh -i permissions.pem ubuntu@ip
sudo su
service someService stop
rm -rf /home/ubuntu/someJar.jar
rm -rf /home/ubuntu/loggingFile.log
mv /var/tmp/someJar.jar .
service someService start
As the script will run on my local machine,
how do I make sure the third and further commands take effect on the remote server and not on my machine?
Here's my sample.sh file -
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp
SCRIPT="sudo su; ps aux | grep java; service someService stop; ps aux | grep java; service someService start; ps aux | grep java;"
ssh -i permissions.pem ubuntu@ip $SCRIPT
The scp is working, but nothing is displayed after that.

You need to pass the rest of the script as a parameter to SSH. Try this format:
SCRIPT="sudo su; command1; command2; command3;"
ssh -i permissions.pem ubuntu@ip "$SCRIPT"
See: http://linuxcommand.org/man_pages/ssh1.html
Hope this helps.
Update:
The reason you don't see anything after running the command is that sudo expects a password. To avoid this there are three solutions:
Give ubuntu user needed permissions to perform all the tasks in the script.
Pass the password to sudo on stdin inside SCRIPT: echo 'password' | sudo -S su; ... (the -S flag makes sudo read the password from standard input instead of the terminal).
Modify the sudoers file and allow the ubuntu user to sudo without being prompted for a password. Run sudo visudo and add the following line: ubuntu ALL = NOPASSWD : ALL
Each system admin will have a different approach to this problem. I think everyone will agree that option 2 is the least secure. The rest is up for debate. In my opinion option 3 is slightly more secure, yet the entire server is compromised if your key is compromised. The most secure is option 1. While it is painful to assign individual permissions, by doing so you are limiting your exposure to the assigned permissions if your key is compromised.
One more note: It might be beneficial to replace ; with && in the SCRIPT. By doing so you will ensure that the second command runs only if the first one finished successfully, the third only if the second finished successfully, and so on.
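Putting the pieces together, here is a minimal sketch of sample.sh, assuming option 3 (passwordless sudo for the ubuntu user) is in place. The /home/ubuntu destination for the jar is an assumption based on the paths in the question, and sudo su is replaced by prefixing each remote command with sudo:
#!/bin/bash
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp

# Chain with && so each step runs only if the previous one succeeded.
SCRIPT="sudo service someService stop && \
sudo rm -f /home/ubuntu/someJar.jar /home/ubuntu/loggingFile.log && \
sudo mv /var/tmp/someJar.jar /home/ubuntu/someJar.jar && \
sudo service someService start"

ssh -i permissions.pem ubuntu@ip "$SCRIPT"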

Related

SSH sudo inside script different behaviour

I'm trying to set up some automation inside the local network and started working with some shell scripting, and I noticed some very strange behaviour: SSH acts differently inside a script depending on how the script is run (with or without sudo):
What I have:
ComputerA and ComputerB.
Inside ComputerA:
A shell script script.sh:
cp /dir1/file1 /dir2/file2
ssh username@ComputerB "sudo reboot"
An /etc/ssh/ssh_config file with some configuration to work without host-key checking (the host keys always change on ComputerB):
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
GlobalKnownHostsFile=/dev/null
Inside ComputerB:
In /etc/sudoers file:
username ALL=(ALL:ALL) NOPASSWD:ALL
When I connect through SSH to ComputerA and run script.sh without sudo, I get a permission error writing to /dir2 (that's OK) and the next command executes normally on ComputerB (it reboots). But when I run sudo script.sh, it copies the file and then something strange happens: SSH asks me for the username's password. I tried different variants of the ssh command, something like:
ssh -t username@ComputerB "echo user_pass | sudo -S reboot"
but nothing helped.
So I need help figuring out what is happening, and what to do so that sudo script.sh executes without asking for a password for the ssh command inside it.
Thanks!
Don't run script.sh with sudo on computerA; instead modify the script like so:
sudo cp /dir1/file1 /dir2/file2
ssh username@ComputerB "sudo reboot"
The reason that you're seeing the strange behaviour is that you're actually becoming root on computerA (I assume you have a keypair set up for your regular user and expect to connect to computerB passwordless?), and root on computerA doesn't have a keypair that computerB knows about.
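If the local copy really does need sudo, another option (a sketch; the key path is an assumption about where your regular user's keypair lives) is to keep running the script with sudo but point ssh at that user's key explicitly:
#!/bin/bash
cp /dir1/file1 /dir2/file2
# Under sudo, ssh would look in root's ~/.ssh by default;
# -i forces it to use the regular user's key instead.
ssh -i /home/username/.ssh/id_rsa username@ComputerB "sudo reboot"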

Any security risks with sudo password from standard input over ssh?

I want to execute sudo over ssh on remote servers and supply the password over standard input. The following code is in a shell script on a secured server with restricted access. The password is asked for beforehand and the servers all use the same sudo password. The someaction command can take several seconds to execute.
Here is the shell script extract:
read -s -p "please enter your sudo password" PASSWORD
ssh user@host1 -t "echo '$PASSWORD' | sudo -S someaction"
ssh user@host2 -t "echo '$PASSWORD' | sudo -S someaction"
My question: Is it safe to use echo with a pipe? And are there any security problems that might occur, like the echoed password being logged on the remote server, etc.?
Maybe somebody has a better suggestion?
Note: I know other tools can do this, like ansible etc. I am not looking for another similar tool, just want to know whether using ssh/echo/sudo in the mentioned way is safe.
Yes!
As long as the command is running, anybody who can view all processes can see that password by running ps aux | grep echo:
root [..] zsh -c echo topsecret | sudo -S action
You could configure sudo not to ask for the password for a specific task for a user; that would certainly increase security over this solution.
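A sketch of that configuration (the /usr/local/bin/someaction path is hypothetical): add a per-command NOPASSWD rule on each host with sudo visudo, after which no password needs to be echoed over the connection at all:
# /etc/sudoers entry on host1 and host2 (edit with: sudo visudo):
# user ALL=(root) NOPASSWD: /usr/local/bin/someaction
ssh user@host1 -t "sudo /usr/local/bin/someaction"
ssh user@host2 -t "sudo /usr/local/bin/someaction"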

Running ssh sudo asynchronously

I'm trying to run a command with sudo on a remote machine. When I do it directly with
ssh -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
it works fine, but if I add & at the end of the last line then it doesn't work. Why? How can I make it work?
In fact I'm running several such commands (to different servers) from a local script, saving each output in a different file, and would like them to run asynchronously.
Note: running ssh with otheruser#myserver is not an option. I really need to run sudo after I logged in.
Remove requiretty from sudo config (/etc/sudoers) on the remote machine.
Also add the -f option to ssh, which puts the command in the background (man: "must be used when ssh is run in the background").
The "&" should not be needed when using -f.
E.g:
ssh -f -t -t -t myserver -q "sudo otheruser<<EOF
remotescript.sh
EOF"
Use expect to control your ssh. It can be used to give automated responses to the remote shell. Most processes, when run asynchronously, become suspended when they try to read input from the terminal while another foreground process (the main shell) is using it.
There's a recent post about ssh and expect here: https://superuser.com/questions/509545/how-to-run-a-local-script-in-remote-server-using-expect-and-bash-script
Also try to disown your process after placing it in the background, to remove it from job control, e.g.:
whole_command &
disown
Redirecting its input to /dev/null might also help, but the command could hang forever if it really needs input from the user.
whole_command <&- < /dev/null &
disown

Why does my rpm installation hang when run remotely?

I have an AIX 6.1 server where I want to uninstall an rpm.
This uninstallation can be done directly on the server:
[user@server]$ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
This uninstallation is working.
I have a script launching this uninstallation:
Uninstall.sh
#!/usr/bin/bash
set -x
sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
I can run this script on the server without any problem:
[user@server]$ cd /where/is/the/script;./Uninstall.sh
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
_MyRPM-1.0.0 has been uninstalled successfully
But when I run this script remotely, the rpm command hangs:
[user@client]$ ssh user@server "cd /where/is/the/script;./Uninstall.sh"
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
This command hangs, and I need to kill it in order to end the ssh session.
PS: I have exactly the same behaviour for installation and uninstallation.
EDIT :
The problem seems to come from sudo. The hang also appears when I do anything with sudo.
For example, with a new script:
test.sh
#!/usr/bin/bash
set -x
sudo env
Sudo normally requires that a user authenticate as themselves, and if I recall correctly it can act differently under remote execution due to the way the terminal is handled.
I don't have a system to test this on at the moment, but you could try ssh's -t or -T switches:
-T Disable pseudo-tty allocation.
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
I suspect you could get this to work by adding the script you're remotely executing into /etc/sudoers:
{user} ALL=NOPASSWD:/where/is/the/script/Uninstall.sh
Then try:
"ssh -t user#server /where/is/the/script/Uninstall.sh"
EDIT:
Found some details to help explain why sudo is behaving differently when executed remotely:
http://www.sudo.ws/sudoers.man.html
The sudoers security policy requires that most users authenticate themselves before they can use sudo. A password is not required if the invoking user is root, if the target user is the same as the invoking user, or if the policy has disabled authentication for the user or command.
Perhaps it's hanging because it's trying to authenticate, whereas locally it wouldn't need to do so.
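A quick way to test that hypothesis, reusing the test.sh example from the question: force a pseudo-tty with -t, so that if sudo is silently waiting for a password you will see its prompt instead of a hang:
ssh -t user@server "cd /where/is/the/script && ./test.sh"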

Running interactive Bash commands over ssh

I am trying to automate my server provisioning process using Chef. Since I don't want to run Chef as root, I need a chef/deployer user. But I don't want to create this user manually. Instead, I want to automate this step. So I took a shot at scripting it but ran into an issue:
The problem is that if I run
>ssh root@$123.345.345.567 '/bin/bash -e' < ./add_user.sh
where add_user contains
# ...if the username doesn't exist already
adduser $USERNAME --gecos ''
I never see the output or the prompts of the command.
Is there a way to run interactive commands in this way?
Is there a better way to add users in an automated fashion?
Try this:
ssh -t root@<ipaddress> adduser $USERNAME --gecos ''
Not sure why you have a $ in the IP address in your original example - that's likely to cause ssh to fail to connect, but since you didn't indicate that sort of failure, I'm assuming that's just a typo.
Since add_user.sh is just a simple command, there's no need for the added complexity of explicitly running bash or the redirection, just run the adduser command via ssh.
And lastly, since $USERNAME is likely defined on the local end, and not on the remote end, even if you could get your original command to "do what you said", you'd end up running adduser --gecos on the remote end, which isn't what you intended.
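One way to keep the script-on-stdin approach while still using the local $USERNAME (a sketch; the username value and the existence check are illustrative) is an unquoted heredoc, which expands variables on the local side before the text is sent to the remote bash:
#!/bin/bash
USERNAME=deployer   # the chef/deployer user to create

# The unquoted EOF delimiter means $USERNAME expands locally,
# so the remote shell receives the literal username.
ssh root@123.345.345.567 '/bin/bash -e' <<EOF
id "$USERNAME" >/dev/null 2>&1 || adduser "$USERNAME" --gecos ''
EOF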
Try using:
ssh -t root@$123.345.345.567 '/bin/bash -e' < ./add_user.sh
instead.
