I have a bit of a chicken-and-egg problem here.
The Problem
I would like to SSH to a remote machine and forward my local gpg-agent. The problem is that the gpg-agent on the remote machine only starts once the SSH connection is established. While the gpg-agent is NOT running on the remote machine, I cannot connect to the remote machine via SSH while specifying the forwarding.
In light of this, the following command does not work, since the remote gpg-agent is not running and therefore the socket /run/user/1000/gnupg/S.gpg-agent does not exist yet.
ssh -R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent user@remotemachine
The Workaround
The alternative is to do it interactively/manually, as shown below.
ssh user@remotemachine
I am now connected via SSH and as a side effect, the gpg-agent got automatically started too.
When I now update the existing SSH connection by opening the SSH command line via the escape sequence:
[enter]
~C
ssh> -R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent
Forwarding port.
[enter]
I can now run my GPG commands on the remote machine using my local gpg-agent.
The Aim
I would like to have the above workaround automated. Basically, I want to SSH to the remote machine with ssh user@remotemachine, and the remote machine will then automatically add the SSH forwarding to the existing SSH connection.
The Question
How can I make the remote machine automatically update the newly established SSH connection and add the gpg-agent forwarding?
Since you used the expect tag, here is a simple expect script that should suffice.
#!/usr/bin/expect -f
spawn ssh user@remotemachine
expect {$ }
# run a command first so the next "~" is typed at the start of a line,
# which is required for ssh to recognize the escape sequence
send "date\r"
expect {$ }
send "~C"
expect "ssh> "
send -- "-R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent\r"
# the "Forwarding port." message may not appear, so just wait for a newline
expect -timeout 5 timeout exit "\r\n"
# confirm the shell is responsive again before handing over control
send "echo ok\r"
expect "\nok\r"
expect {$ }
interact
Put this in a file and run chmod +x on it. For some reason, I found I didn't get the Forwarding port. message from the -R command unless I enabled ssh -v, so I just check for \r\n, but you can change this. Also, I assumed the shell prompt from the remote machine is $, so you might need to change that too; but if your prompt contains characters that are special in expect patterns, like the brackets in [enter], you need to say
expect -ex {[enter]}
to avoid the string being interpreted as a pattern.
Related
What I am trying to do is to log in to a remote Linux server over SSH from my Windows machine by running a shell script via Git Bash.
I would like to write a script which will be used by a user with basic IT knowledge. This script will execute a bunch of commands on the remote machine, so it does need to establish an SSH connection.
What I have tried to write in this script so far is:
ssh username@ip <password>
EDIT: You should consider installing Jenkins on the remote system.
Running a command on a remote system can be done via:
ssh user@host command arg1 arg2
Note that ssh does not accept a password as a command-line argument. If you omit the password, a password prompt will appear; to get around the password prompt, you should consider setting up passwordless SSH login. https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
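A minimal sketch of that key-based setup, assuming OpenSSH's standard tools (the key type and host are placeholders):
# generate a key pair once on the local machine (accept the defaults)
ssh-keygen -t ed25519
# install the public key on the remote machine
ssh-copy-id username@ip
# subsequent logins should no longer prompt for a password
ssh username@ip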
Rephrasing what you said, you want to write a script (namely script1.sh) which does the following (in this order):
Start an SSH connection with a remote host
Execute some commands, possibly written in another script (namely script2.sh)
Close the connection / Keep it open for additional command line commands
If you want to put the remote commands in script2.sh instead of listing them in script1.sh, you need to consider that script2.sh must be on the remote server. If it is not, you may have script1.sh create or copy script2.sh onto the remote machine, for example into a temporary folder.
In this case, my suggestion is to write script1.sh as follows (a combined sketch follows the steps):
Copy script2.sh to the remote machine with
scp /path/to/local/script2.sh user@host:/path/to/remote/script2.sh
Switch from the local bash shell to the remote shell with
ssh user@host
Run script2.sh with
sh /path/to/remote/script2.sh
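Putting those steps together, a minimal sketch of script1.sh might look like this (all paths and user@host are placeholders):
#!/bin/bash
# copy the worker script to a temporary folder on the remote machine
scp /path/to/local/script2.sh user@host:/tmp/script2.sh
# execute it remotely; the connection closes when the script finishes
ssh user@host sh /tmp/script2.sh
Replacing the last line with ssh -t user@host 'sh /tmp/script2.sh; exec $SHELL' would instead leave the user in an interactive remote shell afterwards.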
Instead, if you prefer to list everything in just one script file, you may want to write the following (see the sketch after this list):
Switch from the local bash shell to the remote shell with
ssh user@host
Run the commands
echo "This command has been executed on the remote server"
echo "This command has also been executed on the remote server"
...
Possibly close the SSH connection to prevent the user from executing additional commands
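A minimal sketch of this single-file variant, using a heredoc so the listed commands run on the remote machine:
#!/bin/bash
# everything between the EOF markers is executed on the remote machine
ssh user@host << 'EOF'
echo "This command has been executed on the remote server"
echo "This command has also been executed on the remote server"
EOF
# the SSH connection is closed automatically when the heredoc ends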
You may consider copying the SSH keys to the remote server so as to avoid password prompts. More information here.
Essentially what I want to do is run a Bash script I created that switches WiFi SSIDs before starting the SSH connection, and after the SSH connection closes.
I have added this to ~/.ssh/config by setting ProxyCommand to ./run-script; ssh %h:%p, but by doing this I feel like it would ignore any parameters I pass when I run the ssh command. Also, I have no idea how to get the script to run again when the SSH connection closes.
For OpenSSH you can specify a LocalCommand in your ssh config (~/.ssh/config).
But for that to work you also need the system-wide option PermitLocalCommand (in /etc/ssh/ssh_config) set to yes. (By default it is set to no.)
It gets executed on the local machine after authenticating but before the remote shell is started.
There appears to be no (easy) way of executing something after the connection has been closed, though.
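A minimal sketch of that configuration (the host name and script path are placeholders):
# in /etc/ssh/ssh_config (system-wide):
PermitLocalCommand yes
# in ~/.ssh/config:
Host remotemachine
    LocalCommand /path/to/run-script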
Assuming that it is not possible to implement a wrapper around ssh (using an alias, or some other method), it is possible to implement the following in the ProxyCommand.
It is important to note that there is no protection against multiple invocations of ssh: during a specific invocation the WiFi may already be connected, and when a specific ssh session is terminated, the WiFi may have to stay active because of other pending conditions.
A possible implementation of the proxy script is:
ProxyCommand /path/to/run-script %h %p
#! /bin/sh
# run-script: invoked by ssh as the ProxyCommand
pre-command         # connect to WiFi
nc -N "$1" "$2"     # tunnel; '%h' and '%p' are passed in as $1 and $2
post-command        # disconnect WiFi
You do not want to use plain ssh inside the proxy script, as that would translate into another call to run-script. Also note that all options provided to the original ssh invocation will be handled by that initial ssh session, which simply leverages the nc tunnel set up by the proxy.
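As a concrete sketch of run-script, assuming (hypothetically) that NetworkManager's nmcli manages the WiFi; the connection profile names are placeholders:
#! /bin/sh
# bring up the WiFi profile needed to reach the host
nmcli connection up "ssh-wifi"
# forward traffic between ssh and the host; %h and %p arrive as $1 and $2
nc -N "$1" "$2"
# switch back to the previous WiFi profile once the session ends
nmcli connection up "default-wifi"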
I am working on a script which will be used to transfer a file (using rsync) from a remote location and then perform some basic operations on the retrieved content.
When I initially connect to the remote location (not running an rsync daemon, I'm just using rsync to retrieve the files) I am placed in a non-standard shell. In order to enter the bash shell I need to enter "run util bash". Is there a way to execute "run util bash" before rsync begins to transfer the files over?
I am open to other suggestions if there is a way to do this using scp/ftp instead of rsync.
One way is to execute rsync from the server instead of from the client. An SSH reverse tunnel allows us to temporarily access the local machine from the remote server.
Assume the local machine has an ssh server on port 22
SSH into the remote host while specifying a reverse tunnel that maps a port on the remote machine (in this example, 2222) to port 22 on our local machine
Execute your rsync command, replacing any reference to your local machine with the reverse SSH tunnel address: my-local-user@localhost
Add a port option to rsync's ssh invocation to have it use port 2222.
The command:
ssh -R 2222:localhost:22 remoteuser@remotemachine << EOF
# we are on the remote server.
# we can ssh back into the box running the ssh client via port 2222
run util bash
rsync -e "ssh -p 2222" --args /path/to/remote/source my-local-user@localhost:/path/to/local/machine/dest
EOF
Reference to pass complicated commands to ssh:
What is the cleanest way to ssh and run multiple commands in Bash?
You can achieve it using --rsync-path as well. E.g.:
rsync --rsync-path="run util bash && rsync" -e "ssh -T -c aes128-ctr -o Compression=no -x" ./tmp root@example.com:~
--rsync-path is normally used to specify what program is to be run on the remote machine to start up rsync. It is often used when rsync is not in the default remote shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in and standard-out that rsync is using to communicate.
For more details, refer to the rsync man page.
I am new to the world of scripting. I am having a problem executing a local shell script on a remote server using an expect script.
My script is the following:
VAR=$(/home/local/RD/expect5.45/expect -c "
spawn -noecho ssh -q -o StrictHostKeyChecking=no $USER@$HOST $CMD
match_max 100000
expect \"*?assword:*\"
send -- \"$PASS\r\"
send -- \"\r\"
send \"exit\n\r\"
expect eof
")
It works fine if CMD consists of basic commands like df -kh;top.
But I need to collect several stats on the remote server, for which I have created a shell script.
I have tried the following with no luck:
spawn -noecho ssh -q -o StrictHostKeyChecking=no $USER@$HOST 'bash -s' < localscript.sh
It is not able to pick up and execute the local script on the remote server.
Please help to resolve this issue.
The last time I tried something like this, I quickly grew weary of using expect(1) to try to respond to the password prompts correctly. When I finally spent the ten minutes to learn how to create an ssh key, copy the key to the remote system, and set up the ssh-agent to make key-based logins easier to automate, I never had trouble running scripts remotely:
ssh remotehost "commands ; go ; here"
First, check if you need to create the key or if you already have one:
ls -l ~/.ssh/id_*
If there are no files listed, then run:
ssh-keygen
and answer the prompts.
Once your key is generated, copy it to the remote system:
ssh-copy-id remote
Most modern systems run ssh-agent(1) as part of the desktop start up; to determine if you've got the agent started already, run:
ssh-add -l
If you see "The agent has no identities.", then you're good to go. If you see "Could not open a connection to your authentication agent." then you'll have to do some research about the best place to insert ssh-agent(1) into your environment. Or forgo the agent completely; it is just a nice convenience.
Add your key, perhaps with a timeout so it is only valid for a short while:
ssh-add -t 3600
Now test it:
ssh remote "df -hk ; ps auxw ; ip route show ; free -m"
expect(1) is definitely a neat tool, but authentication on remote systems is more easily (and more safely) accomplished with SSH keys.
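With keys set up, the original goal of running a local script on the remote server should also work without expect; a minimal sketch, where localscript.sh is the script from the question:
# the redirection happens locally, so the local script is fed to a remote bash
ssh -q remotehost 'bash -s' < localscript.sh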
When you're using pscp to send files to a single machine, it is not a big deal, because you will get the RSA fingerprint prompt once and never again after that. But if you want to connect to 200 machines, you definitely don't want to type "yes" 200 times...
I'm using pscp on a Windows machine and I really don't care about the fingerprint, I only want to accept it. I'm using Amazon EC2 and the fingerprint changes every time I restart the machines...
If there is a way to avoid it using pscp or a different tool please let me know!!!
Thanks!
See Putty won't cache the keys to access a server when run script in hudson
On Windows you can prefix your command with echo y |, which will blindly accept any host key every time. However, a more secure solution is to run the command interactively the first time, or to generate a .reg file that can be run on any client machine.
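For example, a sketch (the key file, source file, and host are placeholders):
echo y | pscp -i mykey.ppk somefile.txt user@host:/tmp/
Bear in mind that blindly accepting host keys removes the protection against man-in-the-middle attacks discussed in the answers below.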
I do not completely agree with the last answer. The first time you accept an SSH key, you know nothing about the remote host, so automatically accepting it makes no difference.
What I would do is auto-accept the key the first time you connect to a host. I've read that doing something like yes yes | ssh user@host works, but it doesn't, because SSH does not read from stdin but from a terminal.
What does work is to pass, the first time you connect, the following ssh option (it works for both scp and ssh):
scp -oStrictHostKeyChecking=no user@host1:file1 user@host2:file2
This command will add the key the first time you run it, but, as Eric says, doing this once you have already accepted the key is dangerous (a man in the middle is uncool). If I were you, I'd add it to a script that checks ~/.ssh/known_hosts for an existing line for that host, in which case I wouldn't add the option; if there is no line, I would ;).
If you are dealing with an encrypted version of known_hosts, try with
ssh-keygen -F hostname
Here's something I'm actually using (a function receiving the following arguments: user, host, source_file):
deployToServer() {
  echo "Deploying to $1@$2 from $3"
  # accept the host key only if the host appears in neither the plain nor the hashed known_hosts
  if [ -z "`cat ~/.ssh/known_hosts | grep $2`" ] && [ -z "`ssh-keygen -F $2`" ]
  then
    echo 'Auto accepting SSH key'
    scp -oStrictHostKeyChecking=no $3* $1@$2:.
  else
    scp $3* $1@$2:.
  fi
}
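Hypothetical usage, with placeholder arguments:
deployToServer deployuser host1.example.com /path/to/build/artifacts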
Hope this helped ;)
The host ssh key fingerprint should not change if you simply reboot or stop/start an instance. If it does, then the instance/AMI is not configured correctly or something else (malicious?) is going on.
Good EC2 AMIs are set up to create a random host ssh key on first boot. Most popular AMIs will output the fingerprint to the console output. For security, you should be requesting the instance console output through the EC2 API (command line tool or console) and comparing that to the fingerprint in the ssh prompt.
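For example, with the AWS CLI (the instance ID is a placeholder), the console output containing the fingerprint can be fetched like this:
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text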
By saying you "don't care about the fingerprint" you are saying that you don't care about encrypting the traffic between yourself and the instance and it's ok for anybody in between you and the instance to see that communication. It may even be possible for a man-in-the-middle to take over the ssh session and gain access to control your instance.
With ssh on Linux you can turn off the ssh fingerprint check with a command line or config file option. I hesitate to publish how to do this as it is not recommended and seriously reduces the safety of your connections.
A better option is to have your instances set up their own host ssh key to a secret value that you know. You can save the public side of the host ssh key in your known hosts file. This way your traffic is encrypted and safe, and you don't have to continually answer the prompt about the fingerprints when connecting to your own machine.
I created an expect file with the following commands in it:
spawn ssh -i ec2Key.pem ubuntu@ec2IpAddress
expect "Are you sure you want to continue connecting (yes/no)?" { send "yes\n" }
interact
I was able to ssh into the EC2 instance without disabling the RSA fingerprint check; the host was added to my known hosts.
I hope it helps.