SSH: Run command through sub server - shell

My goal is to send a command to an old server that can only be
reached by going through the new server.
I want to automate this as much as possible: ideally I just run a
script and it does the work for me, so I don't have to type anything by hand.
Meaning I would currently have to do the following:
ssh user@newserver
and then
ssh user@oldserver
Once I reach the old server I need to be able to run
curl icanhazip.com
and
cat /var/spool/cron/user
So far I was only able to do the following:
ssh -t -t root@newserver "ssh root@oldserver"
That only gets me onto the old server; I still have to type the other commands manually.
Ideally I would want to be able to run something like this:
ssh -t -t root@newserver 'ssh root@oldserver "cat /var/spool/cron/user"'

ssh -t -t root@newserver 'ssh root@oldserver "cat /var/spool/cron/user"'
This actually worked. Not sure why it didn't before.
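For repeated use, this can go in a small wrapper script. A minimal sketch using the hosts and commands from the question; on OpenSSH 7.3 or newer, the -J (ProxyJump) option shown in the comment avoids the nested quoting entirely:
#!/bin/sh
# Hop through newserver to oldserver and run both commands in one go.
ssh -t -t root@newserver 'ssh root@oldserver "curl icanhazip.com; cat /var/spool/cron/user"'
# With OpenSSH 7.3+ the same can be written without nesting:
# ssh -J root@newserver root@oldserver 'curl icanhazip.com; cat /var/spool/cron/user'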

Related

SSH from Local to A, A to B and run multiple commands on B

I'm currently using the line of script below to SSH from my local machine to a server (let's call it ip-address1), and from that machine I want to SSH to another machine (let's call it ip-address2). The script I use is as follows:
sshpass -p mypassword ssh -tt user@ip-address1 ssh -tt -i /root/.ssh/vm_private_key user@ip-address2 "pwd; ls;"
The problem is that only the first command (pwd) executes on ip-address2; then the connection closes and the ls command executes on ip-address1 before it closes too. I want both commands to execute on ip-address2. The output in my terminal looks something like the following:
/home/user (pwd command executing here)
Connection to ip-address2 closed.
//files then get outputted here (ls command executes after ip-address2 has closed)
Connection to ip-address1 closed.
I think there may be something wrong with my quoting, but I can't figure out what. Please help.
Thanks.
I don't have any way to test this, but try the following:
sshpass -p mypassword ssh -tt user@ip-address1 \
"ssh -tt -i /root/.ssh/vm_private_key user@ip-address2 'pwd; ls;'"
You definitely need to quote the entire command you want to run on ip-address1, including the command you'll pass on to ip-address2.
Edit
I'm in an environment where I have multiple machines to test; the following command works for me:
ssh snewell@<host sanitized> \
"ssh <host2 sanitized> 'hostname; ls -a <path sanitized>;'"
hostname definitely displays the hostname of the final server (host2), and ls lists a directory that the first host doesn't have.
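If the inner command needs quotes of its own (a path with spaces, say), escaping them keeps each layer intact. A hedged sketch; the "/some dir" path is purely illustrative:
# The local shell strips the outer double quotes, ip-address1's shell strips
# the single quotes, and the escaped \" pairs survive for ip-address2's shell.
sshpass -p mypassword ssh -tt user@ip-address1 \
"ssh -tt -i /root/.ssh/vm_private_key user@ip-address2 'cd \"/some dir\" && ls -la'"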

SSH command in a bash script isn't run when the script is executed by an application

I'm using pgpool, which executes a failover script when a database node goes down. The script needs to touch a certain file on the new master and make some changes on the old master. It works fine when I run it myself, but it doesn't when it's run by the application. I know the script is being executed, as it sends me an email with the host details. Keys are set up so passwords aren't required.
The script is as follows:
#! /bin/sh
OLD_HOST=$1
NEW_HOST=$2
# new host: touch trigger file
/usr/bin/ssh -T root@$NEW_HOST /bin/touch /mirror/pg_trigger/trigger
# old host: remove trigger file
/usr/bin/ssh -T root@$OLD_HOST /bin/rm /mirror/pg_trigger/trigger -f
# old host: rename recovery.done to recovery.conf
/usr/bin/ssh -T root@$OLD_HOST /bin/mv /opt/postgres/9.1/data/recovery.done /opt/postgres/9.1/data/recovery.conf -f
It doesn't even work if the old/new host is the local machine. I have a feeling this has to do with it being run via the pgpool user, but I'm really not sure. Any ideas?
When you run it manually, do you run it as the pgpool user? SSH keys are per user, so if you are running as a different account, you will get different results.
You could also try the -i <keypath> flag with SSH to explicitly pass the path to your key.
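A minimal sketch of the failover script with explicit key paths; the /var/lib/pgsql/.ssh/id_rsa location is only an assumption, so substitute whatever key the pgpool user can actually read:
#! /bin/sh
OLD_HOST=$1
NEW_HOST=$2
# Assumed key path; point this at a key readable by the pgpool user.
KEY=/var/lib/pgsql/.ssh/id_rsa
# new host: touch trigger file
/usr/bin/ssh -T -i "$KEY" root@$NEW_HOST /bin/touch /mirror/pg_trigger/trigger
# old host: remove trigger file
/usr/bin/ssh -T -i "$KEY" root@$OLD_HOST /bin/rm -f /mirror/pg_trigger/trigger
# old host: rename recovery.done to recovery.conf
/usr/bin/ssh -T -i "$KEY" root@$OLD_HOST /bin/mv -f /opt/postgres/9.1/data/recovery.done /opt/postgres/9.1/data/recovery.conf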

Execution of a local shell script on a remote server (without copying it to the remote server) using expect scripting

I am new to the world of scripting. I am having a problem executing a local shell script on a remote server using an expect script.
My script is the following:
VAR=$(/home/local/RD/expect5.45/expect -c "
spawn -noecho ssh -q -o StrictHostKeyChecking=no $USER@$HOST $CMD
match_max 100000
expect \"*?assword:*\"
send -- \"$PASS\r\"
send -- \"\r\"
send \"exit\n\r\"
expect eof
")
It works fine if CMD is a set of basic commands like df -kh;top.
But I need to collect several stats on the remote server, for which I have created a shell script.
I have tried the following with no luck:
spawn -noecho ssh -q -o StrictHostKeyChecking=no $USER@$HOST 'bash -s' < localscript.sh
It's not able to pick up and execute the local script on the remote server.
Please help to resolve this issue.
The last time I tried something like this, I quickly grew weary of using expect(1) to try to respond to the password prompts correctly. When I finally spent the ten minutes to learn how to create an ssh key, copy the key to the remote system, and set up the ssh-agent to make key-based logins easier to automate, I never had trouble running scripts remotely:
ssh remotehost "commands ; go ; here"
First, check if you need to create the key or if you already have one:
ls -l ~/.ssh/id_*
If there are no files listed, then run:
ssh-keygen
and answer the prompts.
Once your key is generated, copy it to the remote system:
ssh-copy-id remote
Most modern systems run ssh-agent(1) as part of the desktop startup; to determine if you've got the agent started already, run:
ssh-add -l
If you see "The agent has no identities.", then you're good to go. If you see "Could not open a connection to your authentication agent." then you'll have to do some research about the best place to insert ssh-agent(1) into your environment. Or forgo the agent completely; it is just a nice convenience.
Add your key, perhaps with a timeout so it is only valid for a short while:
ssh-add -t 3600
Now test it:
ssh remote "df -hk ; ps auxw ; ip route show ; free -m"
expect(1) is definitely a neat tool, but authentication on remote systems is more easily (and more safely) accomplished with SSH keys.
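With a key in place, the original goal (running a local script remotely without copying it) no longer needs expect at all. A minimal sketch, assuming the same localscript.sh and $USER/$HOST variables as in the question:
# Feed the local script to a remote bash over the key-authenticated session.
ssh -q -o StrictHostKeyChecking=no "$USER@$HOST" 'bash -s' < localscript.sh
# If the script expects arguments, pass them to the remote bash, e.g.:
# ssh "$USER@$HOST" 'bash -s arg1 arg2' < localscript.sh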

Shell questions

In the following context: VisualVM over ssh
I am trying to execute the following two commands in a single script:
ssh -D 9696 john.doe@121.122.123.124
/usr/bin/jvisualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 \
-J-Djava.net.useSystemProxies=true
Having the two commands like this does not work, because the ssh command starts in interactive mode, so VisualVM is only started after the ssh session is closed (explicitly with an 'exit').
What could be a good way to solve that issue?
PS. I am running MacOS X.
Try:
ssh john.doe@121.122.123.124 '/usr/bin/jvisualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 -J-Djava.net.useSystemProxies=true'
If I understand your use case properly, you want to set up port forwarding with the ssh connection; the second command then runs on the local host and uses the forwarded port. I think you could try the -f or -n options to ssh to achieve this. It does, however, require a command to be run on the remote host. You could use a bogus command like echo &> /dev/null for that.
EDIT:
Something like this seemed to work in a naïve test:
ssh -f -D <port> remotehost <dummy_program_that_doesnt_quit>
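Putting that together for this use case might look like the following; the sleep 3600 is just an assumed stand-in for a command that keeps the forwarding alive long enough:
# Background the ssh connection (-f) with dynamic forwarding (-D);
# the remote sleep keeps the tunnel open while VisualVM uses it.
ssh -f -D 9696 john.doe@121.122.123.124 'sleep 3600'
/usr/bin/jvisualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 \
-J-Djava.net.useSystemProxies=true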
This is best done using an SSH key and screen, so that we don't have to interact with the SSH session and can close it when we're done.
I'm also presuming jvisualvm takes control of the terminal, so that when it exits we clean up the screen session. If jvisualvm detaches from the terminal, the script will immediately jump to cleaning up the screen session while jvisualvm is still running.
ssh-add .ssh/key
screen -dmS sshproxy ssh -i .ssh/key -D 9696 john.doe@121.122.123.124
/usr/bin/jvisualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 \
-J-Djava.net.useSystemProxies=true
screen -r -d sshproxy -X quit

Help with ec2-api-tools for Ubuntu

I'm following this tutorial: https://help.ubuntu.com/community/EC2StartersGuide
To start an instance, you run:
ec2-run-instances ami-xxxxx -k ec2-keypair
Then run:
ec2-describe-instances
which gets you the external host name of the instance.
And later, to ssh, you run:
ssh -i /path/to/ec2-keypair.pem ubuntu@<external-host-name>
This works fine, but here is my question:
How can I automate this in a bash script? Can I somehow parse the response returned from "ec2-describe-instances"?
I don't know what the output of ec2-describe-instances looks like, but if it's simply the hostname, then you should be able to do:
host=$(ec2-describe-instances)
ssh -i /path/to/ec2-keypair.pem ubuntu@$host
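If the output is the usual tab-separated report rather than a bare hostname, the public DNS name can be pulled out with awk. A hedged sketch; the assumption that the name is the fourth field of the INSTANCE line should be verified against your tool version:
# Grab the public DNS name of the first instance listed (field position
# is an assumption; check your own ec2-describe-instances output).
host=$(ec2-describe-instances | awk '/^INSTANCE/ { print $4; exit }')
ssh -i /path/to/ec2-keypair.pem "ubuntu@$host"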
