Help with ec2-api-tools for Ubuntu - bash

I'm following this tutorial: https://help.ubuntu.com/community/EC2StartersGuide
To start an instance, you run:
ec2-run-instances ami-xxxxx -k ec2-keypair
Then run:
ec2-describe-instances
which gets you the external host name of the instance.
And later, to ssh, you run:
ssh -i /path/to/ec2-keypair.pem ubuntu@<external-host-name>
This works fine, but here is my question:
How can I automate this in a bash script? Can I somehow parse the response returned from "ec2-describe-instances"?

I don't know what the output of ec2-describe-instances looks like, but if it's simply the hostname, then you should be able to do:
host=$(ec2-describe-instances)
ssh -i /path/to/ec2-keypair.pem ubuntu@$host
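If ec2-describe-instances returns its usual multi-column table instead, you could pull the public DNS name out with awk. A rough sketch (untested; it assumes the classic ec2-api-tools format, where the public hostname is the 4th field of the INSTANCE line, so check the column index for your tool version):
#!/bin/bash
# Launch the instance and capture its id from the INSTANCE line.
instance_info=$(ec2-run-instances ami-xxxxx -k ec2-keypair)
instance_id=$(echo "$instance_info" | awk '/^INSTANCE/ {print $2}')

# Poll until a real public hostname shows up (the field is empty or "pending"
# while the instance is still booting).
host=""
until echo "$host" | grep -q 'amazonaws\.com'; do
    sleep 10
    host=$(ec2-describe-instances "$instance_id" | awk '/^INSTANCE/ {print $4}')
done

ssh -i /path/to/ec2-keypair.pem "ubuntu@$host"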

Related

Command results work locally but not over ssh

I run this command on my remote machine and it gives the desired results:
local# /usr/local/sbin/i2c_eeprom show-serial
serial = 5070045
When I run it from a remote server it doesn't work:
server# sshpass -f pass.out ssh 192.168.1.1 -n -o "StrictHostKeyChecking=no" "i2c_eeprom show-serial"
serial = TBD Serial
Why isn't the result being displayed correctly? I've tried creating a script file first and redirecting the output to a remote file, then reading the file, but I don't get the same results. I always get TBD Serial. Any suggestions on how to run this command remotely so it behaves as it does locally?
I solved this problem by creating a bash script on the server as follows:
#!/bin/bash -l
/usr/local/sbin/i2c_eeprom show-serial
I copy this to the client and execute it via ssh. The key is the "-l" in the shebang line. I found this solution in the answer to "What does this command do? 'exec bash -l'".
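In case it helps anyone automating this end to end, here's a rough sketch of how the wrapper can be copied over and run with the same sshpass/ssh options as in the question (the script name is just a placeholder):
#!/bin/bash
# Write the login-shell wrapper; "-l" makes bash source the login profile,
# which is what sets up the environment i2c_eeprom apparently needs.
cat > get_serial.sh <<'EOF'
#!/bin/bash -l
/usr/local/sbin/i2c_eeprom show-serial
EOF

# Copy it to the target and run it there.
sshpass -f pass.out scp -o "StrictHostKeyChecking=no" get_serial.sh 192.168.1.1:/tmp/
sshpass -f pass.out ssh -n -o "StrictHostKeyChecking=no" 192.168.1.1 \
    "chmod +x /tmp/get_serial.sh && /tmp/get_serial.sh"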

SSH from Local to A, A to B and run multiple commands on B

I'm currently using the line of script below to ssh from my local machine to a server (let's call it ip-address1); from that machine I want to ssh to another machine (let's call it ip-address2). The script I use is as follows:
sshpass -p mypassword ssh -tt user@ip-address1 ssh -tt -i /root/.ssh/vm_private_key user@ip-address2 "pwd; ls;"
The problem is that only the first command (pwd) executes on ip-address2 before that connection closes, and the ls command then executes on ip-address1 before it closes too. I want both commands to execute on ip-address2. The output in my terminal is something like the following:
/home/user (pwd command executing here)
Connection to ip-address2 closed.
//files then get outputted here (ls command executes after ip-address2 has closed)
Connection to ip-address1 closed.
I think there may be something wrong with my quoting but I can't figure out what. Please help.
Thanks.
I don't have any way to test this, but try the following:
sshpass -p mypassword ssh -tt user@ip-address1 \
"ssh -tt -i /root/.ssh/vm_private_key user@ip-address2 'pwd; ls;'"
You definitely need to quote the entire command you want to run on ip-address1, including the command you'll pass to ip-address2.
Edit
I'm in an environment where I have multiple machines to test; the following command works for me:
ssh snewell#<host sanitized> \
"ssh <host2 sanitized> 'hostname; ls -a <path sanitized>;'"
hostname definitely displays the result of the final server (host2), and ls is listing a directory that the first host doesn't have.
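If the nested quoting gets unwieldy, another option (untested here; same hosts and key path as above) is to pipe the command list into bash -s on the far end, so there's only one layer of quotes. Note the -tt flags are dropped so stdin passes through to the remote shell:
sshpass -p mypassword ssh user@ip-address1 \
    "ssh -i /root/.ssh/vm_private_key user@ip-address2 'bash -s'" <<'EOF'
pwd
ls
EOF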

SSH: Run command through sub server

My goal is to be able to send a command to an old server that can only be reached by going through the new server.
I want to automate this as much as possible.
I want to be able to just run a script and have it do the work for me so that I don't have to type.
Meaning I would have to do the following:
ssh user#newserver
and then
ssh user#oldserver
Once I reach the old server I need to be able to run
curl icanhazip.com
and
cat /var/spool/cron/user
So far I was only able to do the following:
ssh -t -t root@newserver "ssh root@oldserver"
That would only allow me to reach the server, but I would have to manually send other commands.
Ideally I would want to be able to run something like this:
ssh -t -t root@newserver 'ssh root@oldserver "cat /var/spool/cron/user"'
ssh -t -t root@newserver 'ssh root@oldserver "cat /var/spool/cron/user"'
This actually worked. Not sure why it didn't before.
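For the record, the same pattern with both commands from the question would look like this (untested, same hosts as above):
#!/bin/bash
# Run both commands on oldserver, hopping through newserver.
ssh -t -t root@newserver 'ssh root@oldserver "curl icanhazip.com; cat /var/spool/cron/user"'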

Remote sudo, execute a command and write output in local terminal

What I'm trying to do: connect to a remote server as a normal user with sudo rights, then sudo to root, execute a command, and see its output in my local terminal. I wrote a small script like this:
#!/bin/bash
my_argument=$1
ssh -t username@hostname 'sudo su -; /path_to_my_script $1'
I type the password twice (once for ssh, once for sudo), but I see nothing in my local terminal, and the script looks like it terminates on the remote host. I believe the second problem could be resolved by using exit, but I am a little bit confused about how I can get the output to my local terminal.
Thanks
A string inside single quotes is taken literally, so you are passing a literal $1 (a dollar sign and a 1) as the parameter to the script. If you want the variable to be expanded, place the command inside double quotes, like:
ssh -t username@hostname "sudo /path_to_my_script $1"
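Folding that back into the original script, something like this should work (the script path is the placeholder from the question; note that sudo su - opens a new shell and never reaches the second command, so the script is run directly under sudo instead):
#!/bin/bash
# Pass the first argument through to the remote script, run under sudo.
my_argument=$1
ssh -t username@hostname "sudo /path_to_my_script $my_argument"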

bash script to ssh into a box and get me to a python shell

I want to write a script that will get me straight to a python shell on another box, so that I don't have to first run ssh and then run python.
When I do "ssh hostname python" it just hangs - it's something to do with the fact that python is interactive. "ssh hostname cat x" works fine.
Is there some ssh option that will make this work?
ssh -t user@host python
The -t flag forces ssh to allocate a pseudo-terminal to the connection. Normally it won't do this if a command is given on the ssh command line, which results in python running in a non-interactive mode.
Actually figured it out, I needed to do ssh -t hostname python
You need the -t option to force the allocation of a pseudo-tty
ssh -t host python
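Wrapped in a tiny script so one local command drops you straight into the remote interpreter (the host name is a placeholder):
#!/bin/bash
# -t forces a pseudo-terminal, so python starts in interactive mode.
ssh -t user@hostname python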
