Help Accessing Amazon EC2 Instance - macOS

Trying to set up my first EC2 instance for a simple (currently) PHP app, using OS X 10.6. When I try to access my new instance from the command line, all I get is ssh: connect to host xx.xxx.xxx.xxx port 22: Operation timed out.
I'm typing this at the command line:
ssh -i <MYPEMNAME>.pem ec2-user@<PRIVATEIP/PUBLICDNS/ELASTICIP>
I have this as a security rule in the management console:
rule name: web_access
22(SSH) 0.0.0.0
80(HTTP) 0.0.0.0
I have SSH completely open just to test this; I'll restrict it to a more appropriate IP range once it works.
I created an Elastic IP, which was one of the options I tried after 'ec2-user@...'.
I also generated a .pem when I created the instance, which I have saved to a .ec2 folder on my machine, named as referenced in the ssh command above.
The management console says the instance is running. I think I'm just doing the SSH access wrong at this point.
Any help tremendously appreciated!
thanks

Yeah, the comments were pretty much correct. It was an SSH issue, and the main thing was that I was trying to add a custom security rule that allowed SSH while the default group didn't; for whatever reason the custom rule wasn't being applied, so I just edited the default group to allow port 22 (SSH) and I was pretty much up and running. I also needed to run chmod on the key file. And, if you add a new keypair like I did, you may need to go into ~/.ssh/known_hosts and delete the entry for your old host key; that was hanging me up for a while with a man-in-the-middle warning.
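In case it helps anyone, a minimal sketch of the chmod and known_hosts cleanup (the key path and address are placeholders, not my actual values):
chmod 400 ~/.ec2/<MYPEMNAME>.pem      # ssh refuses keys that are group/world readable
ssh-keygen -R xx.xxx.xxx.xxx          # drop the stale known_hosts entry for the instance
ssh -i ~/.ec2/<MYPEMNAME>.pem ec2-user@xx.xxx.xxx.xxx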
thanks

Related

Can't connect using SSH while using Ansible, but command prompt works just fine

Problem
I'm trying to connect to a few of my Linux EC2 instances and I'm getting weird behavior depending on how I connect to them.
Terminal
If I try to connect to it from the terminal using the following command:
ssh -i "<PATH_TO_PRIVATE_KEY_FILE>" ec2-user@<PRIVATE_IP_ADDRESS>
I'm able to connect successfully.
Visual Studio Code Remote Explorer
I am able to connect to the instance successfully.
Paramiko
import paramiko

# Create a new SSH client (key_file, host, port, and username are defined elsewhere).
ssh_conn = paramiko.SSHClient()
ssh_conn.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Load the private key.
my_rsa_key = paramiko.RSAKey.from_private_key_file(key_file)
# Connect to the server (connect() returns None; the session lives on ssh_conn).
ssh_conn.connect(
    hostname=host,
    port=port,
    username=username,
    pkey=my_rsa_key,
    timeout=5
)
Here I get a timeout error. I'm confident the code itself is fine because it works with some instances but not others, and the issue always seems to be the connection portion.
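One way to narrow a timeout like this down (a sketch, using the same placeholders as above) is to test raw TCP reachability of port 22 and compare it with a verbose ssh attempt from the same machine the script runs on:
nc -vz <PRIVATE_IP_ADDRESS> 22
ssh -vvv -i "<PATH_TO_PRIVATE_KEY_FILE>" ec2-user@<PRIVATE_IP_ADDRESS>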
Ansible
When I try to connect to the same EC2 instance using Ansible I get Permission denied (publickey). I can confidently say it's not a syntax error inside the Ansible code, because when I run the same code on a few different EC2 instances it runs fine without a problem. The issue is only related to the connection process.
Thoughts?
The behavior is limited to a few instances and it's always the same issue. What would cause behavior like this or how could I go about trying to diagnose the problem? I'm happy to add more detail but I wanted to start here and see what people thought.
This error (Permission denied (publickey)) basically means that Ansible has no idea which remote user it should connect as. That's why it works when you explicitly call the ssh command (where you specify ec2-user yourself) but not when using Ansible or other tools.
For Ansible, add this to the inventory file:
your_host_group:
  vars:
    ansible_user: ec2-user
  hosts:
    your_hostname:
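Alternatively (a sketch, for a one-off check; the inventory filename and key path are placeholders), you can pass the remote user and key on the command line:
ansible your_host_group -i your_inventory.yml -m ping -u ec2-user --private-key <PATH_TO_PRIVATE_KEY_FILE>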

How to access SSH of AWS EC2 Instance without keypair/pem file

My client installed Bitnami WordPress from the AWS Marketplace, and he does not have the .pem file or credentials associated with that EC2 instance. We need to change something in the wp-config.php and .htaccess files, and right now we are not able to do this.
I googled but found nothing fruitful.
Not sure if this helps, because you need the .pem file to use this technique and it is not best practice, but you can set a password for root (or another user) to SSH into that server. This is an example of how to enable SSH as root:
1) Log in to your instance with the .pem file
2) Update
3) sudo su
4) cd / (just incase)
5) Edit /etc/ssh/sshd_config (e.g. vim /etc/ssh/sshd_config) and uncomment or set these lines (a consolidated sketch of steps 5-7 follows the list):
Port 22
PasswordAuthentication yes
PermitRootLogin yes
6) Restart the sshd service: service sshd restart or systemctl restart sshd (or equivalent)
7) Set a password: passwd
8) Log out and log back in without the .pem file: ssh root@12.345.67.890
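Taken together, a minimal sketch of steps 5-7 (the sed patterns assume the stock commented-out directives; review sshd_config yourself if your image differs):
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service sshd restart    # or: sudo systemctl restart sshd
sudo passwd root             # set the password you will use to log in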
Run sudo vi /etc/ssh/sshd_config, look for "PasswordAuthentication No" and change it to "PasswordAuthentication Yes", then save the file and exit with :wq.
Restart SSH with "sudo service sshd restart", log out, and then connect again to test that all is well.
Sorry, I am posting an answer to my own question. After a one-week discussion with AWS Support, they shared an option that let me use my client's EC2 instance (which was not associated with any keypair/.pem file). They reset my settings and shared lines of code that I needed to add in the textarea under "View/Change User Data".
Those lines of code contained a user:password string. With those credentials I connected over SSH and completed my job. :)
Sorry, for security reasons I cannot share the lines of code. But I answered my own question because I am sure it will help someone in the future: anyone in the same situation will get a hint from this answer (i.e. "View/Change User Data") and can contact AWS Support directly.
In looking at Get Started with Bitnami Applications in the AWS Marketplace, it appears that a keypair needs to be selected when launching the instance.
The article No Keypair for Bitnami Wordpress Instance - WordPress - Bitnami Community suggests that you could use a plugin file manager to get a key onto the instance, but it is probably easier to launch a new instance and migrate the WordPress configuration across.

Install Chef-Server 11 on EC2 Instance

I have been using hosted Chef for quite some time. I wanted to explore the open-source Chef server, hence I am trying to set up Chef Server 11 on an EC2 instance.
I have the Chef server running and can access its web GUI. I have the Chef workstation configured on another EC2 instance, and that is also working fine.
Problem: I am not able to upload any cookbook.
I get below error when I try uploading the cookbook:
# knife cookbook upload getting-started
Uploading getting-started [0.4.0]
/opt/chef/embedded/lib/ruby/1.9.1/net/http.rb:763:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
However, other knife list commands are working fine.
I did my homework and came across the links below:
http://www.opscode.com/blog/2013/03/11/chef-11-server-up-and-running/
http://www.curvve.com/blog/servers/2013/script-to-configure-and-set-your-hostname-and-fqdn-on-ec2-instances/
So,
It is mentioned that the Chef server needs a working FQDN. I set my public EC2 hostname as the hostname of the server and also added it to /etc/hosts, rebooted the instance, and ran chef-server-ctl reconfigure again. I am still facing the same error.
QUESTION: How do I figure out the FQDN of the EC2 instance so that the Chef server works? If anyone has set up a Chef server successfully on EC2 and was able to upload cookbooks, please share your steps for working out the FQDN.
I was having a hard time with this but this solution worked!
Edit /etc/chef-server/chef-server.rb and add these lines (create the file if it doesn't exist):
server_name = "THE PUBLIC IP OF YOUR INSTANCE"
api_fqdn server_name
nginx['url'] = "https://#{server_name}"
nginx['server_name'] = server_name
lb['fqdn'] = server_name
bookshelf['vip'] = server_name
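After saving chef-server.rb, re-run the reconfigure step (the same command mentioned in the question) so the new values take effect:
sudo chef-server-ctl reconfigure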
I found the solution here
http://sahebjade.blogspot.com/2013/05/check-your-knife-configuration-and.html
This is how I got it working: I updated the hostname in /etc/sysconfig/network to the public DNS name of my EC2 instance (the Chef server) and ran service network restart. Now I am able to upload the cookbooks fine.
I need to think about an Elastic IP as a potential option for my Chef server.
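Roughly, the change looks like this (the DNS name is a placeholder for your instance's actual public DNS):
# /etc/sysconfig/network
HOSTNAME=ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com
Then restart networking:
sudo service network restart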
Edit /etc/chef-server/chef-server.rb and add these lines (create the file if it doesn't exist):
bookshelf["vip"] = node["ipaddress"]
bookshelf["url"] = "https://#{node["ipaddress"]}"
erchef['s3_url_ttl'] = 3600
The first two lines will point your Chef server URL to the machine's IP, and the third will solve a timeout issue that apparently always exists when the Chef server is on EC2.
I wanted to expand a bit on the answers since they don't give a complete picture. This applies to Chef 11 (hopefully Chef 12 is smarter).
In my case I rolled up a master under VPC #1, which gave it an internal address like this:
ip-10-0-0-10.ec2.internal
Because I was only playing with the VPC initially, I had misconfigured some things I needed, so I had to drop it and create a new scheme. Thankfully, I was able to snapshot the old Chef master and bring it up under the new VPC, but I found that I couldn't log into Chef anymore. It took some digging, but I found in my /var/log/chef-server/chef-server-webui/current log that the install had glommed onto the old hostname and set that as the internal URL for... everything. This caused problems after the internal hostname change:
2014-12-24_16:19:09.46680 SocketError: Error connecting to https://ip-10-0-0-10.ec2.internal/users/admin - getaddrinfo: Name or service not known
Now, to the OP's answer:
Need to think about elastic IP as potential option for my chef-server
In my case, I just added a CNAME to CloudFlare and set that as my permanent address. Since I can set CloudFlare to a low TTL on that one address it makes it easy to move it around between IP changes (I don't need an Elastic IP while I'm just getting it configured). This way I could then tell Chef to always look for the same URL and not worry about an EIP.
Once that was done, I had to update Chef. I don't know what changed (this is 11.16.4) but I found the configs live in /var/opt/chef-server/chef-server-webui/etc/chefserver.rb as opposed to some of the other answers listing chef-server.rb. Not sure if that's a YMMV thing or not.
I changed the following towards the bottom of that file
# Environment specific application configuration.
# These values override the ones set in 'RAILS_ROOT/config/application.rb'
#config.chef_server_url = "https://ip-10-0-0-10.ec2.internal"
config.chef_server_url = "https://chef.mydomain.com"
I also changed /var/opt/chef-server/nginx/etc/chef_https_lb.conf
server_name chef.mydomain.com;
Finally I restarted Chef
chef-server-ctl restart
That seems to have done the trick. Logins work again.

rsub with Sublime and SSH connection refusal

I am trying to use rsub to create a tunnel over SSH to Sublime Text. I run the command rmate .profile but I get the following response. I am using WaterRoof to open port 52968 on IPv4 and IPv6. I followed the instructions here and it's just not working.
I am running OS X on my local machine and Ubuntu 12.04 on the remote machine I'm SSHing into on DigitalOcean.
root@anderskitson:~# rmate .profile
/usr/local/bin/rmate: connect: Connection refused
/usr/local/bin/rmate: line 186: /dev/tcp/localhost/52698: Connection refused
Unable to connect to TextMate on localhost:52698
I was having the same problem.
Let remoteHost = the IP or hostname of the machine you're attempting to ssh to.
I ran ssh -R 52698:localhost:52698 remoteHost from my local machine, after which rmate .profile on remoteHost worked.
That led me to determine that ~/.ssh/config on my local machine was incorrect.
I set ~/.ssh/config to look like this:
Host remoteHost
    RemoteForward 52698 localhost:52698
It's been working solidly since I made that change.
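With that entry in place, a plain ssh should carry the tunnel automatically; roughly (assuming rmate is installed on the remote machine):
ssh remoteHost
rmate ~/.profile    # run on the remote side; the file opens in the local editor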
For anyone getting this same error using PuTTY on Windows, this commenter gives great instructions:
In PuTTY's config window, navigate to the Connection > SSH > Tunnels pane
In the "Source Port" field, type 52698
In the "Destination" field, type 127.0.0.1:52698
Select the "Remote" and "Auto" radio buttons
Click the "Add" button
Go to the Session pane and save if you want to preserve these settings.
I had the same issue and here is what worked for me. If you have multiple servers you want this to work for, add the following exactly as shown:
Host *
    RemoteForward 52698 localhost:52698
I consulted this link (configure SSH config file) and realized you can use * in the config file:
Wildcards are also available to allow for options that should have a
broader scope.
I was trying to set this up for the first time using VS Code and got the generic "Connection refused" error even though my configuration seemed fine. It turned out to be because I hadn't reloaded the IDE after installing the rmate extension (Remote VSCode). Make sure that the rmate server is active on your local machine, whatever IDE you're using.
I had the same problem and fixed it by replacing the hostname with the actual IP address when connecting:
e.g. ssh pi@raspberrypi.local becomes ssh pi@192.168.1.1
I had the same problem and went through most of the blog posts; I did everything they said.
In the end, I found that my TextMate/Sublime editor had been closed (force quit), and that was causing the problem.
For example, my SSH config (~/.ssh/config) to connect to DigitalOcean with RemoteForward looks like:
Host DigitalOcean
    Hostname xxx.xxx.xxx.xxx
    User username
    RemoteForward 52698 localhost:52698
and it is invoked in a terminal with:
ssh DigitalOcean
rmate then connects fine with my local Atom editor
rmate stopped working for no apparent reason. It turned out I had tripped the 'man in the middle' check. I saw this warning when doing ssh:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
In my case this warning was expected, so I ignored it. That may not apply to you, so verify the host's identity. I hadn't noticed this line at the end of the warning:
Port forwarding is disabled to avoid man-in-the-middle attacks.
No wonder rmate stopped working. Verifying the host's identity and then clearing out the offending entry from ~/.ssh/known_hosts made the warning go away, and rmate started working again.
I run into this issue occasionally, and at least for my setup (which might be quite particular), I have found that killing zombie instances of ssh sessions does the job.
My particular setup :
I run Linux through a VM (VMware Fusion) on my OS X host. Then I ssh into the Linux host from OS X and launch Sublime from the Linux side. I usually have several ssh sessions running.
I recently rebooted my Mac (without first shutting down the VM, which was probably bad), and once I got back into the VM, I was unable to launch Sublime and got the "connection refused" error mentioned by the OP.
So I did a ps aux on the Linux side, and looked for all instances of :
root 657399 0.0 0.1 13956 9332 ? Ss 14:52 0:00 sshd: user [priv]
user 657461 0.0 0.0 14088 5420 ? S 14:52 0:00 sshd: user@pts/1
(where user is my username). I killed the user-level jobs, e.g. 657461 above, and voilà! Everything works now. Of course, in the process of killing these jobs you are likely to kill the SSH session you are currently in, so you will have to log back in.
This might not work for users who don't have the necessary kill privileges on their remote machine, so I don't know how useful this is, but I thought I would put it out there.
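A minimal sketch of that cleanup (the grep pattern and PID are illustrative, taken from the listing above; killing a session may also drop your current login):
ps aux | grep '[s]shd: user'
kill 657461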

SSH Connection from Mac to Amazon EC2 not working

I am trying to connect to Amazon EC2 via:
ssh -i ~/.ssh/YOUR_KEYPAIR_FILE.pem ec2-user@YOUR_IP_ADDRESS
The terminal hangs for a minute or two and then prints:
ssh: connect to host XXX port 22: Operation timed out
Any ideas?
Login to AWS
Go to the Instances section
Click on the security group associated with your EC2 instance
Down the bottom, click on the Inbound tab and then click Edit
Create this rule
TYPE SSH
PROTOCOL TCP
PORT RANGE 22
SOURCE Anywhere
You should now be able to connect to the instance on port 22 via ssh with your key.
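The same rule can also be added from the AWS CLI (a sketch; the security group ID is a placeholder, and 0.0.0.0/0 opens SSH to the whole internet):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0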
You need to open port 22 in your security group. All ports are closed by default.
Can you try changing the permissions of YOUR_KEYPAIR_FILE.pem like this:
chmod 600 YOUR_KEYPAIR_FILE.pem
Then run the command:
ssh -i YOUR_KEYPAIR_FILE.pem ec2-user@YOUR_IP_ADDRESS
I had a similar problem. I checked all my networking time and time again, from the EC2 instance all the way through the VPC and out to the internet. Security groups were allowing all sources through ports 22 and 80, and my NACL was allowing the right permissions. I knew AWS was all OK, yet every time I tried to SSH into an instance I would still get an operation timeout, indicating that the problem must be with my local machine instead.
First, to check that the SSH port was open locally, I ran the following:
ssh localhost
This worked fine!
After doing some research on the net, in the end it all boiled down to Java and my terminal not recognising that Java was installed on my machine.
Supporting Document:
AWS Documentation
No Java means that your .pem will not be recognised
Start by running the following:
java -version
If you get no hits, install the relevant Java SDK for your OS and, once installed, run:
which java
You should get something like this:
/usr/bin/java
Now we can try connecting to an instance again, and hopefully you will have success this time!
ssh -v -i ~/Downloads/labamikey.pem ec2-user@ec2-34-200-217-2.compute-
       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|
[ec2-user@ip-10-0-0-54 ~]$
