SCP EC2 instance to local - amazon-ec2

I have searched through the site looking for a solution, but I have not been successful so far. I am attempting to get a file from my AWS EC2 instance onto my local desktop for viewing purposes. I have seen several times that the correct way is to use this command from the local computer:
scp -i ec2key.pem username@ec2ip:/path/to/file .
and I have been careful about all syntax, especially including the "." so it saves to the current directory. However, when I run this command I get the following error:
ssh: Could not resolve hostname [ec2 ip]: nodename nor servname provided, or not known
Any help to resolve this issue would be greatly appreciated.

Removed "ip-" in front of my private ip address. i.e
#ip-000-00-000-00 ...
Where it should be
#000-00-000-00
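For reference, a minimal sketch of the corrected command (the key name and path are from the question; everything in angle brackets is a placeholder). The address has to be one your local machine can actually resolve and reach, such as the instance's public DNS name or public IP, rather than the bare "ip-..." label:
# Placeholder values; substitute your own user and address.
scp -i ec2key.pem username@<reachable-ip-or-dns>:/path/to/file .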

Can't connect using SSH while using Ansible, but command prompt works just fine

Problem
I'm trying to connect to a few of my Linux EC2 Instances and I'm getting this weird behavior depending on how I'm connecting to it.
Terminal
If I try to connect to it from the terminal using the following command:
ssh -i "<PATH_TO_PRIVATE_KEY_FILE>" ec2-user#<PRIVATE_IP_ADDRESS>
I'm able to connect successfully.
Visual Studio Code Remote Explorer
I am able to connect to the instance successfully.
Paramiko
import paramiko

# key_file, host, port, and username are defined earlier in my script.
# Create a new connection.
ssh_conn = paramiko.SSHClient()
ssh_conn.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Load the Private Key.
my_rsa_key = paramiko.RSAKey.from_private_key_file(key_file)

# Connect to the server.
_session = ssh_conn.connect(
    hostname=host,
    port=port,
    username=username,
    pkey=my_rsa_key,
    timeout=5
)
Here I get a timeout error. I'm confident the code is fine because it works with some instances but not others, and the issue always seems to be the connection portion.
Ansible
When I try to connect to the same EC2 instance using Ansible I get Permission denied (publickey). Now I can confidently say it's not a syntax error inside of the Ansible code because when I run the same code on a few different EC2 instances it runs fine without problem. The issue is only related to the connection process.
Thoughts?
The behavior is limited to a few instances and it's always the same issue. What would cause behavior like this or how could I go about trying to diagnose the problem? I'm happy to add more detail but I wanted to start here and see what people thought.
This error (Permission denied (publickey)) basically means that Ansible has no idea which remote user it should connect as. That's why it works when you explicitly call the ssh command (where you pass ec2-user yourself) but fails with Ansible or other tools.
Regarding Ansible, add these lines to the inventory file:
your_host_group:
  vars:
    ansible_user: ec2-user
  hosts:
    your_hostname:
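A quick, hedged way to confirm the setting is picked up (the inventory file name is an assumption; the key path placeholder is the one from the question):
# Assumed inventory file name; adjust the paths to match your setup.
ansible your_host_group -i inventory.yml -m ping --private-key "<PATH_TO_PRIVATE_KEY_FILE>"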

Bad Configuration Option: ServerAliveInternal

I was trying to connect my Ubuntu device to an AWS EC2 instance using a pem file, but while performing ssh -i file.pem user@ipaddress I got a bad configuration option error for ServerAliveInternal (note the misspelling: not ServerAliveInterval). I solved it by purging openssh and reinstalling it; however, I was wondering what might have happened. I searched here but did not find anything about it.
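For what it's worth, ssh reports "Bad configuration option" when a client configuration file contains a keyword it does not recognize, so a misspelled ServerAliveInterval somewhere in a config file is a plausible cause. A hedged way to locate it (these are the usual client config paths; ssh_config.d may not exist on every system):
grep -n "ServerAliveInternal" ~/.ssh/config /etc/ssh/ssh_config /etc/ssh/ssh_config.d/* 2>/dev/null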

Install Chef-Server 11 on EC2 Instance

I have been using hosted Chef for quite some time and wanted to explore the open-source Chef server, hence I am trying to set up Chef Server 11 on an EC2 instance.
I have the Chef server running and I can access its web GUI. I have a Chef workstation configured on another EC2 instance, which is also working fine.
Problem: I am not able to upload any cookbook.
I get below error when I try uploading the cookbook:
# knife cookbook upload getting-started
Uploading getting-started [0.4.0]
/opt/chef/embedded/lib/ruby/1.9.1/net/http.rb:763:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
However, other list commands of knife are working fine.
I did my homework and came across the links below:
http://www.opscode.com/blog/2013/03/11/chef-11-server-up-and-running/
http://www.curvve.com/blog/servers/2013/script-to-configure-and-set-your-hostname-and-fqdn-on-ec2-instances/
So,
It is mentioned that the Chef server needs a working FQDN. I set my public EC2 host name as the hostname of the server and also added it to /etc/hosts, rebooted the instance, and ran chef-server-ctl reconfigure again. I am still facing the same error.
QUESTION: How do I figure out the FQDN of the EC2 instance so that the Chef server works? If anyone has set up a Chef server successfully on EC2 and was able to upload cookbooks, please share the steps you used to work out the FQDN.
I was having a hard time with this but this solution worked!
Edit /etc/chef-server/chef-server.rb and add these lines (create the file if it doesn't exist):
server_name = "THE PUBLIC IP OF YOUR INSTANCE"
api_fqdn server_name
nginx['url'] = "https://#{server_name}"
nginx['server_name'] = server_name
lb['fqdn'] = server_name
bookshelf['vip'] = server_name
I found the solution here
http://sahebjade.blogspot.com/2013/05/check-your-knife-configuration-and.html
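After editing /etc/chef-server/chef-server.rb, the change presumably only takes effect once the server is reconfigured; the command below is the same one the question already ran:
chef-server-ctl reconfigure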
This is how I got it working: I updated the public DNS name of my EC2 instance (the chef-server) in /etc/sysconfig/network and ran service network restart. Now I am able to upload the cookbooks fine.
I need to think about an Elastic IP as a potential option for my chef-server.
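A hypothetical sketch of that /etc/sysconfig/network change on an RHEL/CentOS-style instance (the DNS name below is a placeholder for your instance's public DNS):
sudo sed -i 's/^HOSTNAME=.*/HOSTNAME=ec2-xx-xx-xx-xx.compute-1.amazonaws.com/' /etc/sysconfig/network
sudo service network restart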
Edit /etc/chef-server/chef-server.rb and add these lines (create the file if it doesn't exist):
bookshelf["vip"] = node["ipaddress"]
bookshelf["url"] = "https://#{node["ipaddress"]}"
erchef['s3_url_ttl'] = 3600
The first two lines will point your chef-server URL to the machine's IP, and the third will solve a timeout issue that apparently always exists when the Chef server is on EC2.
I wanted to expand some on the answers since they don't give a complete picture. This applies to Chef 11 (hopefully Chef 12 is smarter)
In my case I rolled a master up under VPC #1 which gave it an internal address like this
ip-10-0-0-10.ec2.internal
Because I was only playing with the VPC initially, I had misconfigured some things I needed, so I had to drop it and create a new scheme. Thankfully, I was able to snapshot the old Chef master and bring it up under the new VPC, but I found that I couldn't log into Chef anymore. It took some digging, but I found in my /var/log/chef-server/chef-server-webui/current log that the install had glommed onto the old hostname and set that as the internal URL for... everything. This caused problems after the internal hostname change:
2014-12-24_16:19:09.46680 SocketError: Error connecting to https://ip-10-0-0-10.ec2.internal/users/admin - getaddrinfo: Name or service not known
Now, to the OP's answer:
Need to think about elastic IP as potential option for my chef-server
In my case, I just added a CNAME to CloudFlare and set that as my permanent address. Since I can set CloudFlare to a low TTL on that one address it makes it easy to move it around between IP changes (I don't need an Elastic IP while I'm just getting it configured). This way I could then tell Chef to always look for the same URL and not worry about an EIP.
Once that was done, I had to update Chef. I don't know what changed (this is 11.16.4) but I found the configs live in /var/opt/chef-server/chef-server-webui/etc/chefserver.rb as opposed to some of the other answers listing chef-server.rb. Not sure if that's a YMMV thing or not.
I changed the following towards the bottom of that file
# Environment specific application configuration.
# These values override the ones set in 'RAILS_ROOT/config/application.rb'
#config.chef_server_url = "https://ip-10-0-0-10.ec2.internal"
config.chef_server_url = "https://chef.mydomain.com"
I also changed /var/opt/chef-server/nginx/etc/chef_https_lb.conf
server_name chef.mydomain.com;
Finally I restarted Chef
chef-server-ctl restart
That seems to have done the trick. Logins work again.
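As a hedged sanity check that the new URL has taken effect, re-running the upload that failed in the original question should now reach the right endpoint:
knife cookbook upload getting-started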

Error in Cloudera Cluster installation process?

I have installed Cloudera Manager successfully. It shows Currently Managed Hosts as 127.0.0.1, and it is active.
When I search for and install a cluster using Cloudera Manager, after the loading step it shows the following error.
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).
The following image clearly shows the problem while installing my cluster on cloudera manager.
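As a hedged aside, the port checks the error message asks for can be run with standard tools (netstat may be replaced by ss on newer systems; the manager host name is a placeholder):
# On the Cloudera Manager server: is anything listening on 7182?
sudo netstat -tlnp | grep 7182
# From the host being added: can it reach the manager on 7182?
nc -zv <cloudera-manager-host> 7182
# On the host being added: are 9000 and 9001 free?
sudo netstat -tlnp | grep -E ':(9000|9001)'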
I had a similar problem, and it turned out the issue was that I had (unfortunately) conveniently skipped the "password-less SSH key" step.
After several hours of breaking my head over it, I realised this.
At the terminal, do:
ls -al ~/.ssh
You should see files like:
abc
abc.pub
These are your public/private key pairs (not necessarily with the same names as mine above); they are the file names you used in the "Setting up SSH public/private keys" step for your machine.
You need to copy the data in abc.pub to a file named authorized_keys in this same folder. If it's not there, create authorized_keys.
In case you don't have your public/private key pair, see here.
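A minimal sketch of that copy step, assuming the key pair really is named abc/abc.pub as above:
# Append the public key to authorized_keys and tighten permissions (assumed file names).
cat ~/.ssh/abc.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys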
For Ubuntu, the problem is usually caused by the "127.0.1.1 ubuntu" association in your /etc/hosts file. For me, after changing it to "127.0.0.1 ubuntu", which is the standard local loopback, I could add the cluster successfully. Hope this helps!
I was struggling with this problem for two days. Fixing /etc/hosts as suggested by "khoadoan" worked for me.
/etc/hosts looked like this when I had the problem:
127.0.0.1 localhost
127.0.1.1 ubuntu
I changed it like this:
127.0.0.1 localhost
127.0.0.1 ubuntu
Restarted the machine.
sudo init 6
Launched the Cloudera Manager Admin page. This time the host status was already showing up "Managed = Yes". And I got an additional tab "Currently Managed Hosts(1)", where the local host was listed.
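As a hedged verification after the reboot (these checks are my own suggestion, not from the Cloudera docs), the hostname should now resolve to the standard loopback:
hostname -f                  # should print the machine's name, e.g. ubuntu
ping -c 1 "$(hostname -f)"   # should now answer from 127.0.0.1 rather than 127.0.1.1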

Help Accessing Amazon EC2 Instance

Trying to set up my first EC2 instance for a simple (currently) PHP app, using OS X 10.6. When I try to access my new instance from the command line, I only get ssh: connect to host xx.xxx.xxx.xxx port 22: Operation timed out.
I'm typing this at the command line:
ssh -i <MYPEMNAME>.pem ec2-user@<PRIVATEIP/PUBLICDNS/ELASTICIP>
I have this as a security rule in the management console:
rule name: web_access
22 (SSH) 0.0.0.0
80 (HTTP) 0.0.0.0
I have SSH completely open just to test this; I'll restrict it to a more appropriate IP when it works.
I created an Elastic IP, which was one of the options I tried after 'ec2-user@...'.
I also generated a .pem when I created the instance, which I have saved to a .ec2 folder on my machine, named as referenced in the command above.
The management console says the instance is running. I think I'm just doing the SSH access wrong at this point.
Any help tremendously appreciated!
thanks
Yeah, the comments were pretty correct. It was an SSH issue, and the main thing was that I was trying to add a custom security rule that allowed SSH but the default didn't; for whatever reason the custom rule wasn't being applied, so I just edited the default rule to allow port 22 (SSH) and I was pretty much up and running. I also needed to run chmod on the key file. And if you add a new key pair like I did, you may need to go into the ~/.ssh/known_hosts file and delete the reference to your old key pair; that was hanging me up for a while with an error about a man-in-the-middle attack.
thanks
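A hedged sketch of those two cleanup steps (the file and host names are placeholders taken from the question, not a definitive recipe):
chmod 400 ~/.ec2/<MYPEMNAME>.pem   # the key must not be readable by others
ssh-keygen -R <PUBLICDNS>          # remove the stale known_hosts entry for the host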
