EC2 non-root user login

Is there a way to log into an EC2 Ubuntu AMI, or a way to set one up, so that non-root users can log in? I tried creating a user and logging in with the associated password. I also tried using the private key: I copied the authorized_keys file into the .ssh directory of the non-root user's home directory and tried to log in to the box with that user account. Neither method worked.
Thanks in advance.

So, this works, but the missing high-order bit of information here has to do with setting the right permissions on the authorized_keys file in the user's home directory. I copied /root/.ssh to /home/user and fixed its ownership:
cp -r /root/.ssh /home/user
chown -R user /home/user/.ssh
This allowed me to use the keypair.pem file to log in.
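One thing worth spelling out, since sshd silently ignores keys when permissions are too loose: the .ssh directory and authorized_keys file should be locked down as well. A minimal follow-up to the commands above:
chmod 700 /home/user/.ssh                  # directory must not be group/world writable
chmod 600 /home/user/.ssh/authorized_keys  # key file readable by the owner only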

Make sure you are sending your AWS keypair as the identity file, i.e.
ssh -i ~/.ssh/keypair.pem user@ec2-174-129-xxx-xx.compute-1.amazonaws.com
Also check that SSH (port 22) is allowed in your security group.
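If you use the AWS CLI, you can check the group from your terminal; the group ID and CIDR below are placeholders:
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query 'SecurityGroups[].IpPermissions'
# open port 22 to your own address only, if the rule is missing:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.7/32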

Assuming you would like to have users log in with a password so they need not supply a key every time, all you must do is turn on the ability to SSH in with a password. This option is turned off by default in all Linux AMIs.
Open the following file with root privileges, using vi, nano, pico, or any other editor:
sudo vi /etc/ssh/sshd_config
Change the following setting to yes (sshd_config uses no equals sign):
PasswordAuthentication yes
Finally, you must restart the SSH service. (Restarting sshd does not drop your existing session; a simple reboot also works.)
That's it! Of course, you must still add users with the adduser command and give them passwords with the passwd command for them to be able to log in to your AMI. Check out this link for more info on the OpenSSH SSH client configuration files.
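Putting the whole procedure together on an Ubuntu AMI (the username here is just an example):
sudo adduser jane                # prompts for a password on Ubuntu; otherwise run: sudo passwd jane
sudo vi /etc/ssh/sshd_config     # set PasswordAuthentication yes
sudo service ssh restart         # the service is named sshd on Amazon Linux / CentOS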

Related

macOS terminal asking for password every time I run copy command

I'm running a bash command on Mac that copies a file to /private/etc/app_name/:
sudo cp my_file.cpp /private/etc/app_name/
Every time I want to run the bash script, the OS asks for my system password.
> ./run_copy.sh
Password: *******
Is there a way to bypass this, or configure it in such a way that I only have to enter the password once?
Apparently, on my MacBook, /etc is a symlink to the /private/etc directory, which is owned by root and the wheel group. So you need to use sudo to copy into that directory.
That said, as on a Linux machine, you can work around this by creating a new file for your group under /etc/sudoers.d/<group-name> containing:
%<group-name> ALL=(ALL) NOPASSWD:ALL
(The % prefix marks a group rather than a user.)
I've just tried this on my Mac: I could copy files into the /private/etc directory without being prompted for the sudo password.
Unfortunately, this comes with some risks, as every user in the group gets privileged access without any password prompt; someone might accidentally delete important system files, for example.
A more targeted approach is to allow only a specific command, e.g. %<group> ALL=(ALL) NOPASSWD: /usr/local/bin/copy-script. This way, group members can't run arbitrary scripts/commands with sudo privileges; a sketch follows.
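For example, a minimal sketch of that selective rule (the script path and group name are placeholders; always edit sudoers fragments with visudo so syntax errors are caught before they lock you out):
sudo visudo -f /etc/sudoers.d/copy-script
# add this single line in the editor (% marks a group):
%staff ALL=(ALL) NOPASSWD: /usr/local/bin/copy-script
# members of staff can now run exactly this one command without a password:
sudo /usr/local/bin/copy-script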

Why does EC2 ask for a password when I use an identity file?

I use the following command, which I got from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html:
ssh -i my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
I'm not sure if it's because I lost the .pem file and recreated it, or what is going on here, but no matter whether I use the user ubuntu, root, or ec2-user, the terminal asks me for a password.
Your local private key is probably passphrase-protected ("shrouded"), as it should be; the prompt may be asking for that passphrase rather than a server password. The passphrase can be removed with key management tools (e.g. ssh-keygen -p) if you really want, but that is not advised.
Double-check the file permissions on your key file. Do:
chmod 400 my-key-pair.pem
and try again.
It is also possible that the key file is simply the wrong one. Recreating a key pair in the AWS console after losing the .pem does not change anything on the instance; it still trusts only the original public key. In that case you have to terminate the instance and launch a replacement with a new SSH key: if a key is lost, access to the server is lost with it.
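To tell which case you are in, the SSH client's verbose output usually helps; reusing the hostname from the question:
ssh -v -i my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
# "Offering public key ..." followed by further authentication prompts means the
# server rejected this key, i.e. the instance does not trust this key pair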

Transferring a file to an Amazon EC2 instance using scp always gives me permission denied (publickey,gssapi-with-mic)

I am trying to transfer a file to an EC2 instance. I followed Amazon's documentation; this is what my command looked like:
scp -i [the key's location] Documents/[the file's location] ec2-user@[public dns]:/home/[destination]
where I replaced all the placeholders with the proper values. I am sure it's the correct key, and it has permissions 400. When I run the command, it tells me the RSA key fingerprint and asks me if I want to continue connecting. I type yes and it replies with
Permission denied (publickey,gssapi-with-mic)
lost connection
I have looked at many of the other similar questions on Stack Overflow and can't find a correct way to do it.
Also, SSH traffic is enabled on port 22.
The example Amazon provided is correct. It sounds like a folder permissions issue. If the folder you are trying to copy to was created by another user, chances are your user doesn't have permission to copy to or edit it.
If you have sudo privileges, you can try opening up access for yourself. Though it is not recommended to leave it this way, you could try this command:
sudo chmod 777 /folderlocation
That gives read/write/execute permissions to everyone (hence why you shouldn't leave it at 777), but it gives you the chance to test your scp command and rule out permissions.
Afterwards, if you aren't familiar with permissions, I suggest you read up on them; this is an example: http://www.tuxfiles.org/linuxhelp/filepermissions.html. It is generally suggested you lock down the folder as much as possible, depending on the type of information held within. A test-and-restore sequence is sketched below.
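As a sketch of that test-and-restore cycle (the folder path is a placeholder; stat -c is GNU coreutils, so use stat -f '%Lp' on macOS/BSD):
stat -c '%a' /home/ec2-user/uploads     # note the current mode, e.g. 755
sudo chmod 777 /home/ec2-user/uploads   # open it up just for the scp test
# ... run the scp command from your local machine ...
sudo chmod 755 /home/ec2-user/uploads   # restore a sensible mode afterwards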
If that was not the cause, some other things you might want to check:
Are you in the directory containing your key when executing the scp -i command (or supplying its full path)?
Do you have permission to read the folder you are transferring from?
Best of luck.
The problem may be the username. I copied a file to my Amazon instance and first tried the command:
scp -r -i ../.ssh/Amazon_server_key_pair.pem ./empty.test ec2-user@ec2-xx-yy-zz-tt.compute-1.amazonaws.com:~
and got the error: Permission denied (publickey).
I then realized that my instance is an Ubuntu environment, so the user is "ubuntu". The correct command that worked for me is:
scp -r -i ../.ssh/Amazon_server_key_pair.pem ./empty.test ubuntu@ec2-xx-yy-zz-tt.us-west-2.compute.amazonaws.com:~
The file "empty.test" is a text file containing the text "testing ...". Replace the address with your instance's Public DNS; I have replaced the IP of my instance with xx-yy-zz-tt.
I had to use ubuntu@ instead of ec2-user@ because when I SSHed in, I saw ubuntu@ in my terminal prompt; try changing to the name you see in your terminal.
Also, you have to set the permissions of the .pem file on your computer:
chmod 400 /path/my-key-pair.pem
The command below will copy a file from your computer to the EC2 instance:
scp -i ~/location_of_your_ec2_key_pair.pem ~/location_of_transfer_file/sample.txt ubuntu@ec2_your_ec2_instance.compute.amazonaws.com:~/folder_to_which_it_needs_to_be_copied
The command below will copy a file from the EC2 instance to your computer:
scp -i ~/location_of_your_ec2_key_pair.pem ubuntu@ec2_your_ec2_instance.compute.amazonaws.com:~/location_of_transfer_file/sample.txt ~/folder_to_which_it_needs_to_be_copied
I was facing the same problem; hope this works for you:
scp -rp -i yourfile.pem ~/local_directory username@instance_url:directory
The key file's permissions must also be correct for this to work.
It might be that one is using the wrong username. That happened to me, with the same error message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

How do I connect to my EC2 instance using Cyberduck with privileges?

I try to log in using ec2-user, but for some reason the login fails.
Using the username ubuntu, I am able to log in just fine; however, I don't have any privileges, and I can't sudo su to get the privileges to write to my files. I tried using the Cyberduck terminal and send-command options, but sudo su doesn't work with them; Cyberduck just spins.
I don't think the ec2-user account exists on recent Ubuntu AMIs (the default user is ubuntu), which may explain the failed login.
You can approach this in a few ways. The first is to create a new user account specifically for FTP and give it permissions only to the necessary folders: first create the user, then create a public/private key pair for non-interactive login. This will allow you to operate your FTP client like normal; a sketch follows.
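A rough sketch of that first approach, with placeholder names throughout:
ssh-keygen -f ~/.ssh/ftpuser_key -N ''    # on your dev machine: a key pair for the new account
# on the instance, after copying ftpuser_key.pub up:
sudo adduser ftpuser
sudo mkdir -p /home/ftpuser/.ssh
sudo sh -c 'cat ftpuser_key.pub >> /home/ftpuser/.ssh/authorized_keys'
sudo chown -R ftpuser:ftpuser /home/ftpuser/.ssh
sudo chmod -R go-rwx /home/ftpuser/.ssh   # sshd ignores the key file if permissions are loose
Then point your FTP client at the instance as ftpuser with the new private key, and grant that account write access only to the folders it needs.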
My preferred solution is to upload the files to the ubuntu home directory and then run a script as root that moves the files to the correct location. You won't have to modify the system configuration this way, but you will have to do the file transfer in two steps.
Create a staging folder in /home/ubuntu and copy the files there. Create a /home/ubuntu/copy.sh script on the server like this:
#!/bin/bash
# this only works if sudo doesn't prompt for a password (the default for ubuntu on EC2);
# note that a bare "sudo su" line would spawn a subshell and NOT run the next line as root
sudo cp -r /home/ubuntu/stage/* /var/www/html/
Then from your dev machine, call the script:
$ ssh -i ~/path/to/key.pem ubuntu@ec2.hostname.com /home/ubuntu/copy.sh
If you want to get really fancy, you could set up a git repository and use a post-receive hook to handle this all for you when you push. No need for an FTP client at all.
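If you go that route, a rough sketch (paths, hostname, and branch name are all examples): a bare repository on the instance, with a post-receive hook that checks the pushed branch out into the web root.
# on the instance:
git init --bare /home/ubuntu/site.git
cat > /home/ubuntu/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
# /var/www/html must be writable by the ubuntu user (or use sudo here)
GIT_WORK_TREE=/var/www/html git checkout -f master
EOF
chmod +x /home/ubuntu/site.git/hooks/post-receive
# on your dev machine:
git remote add ec2 ubuntu@ec2.hostname.com:site.git
git push ec2 master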

WinSCP connect to Amazon AMI EC2 Instance changing user after login to "root"

I followed the instructions here carefully; however, I haven't got this working right. Here is what I did:
Run WinSCP, enter the hostname (the Elastic IP of my instance)
enter username "ec2-user"
enter the private key file
chose SCP for the protocol
Under SCP/Shell settings I chose "sudo su -"
Hit Login
WinSCP asks me for the key passphrase; hit OK
This error shows up:
Error skipping startup message. Your
shell is probably incompatible with
the application (BASH is recommended).
NOTE: This works in PuTTY.
With credit to this post and this AWS forum thread, it seems the trick is to
comment out Defaults requiretty in sudoers. My procedure now:
Log in to your EC2 instance using Putty.
Run sudo visudo, a special command to edit /etc/sudoers.
Press the Insert key to start Insert mode.
Find the line Defaults requiretty. Insert a hash symbol (#) before that line to comment it out:
#Defaults requiretty
Press the Esc key to exit Insert mode.
Type :wq to write the file and quit visudo.
In WinSCP:
Under Advanced > Environment > SCP/Shell, change the Shell to sudo su -.
Under SSH > Authentication, choose your Private key file (.ppk file).
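You can verify the requiretty change from your own machine before retrying WinSCP, since requiretty is exactly what blocks sudo over a non-interactive SSH session (the hostname and key path are placeholders):
ssh -i ~/path/to/key.pem ec2-user@your-instance.compute-1.amazonaws.com "sudo whoami"
# prints "root" once requiretty is off (assuming sudo needs no password, as for ec2-user on Amazon Linux)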
WinSCP does not support commands that require terminal emulation or user input.
See: http://winscp.net/eng/docs/remote_command#limitations
Since sudo su - expects a password, it wouldn't work.
There is a way around it: make root log on without being prompted for a password. You can do this by editing your sudoers file, usually located at /etc/sudoers, and adding:
root ALL=NOPASSWD: ALL
Needless to say, this is Not a Very Good Thing To Do - for reasons which should be obvious :)
I was having the same problem and solved it using the steps in this tutorial. I would have posted it here, but I don't have enough rep for images/screens.
http://cvlive.blogspot.de/2014/03/how-to-login-in-as-ssh-root-user-from.html
The following tutorial worked for me and provides helpful screenshots. Logging in as a regular user with sudo permissions just required tweaking a few WinSCP options.
http://cvlive.blogspot.de/2014/03/how-to-login-in-as-ssh-root-user-from.html
Set Session > File protocol to SCP, enter the host/instance IP, the port (usually 22), and the regular username. Enter password credentials if the login requires it.
Add your private key file under Advanced > SSH > Authentication.
Unchecking Advanced > SSH > Authentication > "Attempt keyboard interactive authentication" should allow Advanced > Environment > SCP/Shell > Shell: sudo su - to provide sudo permissions for accessing web server directories as a non-owner user.
Update 08/03/2017
WinSCP logging can be helpful to troubleshoot issues:
https://winscp.net/eng/docs/logging
[WinSCP] Logging can be enabled from the Logging page of the Preferences dialog.
Logging can also be enabled from the command line using the /log and /xmllog
parameters respectively, which is particularly useful with scripting.
In the .NET assembly, session logging is enabled using Session.SessionLogPath.
Depending on WinSCP connection errors, some server installations may need a directive added to the (Ubuntu, CentOS, other-Linux-server) /etc/sudoers file to not require a TTY for a specified user. Creating a file in /etc/sudoers.d/ (using a tool such as the AWS Command Line Interface or PuTTY) may be a better option than editing /etc/sudoers. Some /etc/sudoers versions recommend it:
This file MUST be edited with the 'visudo' command as root.
Please consider adding local content in /etc/sudoers.d/ instead of
directly modifying this file.
See the man page for details on how to write a sudoers file.
When editing a sudoers file (as root) through the command line, the visudo command should be used to open the file, as it will check the file for syntax errors. /etc/sudoers.d/ files are typically owned by root and chmoded with minimal permissions. The default /etc/sudoers file may be used as a reference, as it should have the recommended permissions from installation, e.g. 0440 (r--r-----).
https://superuser.com/a/869145 :
visudo -f /etc/sudoers.d/somefilename
Defaults:username !requiretty
Helpful Links:
Stackoverflow: cloud-init how to add default user to sudoers.d
https://www.digitalocean.com/community/tutorials/how-to-edit-the-sudoers-file-on-ubuntu-and-centos
WinSCP Forum:
https://winscp.net/forum/viewtopic.php?t=3046
https://winscp.net/forum/viewtopic.php?t=2109
WinSCP Doc: https://winscp.net/eng/docs/faq_su
With SCP protocol, you can specify following command as custom shell
on the SCP/Shell page of Advanced Site Settings dialog:
sudo -s
[...]
Note that as WinSCP cannot implement terminal emulation, you need to
have sudoers option requiretty turned off.
The comments in Ubuntu's /etc/sudoers recommend adding directives under /etc/sudoers.d rather than editing /etc/sudoers directly. Depending on the installation, adding the directive to /etc/sudoers.d/cloud-init may work as well.
It may be helpful to create an SSH test user with sudo permissions, following the steps in the instance documentation, so that the user has the recommended instance settings and any updates to the server's sudoers files can be applied and removed without affecting other users.
I enabled SSH root login on a Debian Linux server:
To enable SSH login for the root user on a Debian Linux system, you first need to configure the SSH server. Open /etc/ssh/sshd_config and change the following line:
FROM:
PermitRootLogin without-password
TO:
PermitRootLogin yes
Once you have made the above change, restart your SSH server:
/etc/init.d/ssh restart
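A typo in sshd_config can lock you out of a remote box, so it is worth letting sshd validate the file before that restart:
sudo /usr/sbin/sshd -t    # silent if the config is valid; prints the offending line otherwise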
Then I used the SCP file protocol with the root username in WinSCP.
Under SCP/Shell settings, instead of "sudo su -", choose /bin/bash.
It should work.
