WinSCP: connect to an Amazon AMI EC2 instance and change user to "root" after login - amazon-ec2

I followed the instructions here carefully; however, I haven't gotten this working right. Here is what I did:
Run WinSCP and enter the hostname (the Elastic IP of my instance).
Enter the username "ec2-user".
Enter the public key file.
Choose SCP for the protocol.
Under SCP/Shell settings, choose "sudo su -".
Hit Login.
WinSCP asks me for the key passphrase; hit OK.
This error shows up:
Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended).
NOTE: This works with PuTTY.

With credit to this post and this AWS forum thread, it seems the trick is to
comment out Defaults requiretty in sudoers. My procedure now:
Log in to your EC2 instance using PuTTY.
Run sudo visudo, a special command to edit /etc/sudoers.
Press the Insert key to start Insert mode.
Find the line Defaults requiretty. Insert a hash symbol (#) before that line to comment it out:
#Defaults requiretty
Press the Esc key to exit Insert mode.
Type :wq to write the file and quit visudo.
In WinSCP:
Under Advanced > Environment > SCP/Shell, change the Shell to sudo su -.
Under SSH > Authentication, choose your Private key file (.ppk file).
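Before going back to WinSCP, it can help to confirm that passwordless, TTY-less sudo now works for the login account. A quick check from a local shell might look like this (the key path and host are placeholders):
ssh -i ~/keys/mykey.pem ec2-user@your-elastic-ip "sudo -n true && echo sudo-ok"
# if requiretty were still in effect, sudo would typically refuse with "sorry, you must have a tty to run sudo"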

WinSCP does not support commands that require terminal emulation or user input.
See: http://winscp.net/eng/docs/remote_command#limitations
Since sudo su - expects a password, it wouldn't work.
There is a way around it: make root log on without being prompted for a password. You can do this by editing your sudoers file, usually located at /etc/sudoers, and adding:
root ALL=NOPASSWD: ALL
Needless to say, this is Not a Very Good Thing To Do - for reasons which should be obvious :)
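A slightly narrower variant of the same idea, as a sketch (assuming the login account is ec2-user, and using a drop-in file rather than editing /etc/sudoers directly):
sudo visudo -f /etc/sudoers.d/90-winscp
# inside the editor, add this single line (it lets only ec2-user run sudo without a password):
#   ec2-user ALL=(ALL) NOPASSWD: ALL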

I was having the same problem and solved it using the steps in this tutorial. I would have posted it here, but I don't have enough rep to include images/screenshots.
http://cvlive.blogspot.de/2014/03/how-to-login-in-as-ssh-root-user-from.html

The following tutorial worked for me and provides helpful screenshots. Logging in as a regular user with sudo permissions just required tweaking a few WinSCP options.
http://cvlive.blogspot.de/2014/03/how-to-login-in-as-ssh-root-user-from.html
Set Session/File protocol to SCP, then enter the host/instance IP, the port (usually 22), and the regular username. Enter password credentials if the login requires them.
Add Advanced/SSH/Authentication/Private key file.
Unchecking Advanced/SSH/Authentication/"Attempt keyboard-interactive authentication" should allow setting Advanced/Environment/SCP/Shell/Shell to sudo su -, which provides sudo permissions for accessing web server directories as a non-owner user.
Update 08/03/2017
WinSCP logging can be helpful to troubleshoot issues:
https://winscp.net/eng/docs/logging
[WinSCP] Logging can be enabled from the Logging page of the Preferences dialog.
Logging can also be enabled from the command line using the /log and /xmllog
parameters respectively, which is particularly useful with scripting.
In the .NET assembly, session logging is enabled using
Session.SessionLogPath.
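For example, a scripted transfer can be logged from the command line; the host, key, and log paths below are placeholders:
winscp.com /log="C:\temp\winscp.log" /command "open scp://ec2-user@your-elastic-ip/ -privatekey=C:\keys\mykey.ppk" "ls /var/www" "exit"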
Depending on the WinSCP connection errors, some server installations (Ubuntu, CentOS, or another Linux server) may need a directive added to the /etc/sudoers file so that a TTY is not required for a specified user. Creating a file in /etc/sudoers.d/ (using a tool such as the AWS Command Line Interface or PuTTY) may be a better option than editing /etc/sudoers. Some /etc/sudoers versions recommend it:
This file MUST be edited with the 'visudo' command as root.
Please consider adding local content in /etc/sudoers.d/ instead of
directly modifying this file.
See the man page for details on how to write a sudoers file.
When editing a sudoers file (as root) from the command line, the 'visudo' command should be used to open the file, as it parses the file for syntax errors. Files in /etc/sudoers.d/ are typically owned by root and chmoded with minimal permissions. The default /etc/sudoers file can be used as a reference, since it should have the recommended permissions from installation, e.g. 0440 (r--r-----).
https://superuser.com/a/869145 :
visudo -f /etc/sudoers.d/somefilename
Defaults:username !requiretty
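Putting those pieces together, a minimal sketch of the drop-in approach (the filename and username are illustrative):
sudo visudo -f /etc/sudoers.d/winscp-notty     # visudo validates the syntax on save
# file contents, one line:
#   Defaults:ec2-user !requiretty
sudo chmod 0440 /etc/sudoers.d/winscp-notty    # match the minimal permissions mentioned above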
Helpful Links:
Stackoverflow: cloud-init how to add default user to sudoers.d
https://www.digitalocean.com/community/tutorials/how-to-edit-the-sudoers-file-on-ubuntu-and-centos
WinSCP Forum:
https://winscp.net/forum/viewtopic.php?t=3046
https://winscp.net/forum/viewtopic.php?t=2109
WinSCP Doc: https://winscp.net/eng/docs/faq_su
With SCP protocol, you can specify following command as custom shell
on the SCP/Shell page of Advanced Site Settings dialog:
sudo -s
[...]
Note that as WinSCP cannot implement terminal emulation, you need to
have sudoers option requiretty turned off.
Instructions in Ubuntu's /etc/sudoers recommend adding directives to /etc/sudoers.d rather than editing /etc/sudoers directly. Depending on the installation, adding the directive to /etc/sudoers.d/cloud-init may work as well.
It may be helpful to create an SSH test user with sudo permissions by following the steps in the instance documentation. That ensures the user has the recommended instance settings, and any updates to the server's sudoers files can be applied and removed without affecting other users.
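A rough sketch of such a test user on an Amazon Linux style instance (all names are illustrative, and the key is reused from ec2-user purely for testing):
sudo adduser winscptest
sudo mkdir -p /home/winscptest/.ssh
sudo cp /home/ec2-user/.ssh/authorized_keys /home/winscptest/.ssh/
sudo chown -R winscptest:winscptest /home/winscptest/.ssh
sudo chmod 700 /home/winscptest/.ssh
sudo chmod 600 /home/winscptest/.ssh/authorized_keys
echo 'winscptest ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/winscptest
sudo chmod 0440 /etc/sudoers.d/winscptest
# when finished testing, remove the account and its sudoers entry:
#   sudo userdel -r winscptest && sudo rm /etc/sudoers.d/winscptest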

I enabled SSH root login on a Debian Linux server:
To enable SSH login for the root user on a Debian Linux system, you first need to configure the SSH server. Open /etc/ssh/sshd_config and change the following line:
FROM:
PermitRootLogin without-password
TO:
PermitRootLogin yes
Once you have made the above change, restart your SSH server:
/etc/init.d/ssh restart
Source
Then I used the SCP file protocol with the root username in WinSCP.
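For reference, the same edit can be scripted (a sketch; back up sshd_config first):
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sudo sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo /etc/init.d/ssh restart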

Under SCP/Shell settings, instead of "sudo su -", choose /bin/bash.
It should work.

Related

Why is .pgpass file not supplying a password for the pg_dump, vacuumdb, or reindexdb commands?

I'm trying to execute several different PostgreSQL commands inside of different bash scripts. I thought I had the .pgpass file properly configured, but when I try to run pg_dump, vacuumdb, or reindexdb, I get errors about how a password isn't being supplied. For my bash script to execute properly, I need these commands to return an exit code of 0.
I'm running PostgreSQL 9.5.4 on macOS 10.12.6 (16G1408).
In an admin user account [neither root nor postgres], I have a .pgpass file in ~. The .pgpass file contains:
localhost:5432:*:postgres:DaVinci
The user is indeed postgres and the password is indeed DaVinci.
Permissions on the .pgpass file are 600.
In the pg_hba.conf file, I have:
# pg_hba.conf file has been edited by DaVinci Project Server. Hence, it is recommended to not edit this file manually.
# TYPE DATABASE USER ADDRESS METHOD
local all all md5
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
So, for example, from a user account [neither root nor postgres], I run:
/Library/PostgreSQL/9.5/pgAdmin3.app/Contents/SharedSupport/pg_dump --host localhost --username postgres testworkflow13 --blobs --file /Users/username/Desktop/testdestination1/testworkflow13_$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password
And I get the following error:
pg_dump: [archiver (db)] connection to database "testworkflow13" failed: fe_sendauth: no password supplied
I get the same result if I run this with sudo as well.
Curiously, pg_dump does execute, and does export out a .backup file to the testdestination1 directory, but since it throws an error, if it's in a bash script, the script is halted.
Where am I going wrong? How can I make sure that the .pgpass file is being properly read so that the --no-password flag in the command works?
Please start by reading the official docs.
Also, even though this topic is more than two years old, I strongly suggest updating to at least version 10; in any case, nothing relevant has changed around .pgpass.
.pgpass needs to be chmod 600, which is fine, but the user that uses it must be able to read it, so that user must own the file.
Please remove --no-password; it just adds confusion and is not needed.
Using 127.0.0.1 instead of localhost clarifies where you are connecting; they are "usually" the same.
... from a user account [neither root nor postgres] ...
As said, the user you are running as must have read access to .pgpass, so you have to check that and provide that file to that user; the PGPASSFILE environment variable could be useful for you here.
Another option is a .pg_service.conf file, with or without the .pgpass; from what you have written, that may be more appropriate.
You could also set PGPASSWORD in the user's environment.
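As a rough sketch of those environment-variable options (paths follow the question; adjust to your setup, and note that the host field in .pgpass must match the --host value you pass):
export PGPASSFILE=/Users/username/.pgpass
pg_dump --host localhost --username postgres --format=custom --file /tmp/testworkflow13.backup testworkflow13
# or, less securely, put the password straight into the environment for one call:
PGPASSWORD=DaVinci pg_dump --host localhost --username postgres --format=custom --file /tmp/testworkflow13.backup testworkflow13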
Think about security: some choices look the simplest but can expose credentials. As a DBA I'm frankly tired of people who store passwords in visible places, print them in logs or on GitHub, or set "trust" in pg_hba, and then come to me to say "PostgreSQL is insecure".
Final note: you do not have a pg_hba error; if you did, you would see a "pg_hba" error message.
Turns out that changing all three lines in the pg_hba.conf file to the trust method of authentication solved this.
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
Since the method is trust, the .pgpass file may be entirely irrelevant--I'm not sure, but at least I got it working.

How to call superuser command from script without sudo

I need to call postfix reload from a script accessible from a PHP web page. postfix reload requires superuser privileges. I can do it using echo "password" | sudo ..., but I don't want to give superuser privileges to a script accessible from Apache, nor to write the password there in plaintext. How do you call such a command without creating a security problem? How does software like ISPConfig solve this need?
The user under which Apache is running (e.g. apache) must be allowed to execute "sudo postfix reload" without a password. To do that you need to add the
following line to the '/etc/sudoers' file:
apache ALL = NOPASSWD: /path/to/postfix reload
In the script, I recommend using 'sudo /path/to/postfix reload', since the postfix binary might not be in the apache user's default path.
Regarding security, you need to make sure that this command will not be launched too often, since it might cause performance issues.
Since the command specifies an argument, even if your site were compromised, the sudo rule would only allow that specific postfix reload action, with no possibility of altering the behavior (as long as sudo and postfix are up to date).
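A sketch of that setup, assuming the web server user is apache and postfix lives at /usr/sbin/postfix (verify the path with "which postfix"); the drop-in file name is illustrative:
echo 'apache ALL = NOPASSWD: /usr/sbin/postfix reload' | sudo tee /etc/sudoers.d/postfix-reload
sudo chmod 0440 /etc/sudoers.d/postfix-reload
sudo visudo -cf /etc/sudoers.d/postfix-reload      # check the syntax before relying on it
# the PHP page can then run: shell_exec('sudo /usr/sbin/postfix reload');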

How can I run a sudo command in a Bash script?

I want to run the following sample bash script, which needs the sudo password for one command:
#!/bin/bash
kinit #needs sudo password
vi hello.txt
While running the above script, it asks for a password.
How can I pass the username and password in the command itself, or is there a better way to avoid typing my password in the script?
TL;DR
You can't—at least, not the way you think.
Longer Answer with Alternatives
You have a couple of options:
Authenticate interactively with sudo before running your script, e.g. sudo -v. The credentials will be temporarily cached, giving you time to run your script.
Add a specific command such as /usr/lib/klibc/bin/kinit to your sudoers file with the NOPASSWD option. See sudoers(5) and visudo(8) for syntax.
Use gksudo(1) or kdesu(1) with the appropriate keyring to cache your credentials if you're using a desktop environment.
One or more of these will definitely get you where you want to go—just not the way you wanted to get there.
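For instance, the first option is just a matter of priming sudo's credential cache before the script runs (the script name is illustrative):
sudo -v          # prompts for your password once and caches the credentials (typically ~15 minutes)
./myscript.sh    # sudo commands inside the script won't prompt while the cache is valid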
So if you have access to your full system, you can change your sudoers file to allow certain sudo commands to be run w/o a password.
On the command line run visudo
Find your user and change the line to look something like this:
pi ALL=(ALL) NOPASSWD: /path/to/kinit, /path/to/another/command
That should do it. Give it another shot!
Hope that helps
You shouldn't pass username and password. This is not secure and it is not going to work if the password is changed.
You can use this:
gksudo kinit # This is going to open a dialog asking for the password.
#sudo kinit # or this if you want to type your password in the terminal
vi hello.txt
Or you can run your script as root. But note that vi is going to be run as root as well, which means that it will probably create files that belong to root, which might not be what you want.

How do I connect to my ec2 instance using Cyberduck with privileges?

I try to log in using ec2-user but for some reason the login fails.
Using the username ubuntu, I am able to log in just fine; however, I don't have any privileges and I can't sudo su to get write access to my files. I tried using Cyberduck's terminal and send-command options, but sudo su doesn't work with them. Cyberduck just spins.
I don't think the ec2-user account works on recent Ubuntu AMIs, which may explain the failed login.
You can approach this in a few ways. The first is to create a new user account specifically for FTP and give it permissions only to the necessary folders. First create the user, then create a public/private key pair for non-interactive login. This will allow you to operate your FTP client like normal.
My preferred solution is to upload the files to the ubuntu home directory and then run a script as root that moves the files to the correct location. You won't have to modify the system configuration this way, but you will have to do the file transfer in two steps.
Create a staging folder in /home/ubuntu and copy the files there. Create a /home/ubuntu/copy.sh script on the server like this:
#!/bin/bash
# run the copy through sudo; this will only work if sudo doesn't prompt for a password
sudo cp -r /home/ubuntu/stage/* /var/www/html/
Then from your dev machine, call the script:
$ ssh -i ~/path/to/key.pem ubuntu@ec2.hostname.com /home/ubuntu/copy.sh
If you want to get really fancy, you could set up a git repository and use a post-receive hook to handle this all for you when you push. No need for an FTP client at all.

EC2 non root user login

Is there a way to log into an EC2 Ubuntu AMI, or a way to set up an Ubuntu AMI, so that non-root users can log in? I tried creating a user and logging in with the associated password. I also tried using the private key: I copied the authorized_keys file into the .ssh directory of the non-root user's home directory and tried to log in to the box with that user account. Neither method worked.
Thanks in advance.
So, this works, but the missing high-order bit of information here has to do with setting the right ownership and permissions on the authorized_keys file in the user's home directory. So, I copied /root/.ssh to /home/user and fixed the ownership:
cp -r /root/.ssh /home/user          # copy the .ssh directory, including authorized_keys
chown -R user /home/user/.ssh        # the key files must be owned by the login user
This allowed me to use the keypair.pem file to log in.
Make sure you are sending your AWS keypair as the identity file, i.e.
ssh -i ~/.ssh/keypair.pem user@ec2-174-129-xxx-xx.compute-1.amazonaws.com
Also check that SSH is enabled in your security group
Assuming you would like to have users log in with a password so they need not supply a key every time, all you must do is turn on the ability to SSH in with a password. This option is turned off by default in all Linux AMIs.
Edit the following file with root privileges (using vi, nano, pico, etc.):
sudo vi /etc/ssh/sshd_config
Change the following setting to yes:
PasswordAuthentication yes
Finally, you must restart SSH. (Since you are SSHed onto a remote machine, a simple reboot is fine.)
That's it! Of course, you must still add users with the adduser command and give them passwords with the passwd command for them to be able to log in to your AMI. Check out this link for more info on the OpenSSH SSH client configuration files.
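A sketch of those last steps on an Ubuntu instance (the username is illustrative):
sudo adduser newuser            # on Ubuntu/Debian, adduser prompts for the new password interactively
sudo passwd newuser             # or set/change the password explicitly
sudo service ssh restart        # apply the PasswordAuthentication change without a full reboot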
