How do I connect to my EC2 instance using Cyberduck with privileges? - macOS

I try to log in using the ec2-user account, but for some reason the login fails.
Using the username ubuntu I am able to log in just fine; however, I don't have any privileges, and I can't sudo su to get the privileges to write to my files. I tried using Cyberduck's terminal and send-command options, but sudo su doesn't work with them. Cyberduck just spins.

I don't think the ec2-user account works on recent Ubuntu AMIs, which may explain the failed login.
You can approach this in a few ways. The first is to create a new user account specifically for FTP and give it permissions only to the necessary folders. First create the user, then create a public/private key pair for non-interactive login. This will allow you to operate your FTP client like normal.
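A rough sketch of that first approach (the username deploy and the web root /var/www/html are placeholders, not from the original answer):
# on the instance, as the ubuntu user
sudo adduser deploy
sudo mkdir -p /home/deploy/.ssh
sudo tee /home/deploy/.ssh/authorized_keys < deploy_key.pub   # public key you generated locally
sudo chown -R deploy:deploy /home/deploy/.ssh
sudo chmod 700 /home/deploy/.ssh
sudo chmod 600 /home/deploy/.ssh/authorized_keys
sudo chown -R deploy /var/www/html   # limit this to the folders that user should touch
You can then connect with Cyberduck as deploy using the matching private key.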
My preferred solution is to upload the files to the ubuntu home directory and then run a script as root that moves the files to the correct location. You won't have to modify the system configuration this way, but you will have to do the file transfer in two steps.
Create a staging folder in /home/ubuntu and copy the files there. Create a /home/ubuntu/copy.sh script on the server like this:
#!/bin/bash
# Requires that sudo does not prompt the ubuntu user for a password
sudo cp -r /home/ubuntu/stage/* /var/www/html/
Then from your dev machine, call the script:
$ ssh -i ~/path/to/key.pem ubuntu@ec2.hostname.com /home/ubuntu/copy.sh
If you want to get really fancy, you could set up a git repository and use a post-receive hook to handle this all for you when you push. No need for an FTP client at all.
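As a rough sketch of that git approach (the bare repository path /home/ubuntu/site.git and the branch name master are assumptions): create a bare repo on the instance, add it as a remote on your dev machine, and drop something like this into site.git/hooks/post-receive (marked executable):
#!/bin/bash
# hypothetical post-receive hook: check out whatever was pushed into the web root
GIT_WORK_TREE=/var/www/html git checkout -f master
The ubuntu user needs write access to /var/www/html for this to work, or the hook can call the copy script above via passwordless sudo.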

Related

How can I sudo inside of a bash script?

I have a post-commit hook in my Subversion repository that exports a copy of my repo to a desired location for deployment. That part works fine, but the files come in owned by apache:apache. I need this to be changed to prod_user:prod_user. If I try to add a chown statement in my script, it fails. If I try to use sudo, it asks for a password that I can't give because this is happening in a post-commit script. I'd like this to be as automated as possible.
My question is: How can I make this work? I need to export the contents of my repo to the production folder and convert the users/groups to match existing production users/groups.
Is there a way to pass my password as an argument to a sudo command?
Thank you for your help!
Is there a way to pass my password as an argument to a sudo command?
Don't do it, if at all possible. This will leak your password to anyone that can read the script.
But if you can't avoid it, use echo <password> | sudo -S <command> - the -S flag makes sudo read the password from stdin, so you can pipe it in that way.
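In the context of the post-commit hook, that would look something like this (the password and target path are placeholders; the caveat above about leaking the password still applies):
# hypothetical line inside the svn post-commit hook
echo 'MyPassword' | sudo -S chown -R prod_user:prod_user /path/to/production/folder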
Don't do any of sudo, chown, chgrp. It is not the responsibility of the uploader to fix permissions on the remote server.
Have the server administrator set these up properly, so that pushing production files from the repository works straight away without messing with sudo permissions on the server.
If you are one and the same person, then take the time to fix the server side so that a remote user does not have to elevate its privileges (even temporarily with sudo) just to fix the permissions of uploaded files.
Use crontab -e as the root user; a root cron job can then change ownership without the hook needing to escalate privileges (see the sketch below).
Or run the export as prod_user and have it check out the code itself... then it is already the owner of the files.
Keeping a file with the last deployment timestamp lets you compare against the HEAD timestamp to decide whether anything needs to be done.
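A minimal sketch of the root-cron idea, assuming the export lands in /var/www/prod (a hypothetical path) and should end up owned by prod_user:
# as root: crontab -e, then add a line like this (runs every 5 minutes)
*/5 * * * * chown -R prod_user:prod_user /var/www/prod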

Synology default SSH login directory

When I connect to my Synology server via SSH, by default I land in the root directory. I want to change this to my user's home folder. I am sort of able to do this via name@server 'cd /volume1/<user> ; bash', but then I get a different bash session compared to how I normally 1) log in via name@server and 2) then do cd /volume1/<user>. I would like to get the result of the latter, but in a single command. What is the best method for doing this?
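One thing that usually reproduces a normal interactive login in a single command (a sketch, not from the original thread) is to force a TTY and start a login shell after changing directory:
ssh -t name@server 'cd /volume1/<user> && exec bash -l'
The -t flag allocates a terminal, and bash -l reads the usual login profile, which the plain 'cd ... ; bash' invocation skips.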

CentOS 6.2 Jailing sftp account

I have been tasked with setting up a CentOS 6.2 development box (even though I do not know Linux) and am currently using vsftpd to FTP into a box at work. The problem is that SFTP is not working.
This is the error I am getting:
Authentication failed. Error: Critical error Error: Could not connect to server
I have added the user by doing the following:
sudo useradd -d /var/www/PATH -s /usr/sbin/nologin USERNAME
sudo passwd USERNAME
sudo chown -R USERNAME /var/www/PATH
sudo chmod 755 /var/www/PATH
It works for FTP (and the folder structure is jailed), but it does not work with SFTP.
However, when I add a user the following way:
sudo useradd USERNAME
sudo passwd USERNAME
sudo chown -R USERNAME /opt/USERNAME
sudo chmod 777 /opt/USERNAME
I have unjailed SFTP access and no FTP access.
It does not matter if I have to create multiple accounts (one for FTP and one for SFTP), but they do have to be jailed to the directory.
If there is a better solution to my problem, help would be welcomed!
Thanks,
Matt
You are on the right track.
Personally I am using chrooting of the sftp user as described here: http://www.thegeekstuff.com/2012/03/chroot-sftp-setup/
IMHO the article does not stress enough that the user's home directory has to be owned by root:
# ls -ld /var/www/PATH
drwxr-xr-x 3 root root 4096 Dec 28 23:49 /var/www/PATH
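For reference, the relevant /etc/ssh/sshd_config pieces of that kind of chroot setup look roughly like this (the group name sftponly is an assumption; add your SFTP users to that group and adjust the chroot path):
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /var/www/PATH
    ForceCommand internal-sftp
    AllowTcpForwarding no
Restart sshd afterwards (service sshd restart on CentOS 6). The ChrootDirectory itself must be owned by root and not be group-writable, which is why the ownership shown above matters.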
You can get a lot of helpful info from the logs; in this case you can watch
tail -f /var/log/secure
while connecting from the external host.
Let me know if you need any more help with this problem.
I've created a script for this purpose; you can use it whatever distribution you are using. It works on both RHEL-based systems and Debs as well, to create a jailed SFTP directory with no shell access, only SFTP.
SFTP Jailing with no shell access

How to upload files and folders to AWS EC2 instance?

I use SSH to connect to my Ubuntu instance. With SSH I can administer files and folders on the instance, but how do I upload files and folders from my local machine to the instance?
Is it possible to do this right from an SSH session, without using SFTP clients?
Just to add a bit more detail to the scp command (included in OS X and most Linux/Unix systems):
scp -i myssh.pem local_file username@200.200.200.200:/home/username
Obviously, replace the pem file with the one used for your SSH access, and replace "username" and "200.200.200.200" with valid values for your setup.
You can try the kitten utility, which is a wrapper around boto3. You can easily upload/download files and run commands on an EC2 server, or on multiple servers at once for that matter.
kitten put -i ~/.ssh/key.pem cat.jpg /tmp [SERVER NAME][SERVER IP]
Where the server name is e.g. ubuntu or ec2-user, etc.
This will upload the cat.jpg file to the /tmp directory of the server.
As mentioned already, I've used WinSCP, which logs me in as "ec2-user" - then make sure to adjust that user's permissions via SSH. Example:
chown -R ec2-user /path/to/files
(Authenticate as the root user first.)
Whatever folder or files you need to edit via WinSCP, allow permissions on them (otherwise you will get a permission denied error when trying to upload/edit files in WinSCP).
You cannot copy files using ssh itself; you can use scp/sftp.
Use scp if you are on Linux, or WinSCP if you are on Windows.
You can use this:
scp -i yourkeypair.pem source destination
This works fine:
scp -r -i myssh.pem /local/directory remote_username@10.10.0.2:/remote/directory
-r for recursive
You could also install and set up an FTP server, which will allow you to set up users and directories for them to upload to. That being said, I've upvoted the above because scp/sftp is the ideal method.
The easiest way is to install Webmin and use the file manager (Java plugin) from your browser.
# Go to home folder
cd ~
# Download the latest version
wget http://prdownloads.sourceforge.net/webadmin/webmin-1.660-1.noarch.rpm
# Install
sudo rpm -U webmin-1.660-1.noarch.rpm
# Change default password of root user
passwd
Finally, open port 10000 in the security groups
Then log in at
https://server_name:10000
with user root and the password you set above.

EC2 non root user login

Is there a way to log into an EC2 Ubuntu AMI, or a way to set up an Ubuntu AMI, so that non-root users can log in? I tried creating a user and logging in with the associated password. I also tried using the private key: I copied the authorized_keys file into the .ssh directory of the non-root user's home directory and tried to log in to the box with that user account. Neither method worked.
Thanks in advance.
So, this works, but the missing high-order bit of information here has to do with setting the right permissions on the authorized_keys file in the user's home directory. I copied /root/.ssh/authorized_keys over to /home/user by doing the following:
cp -r /root/.ssh /home/user
chown -R user /home/user/.ssh
This allowed me to use the keypair.pem file to log in.
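For completeness, sshd is also strict about file modes, so if the login still fails it is worth setting them explicitly (standard OpenSSH requirements, not part of the original answer):
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys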
Make sure you are sending your AWS keypair as the identity file, i.e.
ssh -i ~/.ssh/keypair.pem user@ec2-174-129-xxx-xx.compute-1.amazonaws.com
Also check that SSH (port 22) is allowed in your security group.
Assuming you would like to have users log in with a password so they need not supply a key every time, all you must do is turn on the ability to SSH in with a password. This option is turned off by default in all Linux AMIs.
Edit the following file with root privileges (using vi, nano, pico, etc.):
sudo vi /etc/ssh/sshd_config
Change the following setting to yes:
PasswordAuthentication yes
Finally, you must restart the SSH service, e.g. with sudo service ssh restart (since you are SSHed into a remote machine, a simple reboot also works).
That's it! Of course, you must still add users with the adduser command and give them passwords with the passwd command for them to be able to log in to your AMI. Check out this link for more info on the OpenSSH SSH client configuration files.
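Put together, the steps on the instance look roughly like this (the username newuser is a placeholder):
sudo adduser newuser            # create the account and set its password when prompted
sudo vi /etc/ssh/sshd_config    # set PasswordAuthentication yes
sudo service ssh restart        # reload sshd so the change takes effect (on Ubuntu)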
