Uploading to EC2 problems. How do you do FTP? - ftp

I have set up a new EC2 instance on AWS and I'm trying to get FTP working to upload my application. I have installed vsftpd as standard, so I haven't changed anything in the config file (/etc/vsftpd/vsftpd.conf).
I have not opened port 21 in the security group, because I'm tunneling it through SSH. I log into my EC2 instance through the terminal like so:
sudo ssh -L 21:localhost:21 -vi my-key-pair ec2-user@ec2-instance
I open up FileZilla and log into localhost. Everything goes fine until it comes to listing the directory structure. I can log in, and everything seems fine, as you can see below:
Status: Resolving address of localhost
Status: Connecting to [::1]:21...
Status: Connection established, waiting for welcome message...
Response: 220 Welcome to EC2 FTP service.
Command: USER anonymous
Response: 331 Please specify the password.
Command: PASS ******
Response: 230 Login successful.
Command: OPTS UTF8 ON
Response: 200 Always in UTF8 mode.
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: EPSV
Response: 229 Entering Extended Passive Mode (|||37302|).
Command: LIST
Error: Connection timed out
Error: Failed to retrieve directory listing
Is there something I'm missing in my config file? A setting which needs to be set or turned off? I thought it was great that it connected, but when it timed out you could picture my face. It meant time to start trawling the net to try and find the answer, so far with no luck.
I'm using the standard 64-bit Amazon AMI with a traditional LAMP setup.
Can anyone steer me in the right direction? I have read a lot of guides on getting this working, but they are all incomplete, as if the authors got bored halfway through typing them up.
I would love to hear how you guys do it as well, if it makes life easier. How do you upload your apps to an EC2 instance? (Steps please - it saves a lot of time, plus it is a great resource for others.)

I figured it out, after being pointed in the right direction by Antti Haapala.
You don't even need vsftpd set up on the instance at all. All you have to do is get the settings right in FileZilla.
This is what I did (I'm on a Mac, so it should be similar on Windows):
Open up FileZilla and go to Preferences.
Under Preferences, click SFTP and add a new key. This is the key pair for your EC2 instance. You will have to convert it to the format FileZilla uses; FileZilla will prompt you for the conversion.
Click OK and go back to the Site Manager.
In the Site Manager, enter your EC2 public address; this can also be your Elastic IP.
Make sure the protocol is set to SFTP.
Enter ec2-user as the username.
Leave the password field blank.
All done! Now connect.
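If you prefer the terminal to FileZilla, the same connection should work with the stock command-line sftp client. A minimal sketch, assuming the key file is my-key-pair.pem and ec2-public-address is your instance's public DNS name or Elastic IP (both are placeholders):
chmod 400 my-key-pair.pem
sftp -i my-key-pair.pem ec2-user@ec2-public-address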
That's it, you can now traverse your EC2 system. There is a catch, though: because you are logged in as ec2-user and not root, you will not be able to modify anything. To get around this, change the group ownership of the directory where your application will live (/var/www/html or whatever). I would put it on an EBS volume. ;) Also make sure this group has read, write and execute permissions. The group for ec2-user is ec2-user. Leave everyone else with nothing. So these are the commands to run while logged in via SSH:
sudo chgrp ec2-user file/folder
sudo chmod 770 file/folder
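If your application has nested folders, you will probably want the group change to apply recursively. A sketch, assuming the app lives in /var/www/html:
sudo chgrp -R ec2-user /var/www/html
sudo chmod -R 770 /var/www/html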
Hope this helps someone.

FTP is a very troublesome protocol because it requires a secondary connection for the actual data transfer, and it definitely does not work well when tunneled: your SSH tunnel forwards only port 21, the control connection, so the passive-mode data connection (port 37302 in the log above) never gets through, which is why LIST times out. With SSH you should use SFTP, which despite its name has nothing to do with FTP; it is a completely different protocol.
Read more about it on Wikipedia.

Adding the key to www is a recipe for disaster! Any minor issue with your app will become a security nightmare.
As an alternative to FTP, consider using rsync, or a more "mature" deploy strategy based on Capistrano, for instance. There are plenty of tools for that around.
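For instance, a one-line rsync-over-SSH deploy might look something like this; the local ./app/ directory, the key file name, and the remote path are placeholders, not anything from the question:
rsync -avz -e "ssh -i my-key-pair.pem" ./app/ ec2-user@ec2-instance:/var/www/html/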

Antti Haapala's tips are the right way to get SFTP working on EC2. It works just fine! Just note that you need to create the /var/www/.ssh/ folder and copy the authorized_keys file there.
After that you'll need to change the ownership of authorized_keys to www-data so the SSH connection can recognize it. Amazon should let people know that. I looked for this in their forums, FAQ, etc. No clue at all... Cheers once more to Stack Overflow, the way to go haha!
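For reference, the steps described in this comment look roughly like this; it's a sketch that assumes www-data's home directory is /var/www and that the existing key lives in ec2-user's authorized_keys:
sudo mkdir -p /var/www/.ssh
sudo cp /home/ec2-user/.ssh/authorized_keys /var/www/.ssh/
sudo chown -R www-data:www-data /var/www/.ssh
sudo chmod 700 /var/www/.ssh
sudo chmod 600 /var/www/.ssh/authorized_keys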

Related

Bandit War Game, correct command but permission denied?

I remember playing the Bandit War game in uni, so I felt like giving it another shot this weekend to refresh some knowledge.
Aaaand I'm stuck on level 0. I am quite certain this is the correct command, so I am wondering if I am missing something, or whether there is some kind of configuration issue?
Level 0 gives you the address, the username, the port and the password. So you do an old-school login without any files etc.
This is what I went for:
ssh bandit0@bandit.labs.overthewire.org -p 2220
Also tried
ssh bandit.labs.overthewire.org -p 2220 -l bandit0
but that should be the same.
I would expect to be prompted for the password, but instead I get
This is a OverTheWire game server. More information on
http://www.overthewire.org/wargames
bandit0@bandit.labs.overthewire.org: Permission denied
(publickey,password).
Check your SSH config in case you are stuck like me.
I had these lines in it:
Host *
PreferredAuthentications publickey
which is why it did not work. Add a Host entry for the wargame server and switch to the preferred authentication method for the given level, as in the sketch below.
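For example, an entry along these lines in ~/.ssh/config (host name taken from the level 0 description) overrides the wildcard for the game server only:
Host bandit.labs.overthewire.org
    PreferredAuthentications password
    PubkeyAuthentication no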

GPG Can't connect to S.gpg-agent: Connection Refused

I am attempting to set up gpg preset passphrase caching with the gpg-agent so I can automate my file encryption process. In order for the gpg-agent to run and properly cache the passphrase, there seems to need to be an S.gpg-agent socket located within the ~/.gnupg/ directory, which gets generated in root's home directory when I set up gpg and gpg-agent.
What I have done (and what seemed to work in the past) is start everything up as root, copy the contents of the /.gnupg directory over to my less privileged user, and grant that user permissions on the socket and the directory. The commands I ran to start up the gpg-agent daemon and cache the passphrase:
gpg-agent --homedir /home/<user>/.gnupg --daemon
/usr/libexec/gpg-preset-passphrase --preset --passphrase <passphrase> <keygrip>
The gpg-agent process seems to be running just fine, but I get the error below from the second command:
gpg-preset-passphrase: can't connect to `/home/<user>/.gnupg/S.gpg-agent': Connection refused
gpg-preset-passphrase: caching passphrase failed: Input/output error
I have made sure the socket exists in the directory with the proper permissions, and this process runs as root. It seems that the socket is still inherently tied to root even if I copy it and modify its permissions. So my questions are:
How exactly does this socket get initialized?
Is there a way to do so manually as another user?
To add, the agent process seems to run just fine for both users, but where I get a little hazy is how gpg-preset-passphrase uses the socket, and whether it is that tool or the agent that is refusing the connection to S.gpg-agent.
I also assume that I don't need to explicitly start the agent, but figured I would do so anyway so that I could set values such as the homedir if needed.
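One way to sanity-check which socket the tools are actually talking to is gpg-connect-agent; a small sketch, assuming GnuPG 2.x and keeping the <user> placeholder from above:
gpg-connect-agent --homedir /home/<user>/.gnupg 'GETINFO socket_name' /bye
This should print the socket path the agent is listening on, which you can compare with the path in the error message.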
It turns out the issue was unrelated to gpg-agent and gpg-preset-passphrase.
Note: This is not a permanent solution but it did allow me to get past the issue I was facing.
After modifying /etc/selinux/config and disabling SELinux, I no longer experienced the permissions issue above. SELinux is a Linux kernel security module developed by Red Hat (I am currently running this on RHEL 7). It seems the next step will likely be to make sure these binaries and packages are allowed access from my user using audit2allow. A bit more information on this here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-allowing_access_audit2allow
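In case it helps someone, the usual audit2allow workflow looks something like this; gpglocal is just a placeholder module name:
ausearch -m avc -ts recent | audit2allow -M gpglocal
semodule -i gpglocal.pp
That builds and installs a local policy module from the recent denials in the audit log, which avoids leaving SELinux disabled.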

Google Cloud Platform - SSH/Telnet

I am running apps on Compute Engine. I run on a Windows box and use Putty to connect to the CE. This pretty much seems to work fine (leaving aside the problems in the Google doc on this).
I have set up another user who I want to enable for SSH (on a Mac) and have her use FileZilla to push files to the CE.
I am trying it out on my own Mac. I set up 2 firewall rules with 2 different priorities for tcp:22:
myssh Apply to all IP ranges: 0.0.0.0/0 tcp:22 Allow 1000 default
default-allow-ssh Apply to all IP ranges: 0.0.0.0/0 tcp:22 Allow 65534 default
The user has the following permission on the Project: "Compute Instance Admin (v1)".
On the Mac terminal I do the following:
ssh-keygen -t rsa -f ~/.ssh/userfirstname-ssh-key -C [googleusername.gmail.com]
I go to the GCP Compute Engine Metadata page (logged in as myself), copy the contents of userfirstname-ssh-key.pub into Metadata/SSH Keys, and save.
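(If memory serves, Compute Engine parses each line of that metadata as USERNAME:KEY, and the USERNAME prefix has to match the login name you later pass to ssh; an entry would look something like this, with the key truncated:
googleusername:ssh-rsa AAAA... googleusername
A mismatched prefix is one common cause of "Permission denied (publickey)".)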
After GCP gives the ok on the key being added I enter the following in the Mac terminal:
ssh -i [userfirstname]-ssh-key [googleusername.gmail.com]@gcp-external-ip
Depending on I-don't-know-what, sometimes it says "Permission denied (public key)" and sometimes "Operation timed out".
I've repeated this a few times, and I have also tried to telnet to the gcp-external-ip; that gives "Operation timed out" and telnet: Unable to connect to remote host.
At a complete loss. Please help.
You could (and should) use the gcloud command-line tools. Then it is easiest to simply copy the correct gcloud command from the Web Console; there is a little drop-down menu next to "SSH" for each of your instances.
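The copied command usually looks something like this; the instance name and zone here are placeholders:
gcloud compute ssh my-instance --zone us-central1-a
gcloud creates and distributes the SSH key pair for you, which sidesteps the manual metadata editing described in the question.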

Kerberos Security Error

I am having a problem with my server and so far couldn't find any solution for it. When I try to add a server from Server Manager (Windows Server 2012), all I see is the Kerberos security error. Both servers are in the same domain (I have tried from several servers in the domain and got the same error).
The strange thing is that when I unjoin the problematic server from the domain and rejoin it with another name, it works normally. But the problem is to make it work with the existing name. Any help will be highly appreciated.
Thanks in advance.
Late reply, but I've just encountered the same error and hope this solution proves useful to others.
Situation: I had to wipe and reinstall a virtual server for which I'd previously had to set some Service Principal Names, plus some SPNs for a service account. It turned out the SPNs were still there for the old server/account and I had to remove them.
I recommend checking for and removing rogue SPNs to resolve this. Use the following commands in an elevated command prompt:
setspn -l <servername/username>
In my case I had problems with MBAM, the Bitlocker admin tool, so for example I used:
setspn -l mbam01
Which gave me the output (changed names to protect the innocent):
Registered ServicePrincipalNames for CN=MBAM01,OU=Member Servers,DC=corp,DC=domainname,DC=com:
termserv/mbam01.corp.domainname.com
termserv/mbam01
http/mbam01.corp.domainname.com
http/mbam01
HOST/MBAM01
HOST/mbam01.corp.domainname.com
This will list the SPNs associated with the server or user account. Then you remove the errant SPNs with this command:
setspn -d <listed service> <servername/username>
In my case it turned out the mbamapppool user had http/mbam01 and http/mbam01.corp.domainname.com associated with it, causing Server Manager to fail to poll the server. I removed the http/ refs from the user and then added them to the server with the following commands:
setspn -d http/mbam01 corp\mbamapppooluser
setspn -d http/mbam01.corp.domainname.com corp\mbamapppooluser
setspn -s http/mbam01 mbam01
setspn -s http/mbam01.corp.domainname.com mbam01
I then refreshed Server Manager and it polled the server successfully, and the Kerberos Security Error had gone.
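One extra step that may be needed in similar cases: cached Kerberos tickets can keep the old SPN mappings alive until they expire, so it's worth purging them before retesting. Both commands run in an elevated prompt; the second targets the SYSTEM account's ticket cache:
klist purge
klist -li 0x3e7 purge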

Logging into SFTP, require help chmoding pem file

I'm very, very new to Amazon EC2 and SFTP, having only used FTP clients until now. I'm trying to log into an Amazon EC2 instance, and I have everything I need, including a pem file with the key pair, which I just received. However, I was told to chmod the file and reset its permissions to 400 in order to log in correctly. The problem is I have no idea how to go about doing this. There is talk of just entering chmod 400 keyfile.pem at the command line, but is that the Windows command line on my desktop? How can I do this? Any help would be much appreciated...
Either log in using an SSH client (like PuTTY).
Using it, you can execute the command you mention (chmod 400 keyfile.pem) on a command-line.
See also SSH to Amazon EC2 instance using PuTTY in Windows.
Or you can use a GUI SFTP client (like WinSCP) to set the permissions.
See https://winscp.net/eng/docs/ui_properties
Make sure only the R checkbox in the Owner row is ticked (that's the equivalent of the 400 permissions in octal format).
(I'm the author of WinSCP)
