FTP on LAMP stack - Google Cloud Platform

So I installed a LAMP stack on a Google Cloud instance with Debian Wheezy 7. Everything is working fine, but I am not able to get FTP working. I am following this tutorial by DigitalOcean.
I am stuck at the last step, where I need to make vsftpd allow the user to write outside the chroot jail.
The error I get is:
hetunandu_gmail_com@lamp:~$ mkdir /root/hetunandu/files
mkdir: cannot create directory '/root/hetunandu/files': Permission denied
Then when I use sudo with it I get this error:
hetunandu_gmail_com@lamp:~$ sudo mkdir /root/hetunandu/files
mkdir: cannot create directory '/root/hetunandu/files': No such file or directory
Where do I go from here?
Also, I don't know how to set up my username and password for FTP.

I followed the tutorial and could not replicate your issue. I initially got "Permission denied" but you can circumvent this by running:
$ sudo su
and then
$ mkdir -p /root/$USER/files
Why not use /home/$USER? I'm not sure why you want to create the folders under /root.
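If /home/$USER works for you, a minimal sketch of that route, avoiding a root shell entirely (assuming your login user should own the new directory; adjust names as needed):
$ sudo mkdir -p /home/$USER/files
$ sudo chown -R $USER:$USER /home/$USER/files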
As for your second question, regarding the username and password, I am not sure I understand. From the Developers Console > Compute Engine > VM Instances > click SSH, and that should log you in with root privileges. Then you can create all the users you want:
$ sudo adduser test_user
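If you need to set or reset that user's password afterwards (with vsftpd's local_enable=YES, the system username and password double as the FTP credentials), a minimal sketch:
$ sudo passwd test_user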

Please don't use FTP, as it's an insecure clear-text protocol which will let others see your password and easily gain access to your instance, read/modify/delete your files, etc.
Instead, you should use secure protocols such as SCP or SFTP with public key authentication.
Here are some options to transfer files to/from your GCE VM instance (see the sketch after this list):
sftp CLI tool, as described in this answer
gcloud compute copy-files, as described in this answer
WinSCP with SFTP
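For example, a rough sketch of the gcloud route (the instance name and zone below are placeholders, not values from the question):
$ gcloud compute copy-files test.txt my-instance:~/ --zone us-central1-a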

Related

Accessing Digital Ocean Server

Trying to access my DigitalOcean server via a Mac terminal:
ssh root@myipaddress
I'm then prompted for a password (I've never been prompted before, as I've left it blank intentionally.). After 3 failed attempts I get:
Permission denied (publickey,password).
I have also tried entering the ssh key for the server and get the same outcome.
I tried adding the key to my SSH-agent and get the following:
WARNING: UNPROTECTED PRIVATE KEY FILE!
Permissions 0644 for '/Users/xxxxx/.ssh/id_rsa.xxx.pub' are too open
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
I've tried contacting D.O. but have yet to hear back. Any help is greatly appreciated!
I was finally able to access the server via:
ssh -i <keyfile> <user>@<hostname>
Seems like there was an issue with my machine defaulting to my personal id_rsa file instead of the one that was created for this server.
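For anyone else hitting this: you can make a specific key the default for one host with an entry in ~/.ssh/config. A minimal sketch (the alias, IP, and key path below are placeholders, not values from the question):
Host droplet
    HostName <your-server-ip>
    User root
    IdentityFile ~/.ssh/id_rsa.droplet
After that, a plain ssh droplet picks up the right key automatically.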
Have you tried the solution in: ssh "permissions are too open" error? Looks like the permissions on your private key are not restrictive enough.
WARNING: UNPROTECTED PRIVATE KEY FILE!
Permissions 0644 for '/Users/xxxxx/.ssh/id_rsa.xxx.pub' are too open
The messages above point at the problem: the permissions on your key files are too open. Try the permissions below.
# chown user:user ~/.ssh/*
# chmod 600 ~/.ssh/private_key
# chmod 644 ~/.ssh/public_key.pub
Try with these permissions. From the logs, the issue appears to be unprotected permissions/ownership on the key files.
Also try ssh -i ~/.ssh/private_key user@<IP> -vvv for more insight :)

I can't download phpMyAdmin.conf from Amazon EC2

I have to change some settings inside "etc/httpd/conf.d/phpMyAdmin.conf", but I can't download this file using FileZilla. I also tried the sudo nano command in PuTTY, but it returns an empty file. I don't know how to change the permissions on this file.
I have spent more than an hour on this. Please guide me if you know how to resolve it.
EC2 is a computer rental service, not a web hosting service, so you won't be able to connect with FTP (FileZilla) unless you run an FTP server on your EC2 instance.
As for editing the file while you're connected through SSH (putty), you need to make sure that you're properly referencing the file you want. Try running "sudo nano /etc/httpd/conf.d/phpMyAdmin.conf". Note the leading "/" on the file path; it's important.
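If you'd rather edit it locally, you can also fetch the file over SSH instead of FTP. A rough sketch, assuming an Amazon Linux instance where the default user is ec2-user (the key path and host are placeholders):
$ scp -i /path/to/key.pem ec2-user@<instance-ip>:/etc/httpd/conf.d/phpMyAdmin.conf .
$ # edit the local copy, then push it back via a world-writable location and sudo:
$ scp -i /path/to/key.pem phpMyAdmin.conf ec2-user@<instance-ip>:/tmp/
$ ssh -i /path/to/key.pem ec2-user@<instance-ip> "sudo mv /tmp/phpMyAdmin.conf /etc/httpd/conf.d/"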

Unable to clone Git repository from SiteGround

I'm trying to set up MS WebMatrix to use a Git repository from my SiteGround hosting account. I created the repository using their cPanel plugin, and it tells me that I can clone it using this command:
git clone ssh://username@sm3.siteground.biz:18765/home/username/public_html/
I replaced username, of course, and I created an RSA key using ssh-keygen. In the WebMatrix GUI it just opens a window saying "Clone is in progress", but it doesn't do anything.
And when I run that command in PowerShell, this is the output:
Cloning into 'public_html'...
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Any help is highly appreciated!
EDIT:
I haven't used GitHub before, but I'm pretty sure I'm not connecting to it. The repository is on SiteGround's server, I think. Anyway, I couldn't figure it out in PowerShell, so now I'm using PuTTY to load the appropriate key and connecting with an external Git tool (SourceTree) that doesn't use the same SSH client as PowerShell. That is the solution that's working for me now.
I'll leave this question open as maybe someone comes around and can help with how to set this up using PowerShell.
The missing piece in the SiteGround guide is:
Create a blank file in ~/.ssh/ or C:\Users\username\.ssh on your computer. It does not matter what you name it. I named it siteground_dsa. You could also name it id_dsa_siteground.
Copy the private SSH key that you get from siteground.com and paste the whole of it into this newly created file.
Open Git Bash locally on your computer and run the following command
$ eval $(ssh-agent -s)
Then run the following. Remember to use the filename that you gave it.
$ ssh-add ~/.ssh/siteground_dsa
Now you need to enter the passphrase for the ssh key. You will have defined it when creating the ssh key.
Now you should be logged in and you can run git clone the directory of your wish.
git clone ssh://username@ams14.siteground.eu:18765/home/username/public_html/
To permanently add the SSH key, extend ~/.ssh/config with the following, updating server_name and username:
Host server_name
    User username
    Port 18765
    IdentityFile ~/.ssh/siteground_dsa
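With that entry in place (and assuming server_name is the actual hostname, e.g. ams14.siteground.eu), the clone command no longer needs the user or port spelled out; a sketch:
$ git clone ssh://server_name/home/username/public_html/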
Keep in mind that on Windows, you should write eval $(ssh-agent):
eval $(ssh-agent)
chmod 600 file_name
ssh-add C:\Users\username\.ssh\siteground
Then you can clone the repository to your local machine after entering the passphrase of your SSH key.
GitHub isn't able to authenticate you. Probably your key isn't associated with your GitHub account.
Take a look at GitHub's recommended method.

Laravel Homestead: Nginx failing to start on Vagrant. Need root password to access Nginx logs

Using Laravel Homestead to work with Laravel 4. After running vagrant up this morning, I was unable to access homestead.app:8000. I pinged it with no problem, so I investigated my VirtualBox VM and discovered that Nginx wasn't starting. I then attempted to view the logs but was denied permission to the /var/log/nginx directory, which is owned by www-data:adm.
My question then: what is the su or sudo password which would allow me to access that directory? The documentation, as well as the Homestead Git repository, is surprisingly devoid of any information. Thank you.
I had a similar issue with the laravel/homestead Vagrant virtual machine and Nginx not restarting. The error after running nginx -t was:
nginx: [crit] pread() "/etc/nginx/sites-enabled/sites-available" failed (21: Is a directory)
nginx: configuration file /etc/nginx/nginx.conf test failed
The solution was to delete the symbolic link sites-available:
rm -Rf /etc/nginx/sites-enabled/sites-available
Then it worked:
service nginx restart
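To avoid bouncing Nginx into a broken state in the future, you can test the configuration and only restart when it passes; a small sketch:
$ sudo nginx -t && sudo service nginx restart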
Elevate to root by typing sudo -s.
A quick way to jump to a root account shell is to run the "sudo bash" command. That way, you don't have to type "sudo" in front of each command. Since this VM is for development purposes I don't see it as a danger, but in production Ubuntu runs with the root account locked down, so you always go in, and should stay in, with user-level privileges until you need to execute a higher-level command. You "can" enable the root account and set a password, but jumping to it with sudo is the better method.
You can just look at the log using root privileges. So: sudo nano the log file and then enter your password. Root is able to do anything on the system, so that is always a solution for this kind of problem.
If you forgot the root password, just search Google for how to recover it.
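For reading logs specifically, a full root shell isn't needed; sudo in front of a pager or tail is enough. A minimal sketch (assuming the default log file name):
$ sudo tail -n 50 /var/log/nginx/error.log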

Failed to upload a file to an EC2 instance

I'm a newbie, and all I want is to set up an EC2 instance for Fever RSS.
Here is my info: OS X 10.9.2, AWS with an Ubuntu 12.04 LTS AMI. I set up LAMP on EC2 following this guide: http://www.robotmedia.net/2011/04/how-to-create-an-amazon-ec2-instance-with-apache-php-and-mysql-lamp/
Now I can SSH to my server's public IP using Terminal. After connecting to the server, I typed
scp -i /path/to/keypair.pem /path/to/test.txt ubuntu@theServerPublicIP:~/
and got the error as follows:
Warning: Identity file keypair.pem not accessible: No such file or directory.
I have tried to resolve the problem by:
1. Changing the permission of the .pem file to 600 on my OS X machine:
chmod 600 keypair.pem
and running ssh and scp again; same error. Then I changed its permission to 400:
chmod 400 keypair.pem
and redid ssh and scp; same error.
2. Rewriting the file path as ~/path/to/file for both keypair.pem and test.txt, then redoing ssh and scp; same error.
3. Rewriting the file path as /Users/myUserName/path/to/file for both files, then redoing ssh and scp; same error.
4. cd-ing to the folder containing keypair.pem and test.txt (I put them in the same folder) and trying both of the above paths; same error for each.
5. Changing the path on the server. I have tried "~", "~/", "/", and "/var/www/"; for all of them I still got the same error.
I also tried ForkLift because I saw the developer of Fever using it in the demo video. I tried all the connection options: SFTP... but couldn't connect to the server.
Please help me get test.txt uploaded... then I will be able to upload the Fever folder.
Thanks!
If you have to do this frequently, I advise you to create an alias.
For example: I was running a web server on an EC2 instance and had the HTML content in a local directory, awsplaywww:
$ alias syncaws="rsync -avrz --delete /home/sanket/workspace/awsplaywww/ -e ssh sanket@awsplay1.ddns.net:/var/www/html/"
Now every time I update an HTML file and need to send it back to the server, I just open a terminal, type syncaws, and the job is done!
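Note that an alias defined at the prompt disappears when the terminal closes; to keep it across sessions, append it to your shell startup file. A sketch reusing the alias above:
$ echo 'alias syncaws="rsync -avrz --delete /home/sanket/workspace/awsplaywww/ -e ssh sanket@awsplay1.ddns.net:/var/www/html/"' >> ~/.bashrc
$ source ~/.bashrc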
