I'm a newbie, and all I want is to set up an EC2 instance for Fever RSS.
Here is my setup: OS X 10.9.2, and AWS with an Ubuntu 12.04 LTS AMI. I set up LAMP on EC2 following this guide: http://www.robotmedia.net/2011/04/how-to-create-an-amazon-ec2-instance-with-apache-php-and-mysql-lamp/
Now I can SSH to my server's public IP using Terminal. After connecting to the server, I typed
scp -i /path/to/keypair.pem /path/to/test.txt ubuntu@theServerPublicIP:~/
and got the following error:
Warning: Identity file keypair.pem not accessible: No such file or directory.
I have tried to resolve the problem by:
1. Changing the permissions of the .pem file to 600 on my OS X machine:
chmod 600 keypair.pem
then running ssh and scp again; same error. Then changing its permissions to 400:
chmod 400 keypair.pem
and redoing ssh and scp; same error.
2. Rewriting the file paths as ~/path/to/file for both keypair.pem and test.txt, then redoing ssh and scp; same error.
3. Rewriting the file paths as /Users/myUserName/path/to/file for both files, then redoing ssh and scp; same error.
4. cd-ing into the folder containing keypair.pem and test.txt (I put them in the same folder) and trying both of the above path styles; same error for each.
5. Changing the destination path on the server. I have tried "~", "~/", "/", and "/var/www/"; for all of them I still got the same error.
I also tried ForkLift, because I saw the developer of Fever using it in the demo video. I tried all the options for connection: SFTP... but couldn't connect to the server.
Please help me get test.txt uploaded... then I will be able to upload the Fever folder.
Thanks!
If you have to do this frequently, I advise you to create an alias.
For example: I was running a web server on an EC2 instance, and I had the HTML content in a local directory, awsplaywww:
$ alias syncaws="rsync -avrz --delete /home/sanket/workspace/awsplaywww/ -e ssh sanket@awsplay1.ddns.net:/var/www/html/"
Now, every time I update an HTML file or something and need to send it back to the server, I just open a terminal and type syncaws, and the job is done!
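An alias defined at the prompt lasts only for the current shell session, so if you want it to persist you can append it to your shell's startup file (a minimal sketch assuming bash and the same alias as above):
$ echo 'alias syncaws="rsync -avrz --delete /home/sanket/workspace/awsplaywww/ -e ssh sanket@awsplay1.ddns.net:/var/www/html/"' >> ~/.bashrc
$ source ~/.bashrc
After that, syncaws is available in every new terminal as well.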
I'm new to Docker and am trying to bind-mount a folder in my Docker container to a folder on my local machine. Using the command below, I was able to create the container with no issue.
docker run -it -v /Users/bdbot/Documents/mount_demo/:/mount_demo nycdsa/linux-toolkits bash
However, when I tried to create a txt file within the container folder, I got this error:
bash: demo.txt: Permission denied
Seeing that it was an access issue, I ran
sudo chmod 777 ../mount_demo
This allowed me to create the file; however, when I checked the folder on my local machine, it was not there. So the folders are not syncing.
I've also made sure the Docker "Shared Drives" setting has the correct credentials. I'm not familiar enough with Docker to know how to troubleshoot further, and I have not been able to find anything online. I am using Windows, and everything is up to date.
The answer ended up being a really simple fix. Using a Unix-style shell on a Windows machine required adding an additional slash (/) before the folder path. The command below fixed the issue for me:
docker run -it -v //Users/bdbot/Documents/mount_demo/:/mount_demo nycdsa/linux-toolkits bash
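To confirm the mount is actually syncing, one quick check (a sketch using the same image and paths as above; the doubled leading slashes again keep Git Bash from rewriting the Unix-style paths) is to touch a file from a throwaway container and then look for it on the host:
$ docker run --rm -v //Users/bdbot/Documents/mount_demo/:/mount_demo nycdsa/linux-toolkits touch //mount_demo/demo.txt
$ ls /Users/bdbot/Documents/mount_demo/
If demo.txt shows up in the local folder, the bind mount is working.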
I have a specific problem.
I have a server (say, x); first I have to connect to server x using ssh x@domain. Then there is an internal server, y, which I again have to connect to over SSH. I have to download a folder from server y. I tried using scp after logging into x:
scp -r /data/home/path /Users/username/Desktop
I got the following error
cp: cannot create regular file `/Users/username/Desktop': No such file or directory
Please help me download the folder.
That error is because the destination /Users/username/Desktop doesn't exist on server X.
However, there's more going on here. That command is just copying the folder locally on X, because there is no host information in either path.
From X, you should run:
scp -r usery@servery:/data/home/path destination
and then repeat the process from your own machine, using server X's information.
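Alternatively, if your local OpenSSH is 7.3 or newer, you can pull the folder in a single step from your own machine by tunnelling through X with ProxyJump (the user and host names below are placeholders for your actual servers):
$ scp -r -o 'ProxyJump=userx@serverx' usery@servery:/data/home/path /Users/username/Desktop
This also avoids leaving a copy of the folder on X.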
I have to change some settings inside "etc/httpd/conf.d/phpMyAdmin.conf".
I can't download this file using FileZilla, and I also tried the sudo nano command in PuTTY, but it returns an empty file. I don't know how to change the permissions for this file.
I've spent more than an hour on this. Please guide me if you know how to resolve it.
EC2 is a computer rental service, not a web hosting service, so you won't be able to connect with FTP (FileZilla) unless you run an FTP server on your EC2 instance.
As for editing the file while you're connected through SSH (PuTTY), you need to make sure that you're properly referencing the file you want. Try running "sudo nano /etc/httpd/conf.d/phpMyAdmin.conf". Note the leading "/" on the file path; it's important. Without it, nano opens a new, empty file relative to your current directory, which is why you saw an empty editor.
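If you want to check that the file is really there and see its permissions and owner before editing, you can list it first (same path as above):
$ ls -l /etc/httpd/conf.d/phpMyAdmin.conf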
I am new to Vagrant and get the following error on vagrant up or vagrant ssh:
The private key to connect to this box via SSH has invalid permissions
set on it. The permissions of the private key should be set to 0600, otherwise SSH will
ignore the key. Vagrant tried to do this automatically for you but failed. Please set the
permissions on the following file to 0600 and then try running this command again:
[...]/.vagrant/machines/default/virtualbox/private_key
I have run:
$ sudo chmod 666 [...]/.vagrant/machines/default/virtualbox/private_key
I also tried 600 and 777, but I still get the same error.
Can someone please tell me what is wrong and how to fix it?
I just had this issue, and I worked around it by moving the private_key file somewhere else, changing its permissions there, and then creating a symbolic link in the original place.
So:
$ mv [...]/.vagrant/machines/default/virtualbox/private_key /some/path/where/you/can/change/permissions/
$ chmod 600 /some/path/where/you/can/change/permissions/private_key
$ ln -s /some/path/where/you/can/change/permissions/private_key [...]/.vagrant/machines/default/virtualbox/private_key
If you're using the Windows Subsystem for Linux (WSL), this error can occur when you're trying to run vagrant up in a directory outside the user's home directory.
From the Vagrant docs:
If a Vagrant project directory is not within the user's home directory on the Windows system, certain actions that include permission checks may fail (like vagrant ssh). When accessing Vagrant projects outside the WSL Vagrant will skip these permission checks when the project path is within the path defined in the VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH environment variable.
Setting VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH to the current working directory (or a directory above it) can fix this. For example, if your project is in /mnt/c/www, then set the environment variable accordingly:
export VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH="/mnt/c/www"
I just got the same error. The problem happened because I was trying to run vagrant up on an NTFS partition, just as the error message told me.
So I created a directory on my ext4 partition and a symbolic link to it on my NTFS partition to solve this. It works fine now!
Thanks!
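A minimal sketch of that workaround (the paths are placeholders for your actual project and mount point):
$ mv /mnt/ntfs/myproject ~/myproject
$ ln -s ~/myproject /mnt/ntfs/myproject
Vagrant then operates on the ext4 copy, where the key's permissions can actually be set.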
I had this same problem, and it turns out chmod appeared to run fine but was not actually changing the permissions: my files were on an NTFS partition. Try moving them to ext4 or similar.
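You can verify whether the filesystem honours the mode change at all (a quick sanity check; private_key here stands for your actual key file):
$ chmod 600 private_key
$ stat -c '%a' private_key
On a partition that ignores POSIX permissions (NTFS, FAT), stat will keep reporting the old mode, e.g. 777.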
I got this error using Otto (which is layered on top of Vagrant).
It is definitely filesystem-related: I had a FAT partition to allow use with Windows (used to, no longer). When the permissions couldn't be set on that partition, I just copied the whole directory over to my user directory (as I always should have).
I was using Git, so I just reset to HEAD to get back to my starting place, then re-ran:
otto compile
otto dev
Up and running now.
So I installed a LAMP stack on a Google Cloud instance running Debian Wheezy (7). Everything is working fine, but I am not able to get FTP working. I am following this tutorial by DigitalOcean.
I am stuck at the last step, where I need to make vsftpd allow the user to write outside the chroot directory.
The error I get is:
hetunandu_gmail_com@lamp:~$ mkdir /root/hetunandu/files
mkdir: cannot create directory `/root/hetunandu/files': Permission denied
Then, when I use sudo with it, I get this error:
hetunandu_gmail_com@lamp:~$ sudo mkdir /root/hetunandu/files
mkdir: cannot create directory `/root/hetunandu/files': No such file or directory
Where do I go from here?
Also, I don't know how to set up my username and password for FTP.
I followed the tutorial and could not replicate your issue. I initially got "Permission denied" but you can circumvent this by running:
$ sudo su
and then
$ mkdir -p /root/$USER/files
Why not use /home/$USER? I'm not sure why you want to create the folders under /root.
As for your second question, regarding the username and password, I am not sure I understand. In the Developers Console, go to Compute Engine > VM Instances and click SSH; that should log you in with root privileges. Then you can create all the users you want:
$ sudo adduser test_user
Please don't use FTP, as it's an insecure clear-text protocol that will let others see your password and easily get access to your instance, read/modify/delete your files, etc.
Instead, you should use secure protocols such as SCP or SFTP with public key authentication.
Here are some options to transfer files to/from your GCE VM instance:
sftp CLI tool, as described in this answer
gcloud compute copy-files, as described in this answer
WinSCP with SFTP
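For example, a minimal gcloud invocation might look like this (the instance name, zone, and file name are placeholders; newer gcloud releases ship the equivalent gcloud compute scp command):
$ gcloud compute copy-files test.txt my-instance:~/ --zone us-central1-a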