I have been tasked with setting up a CentOS 6.2 development box (even though I do not know Linux) and am currently using vsftpd to FTP into a box at work. The problem is that SFTP is not working.
This is the error I am getting:
Authentication failed.
Error: Critical error
Error: Could not connect to server
I have added the user by doing the following:
sudo useradd -d /var/www/PATH -s /usr/sbin/nologin USERNAME
sudo passwd USERNAME
sudo chown -R USERNAME /var/www/PATH
sudo chmod 755 /var/www/PATH
It works for FTP (and the folder structure is jailed), but it does not work with SFTP.
However, when I add a user the following way:
sudo useradd USERNAME
sudo passwd USERNAME
sudo chown -R USERNAME /opt/USERNAME
sudo chmod 777 /opt/USERNAME
I have SFTP access, but it is unjailed, and no FTP access.
It does not matter if I have to create multiple accounts (one for FTP and one for SFTP), but they do have to be jailed to the directory.
If there is a better solution to my problem, help would be welcomed!
Thanks,
Matt
You are on the right track.
Personally I use the chrooted SFTP setup described here: http://www.thegeekstuff.com/2012/03/chroot-sftp-setup/
IMHO the article does not stress enough that the user's home directory has to be owned by root:
# ls -ld /var/www/PATH
drwxr-xr-x 3 root root 4096 Dec 28 23:49 /var/www/PATH
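For reference, here is a minimal sketch of the sshd_config changes that article walks through (the group name sftpusers is just an example, not from your setup):
# /etc/ssh/sshd_config
# Use the in-process SFTP server so no extra binaries are needed inside the chroot
Subsystem sftp internal-sftp

# "sftpusers" is an example group name; use whatever group your jailed users belong to
Match Group sftpusers
    # The chroot directory (and every directory above it) must be owned by root
    # and must not be group- or world-writable
    ChrootDirectory /var/www/PATH
    ForceCommand internal-sftp
    X11Forwarding no
    AllowTcpForwarding no
After editing, restart sshd (service sshd restart on CentOS 6).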
You can get a lot of helpful info from the logs; in this case you can watch
tail -f /var/log/secure
while connecting from an external host.
Let me know if you need any more help with this problem.
I've created a script for this purpose. You can use it whatever distribution you are on: it works on both RHEL-based systems and Debian derivatives, and creates a jailed SFTP directory with no shell access, only SFTP.
SFTP Jailing with no shell access
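If you would rather do it by hand, the core of what such a script does boils down to something like this (the group name, user name, and paths below are placeholders, not taken from the linked script):
#!/bin/bash
# Rough sketch of a manual jailed-SFTP setup
groupadd sftponly
useradd -g sftponly -s /sbin/nologin -d /var/www/PATH USERNAME
passwd USERNAME

# The chroot directory must be root-owned and not writable by the user...
chown root:root /var/www/PATH
chmod 755 /var/www/PATH

# ...so give the user a writable subdirectory inside it
mkdir -p /var/www/PATH/upload
chown USERNAME:sftponly /var/www/PATH/upload
This assumes sshd_config is already set up to chroot the sftponly group, as in the sketch above.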
My user is in the root group. I cannot ssh to the server as root because it says Permission denied, please try again. What I usually do is ssh as my user and, once I'm logged in, type sudo su and provide my user's password to become root.
I want to automate part of my job, so I want to write a bash script which would ssh as my user, switch to root, and then call a set of commands.
So far I have come up with the following script, but I am unable to switch to the root user without asking the user for a password:
while read p; do
p=$(echo $p|tr -d '\r')
sshpass -p "myPasswd" ssh -T -o StrictHostKeyChecking=no myUser#remoteServer << EOT
cd /var/log/jboss/ #here I am getting 'permission denied' message as only root has access
exit
EOT
done < $nodes
I also tried:
sshpass -p "myPasswd" ssh -tt -o StrictHostKeyChecking=no myUser#remoteServer 'cd /var/log/jboss/'
but I got the same permission denied error message
For security reasons, root users are typically not allowed ssh access.
PermitRootLogin no # value in /etc/ssh/sshd_config
The above setting is preventing you from logging in as root in the first place. If you are "comfortable" with your network's security, you can consider modifying that setting. If you ever make modifications to the sshd config, you'll need to restart the ssh service:
sudo service sshd restart
Of course, if you want to adhere to common wisdom, you may want to make changes to your sudoers file (as recommended by chepner and Nic3500). Here's a reasonable configuration change to make:
Add the following line to the bottom of your /etc/sudoers file (the leading # is part of the #includedir directive, not a comment):
#includedir /etc/sudoers.d
And add the following file to your /etc/sudoers.d directory:
cat /etc/sudoers.d/10_wheel:
%wheel ALL=(ALL) NOPASSWD: ALL
The above example configures sudo to allow access to all commands to members of the wheel group, without a password. You may want to change the group name to a group that your user is a member of.
You can determine your groups by issuing the command:
groups
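If your user is not already in wheel (or whichever group you pick), you can add it; usermod should be available on most distributions:
# Add myUser to the wheel group (log out and back in for it to take effect)
sudo usermod -aG wheel myUser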
Also, to avoid the use of sshpass, you can deploy SSH public keys to the remote host. Lastly, if you don't want to change the server at all, you can achieve what you are trying to do with expect. If you are comfortable with Python, I recommend pexpect - I find it so much easier than the Tcl-based expect that is typically discussed.
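For the key-based approach, a minimal sketch (this assumes each line of $nodes is a host name, as in your loop):
# Generate a key pair once (accept the defaults), then push the public key to every node
ssh-keygen -t rsa
while read -r p; do
    p=$(echo "$p" | tr -d '\r')
    ssh-copy-id "myUser@$p"
done < "$nodes"
# After that sshpass is no longer needed, and sudo (configured with NOPASSWD as above)
# lets the commands reach root-only paths, e.g.:
#   ssh -o StrictHostKeyChecking=no "myUser@$p" 'sudo ls /var/log/jboss/'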
I can't understand what password Hadoop expects.
I configured it according to the tutorial. I do:
sudo su
#bash start-dfs.sh
And now it prompts for something like a password for "lan's network". I have no idea what I should enter.
As you can see, I run the script as root. Of course the master (from which I run the script) can ssh to the slaves as root without a password (I configured and tested that).
Disclaimer: it is possible that I am giving an incorrect name (for example the script name) because I don't understand it all exactly right now. However, I am sure it asked about something like a "lan's network" password.
Please help me: what password is it asking for?
Edit: I was using http://backtobazics.com/big-data/setup-multi-node-hadoop-2-6-0-cluster-with-yarn/
It seems you may not have set up passwordless SSH. Passwordless SSH is required to run the Hadoop services (daemons), so try to set up SSH among the nodes:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Then try ssh user@hostname
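If the key already exists on the master, also copy it to every slave node, for example (host names below are placeholders):
# Push the master's public key to each slave so the daemons can be started without a password
ssh-copy-id user@slave1
ssh-copy-id user@slave2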
I'm attempting to configure an OS X Mavericks server running Apache and Lasso. For security and convenience I only want users belonging to a specific "web" group to be able to access the web root. I have succeeded in letting both permitted regular users and Apache (_www) access the files, but I cannot for my life manage to set the correct permissions for Lasso. I'm hoping someone here can point me in the right direction.
Basically, what I have done is the following:
sudo dseditgroup -o create web
sudo dseditgroup -o edit -a _www -t user web
sudo dseditgroup -o edit -a _lasso -t user web
sudo chgrp -R web webroot
sudo chmod -R 770 webroot
This apparently works for Apache, but any Lasso files merely output a Lasso permission error:
An unhandled failure during a web request
Error Code: 13
Error Msg: Permission denied - While opening //Library/Server/Web/Data/Sites/...
I have also tried adding the _www and _lasso groups to the web group, as well as creating a new Lasso instance in the instance manager with the effective group set to "web".
Strangely, setting permissions to the _lasso user or group directly on the files (i.e. not through the web group) seems to work which makes me believe there's something wrong with how I'm creating my ACLs.
A little more info:
ls -l@e example.lasso
-rwxrwx---+ 1 danielpervan web 0 Feb 19 15:20 example.lasso
0: user:_spotlight inherited allow read,execute
I've encountered problems similar to this when I have ACLs above and beyond the standard Unix permissions. From your post, it looks like there are some ACLs on the example.lasso file. I would run the following command on your web root to remove all ACLs from every folder / file:
sudo chmod -R -N /path/to/webroot/
If that doesn't work, verify that the _lasso user is part of the web group:
dscl . -read /groups/web | grep GroupMembership
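You can also check the membership from the user's side:
# List every group the _lasso user belongs to
id _lasso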
I try to login using the ec2-user but for some reason the login fails:
Using the username ubuntu I am able to log in just fine; however, I don't have any privileges and I can't sudo su to get the privileges to write to my files. I tried using the Cyberduck terminal and send-command options, but sudo su doesn't work with them. Cyberduck just spins.
I don't think the ec2-user account works on recent Ubuntu AMIs, which may explain the failed login.
You can approach this in a few ways. The first is to create a new user account specifically for FTP and give it permissions only to the necessary folders. First create the user, then create a public/private key pair for non-interactive login. This will allow you to operate your FTP client like normal.
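A rough sketch of that first approach, assuming a user called ftpuser and a web root of /var/www/html (both names are placeholders):
# On the instance: create the account and an .ssh directory for it
sudo adduser ftpuser
sudo mkdir -p /home/ftpuser/.ssh

# Generate a key pair for non-interactive login (download the private key ftpuser_key to your dev machine)
sudo ssh-keygen -t rsa -N '' -f /home/ftpuser/.ssh/ftpuser_key
sudo cp /home/ftpuser/.ssh/ftpuser_key.pub /home/ftpuser/.ssh/authorized_keys
sudo chown -R ftpuser:ftpuser /home/ftpuser/.ssh
sudo chmod 700 /home/ftpuser/.ssh
sudo chmod 600 /home/ftpuser/.ssh/authorized_keys

# Give the new user write access only to the folders it actually needs
sudo chown -R ftpuser /var/www/html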
My preferred solution is to upload the files to the ubuntu home directory and then run a script as root that moves the files to the correct location. You won't have to modify the system configuration this way, but you will have to do the file transfer in two steps.
Create a staging folder in /home/ubuntu and copy the files there. Create a /home/ubuntu/copy.sh script on the server like this:
#!/bin/bash
# sudo must be configured not to prompt for a password for this to work non-interactively
sudo cp -r /home/ubuntu/stage/* /var/www/html/
Then from your dev machine, call the script:
$ ssh -i ~/path/to/key.pem ubuntu@ec2.hostname.com /home/ubuntu/copy.sh
If you want to get really fancy, you could set up a git repository and use a post-receive hook to handle this all for you when you push. No need for an FTP client at all.
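A minimal post-receive hook for that setup might look like this (the web-root path is a placeholder, and the repository on the server would be a bare repo that you push to):
#!/bin/bash
# hooks/post-receive on the server (make it executable)
# Check the pushed work tree out directly into the web root
GIT_WORK_TREE=/var/www/html git checkout -f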
I use SSH to connect to my Ubuntu instance. With SSH I can administer files and folders on the instance, but how do I upload files and folders from my local machine to the instance?
Is it possible to do right from SSH session, without using SFTP clients?
Just to add a bit more detail to the scp command (included in OS X and most Linux/Unix systems):
scp -i myssh.pem local_file username@200.200.200.200:/home/username
Obviously, replace the pem file with the one used for SSH access, and replace "username" and "200.200.200.200" with valid values for your setup.
You can try the kitten utility, which is a wrapper around boto3. You can easily upload/download files and run commands on an EC2 server, or on multiple servers at once for that matter.
kitten put -i ~/.ssh/key.pem cat.jpg /tmp [SERVER NAME][SERVER IP]
where the server name is e.g. ubuntu or ec2-user, etc.
This will upload the cat.jpg file to the /tmp directory of the server.
As mentioned already, I've used WinSCP, which logs me in as "ec2-user" - then make sure to adjust that user's permissions via SSH. Example:
chown -R ec2-user /path/to/files
(Authenticate as the root user first.)
For whatever folders or files you need to edit via WinSCP, allow permissions on them (otherwise you will get a permission-denied error when trying to upload/edit files in WinSCP).
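For example, to make a particular folder editable after taking ownership of it:
# 755 lets the owner (ec2-user) write while everyone else can only read
chmod -R 755 /path/to/files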
You cannot copy files using plain ssh; you can use scp/sftp instead:
scp if you are on Linux, or WinSCP if you are on Windows.
You can use this:
scp -i yourkeypair.pem source destination
This works fine:
scp -r -i myssh.pem /local/directory remote_username#10.10.0.2:/remote/directory
-r for recursive
You could also install and set up an FTP server, which will allow you to set up users and directories for them to upload to. That being said, I've upvoted the above because scp/sftp is the ideal method.
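If you do go the FTP-server route on an Ubuntu instance, vsftpd is a common starting point (package and config file names as found in the Ubuntu repositories; you would also need to open the FTP ports in your security group):
# Install vsftpd, allow local users to upload, and restart the service
sudo apt-get install vsftpd
sudo sed -i 's/^#\?write_enable=.*/write_enable=YES/' /etc/vsftpd.conf
sudo service vsftpd restart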
The easiest way is to install Webmin and use the file manager (Java plugin) from your browser.
# Go to home folder
cd ~
# Download the latest version
wget http://prdownloads.sourceforge.net/webadmin/webmin-1.660-1.noarch.rpm
# Install
sudo rpm -U webmin-1.660-1.noarch.rpm
# Change the default password of the root user
passwd
Finally, open port 10000 in the security groups.
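If the server is an EC2 instance, the rule can also be added from the AWS CLI (the security group ID and CIDR below are placeholders):
# Allow inbound TCP 10000 (Webmin), ideally only from your own IP range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 10000 \
    --cidr 203.0.113.0/24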
Then log in to
https://server_name:10000
with user root and the password you set before.