I am attempting to set up preset passphrase caching with gpg-agent so I can automate my file encryption process. For gpg-agent to run and properly cache the passphrase, there needs to be an S.gpg-agent socket within the ~/.gnupg/ directory; in my setup this socket gets generated under root's home directory when I set up gpg and gpg-agent.
What I have done (and which seemed to work in the past) is start everything up as root, copy the contents of the /.gnupg directory over to my less privileged user, and grant that user permissions on the directory and the socket. The commands I ran to start the gpg-agent daemon and cache the passphrase:
gpg-agent --homedir /home/<user>/.gnupg --daemon
/usr/libexec/gpg-preset-passphrase --preset --passphrase <passphrase> <keygrip>
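Note: for the preset to be accepted at all, gpg-agent has to allow it, either with allow-preset-passphrase in ~/.gnupg/gpg-agent.conf or by starting the daemon with the corresponding flag:
gpg-agent --homedir /home/<user>/.gnupg --allow-preset-passphrase --daemon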
The gpg-agent process seems to be running just fine, but the second command gives me the error below:
gpg-preset-passphrase: can't connect to `/home/<user>/.gnupg/S.gpg-agent': Connection refused
gpg-preset-passphrase: caching passphrase failed: Input/output error
I have made sure the socket exists in the directory with the proper permissions, and this process runs as root. It seems that the socket is still inherently tied to root even if I copy it and modify its permissions. So my questions are:
How exactly does this socket get initialized?
Is there a way to do so manually as another user?
To add to this, the agent process seems to run just fine for both users. Where I get a little hazy is how gpg-preset-passphrase uses the socket, and whether it is that tool or the agent that is refusing the connection to S.gpg-agent.
I also assume that I don't need to explicitly start the agent, but I figured I would do so anyway so that I could set any values such as the homedir if needed.
It turns out the issue was unrelated to gpg-agent and gpg-preset-passphrase.
Note: This is not a permanent solution but it did allow me to get past the issue I was facing.
After modifying /etc/selinux/config and disabling SELinux, I no longer experienced the permissions issue above. SELinux is a Linux kernel security module developed by Red Hat (I am currently running this on RHEL 7). The next step will likely be to make sure these binaries and packages are allowed access from my user using audit2allow. A bit more information on this here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-allowing_access_audit2allow
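For reference, the usual audit2allow flow on RHEL, assuming the denials show up in /var/log/audit/audit.log (the module name gpgagentlocal is arbitrary):
grep gpg /var/log/audit/audit.log | audit2allow -M gpgagentlocal
semodule -i gpgagentlocal.pp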
I have a bash script which uses rsync to pull down backups of my server to an offline server running Ubuntu.
But my offline server does not seem to want to run this script right. The issue I get when I run it manually is:
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.1]
But here's the thing: the host key is fine and works when I SSH. Logging on to my offline server, and from there logging into the remote server, works without any issues.
Here is where the issue gets very odd: the bash script works (when asked to run via Webmin) after I have SSHed into the offline server. I don't have to do anything else; just log in to the remote server and the bash script will work.
That is what I don't understand: if the host keys were not configured right, then they should not work at all, but they do, once I have logged into the server.
Thanks,
Try logging in with ssh -a to prevent your ssh-agent from being forwarded. You are probably using an identity in your agent to authenticate to the target server. This authentication will not work if your ssh-agent is not present.
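For example, to test whether a forwarded agent is what makes the difference (hostnames and paths are placeholders):
ssh -a user@offline-server        # log in without forwarding the agent
/path/to/backup-script.sh         # if this now fails the same way, the script was relying on the forwarded identity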
I am trying to make a script to install Oracle Database, as well as some other applications of my own, more or less automatically. I haven't written a line yet because I want to go through all the steps manually first.
My environment is the following: I have RHEL 5 with no graphical interface. I am connecting to the server from a Windows laptop through SSH as root. I have enabled X forwarding, so when I log in with the root account I can run xdpyinfo to check the X server configuration.
I need X forwarding because the Oracle DB installation procedure requires an X server. However, Oracle requires the user oracle to perform the installation. I have already created the oracle user, but when changing from root to oracle I can no longer run the xdpyinfo command, so the Oracle installation procedure fails. I get the following error:
Xlib: connection to "localhost:10.0" refused by server
Xlib: PuTTY X11 proxy: wrong authorisation protocol attempted
xdpyinfo: unable to open display "localhost:10.0".
I have tried to use xhost to allow my laptop to access my server, but that failed as well.
If you really feel the need to do this, then while you are root, get the current $DISPLAY value, particularly the first value after the colon, which is 10 in your case. Then find the current X authorisation token for your session:
xauth list | grep ":10 "
Which will give you something like:
hostname/unix:10 MIT-MAGIC-COOKIE-1 2b3e51af01827d448acd733bcbcaebd6
After you su to the oracle account, $DISPLAY is probably still set, but if not, set it to match your underlying session (e.g. export DISPLAY=localhost:10.0). Then add the xauth token to your current session:
xauth add hostname/unix:10 MIT-MAGIC-COOKIE-1 2b3e51af01827d448acd733bcbcaebd6
When you've finished you can clean up with:
xauth remove hostname/unix:10
That's assuming PuTTY is configured to use MIT-Magic-Cookie-1 as the remote X11 authentication protocol, in the Connection->SSH->X11 section. If that is set to XDM-Authorization-1, then the value you get and set with xauth will have XDM-AUTHORIZATION-1 instead.
It might be simpler to disconnect from root and start a new ssh session as oracle to continue the installation, which would also make sure you don't accidentally do anything unexpected as root. Well, until you have to run root.sh, anyway.
If you do a silent install with a response file then you don't need a working X11 connection at all; you just need $DISPLAY to be set. Nothing is ever actually opened on that display, so it doesn't matter if xdpyinfo or any other X11 command would fail. I'm not sure how you're thinking of scripting the X11 session, but even if that is possible, a silent install will be simpler and more repeatable.
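A silent install then reduces to something like this (the response file path is a placeholder, and the exact installer invocation depends on your Oracle version):
export DISPLAY=localhost:10.0     # only has to be set; nothing is ever opened on it
./runInstaller -silent -responseFile /path/to/db_install.rsp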
I have set up a new EC2 instance on AWS and I'm trying to get FTP working to upload my application. I have installed VSFTPD as standard, so I haven't changed anything in the config file (/etc/vsftpd/vsftpd.conf).
I have not opened port 21 in the security group, because I'm tunnelling it through SSH. I log into my EC2 instance through the terminal like so:
sudo ssh -L 21:localhost:21 -vi my-key-pair ec2-user@ec2-instance
I open up FileZilla and log into localhost. Everything goes fine until it comes to listing the directory structure. I can log in and everything seems fine, as you can see below:
Status: Resolving address of localhost
Status: Connecting to [::1]:21...
Status: Connection established, waiting for welcome message...
Response: 220 Welcome to EC2 FTP service.
Command: USER anonymous
Response: 331 Please specify the password.
Command: PASS ******
Response: 230 Login successful.
Command: OPTS UTF8 ON
Response: 200 Always in UTF8 mode.
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: EPSV
Response: 229 Entering Extended Passive Mode (|||37302|).
Command: LIST
Error: Connection timed out
Error: Failed to retrieve directory listing
Is there something I'm missing in my config file? A setting which needs to be set or turned off? I thought it was great that it connected, but when it timed out you could picture my face. It meant time to start trawling the net to try and find the answer! So far with no luck.
I'm using the standard Amazon AMI, 64-bit. I have a traditional LAMP setup.
Can anyone steer me in the right direction? I have read a lot about getting this working, but the guides are all incomplete, as if they got bored halfway through typing up how to do it.
I would love to hear how you guys do it as well, if it makes life easier. How do you upload your apps to an EC2 instance? (Steps please; it saves a lot of time and is a great resource for others.)
I figured it out, after the pointer in the right direction from Antti Haapala.
You don't even need VSFTPD set up on the instance. All you have to do is make sure the settings are right in FileZilla.
This is what I did (I'm on a Mac, so it should be similar on Windows):
Open up FileZilla and go to Preferences.
Under Preferences, click SFTP and add a new key. This is the key pair for your EC2 instance. You will have to convert it to the format FileZilla uses; it will prompt you for the conversion.
Click OK and go back to Site Manager.
In Site Manager, enter your EC2 public address; this can also be your Elastic IP.
Make sure the protocol is set to SFTP.
Put in the username ec2-user.
Remove everything from the password field; leave it blank.
All done! Now connect.
That's it; you can now traverse your EC2 system. There is a catch, though. Because you are logged in as ec2-user and not root, you will not be able to modify anything. To get around this, change the group ownership of the directory where your application will live (/var/www/html or wherever). I would put it on an EBS volume. ;) Also make sure this group has read, write and execute permissions. The group for ec2-user is ec2-user. Leave everyone else with nothing. So the commands to use while logged in via SSH:
sudo chgrp ec2-user file/folder
sudo chmod 770 file/folder
Hope this helps someone.
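For what it's worth, the same connection can be checked from a terminal with OpenSSH's sftp client (the key file name and address are placeholders):
sftp -i my-key-pair.pem ec2-user@your-ec2-public-address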
FTP is a very troublesome protocol because it requires a secondary connection for the actual data transfer, and it definitely does not work well when tunnelled. With SSH you should use SFTP, which has nothing to do with FTP but is a completely different protocol.
Read more about it on Wikipedia.
Adding the key to the www user is a recipe for disaster! Any minor issue with your app will become a security nightmare.
As an alternative to FTP, consider using rsync or a more "mature" deploy strategy, based on Capistrano for instance. There are plenty of tools for that around.
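A minimal rsync-based deploy over SSH might look like this (key file, paths and address are placeholders):
rsync -avz -e "ssh -i my-key-pair.pem" ./app/ ec2-user@your-ec2-public-address:/var/www/html/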
Antti Haapala's tips are the only way to get SFTP working on EC2. It works just fine! Just note that you need to create the /var/www/.ssh/ folder and copy the authorized_keys file there.
After that you'll need to change the ownership of authorized_keys to www-data so the SSH connection can recognize it. Amazon should let people know that; I looked for this in their forums, FAQ, etc. No clue at all... Cheers once more to Stack Overflow, the way to go, haha!
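Put together, that setup looks something like this (assuming the web user is www-data, as above):
sudo mkdir -p /var/www/.ssh
sudo cp ~/.ssh/authorized_keys /var/www/.ssh/
sudo chown -R www-data:www-data /var/www/.ssh
sudo chmod 700 /var/www/.ssh
sudo chmod 600 /var/www/.ssh/authorized_keys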
I am using an SFTP client (WinSCP) to get into a remote server and retrieve some files. I cannot get to the SFTP server when I use WinSCP on a Windows 7 machine, but it works fine when I try it from an XP machine. Can anyone think of what might be wrong? Any help appreciated!
I am also including the error screenshot, in case that helps. Can anyone please help?
This could be a problem with your firewall. Check whether it is blocking WinSCP.
Quoting WinSCP documentation on the error message "Server unexpectedly closed network connection":
If you get this error message while connecting to your server, it is most usually caused by the server not being able to run some process necessary to support your session. Always try to connect with another SSH (SFTP) client to find out whether it is a server- or client-related problem. Possibilities are:
Shell:
- Your account may not be allowed to start a shell at all. With some servers (like OpenSSH or Sun SSH), you may need to be allowed to start a shell, even if using the SFTP protocol.
- Some servers refuse to start a shell if your password has expired or your account was terminated.
- Some shells do not work with non-interactive sessions. The same is true for some configurations (or profiles used) of otherwise working shells. This commonly exhibits with the SCP protocol, with the associated error message "Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended)." Try to force the bash shell explicitly on the SCP/Shell page of the Advanced Site Settings dialog. Using the SFTP protocol instead of SCP is another option.
- The OpenSSH server may fail to start a shell when chroot is configured but not possible (e.g. due to group-writeable permissions on the chroot directory).
- Some environments require specific permissions (e.g. 755) on files like .profile or .bashrc.
SFTP server:
- Your account may not be able to start the SFTP server binary (e.g. /bin/sftp-server), or the binary is not present on your server.
- Your SSH server may also lack the SFTP subsystem.
SSH server:
- Your SSH server, particularly OpenSSH, may not be able to access the server key files, due to incorrect permissions.
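As that excerpt suggests, a quick way to tell whether the problem is server-side is to try another client; OpenSSH's command-line sftp with verbose output usually shows where the connection dies (hostname is a placeholder):
sftp -v user@your-server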
Ok, so I have Hudson (v1.393) running in an Ubuntu VM and everything's working fine.
However, I'm trying to add a Mac slave to the Ubuntu master and I've run into a few problems.
I have set up SSH keys so that from the command line, the Ubuntu VM can ssh using the key into a user called hudson on the Mac.
In the Hudson slave configuration, I have "Launch slave agents on Unix machines via SSH" selected and have entered the host IP, the username of the user on the slave, and the location of my private key file on the master (whose public key has been added to the authorized_keys file on the slave).
However, the master fails to connect to the slave.
Looking at the log (below), it's trying to authenticate using a password.
Is this a fall back for a failed key based SSH attempt?
Is Hudson only trying to authenticate using a password, and I need to change something else to get it to use the key file which is defined in the configuration?
Is it just not possible to launch slave agents via SSH on a Mac? (I know the name of this slave launch method explicitly says Unix, but I was thinking (read: hoping) that it would work with OS X too.)
Log
[01/14/11 10:38:07] [SSH] Opening SSH connection to 10.0.1.188:22.
[01/14/11 10:38:07] [SSH] Authenticating as hudson/******.
java.io.IOException: Password authentication failed.
at com.trilead.ssh2.auth.AuthenticationManager.authenticatePassword(AuthenticationManager.java:319)
at com.trilead.ssh2.Connection.authenticateWithPassword(Connection.java:314)
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:565)
at hudson.plugins.sshslaves.SSHLauncher.launch(SSHLauncher.java:179)
at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:184)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
Caused by: java.io.IOException: Authentication method password not supported by the server at this stage.
at com.trilead.ssh2.auth.AuthenticationManager.authenticatePassword(AuthenticationManager.java:289)
... 9 more
[01/14/11 10:38:07] [SSH] Connection closed.
If anyone has managed to conquer this type of setup before, or has any tips or ideas, I'd be very grateful!
Thanks
I've recently run into the same problem, trying to launch an agent on a Mac OS X 10.6 machine using SSH.
To get password authentication to work you'll need to edit /etc/sshd_config on the Mac node, setting PasswordAuthentication yes.
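That is, something like this (on Mac OS X sshd is typically spawned per connection by launchd, so the change should apply to the next connection attempt):
# /etc/sshd_config on the Mac node
PasswordAuthentication yes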
In the Hudson dashboard take the node offline, make sure the configuration has a valid username and password, and launch the agent. Also make sure that the Remote FS root directory is owned by the build user you're connecting as.
For password-less SSH authentication, first check which user the Hudson master is running as. Let's assume that this is tomcat55. Generate a public/private SSH key pair (with an empty passphrase), then verify that the Hudson user can connect.
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tomcat55/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/tomcat55/.ssh/id_rsa.
Your public key has been saved in /home/tomcat55/.ssh/id_rsa.pub.
$ # authorize the hudson master on the hudson node
$ scp /home/tomcat55/.ssh/id_rsa.pub hudson@macnode:~/.ssh/authorized_keys
$ # test the connection
$ ssh -i /home/tomcat55/.ssh/id_rsa hudson@macnode
On the Hudson Mac node, /etc/sshd_config needs to allow password-less access:
Protocol 2
PubkeyAuthentication yes
In the node configuration clear the password field, and set the private key field (in this example it is /home/tomcat55/.ssh/id_rsa). You should now be able to launch the agent:
[01/19/11 22:38:44] [SSH] Opening SSH connection to macnode:22.
[01/19/11 22:38:44] [SSH] Authenticating as hudson with /home/tomcat55/.ssh/id_rsa.
[01/19/11 22:38:45] [SSH] Authentication successful.
Check the /var/log/auth.log file on the Ubuntu machine. I'm betting you need to chmod 700 the .ssh directory of the hudson user.
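For example (sshd's strict modes also want the authorized_keys file itself locked down):
sudo chmod 700 ~hudson/.ssh
sudo chmod 600 ~hudson/.ssh/authorized_keys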
I think the first answer (the selected one) is an awesome answer, but I did find a case where it is not the only solution.
In my case I had a Mac OS slave that was working; then I took that Mac down and brought up a new one. I thought I could just tweak the settings for the existing node's configuration to point it at the new Mac. It didn't work, and I had all the same errors and problems described throughout this thread.
Then I went in and deleted the node and recreated it with exactly the same settings, and it worked. I suspect the SSH host key fingerprint changed, and by deleting the node and recreating it I was able to get it working. Whatever it is, the component that caused it to fail is not a configuration option.