PhpStorm: Algorithm negotiation fail - windows

I have a remote server where I host my projects, and I'm using PhpStorm locally, so every time I save, it synchronizes automatically with the remote server.
However, I haven't been able to configure PhpStorm to run PHPUnit on the remote server.
Under Configure Remote PHP Interpreter I fill out the right information (Host, User name, and Password).
The error I'm getting is "Algorithm negotiation fail" when I validate, and "Test SFTP Connection: Connection to 'IP address' failed. Connection failed" when I try to specify the path of the PHP interpreter.
How do I fix that?

I had the same problem. I solved it by adding
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
to /etc/ssh/sshd_config and then restarting sshd:
sudo systemctl restart sshd
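If you want to confirm what the server actually offers after the change (a quick check, assuming an OpenSSH server and client):
# on the server: show the key exchange algorithms sshd is configured to offer
sudo sshd -T | grep -i kexalgorithms
# on your client: list the key exchange algorithms your ssh build supports
ssh -Q kex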

Just upgraded Ubuntu to 16.04 and encountered this issue, the "Algorithm negotiation fail" error, in PhpStorm 8.0.3.
The problem is with the jsch-0.1.51.jar library. If you overwrite the jsch-0.1.51.jar file with the latest version from https://sourceforge.net/projects/jsch/ (currently jsch-0.1.54.jar) and restart, it should be fine. No need to add insecure algorithms to your ssh daemon.

As Guillaume Fache proposed, the minimal configuration for PhpStorm is:
KexAlgorithms diffie-hellman-group1-sha1
but diffie-hellman-group1-sha1 uses:
1) a 1024-bit modulus - breakable, marked as insecure
2) SHA-1 - breakable, with a confirmed collision attack possibility
Conclusion:
use a public/private key pair - it is more secure and there is no need to save or type a password.
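For completeness, a minimal sketch of setting up key-based login from the machine running PhpStorm (assuming OpenSSH on both ends; the user and host names are placeholders):
# generate a key pair locally (accept the defaults or set a passphrase)
ssh-keygen -t ed25519
# install the public key on the remote server
ssh-copy-id user@your-remote-server
# verify that key-based login works
ssh user@your-remote-server
In PhpStorm you would then point the remote interpreter configuration at the private key instead of a password.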

Edit this file:
sudo nano /etc/ssh/sshd_config
add this line:
KexAlgorithms diffie-hellman-group1-sha1
and restart:
sudo systemctl restart sshd
It works for me!

How to set up remote access on a Mac?

I need to work remotely and need to connect to a company network from my work Mac over the internet. How do I set this up? I have looked at different software for example OpenVPN and Tunnelblick. But not sure how to go about it. Any suggestions? Advice?
If your work Mac is behind a restrictive firewall and speed is your concern, you can try shadowsocks-libev to bypass the firewall. It is primarily designed to bypass the GFW and is used by millions of sneaky users. It is so fast that no VPN can compete with it.
For your work device (server side)
brew install shadowsocks-libev
# ss-server and ss-local installed
# create a server with listening port 3333
# sudo may be required
ss-server -p 3333 -m chacha20 -k your_password -u
For your client (home device)
brew install shadowsocks-libev
# apt install shadowsocks-libev
# sudo may be required
ss-local -s WORK_IP -p 3333 -b 127.0.0.1 -l 1080 -k your_password -m chacha20 -u
This creates a SOCKS5 proxy at 127.0.0.1:1080. Make sure the password ("your_password"), port (3333), and encryption method (chacha20) match on both sides.
Set your home device (client side) SOCKS5 proxy to 127.0.0.1:1080. Done.
Test IP
# With proxy, this would show your work Mac's IP
curl -x socks5h://localhost:1080 ifconfig.co/json
# without proxy
curl ifconfig.co/json
As for the client side, a GUI version is also recommended for beginners. An open source mobile version is also available.
This is a demo only. For security reasons, do not put any password on the command line; use -c config.json instead.
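For reference, a minimal server-side config.json might look like the sketch below (placeholder values; they must match what you pass to ss-local on the client):
# example config.json for ss-server (placeholder values)
cat > config.json <<'EOF'
{
    "server": "0.0.0.0",
    "server_port": 3333,
    "password": "your_password",
    "method": "chacha20",
    "mode": "tcp_and_udp"
}
EOF
# then start it without exposing the password in the process list
ss-server -c config.json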
You can try a VPN client.
You should ask your admins to set up a VPN account for you. After that you can connect with a VPN client of your choosing (for example Barracuda) and the provided credentials. Hope to have helped.
Since you brought up OpenVPN and Tunnelblick I should probably point out that
Tunnelblick is a free, open source graphic user interface for OpenVPN on macOS
Therefore Tunnelblick is probably going to be your app of choice.
Again, since you clearly are looking at OpenVPN I should point out there are two editions in circulation at the moment: commercial and community. I don't see any reason why you should pick the commercial edition, as your setup seems to be pretty simple. You will probably end up with a checklist of the following things to do:
set up an OpenVPN server in your company network (Windows, Linux, PC, Mac, Raspberry Pi - the range of supported platforms is very extensive)
on the server, generate keys for your client(s) (or use a pre-shared secret as described in the quick start; see the sketch after this list)
write and securely transport .ovpn config files (you can embed the keys in there for simplicity) over to your Mac
import the .ovpn file into Tunnelblick and start
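As a rough illustration of the pre-shared-secret route (a sketch only, based on OpenVPN's static key mini-HOWTO; the host name and tunnel IPs are placeholders):
# on the server: generate a shared static key and start a point-to-point tunnel
openvpn --genkey --secret static.key
openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret static.key
# on the Mac (after copying static.key over securely)
openvpn --remote vpn.example.com --dev tun --ifconfig 10.8.0.2 10.8.0.1 --secret static.key
This gives a single point-to-point tunnel; for more than one client you would switch to the certificate-based setup described in the longer answer below.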
The official quick start guide is probably the best place to start.
There's a whole bunch of other things that you (or more likely, your workplace network admin) will have to sort out. Just to name a few: routing and NAT-ting, ip address/domain name for OpenVPN server, firewall rules on machines you connect to.
But covering it all here without knowing your specifics will be problematic.
You could use AnyDesk or a VNC server to connect to your machine remotely. It's easy to use.
Your problem is not what you need to do on your Mac. What you do on the Mac-side is only half of any viable solution.
What you need to find out is what ways of connecting to the "company network" are provided by the company. Is anyone able to connect to the company network from a non-Mac computer? Does the company have any IT staff? Or do you have the authority/means to change their network configuration?
First of all, what type of control do you need? If we're talking about files and stuff like that, then you should run an SSH server on your Mac. More about that here (stackoverflow.com / superuser.com) and here (apple.com).
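If you go the SSH route, a minimal sketch for macOS (assuming an admin account on the work Mac; the user and host names are placeholders):
# on the work Mac: turn on the built-in SSH server (same as System Preferences > Sharing > Remote Login)
sudo systemsetup -setremotelogin on
# from home: connect and work with files over SFTP
sftp your_user@your-work-mac.example.com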
Another way to do that is to run remote control software (for example, TeamViewer), but it's laggy and unstable.
I was in the same situation as you a few months ago and used Tunnelblick on macOS, which worked perfectly fine.
Since you are going to connect to your company network, I suggest you configure a VPN server and client to do that. I have configured the OpenVPN community edition for this. The steps are:
Server side configuration
- Log in as root: sudo su
- Install OpenVPN and Easy-RSA: apt-get install openvpn easy-rsa
- Copy the sample server.conf to /etc/openvpn: gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz > /etc/openvpn/server.conf
- Edit server.conf:
  - Check that Diffie-Hellman is set to 2048 bits: dh dh2048.pem
  - Uncomment push "redirect-gateway def1 bypass-dhcp"
  - Uncomment push "dhcp-option DNS 10.0.2.100", or put any other DNS server you want (the default setting is OpenDNS).
- Set up IP forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward
- Also edit /etc/sysctl.conf and uncomment the line net.ipv4.ip_forward=1. This makes the IP forwarding persist when you reboot.
- Set up ufw (Uncomplicated Firewall, a frontend to iptables):
  - ufw allow ssh
  - ufw allow 1194/udp
  - Edit /etc/default/ufw and set DEFAULT_FORWARD_POLICY to ACCEPT.
  - Edit /etc/ufw/before.rules and add the following lines near the top:
    *nat
    :POSTROUTING ACCEPT [0:0]
    -A POSTROUTING -s 10.0.8.0/8 -o ens4 -j MASQUERADE
    COMMIT
  - Run ufw enable.
- Do a ufw status and check that the rules are set up properly.
- Set up the RSA keys:
  - cp -r /usr/share/easy-rsa/ /etc/openvpn/
  - mkdir /etc/openvpn/easy-rsa/keys
  - Edit /etc/openvpn/easy-rsa/vars and change KEY_COUNTRY etc., and set KEY_NAME="server"
  - Generate the Diffie-Hellman PEM file: openssl dhparam -out /etc/openvpn/dh2048.pem 2048
  - cd /etc/openvpn/easy-rsa/
  - . ./vars
  - ./clean-all
  - ./build-ca
  - ./build-key-server server
  - cd keys && cp server.crt server.key ca.crt /etc/openvpn
  - At this point /etc/openvpn should contain server.key, server.crt, ca.crt and dh2048.pem.
- Start OpenVPN: service openvpn start
- Generate the client config:
  - Copy the client config from the samples: cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client/client.ovpn
  - Generate the client keys: cd /etc/openvpn/easy-rsa && ./build-key client. This will generate the client.crt and client.key files.
  - Copy client.crt, client.key and ca.crt to ~/client.
  - Edit client.ovpn:
    - Edit the entry "remote my-server-1 1194" and put the IP/hostname of the VPN server in place of my-server-1.
    - At the end, append "auth-user-pass".
    - On a new line, add an opening <ca> tag, append the contents of /etc/openvpn/ca.crt, then add a closing </ca> tag.
    - Append an opening <cert> tag, the contents of client.crt, and a closing </cert> tag.
    - Append an opening <key> tag, the contents of client.key, and a closing </key> tag.
    - Comment out the keys "remote-cert-tls server" and "tls-auth ta.key 1".
    - Uncomment "user nobody" and "group nogroup".
  - Save the file and download it to your Mac client securely.
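For reference, the edited client.ovpn should end up looking roughly like the sketch below (the server address is a placeholder, and the certificate/key bodies are of course your own):
client
dev tun
proto udp
remote YOUR_VPN_SERVER 1194
resolv-retry infinite
nobind
user nobody
group nogroup
persist-key
persist-tun
# remote-cert-tls server
# tls-auth ta.key 1
verb 3
auth-user-pass
<ca>
... contents of /etc/openvpn/ca.crt ...
</ca>
<cert>
... contents of client.crt ...
</cert>
<key>
... contents of client.key ...
</key>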
Client side configuration
Download the OpenVPN macOS client (https://openvpn.net/vpn-server-resources/connecting-to-access-server-with-macos/).
Import the .ovpn file mentioned earlier.
Connect using this client.

Jenkins couldn't clone GIT repository (MacOS X 10.8.2)

System: MacOS, standard Jenkins installation.
I can clone the repository as my own user. But Jenkins can't, neither from Git nor from GitHub (my key is added to both Git and GitHub). I receive: "stderr: Host key verification failed."
I've copied my key into /Users/Shared/Jenkins/.ssh - but still no luck :( Maybe I've copied it to the wrong place?
Generating an ssh key for Jenkins is not an option for me.
What am I doing wrong? Thanks in advance!
This is usually related to permissions, as the Jenkins process runs as user 'jenkins'.
See here: How to run jenkins as a different user - especially the answers of Sagar and Peter Tran.
Cheers
Like the error says, the problem (at least the first one) is with host key verification. The first time you connect to an ssh server, the ssh client will prompt you to check and accept the host key. (Of course no-one does that, so I don't know why it bothers...)
You could
sudo -u jenkins -i
and then
ssh git@github.com
and then reply to the prompt. Alternatively you can disable host key checking. Look up StrictHostKeyChecking in man ssh_config.
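For example, a sketch of pre-accepting GitHub's host key for the Jenkins user without an interactive prompt (assuming the Jenkins home is /Users/Shared/Jenkins, as in the question):
sudo -u jenkins -i
mkdir -p ~/.ssh
# record GitHub's host key so the clone no longer prompts
ssh-keyscan github.com >> ~/.ssh/known_hosts
# sanity check: should report successful authentication instead of failing on the host key
ssh -T git@github.com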

PuTTY fatal error: "No supported authentication methods available"

PuTTY fatal error:
No supported authentication methods available
When I try to log in to the production server, I get the above error. Could anyone help me fix this?
Edit file
sudo vi /etc/ssh/sshd_config
Set PasswordAuthentication yes
Then restart the server (whichever command applies to your distribution):
sudo service ssh restart
sudo service sshd restart
It worked for me after I did the following steps:
1- Download PuTTYgen (https://www.puttygen.com/download-putty)
2- Open PuTTYgen and then load the private key from:
C:\Users\[username]\Chapter6\.vagrant\machines\default\virtualbox
3- Save the new private key with a new name.
4- Open PuTTY, go to Connection > SSH > Auth and add the new private key
5- Connect now using 127.0.0.1 and 2222
I think your private key file format is not compatible with PuTTY, because PuTTY uses its own native format instead.
Details: http://tartarus.org/~simon/putty-snapshots/htmldoc/Chapter10.html#errors-no-auth
If you are using a cloud service and trying to connect to the server using ssh, don't log in with the user name ec2-user; the default user name is ubuntu for an Ubuntu server.
This error can also be seen if you haven’t selected the .ppk file for the session in Putty: Connection > SSH > Auth
You’re done if you’ve employed PuTTYgen to generate the keys. Otherwise, import the private key into a .ppk file as others have instructed.
Note that on Linux, as opposed to Windows, puttygen is accessed only via the command line. Here are some resources for that:
https://the.earth.li/~sgtatham/putty/0.76/htmldoc/Chapter8.html#pubkey
http://manpages.ubuntu.com/manpages/bionic/man1/puttygen.1.html
https://www.ssh.com/academy/ssh/putty/linux/puttygen
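For example, converting an existing OpenSSH/PEM private key to PuTTY's .ppk format from the Linux command line might look like this (file names are placeholders):
# install the command-line tool (package name on Debian/Ubuntu)
sudo apt install putty-tools
# convert the key to PuTTY's native private key format
puttygen my-key.pem -o my-key.ppk -O private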
In my case, I updated the Putty application to the latest and issue was solved.
Do you still have access to the server (maybe an open shell?) Check /var/log/messages for more details. This could have something to do with your PAM configuration.
Did you change folder permissions? I hit this issue this week and found that the cause was that I had changed the permissions on the home folder (named ec2-user).
1. Edit the /etc/ssh/sshd_config file.
2. Change PasswordAuthentication and ChallengeResponseAuthentication to yes.
3a. Restart ssh: /etc/init.d/ssh restart.
OR
3b. Better, use service sshd restart.
If you've saved your public key on an external drive and it's not connected, putty will throw this error when connecting to your remote server.
Solved via Puttygen
I was on a Windows system, and it doesn't support direct shell access like Linux or macOS.
Download Puttygen.
Load the .pem key to puttygen
Save as Private key
Use this key to login to ec2 instance
P.S.: Also, if SSH asks for a login/username, enter ubuntu or admin.
Download Puttygen
Load the .pem key to puttygen
convert .pem file to .ppk
Save as Private key
Install/open PuTTY >> PuTTY Configuration >> Auth >> Browse >> path to .ppk file
Use this key to login to ec2 instance (check that IP of remote server is allowed in security group config of EC2 instance)
Username
The usual user names are ec2-user, ubuntu, centos, root, or admin
If the server is in the cloud, like AWS, the rookie mistake I made was not realizing that a new Public IPv4 DNS name gets assigned when the instance has been off for some time. So, check the new DNS name.
Today I faced the same problem. In PuTTY you have to use the "user name" of your EC2 instance.
To get the "user name" of your EC2 instance:
Select the EC2 instance
Select Connect
Now go to PuTTY and use ec2_name@public_address
To see your public address:
Select EC2
Under details you will be able to see your public address.
Now try loading your "ppk" file and you will be able to log in.
For Digital Ocean, we should enable password authentication first.
The complete instruction is here: https://docs.digitalocean.com/support/i-lost-the-ssh-key-for-my-droplet/#enable-password-authentication
Log in to the Droplet via the Recovery Console
Even though you have a root password for the Droplet, if you try to log in via SSH using that password immediately, you’ll receive a Permission denied (publickey) error. This is because password authentication is still disabled on the Droplet. To fix this, you need to log in via the Recovery Console and update its SSH configuration.
There are detailed instructions on how to connect to Droplets with the Recovery Console for a more explicit walkthrough, but here's a brief summary:
On the Droplet's detail page, in the same Access tab, click the Launch Console button.
At the login prompt, enter root as the username.
At the subsequent password prompt, enter the root password you were sent via email. Most distributions prompt you to enter the password twice, but some (like Fedora 27) do not.
Enter a new root password to replace the one that was emailed to you, then enter that same new password again.
You will now be logged in as root in the Recovery Console, which gives you access to the Droplet's SSH configuration.
Enable Password Authentication
To enable password authentication on your Droplet, you need to modify a line in its SSH config file, which is /etc/ssh/sshd_config.
Open /etc/ssh/sshd_config using your preferred text editor, like nano or vim. Find the line that reads PasswordAuthentication no and change it to PasswordAuthentication yes, then save and exit the file.
Because the SSH daemon only reads its configuration files when it's first starting, you need to restart it for these changes to take effect. The command to do this depends on your operating system:
Operating System: SSH Restart Command
Ubuntu 14.x: service ssh restart
Ubuntu 15.4 and up: systemctl restart ssh
Debian: systemctl restart ssh
CentOS 6: service sshd restart
CentOS 7: systemctl restart sshd
Fedora: systemctl restart sshd
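If you prefer a one-liner over editing the file by hand, something like this should work (a sketch assuming the line currently reads "PasswordAuthentication no"; pick the restart command for your distribution from the table above):
# flip PasswordAuthentication from no to yes
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
# then restart the daemon, e.g. on CentOS 7
sudo systemctl restart sshd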

Git and http_proxy (SparkleShare on windows and http_proxy)

I've just successfully built SparkleShare for Windows according to this guide:
https://github.com/wimh/SparkleShare/wiki
and exported my ssh public key to a server.
The problem is that I can't connect from a client behind an http_proxy to a public server with ssh running on a custom port. I also had a problem with cloning from any git server; I need to switch from the git:// protocol to http://. Any suggestions? Does anyone have similar experience?
This is a log file:
15:25:13 [SSH] ssh-agent started, PID=4380
Identity added: C:\Users\MYUSER\AppData\Roaming\sparkleshare\sparkleshare.MYEMAIL.key (C:\Users\sg0922706\AppData\Roaming\sparkleshare\sparkleshare.MYEMAIL.key)
15:25:34 [Fetcher][C:\Users\MYUSER\Documents\SparkleShare.tmp\share] Fetching folder: ssh://MYGITUSER@MYHOST/MYPATH
15:25:34 [Fetcher] Disabled host key checking MYHOST
15:25:34 [Cmd] git clone --progress "ssh://MYGITUSER@MYHOST/MYPATH" "C:\Users\MYUSER\Documents\SparkleShare.tmp\share"
15:25:37 [Git] Exit code 128
15:25:37 [Fetcher] Failed
15:25:37 [Fetcher] Enabled host key checking for MYHOST
To get SparkleShare to use your proxy you will need to modify the config of the msysgit that is installed as part of SparkleShare. Navigate to C:\Program Files (x86)\SparkleShare\msysgit\etc, edit the gitconfig file in Notepad, and add the following line under the [http] tag:
proxy = http://user:pass@proxyurl:port
modifying the url as required to match your settings. You can then use the "On my own server" option to add the http url of your repository.
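Equivalently, instead of editing gitconfig by hand you can let git write the same setting (run it with the git that SparkleShare uses so it lands in the right config; the proxy URL and credentials below are placeholders):
git config --global http.proxy http://user:pass@proxyurl:port
# to remove it later
git config --global --unset http.proxy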
I have a workaround for this particular problem. I guess that you already successfully connected to your server via a simple SSH client (i.e. PuTTY)? With PuTTY you can easily configure an ssh connection via any kind of proxy (such as HTTP, SOCKS, Telnet, ...).
What you can do now is specify a local "tunnel" (an SSH port forwarding rule) like this: L22 127.0.0.1:22 (see attachment). If you are using an ssh command line, add the following option: -L 22:127.0.0.1:22.
So now, as soon as your terminal is open and running, you'll be able to reach your git server via the server url ssh://git@127.0.0.1.
If your local port 22 is busy you can define the tunnel on another port, e.g. if port 44 is not occupied: L44 127.0.0.1:22. The url to use in SparkleShare then becomes ssh://git@127.0.0.1:44.
But it's a workaround. I'm looking for a better solution.
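Another option on the client side is to let ssh itself go through the HTTP proxy with a ProxyCommand entry in ~/.ssh/config (a sketch assuming an OpenSSH client and a netcat build that supports -X connect; the host alias, proxy address and ports are placeholders):
# ~/.ssh/config
Host mygitserver
    HostName MYHOST
    # the custom ssh port of the git server
    Port 2222
    User MYGITUSER
    # tunnel the ssh connection through the corporate HTTP proxy
    ProxyCommand nc -X connect -x proxy.example.com:8080 %h %p
With that in place, ssh mygitserver (and git URLs using that host alias) go through the proxy transparently.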

Uploading to EC2 problems. How do you do FTP?

I have setup a new EC2 instance on AWS and I'm trying to get FTP working to upload my application. I have installed VSFTPD as standard, so I haven't changed anything in the config file (/etc/vsftpd/vsftpd.conf).
I have not opened port 21 in the security group, because I'm doing it through SSH. I log into my EC2 through the terminal like so:
sudo ssh -L 21:localhost:21 -vi my-key-pair ec2-user@ec2-instance
I open up FileZilla and log into localhost. Everything goes fine until it comes to listing the directory structure. I can log in all right and everything seems fine, as you can see below:
Status: Resolving address of localhost
Status: Connecting to [::1]:21...
Status: Connection established, waiting for welcome message...
Response: 220 Welcome to EC2 FTP service.
Command: USER anonymous
Response: 331 Please specify the password.
Command: PASS ******
Response: 230 Login successful.
Command: OPTS UTF8 ON
Response: 200 Always in UTF8 mode.
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: EPSV
Response: 229 Entering Extended Passive Mode (|||37302|).
Command: LIST
Error: Connection timed out
Error: Failed to retrieve directory listing
Is there something I'm missing in my config file? A setting which needs to be set or turned off? I thought it was great that it connected, but when it timed out you could picture my face. It meant time to start trawling the net to try and find the answer! So far with no luck.
I'm using the standard Amazon AMI 64 bit. I have a traditional lamp setup.
Can anyone steer me in the right direction? I have read a lot about getting this working, but the guides are all incomplete, as if the authors got bored halfway through typing up how to do it.
I would love to hear how you guys do it as well, if it makes life easier. How do you upload your apps to an EC2 instance? (Steps please - it saves a lot of time, plus it is a great resource for others.)
I figured it out, with help from Antti Haapala's pointer.
You don't even need VSFTPD set up on the instance. All you have to do is make sure the settings are right in FileZilla.
This is what I did (I'm on a mac so it should be similar on windows):
Open up file zilla and go to preferences.
Under preferences click SFTP and add a new key. This is your key pair for your EC2 instance. You will have to convert it to the format FileZilla uses; it will give you a prompt for the conversion.
Click OK and go back to Site Manager.
In Site Manager, enter your EC2 public address; this can also be your elastic IP.
Make sure the protocol is set to SFTP
Put in the user name of ec2-user
Remove everything from the password field - make it blank
All done! Now connect.
That's it, you can now traverse your EC2 system. There is a catch: because you are logged in as ec2-user and not root, you will not be able to modify anything. To get around this, change the group ownership of the directory where your application will live (/var/www/html or wherever). I would change it so it is on an EBS volume. ;) Also make sure this group has read, write and execute permissions. The group for the ec2-user is ec2-user. Leave everyone else with nothing. So the commands to use while logged in via ssh are:
sudo chgrp ec2-user file/folder
sudo chmod 770 file/folder
Hope this helps someone.
FTP is a very troublesome protocol because it requires a secondary connection for the actual data transfer, and it definitely does not work well when tunnelled. With ssh you should use SFTP, which has nothing to do with FTP but is a completely different protocol.
Read also on Wikipedia
Adding the key to www is a recipe for disaster! Any minor issue with your app will become a security nightmare.
As an alternative to ftp, consider using rsync or a more "mature" deploy strategy based on Capistrano, for instance. There are plenty of tools for that around.
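For instance, a minimal rsync-over-ssh deploy might look like the sketch below (the key file, paths and host are placeholders; it assumes the target directory is writable by ec2-user as described above):
# push the application directory to the instance over ssh
rsync -avz -e "ssh -i my-key-pair" ./myapp/ ec2-user@ec2-instance:/var/www/html/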
Antti Haapala's tips are the only way to get SFTP working with EC2. It works just fine! Just note that you need to create the /var/www/.ssh/ folder and copy the authorized_keys file there.
After that you'll need to change the ownership of authorized_keys to www-data so the ssh connection can recognize it. Amazon should let people know that. I looked for this in their forums, FAQ, etc. No clue at all... Cheers once more to stackoverflow, the way to go haha!
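A sketch of those two steps (assuming the web user really is www-data on your image; on the standard Amazon Linux AMI it may be apache instead):
sudo mkdir -p /var/www/.ssh
sudo cp ~/.ssh/authorized_keys /var/www/.ssh/
# give ownership to the web user so sshd accepts the key for that account
sudo chown -R www-data:www-data /var/www/.ssh
sudo chmod 700 /var/www/.ssh
sudo chmod 600 /var/www/.ssh/authorized_keys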
