WebSocket connection fails on Ubuntu 13.10

We're having a weird problem at work and I'm hoping someone here can give me some ideas on how to troubleshoot it.
The problem is that I cannot make WebSocket connections from my Kubuntu 13.10 workstation. I've tried from both Chrome and Firefox. I'm behind a proxy, and at first I thought that must be the reason. However, I got some coworkers to try to connect to the same WebSocket echo demo, and all of them were able to, except one. He was the only other one running Ubuntu (same as me); the others were on Mac, Windows, and even one on Red Hat! Theirs all worked fine.
Ok, so now for the really weird part. I created and ran a virtual machine on my workstation (the one that couldn't connect). The VM is a Lubuntu 13.10 and what do you know, the darn thing establishes a websocket connection just fine!
So any ideas on how to troubleshoot this or even some suggestions for solutions would be very much appreciated.

Ugh... well that one was dumb.
So it turns out that in Linux you can check a checkbox in your Network Proxy settings (the system settings) that will use the same proxy for all protocols.
Yeah... don't do that!
Unless, that is, your proxy server supports SOCKS as well as HTTP/HTTPS/FTP (highly unlikely).
It turns out that if you check that checkbox, your proxy server also gets registered as a SOCKS proxy, and for some reason WebSocket connections in both Chrome and Firefox will prefer that SOCKS entry. So your HTTP proxy ends up receiving a bunch of SOCKS handshakes it doesn't understand, and every WebSocket connection fails.
This was tested on both Ubuntu and Kubuntu and the "problem" exists on both.
TL;DR: Don't check the "Use this proxy server for all protocols" checkbox unless your proxy server actually supports the SOCKS protocol. Instead, manually fill in the same server for the different protocols (HTTP, HTTPS and FTP) but leave the SOCKS field empty (or point it at an actual SOCKS proxy server).
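On a GNOME-based desktop the same per-protocol setup can be done from the command line. This is a sketch using `gsettings` with a placeholder proxy address (`proxy.example.com:3128` is an assumption, not the poster's actual server); KDE/Kubuntu stores the equivalent settings in its own System Settings dialog.

```shell
# Sketch, assuming a GNOME desktop and an example HTTP proxy at proxy.example.com:3128.
# Fill in each protocol manually instead of ticking "use for all protocols":
gsettings set org.gnome.system.proxy mode 'manual'
gsettings set org.gnome.system.proxy.http  host 'proxy.example.com'
gsettings set org.gnome.system.proxy.http  port 3128
gsettings set org.gnome.system.proxy.https host 'proxy.example.com'
gsettings set org.gnome.system.proxy.https port 3128
gsettings set org.gnome.system.proxy.ftp   host 'proxy.example.com'
gsettings set org.gnome.system.proxy.ftp   port 3128
# Leave SOCKS empty so browsers don't try to speak SOCKS to an HTTP proxy:
gsettings set org.gnome.system.proxy.socks host ''
gsettings set org.gnome.system.proxy.socks port 0
```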

Related

How does the VPN server redirect all passed HTTPS requests to a fixed interface

Background: I installed a VPN on CentOS (the ipsec-vpn project on GitHub). I want to redirect all HTTPS requests that pass through the VPN to a fixed interface; the function is similar to Charles's rewrite function. One possible solution is to build a server with Charles installed, point the VPN server's proxy at Charles, and then use Charles to modify the requests. But Charles can only be installed on a Windows server, and it's expensive to run another server. My questions are:
Can the VPN itself achieve the above objectives?
If it can, how should it be done?
If proxies must be used, does CentOS have similar tools, and how do I use them?
I have since solved this problem with a DNS server.
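The poster doesn't show their DNS setup, but one common way to implement it on CentOS is dnsmasq's `address` directive, which resolves a chosen hostname to a fixed IP for every VPN client. This is only a sketch: the hostname, IP, and file path are placeholders, not details from the original setup.

```shell
# Sketch of a DNS-based redirect on CentOS, assuming dnsmasq;
# api.example.com and 10.0.0.5 are placeholder values.
sudo yum install -y dnsmasq
# Resolve the target hostname to the fixed interface's IP:
echo 'address=/api.example.com/10.0.0.5' | sudo tee /etc/dnsmasq.d/redirect.conf
sudo systemctl enable --now dnsmasq
# Then push this machine as the DNS server in the VPN's client configuration,
# so all VPN clients resolve the hostname through it.
```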

How does SOCKS configuration work in Firefox?

I'm using a manual proxy configuration in Firefox to use an SSH tunnel, and everything works fine. However, I'm trying to understand how it works. From my understanding, Firefox just forwards every request to the specified port, but I'm not quite sure what that means. Also, what happens when I don't use a proxy? I guess Firefox uses another port or something? Basically, I'm trying to understand how a web browser connects to the internet.
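For concreteness, here is a sketch of the kind of tunnel being described, with a placeholder host and port. `ssh -D` opens a local SOCKS proxy; Firefox, configured with SOCKS host 127.0.0.1 port 1080, then hands each request to that local port, and ssh relays it through the encrypted connection to the remote machine, which makes the actual outbound connection. Without a proxy, Firefox simply opens a TCP connection directly to each site's port 80/443.

```shell
# Sketch: a local SOCKS proxy over SSH (host and port are placeholders).
# -D 1080: listen on 127.0.0.1:1080 and speak the SOCKS protocol there
# -N: no remote command, just forwarding
ssh -D 1080 -N user@remote.example.com
```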

How to use direct connection applications behind a kerberos proxy

I have a corporate proxy using Squid with Kerberos authentication. The proxy is configured for standard use, i.e. it allows HTTP, HTTPS and a few other protocols and blocks everything else. Now, many applications support basic proxy authentication but not Kerberos, and many others connect directly to the internet. Before the upgrade to Kerberos I used Proxifier to make my applications use the proxy, but I cannot do so now. I then installed an application called Px to create a proxy that handles the Kerberos negotiation, but the proxy it creates is a plain HTTP proxy and Proxifier doesn't work correctly with it. Does anyone have a setup for a situation like this? I use Windows 10 and obviously don't have access to the server where Squid is configured. The application I need to connect to the internet uses standard HTTPS ports; it's not a torrent application nor anything that uses the ports blocked by Squid. Thanks in advance.
Ok, for this particular case I've found the following setup to solve 99% of my problems.
First, get Px here: https://github.com/genotrance/px
Next, get Fiddler: http://www.getfiddler.com/dl/Fiddler4BetaSetup.exe
Configure Px with your user and your domain and run it. By default it creates a proxy listening on 127.0.0.1:3128.
Configure your system proxy to use the proxy supplied by Px.
Run Fiddler; it should create ANOTHER proxy at 127.0.0.1:8888.
Use this second proxy in your apps. Proxifier should work as well.
Why use Fiddler and not 127.0.0.1:3128 directly? Px creates a pure HTTP proxy, while Fiddler can tunnel HTTPS and CONNECT requests through it.
Every request passes through Fiddler, which redirects it to the Px proxy, which redirects it to the Squid proxy (so expect very slow speeds).
In the end, since you're just redirecting your apps towards your proxy, some apps will NOT work if the proxy blocks requests with regex rules or bans direct-to-IP connections; in those cases, Tor or a VPN is the only real solution. Hope this helps someone avoid all the headaches I went through.
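A quick way to sanity-check each hop of the chain described above is curl (bundled with recent Windows 10 builds). This is a sketch using the default ports mentioned in the steps; adjust them if you changed the defaults.

```shell
# Verify the Px hop: a plain HTTP request through the pure HTTP proxy.
curl -x http://127.0.0.1:3128 http://example.com/

# Verify the Fiddler hop: an HTTPS request, which needs a CONNECT tunnel.
curl -x http://127.0.0.1:8888 https://example.com/
```

If the first command works but the second fails only on port 3128, that confirms the CONNECT-tunneling limitation that makes the Fiddler hop necessary.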

Delays when using Firefox (34.0) to query Neo4j on a remote machine

I have a strange problem with Firefox (34.0) vs. Internet Explorer (11) and Neo4j (community 2.1.6):
When I connect locally (localhost:7474/browser), I get answers in <= 200ms time with both browsers.
When I connect to a remote computer (other:7474/browser), the answers in Firefox take 30 seconds plus a few milliseconds.
Has anyone seen the same problem, or any ideas about the cause of these delays?
I finally found the problem myself. In Internet Options -> Connections -> LAN settings, there is a proxy configured and "Bypass proxy server for local addresses" is enabled. Firefox is configured to use the system proxy settings.
In IE, both hostname and hostname.mydomain.com work. In Firefox, only hostname.mydomain.com works; if I use only hostname, the delay occurs (I still don't understand why). If I switch to manual proxy configuration in Firefox and explicitly add hostname to the exclusion list, it works without the delay.
So it has to do with the proxy settings, although I would expect that the local domain would be appended to hostname if it is not fully qualified, and the result should be the same. So this seems to be a bug in Firefox?! But it does not occur when connecting to e.g. an Apache server, so it is probably a problem in the Neo4j browser code.
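If you want the exclusion to survive in the profile rather than clicking through the dialog each time, it can be pinned in the profile's user.js. This is only a sketch: the profile path is a placeholder (on Windows the profiles live under %APPDATA%\Mozilla\Firefox\Profiles), and `hostname` stands in for the actual unqualified machine name.

```shell
# Sketch: persist manual proxy mode plus an exclusion for the bare hostname.
# PROFILE.default is a placeholder; use your real profile directory.
cat >> ~/.mozilla/firefox/PROFILE.default/user.js <<'EOF'
// 1 = manual proxy configuration
user_pref("network.proxy.type", 1);
user_pref("network.proxy.no_proxies_on", "localhost, 127.0.0.1, hostname");
EOF
```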

Configuring iTerm and Git to use a proxy on OS X

I am successfully connecting to the internet using an application called Tether on a jailbroken iPhone (I know there are better options now).
My iphone is connected to my laptop's wifi "device network".
I have in my OS X network settings a location called iphone and the proxy is configured to use the correct IP and port for the phone.
I can browse the internet using Chrome over http and https perfectly.
iTerm cannot ping google. Git cannot pull. I've googled for days and don't see anything "easy" or that I understand. Any advice is appreciated.
Command-line tools usually only support HTTP proxies. To provide an HTTP proxy on top of a SOCKS one, there is Privoxy. Once you've set up Privoxy, you have an HTTP proxy. In a terminal, export http_proxy=ip:port is usually enough for most applications. For Git specifically, consult here.
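As a concrete sketch of the terminal side: Privoxy listens on 127.0.0.1:8118 by default, but substitute whatever address your instance actually uses.

```shell
# Point most command-line tools at the local HTTP proxy
# (127.0.0.1:8118 is Privoxy's default listen address).
export http_proxy=http://127.0.0.1:8118
export https_proxy=http://127.0.0.1:8118

# Git can also carry the setting in its own config, independent of the shell:
git config --global http.proxy http://127.0.0.1:8118
```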
But if that's too much for you, you can use sshuttle. It transparently routes all your connections (that is, all the connections on your computer) through an SSH connection, so you don't have to change proxy settings for your GUI apps either.
I use Homebrew as my package manager in Mac, and both Privoxy and sshuttle are available in it.
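Putting the two together, here is a sketch of the install and the sshuttle invocation; the SSH host is a placeholder for whatever machine you can already reach over SSH.

```shell
# Install both tools via Homebrew:
brew install privoxy sshuttle

# Transparently route all TCP traffic (0.0.0.0/0) through an SSH connection
# to a host you control; no per-app proxy settings needed.
sshuttle -r user@remote.example.com 0/0
```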
