Are WebSockets more secure for communication between web pages? - ajax

This might sound really naive but I would really find a descriptive answer helpful.
So, my question is this:
I can use Firebug to look at AJAX requests made from any website I visit. So, am I right in saying that I wouldn't be able to examine the same communication between the client and the server if the website chooses to use WebSockets? In other words, does this make it more secure?

No. Not at all. Just because the browser does not (yet) have a tool to show WebSocket traffic doesn't make it any more secure. You can always run a packet sniffer to monitor the traffic, for example.

No, because there will be other ways besides the browser's built-in tools to read your traffic.
Give it a try: install and run Wireshark and you will be able to see every packet you send and receive via WebSockets.

Depends on the application. If you are fully AJAX-driven and never reload the document for data, then I would think WebSockets provide better authentication for data requests than a cookie session with regard to connection hijacking, given that you are using SSL, of course.

Never rely on the secrecy of an algorithm, because it only gives you a false sense of security. Wiki: Security through obscurity
Remember that the browser is a program on my computer, and I am the one who has full control over what is sent to you, not my browser (as the console sketch below shows).
I guess it's only a matter of time (a few months at most, IMO) before developer tools such as Firebug provide some fancy tool for browsing the data sent and received over WebSockets.
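For example (a minimal sketch; the endpoint and message format are hypothetical stand-ins for whatever a real page uses), nothing stops me from opening the developer console and talking to your server directly, bypassing whatever your page's own scripts would normally send:

```js
// Run from the browser's developer console. "wss://example.com/game" and the
// message shape are hypothetical placeholders.
const ws = new WebSocket('wss://example.com/game');

ws.addEventListener('open', () => {
  // The server only sees bytes arriving on a socket; it cannot tell whether
  // this payload came from the page's code or from me typing it by hand.
  ws.send(JSON.stringify({ action: 'move', x: 9999, y: 9999 }));
});

ws.addEventListener('message', (event) => {
  console.log('server replied:', event.data);
});
```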

WebSockets have both an unencrypted mode (ws://) and an encrypted mode (wss://). This is analogous to HTTP and HTTPS. The WebSocket protocol payload is simply UTF-8 encoded. From a network-sniffing perspective there is no advantage to using WebSockets (use wss:// and HTTPS for anything at all sensitive). From the browser's perspective there is no security benefit to using WebSockets either: anything running in the browser can be examined (and modified) by a sufficiently knowledgeable user. The tools for examining HTTP/AJAX requests just happen to be better right now.
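As a minimal illustration (the host name is a placeholder), switching from sniffable to encrypted transport is just a matter of the URL scheme, exactly like http:// versus https://:

```js
// Plaintext WebSocket: anyone on the network path (Wireshark, a rogue proxy,
// etc.) can read the UTF-8 payload of these frames.
const plain = new WebSocket('ws://example.com/feed');

// TLS-encrypted WebSocket: frames are protected in transit, just like HTTPS,
// though the user at the browser can of course still inspect them locally.
const secure = new WebSocket('wss://example.com/feed');

secure.addEventListener('open', () => secure.send('hello'));
secure.addEventListener('message', (event) => console.log(event.data));
```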

Related

Caching proxy for all traffic

I am trying to find (or write) a caching proxy tool that accepts all traffic from a specific container on my localhost (redirected there with iptables). What I want to do with this traffic is save it and cache the response; later, if I see that a request has already been sent to a server, I want to return the cached response to the requesting party (and not send the request to the server again, because a previous identical request was already sent).
I'm not sure exactly how big the problem I'm trying to deal with here is. I want to do it for all traffic, including HTTP, TLS, and other TCP-based traffic (database connections and such). I tried mitmproxy, and it seems to deal pretty well with HTTP and the TLS part, but intercepting raw TCP traffic (for databases etc.) is not possible.
Any advice or resources I can use to accomplish this (not necessarily in Python)? How complex do you think this problem is? Do you think I can find a generic solution?
Thanks in advance!
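For the raw TCP part, a minimal Node.js forwarding proxy (a sketch only; the ports and upstream address are placeholders) shows where such a cache would have to hook in, and also why a truly generic solution is hard: outside of HTTP there is no protocol-neutral notion of where one "request" ends and the next begins.

```js
// Minimal TCP pass-through proxy sketch (Node.js). Ports are placeholders.
// A cache would have to key on the upstream address plus the exact request
// bytes, which only works for protocols whose message boundaries you can parse.
const net = require('net');

const UPSTREAM_HOST = '127.0.0.1'; // placeholder
const UPSTREAM_PORT = 5432;        // placeholder (e.g. a database)
const LISTEN_PORT = 9000;          // iptables would redirect traffic here

const server = net.createServer((client) => {
  const upstream = net.connect(UPSTREAM_PORT, UPSTREAM_HOST);

  client.on('data', (chunk) => {
    // This is where a caching layer would inspect and record the request bytes.
    upstream.write(chunk);
  });
  upstream.on('data', (chunk) => {
    // ...and where it would record (or replay) the response bytes.
    client.write(chunk);
  });

  client.on('close', () => upstream.end());
  upstream.on('close', () => client.end());
  client.on('error', () => upstream.destroy());
  upstream.on('error', () => client.destroy());
});

server.listen(LISTEN_PORT);
```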

Is there a Socket.IO alternative that is not based on WebSockets?

I built a real-time application that, thanks to Socket.IO, can serve a lot of different client types (C#, Java, browser, ...)! I know that there are a lot of Socket.IO alternatives, but from my understanding, everything is more or less based on WebSockets. (I know that Socket.IO has fallbacks for when WebSockets are not working, but those are more or less "inferior workarounds", so to speak...)
My question is: Is there any comparable real-time engine available that is NOT based on WebSockets, but can still serve all those different clients?
You don't say what your endpoints are. If one of the endpoints is a browser with purely the built-in capabilities of the browser and JavaScript, then a WebSocket is your only way to get a continuous connection from the browser to some other destination.
If a WebSocket is not supported (in an older browser), then the other socket.io fallbacks (such as xhr-long-polling) are the next best alternatives. As the browser has limited communication capabilities, if you can't use a WebSocket, then an Ajax call is your only other generally supported option without requiring plug-ins in each browser (such as Flash or Java or something like that). socket.io already supports the next-best options that are available in a browser; you can't do better than that if you're talking about a standard browser with no custom plug-ins.
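For instance (a sketch assuming a Socket.IO 1.x-style client API; the server URL is a placeholder), you can tell Socket.IO to skip WebSockets entirely and run only on its HTTP long-polling transport:

```js
// Assumes the Socket.IO client library is loaded (e.g. via
// <script src="/socket.io/socket.io.js"></script>). URL is a placeholder.
// Restricting transports to 'polling' means no WebSocket upgrade is attempted.
const socket = io('https://example.com', {
  transports: ['polling']
});

socket.on('connect', () => {
  socket.emit('chat message', 'hello over plain HTTP long polling');
});

socket.on('chat message', (msg) => {
  console.log('received:', msg);
});
```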
If your endpoints don't necessarily include a browser and you can use any language or library you want, then you can use plain TCP sockets and run whatever protocol you want over a TCP socket.
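As a minimal sketch of that last option (Node.js here, but any language with a socket API works; the port and message format are arbitrary choices), a plain TCP connection carrying a trivial protocol needs neither HTTP nor WebSockets:

```js
// A plain TCP server speaking a trivial echo protocol, plus a matching client.
// Port and protocol are arbitrary; for the sketch each chunk is one message.
const net = require('net');

const server = net.createServer((socket) => {
  socket.setEncoding('utf8');
  socket.on('data', (msg) => {
    // Either side can write at any time once connected, so bidirectional
    // push comes for free when you control both endpoints.
    socket.write('echo: ' + msg);
  });
});

server.listen(4000, () => {
  const client = net.connect(4000, '127.0.0.1', () => {
    client.write('hello over raw TCP\n');
  });
  client.on('data', (data) => console.log(data.toString()));
});
```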
The WebSocket protocol establishes a bidirectional communication channel between server and client; they kind of speak more naturally with each other. The server can just send something to the client, and the other way around. In HTTP it goes in only one direction: there's a request and a response, and everything has to be initiated by a request from the client.
From my experience, real-time web apps like a multiplayer game or a chat become easier to develop, and it apparently creates less overhead than using HTTP. But you can still do the same things, more or less elegantly, with HTTP as well (see e.g. long polling, sketched after this answer).
Look at Gmail or other existing web apps: they all use HTTP (so does Socket.IO as a fallback), and it works quite well.
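A bare-bones long-polling loop (a sketch; the /events endpoint and JSON payload are placeholders) shows the pattern: the client keeps a request open so the server can "push" simply by answering whenever it has data:

```js
// Client-side long polling sketch. The server is expected to hold the
// request open until it has an event (or a timeout), then respond.
async function pollForever() {
  while (true) {
    try {
      const res = await fetch('/events'); // placeholder endpoint
      if (res.ok) {
        const event = await res.json();   // placeholder payload format
        console.log('server pushed:', event);
      }
    } catch (err) {
      // Network hiccup: back off briefly before reconnecting.
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
}

pollForever();
```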

How can web requests be made and go undetected by a packet sniffer tool like Charles?

I am using a third-party (OS X) tool to help me process OFX financial data. It works, but I am interested in knowing what exactly is going on behind the scenes to make it work (the structure of the HTTP requests).
I set up Charles as an SSL proxy for all traffic in the hope that I could observe the requests being made by this tool, but the program runs and Charles gets nothing. No requests show up whatsoever. How is that possible? Is there something I am not understanding about how Charles or other packet-sniffing tools work? What are some ways that web requests could be made that wouldn't show up in a tool like Charles?
Charles is not a packet sniffer; it's a proxy. The app initiating the connection has to "voluntarily" use the proxy for the proxy to see anything. If an app uses a high-level networking API like NSURLConnection, then it will, by virtue of the frameworks, automatically pick up the system-wide proxy settings and use the proxy. If, instead, the app wrote its networking using the low-level socket API, it will not go through the proxy unless it specifically re-implements that functionality.
If you want to see everything, you will need a real promiscuous-mode packet sniffer, which Charles is not. Unfortunately, using a "real" packet sniffer will just show you the gibberish going over the wire for SSL connections, so that's probably not what you want either. If an app has "in-housed" its SSL implementation and is not using a properly configured system-wide proxy, sniffing its traffic unencrypted will be considerably harder (you'll probably have to use a debugger or some other runtime hooking approach.)

What's the best way to continuously receive WebRTC calls in the browser?

I need to be able to continuously receive calls while a Chrome web page is open. How do I do that, even for users who are inside a strict enterprise network?
WebSockets? (but there's the problem of proxies that don't know what wss:// is)
HTTP? (but will I have to poll?)
Other?
Since you included the "vLine" tag, I'll reply with some information on how our WebRTC platform will behave in an enterprise network. vline.js will use a secure WebSocket by default if the browser supports it and fall back to HTTPS long polling. As described here, the secure WebSocket may work depending on the exact proxy configuration. Feel free to test it out by using GitTogether or creating your own vLine service for testing.
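The general strategy (a generic sketch, not vline.js internals; URLs and timings are placeholders) is simply to attempt a secure WebSocket first and drop down to HTTPS long polling if the connection never gets established:

```js
// Try wss:// first; if the proxy blocks it, fall back to HTTPS long polling.
// URLs are illustrative placeholders.
function connect(onMessage) {
  let opened = false;
  let polling = false;

  const ws = new WebSocket('wss://example.com/signaling');

  ws.addEventListener('open', () => { opened = true; });
  ws.addEventListener('message', (e) => onMessage(e.data));

  // If the proxy blocks wss://, the socket errors or closes before opening.
  ws.addEventListener('error', () => { if (!opened) startPolling(); });
  ws.addEventListener('close', () => { if (!opened) startPolling(); });

  function startPolling() {
    if (polling) return; // only fall back once
    polling = true;
    (async () => {
      while (true) {
        try {
          const res = await fetch('https://example.com/signaling/poll');
          if (res.ok) onMessage(await res.text());
        } catch (err) {
          // Back off briefly before retrying the long poll.
          await new Promise((r) => setTimeout(r, 1000));
        }
      }
    })();
  }
}
```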

How to detect connections made by the browser from a Firefox add-on?

I'm trying to develop an extension that detects every connection made by the browser, in order to figure out the URLs being accessed. I know that this is possible by writing an HTTP/SOCKS proxy and configuring the browser to route traffic through it. However, that's kind of overkill for the application I'm trying to develop, and it would be best done as a Firefox add-on if that's possible. Any clues/pointers would be highly appreciated.
Use nsIHttpActivityDistributor; it exposes a lot of information about the HTTP transaction and socket transport through the observeActivity callback.
Read the official documentation: https://developer.mozilla.org/en/Monitoring_HTTP_activity.
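A minimal observer following the pattern in that documentation (this is legacy, chrome-privileged add-on code, so the exact boilerplate may differ depending on your add-on framework):

```js
// Runs with chrome privileges inside a (pre-WebExtensions) Firefox add-on.
const activityDistributor =
  Components.classes["@mozilla.org/network/http-activity-distributor;1"]
    .getService(Components.interfaces.nsIHttpActivityDistributor);

const httpObserver = {
  observeActivity: function (aChannel, aType, aSubtype, aTimestamp,
                             aExtraSizeData, aExtraStringData) {
    const iface = Components.interfaces.nsIHttpActivityObserver;
    // Log the URL once per request, when the request headers are sent.
    if (aType === iface.ACTIVITY_TYPE_HTTP_TRANSACTION &&
        aSubtype === iface.ACTIVITY_SUBTYPE_REQUEST_HEADER) {
      const httpChannel =
        aChannel.QueryInterface(Components.interfaces.nsIHttpChannel);
      dump("Request to: " + httpChannel.URI.spec + "\n");
    }
  }
};

activityDistributor.addObserver(httpObserver);
```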
