I am looking to get access to all HTTP traffic on my machine (my Windows machine - not a server). From what I understand, having a local proxy through which all traffic routes is the way to go. I have been Googling but failed to find any resources (with respect to Ruby) to help me out. Any tips or links are much appreciated.
There's an HTTP proxy in WEBrick (part of the Ruby stdlib), and an implementation can be quite short.
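A minimal sketch (the port and the logging handler are illustrative choices, not requirements):

    require 'webrick'
    require 'webrick/httpproxy'

    # Local proxy on port 8080 that logs every request/response pair
    # passing through it.
    proxy = WEBrick::HTTPProxyServer.new(
      Port: 8080,
      ProxyContentHandler: proc do |req, res|
        puts "[#{Time.now}] #{req.request_method} #{req.unparsed_uri} -> #{res.status}"
      end
    )

    trap('INT') { proxy.shutdown } # shut down cleanly on Ctrl-C
    proxy.start

Point your browser's (or Windows') proxy settings at localhost:8080 and the handler fires for every plain-HTTP exchange; HTTPS will only show up as opaque CONNECT tunnels.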
If you like living on the edge, there's also em-proxy by Ilya Grigorik.
This post by Ilya implies that it does seem to need some tweaking to solve your problem.
Is having a proxy built in Ruby the important point here, or just to "get access to all HTTP traffic on your machine"? If the latter, there's a free program called HTTP Sniffer and Analyzer that can supposedly do this. I have not used it myself, but I have seen it get some positive reviews. There are several other such programs, though most seem to be paid. On OS X, Linux, etc., you can use the built-in tcpdump in clever ways to get a similar effect.
I am trying to find (or write) a caching proxy tool that accepts all traffic from a specific container on my localhost (redirected with iptables). I want to save that traffic and cache each response, so that later, if I see a request that was already sent to a server, I can return the cached response to the requesting party instead of sending the request to the server again.
Here's a diagram to demonstrate what I'm trying to do:
I'm not sure exactly how big the problem I'm trying to deal with here is. I want to do it for all traffic, including HTTP, TLS, and other TCP-based traffic (database connections and such). I tried mitmproxy, and it seems to deal pretty well with the HTTP and TLS parts, but intercepting raw TCP traffic (for databases etc.) is not possible.
Any advice or resources I can use to accomplish that (not necessarily in Python)? How complex do you think this problem is? Do you think I can find a generic solution?
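To make the idea concrete, here's the naive shape of what I have in mind (a rough sketch, in Ruby only because the language doesn't matter to me; the upstream address is a stand-in, and keying the cache on raw request bytes obviously breaks for TLS and stateful protocols):

    require 'socket'

    UPSTREAM_HOST = 'example.com' # stand-in for the real server
    UPSTREAM_PORT = 80
    CACHE = {}                    # raw request bytes => raw response bytes

    server = TCPServer.new(8888)
    loop do
      Thread.start(server.accept) do |client|
        request = client.readpartial(65_536)
        # Serve from cache if we've seen these exact request bytes before;
        # otherwise forward upstream and remember the answer. (A real
        # version would need locking around the cache.)
        response = CACHE[request] ||= begin
          upstream = TCPSocket.new(UPSTREAM_HOST, UPSTREAM_PORT)
          upstream.write(request)
          upstream.close_write # signal EOF so the server responds fully
          upstream.read        # read until the server closes
        end
        client.write(response)
        client.close
      end
    end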
Thanks in advance!
I have recently been sharing the connection of my mobile device to my laptop when I'm out and about, through the use of an app called netshare. It provides an HTTPS proxy, through which it acts as a network repeater, I believe (not sure about this part). I can access webpages quite easily. However, I have realised that I cannot connect with some apps. For example, I cannot use Spotify, and installing some other apps like games also fails. I have done a bit of research and found that apparently I can only surf the web with an HTTPS proxy, but I found this statement ambiguous. Does it mean that I can only make HTTPS requests? Or is it because HTTPS uses TCP rather than UDP? What are the limitations, and what can I do to possibly work around them?
Thanks
I am using a third-party (OS X) tool to help me process OFX financial data. It works, but I am interested in knowing what exactly is going on behind the scenes to make it work (the structure of the HTTP requests).
I set up Charles as an SSL proxy for all traffic in hopes that I could observe the requests being made by this tool, but the program runs and Charles sees nothing. No requests show up whatsoever. How is that possible? Is there something I am not understanding about how Charles or other packet-sniffing tools work? What are some ways that web requests could be made that wouldn't show up in a tool like Charles?
Charles is not a packet sniffer; it's a proxy. The app initiating the connection has to "voluntarily" use the proxy for the proxy to see anything. If an app uses a high-level networking API like NSURLConnection, then it will, by virtue of the frameworks, automatically pick up the system-wide proxy settings and use the proxy. If, instead, the app implements its networking with the low-level socket API, it will not go through the proxy unless it specifically re-implements that functionality.
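To make the distinction concrete, here's a small Ruby illustration of the same split between a proxy-aware HTTP client and a raw socket (the proxy address is an assumed Charles-style proxy on localhost:8888):

    require 'net/http'
    require 'socket'

    PROXY_HOST, PROXY_PORT = 'localhost', 8888 # assumed local proxy

    # High-level client, explicitly pointed at the proxy: the proxy sees this.
    Net::HTTP.new('example.com', 80, PROXY_HOST, PROXY_PORT).start do |http|
      puts http.get('/').code
    end

    # Raw socket straight to the origin server: the proxy never sees this.
    sock = TCPSocket.new('example.com', 80)
    sock.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    puts sock.read[/\A[^\r\n]*/] # print the status line only
    sock.close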
If you want to see everything, you will need a real promiscuous-mode packet sniffer, which Charles is not. Unfortunately, using a "real" packet sniffer will just show you the gibberish going over the wire for SSL connections, so that's probably not what you want either. If an app has "in-housed" its SSL implementation and is not using a properly configured system-wide proxy, sniffing its traffic unencrypted will be considerably harder (you'll probably have to use a debugger or some other runtime hooking approach.)
I'm searching for some examples of how to write a proxy in Ruby that supports HTTPS. I have a simple proxy implemented with WEBrick's HTTPProxyServer, but I noticed that HTTPS traffic is just tunneled (as it should be ;) ). But I want to record the content with VCR (see my question here: VCRProxy: Record PhantomJS ajax calls with VCR inside Capybara), and as long as the content is only tunneled through, VCR can't record it.
So I was thinking of writing the proxy as a man-in-the-middle that generates SSL certificates on the fly (I don't care about certificate errors, it's just for testing); then I would be able to record the content and play it back later.
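From what I can tell, the on-the-fly certificate part would look roughly like this with the stdlib openssl bindings (just a sketch; the helper name and the CA arguments are mine, and the CA cert would have to be trusted by the test browser):

    require 'openssl'

    # Mint a leaf certificate for a hostname, signed by a local CA.
    def forge_certificate(hostname, ca_cert, ca_key)
      key  = OpenSSL::PKey::RSA.new(2048)
      cert = OpenSSL::X509::Certificate.new
      cert.version    = 2 # X.509v3
      cert.serial     = OpenSSL::BN.rand(64).to_i
      cert.subject    = OpenSSL::X509::Name.parse("/CN=#{hostname}")
      cert.issuer     = ca_cert.subject
      cert.public_key = key.public_key
      cert.not_before = Time.now - 3600
      cert.not_after  = Time.now + 365 * 24 * 3600

      ef = OpenSSL::X509::ExtensionFactory.new(ca_cert, cert)
      cert.add_extension(ef.create_extension('subjectAltName', "DNS:#{hostname}"))

      cert.sign(ca_key, OpenSSL::Digest::SHA256.new)
      [cert, key]
    end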
So if somebody has a good resource on how to start, or a tutorial or a gist, please let me know.
PS: I have already seen these questions, but they don't provide anything further (and it needs to be in Ruby):
Man in the Middle (MITM) proxy with HTTPS support
How do I write a simple HTTPS proxy server in Ruby?
Help with HTTP Intercepting Proxy in Ruby?
An old question, but for the sake of completeness, here's another answer.
I've implemented an HTTP/HTTPS interception proxy in Ruby; the project is hosted on GitHub.
The project is new, so it's not (yet) as mature as Python's mitmproxy, but it supports HTTPS with on-the-fly certificate generation.
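The heart of the HTTPS side in any proxy of this kind looks roughly like the sketch below (illustrative, not the project's actual code; it assumes a forge_certificate(host, ca_cert, ca_key) helper like the one sketched in the question above):

    require 'socket'
    require 'openssl'

    def handle_connect(client, host, port, ca_cert, ca_key)
      # Pretend the tunnel is up, then speak TLS with the client ourselves.
      client.write("HTTP/1.1 200 Connection established\r\n\r\n")

      cert, key = forge_certificate(host, ca_cert, ca_key)
      ctx = OpenSSL::SSL::SSLContext.new
      ctx.cert = cert
      ctx.key  = key
      tls_client = OpenSSL::SSL::SSLSocket.new(client, ctx)
      tls_client.accept

      # Real TLS connection to the upstream server.
      upstream = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(host, port))
      upstream.connect

      # Pump decrypted bytes both ways; recording/VCR hooks would go here.
      [[tls_client, upstream], [upstream, tls_client]].map do |from, to|
        Thread.new do
          begin
            loop { to.write(from.readpartial(16_384)) }
          rescue EOFError, IOError
            to.close rescue nil
          end
        end
      end.each(&:join)
    end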
There's an excellent MITM proxy in Python aptly named mitmproxy. The author's netlib library does the heavy lifting, and mitmproxy builds on it.
The codebase isn't large, and it shouldn't be hard to go through even if Ruby is your main language.
The Performance Golden Rule from Yahoo's performance best practices is:
80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc.
This means that when I'm developing on my local webserver it's hard to get an accurate idea of what the end user will experience.
How can I simulate latency so that I can understand how my application will perform when I've deployed it on the web?
I develop primarily on Windows, but I would be interested in solutions for other platforms as well.
A laser modem pointed at the mirrors on the moon should give latency that's out of this world.
Fiddler2 can do this very easily. Plus, it does so much more that is useful when doing development.
YSlow might help you out. It analyzes web pages based on Yahoo!'s rules.
Firefox Throttle. This can throttle speed (Windows only).
These are plugins for Firefox.
You can set up a proxy on a remote host and tunnel traffic from your local browser through it to your web server and back. That would be quite realistic (of course it depends where you put the proxy).
Otherwise, there are many ways to implement it in software.
Run the web server on a nearby Linux box and configure NetEm to add latency to packets leaving the appropriate interface.
If your web server cannot run under Linux, configure the Linux box as a router between your test client machine and your web server, then use NetEm anyway.
While there are many ways to simulate latency, including some very good hardware solutions, one of the easiest for me is to run a TCP proxy in a remote location. The proxy listens and then directs the traffic back to my final destination. On a remote server, I run a unix program called balance. I then point this back to my local server.
If you need to simulate latency for just a single server request, a simple way is to make the server sleep() for a second before returning.
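In Ruby, for instance, a tiny Rack middleware can fake this across a whole app (the class name and delay value are arbitrary):

    # config.ru
    class SimulatedLatency
      def initialize(app, delay = 0.25)
        @app, @delay = app, delay
      end

      def call(env)
        sleep @delay   # pretend the request spent this long on the wire
        @app.call(env)
      end
    end

    use SimulatedLatency, 0.25
    run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['hello']] }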