Wireshark vs Firebug vs Fiddler - pros and cons? [closed]

Recently, I came across an issue where a CGI application is not responding. The symptom is Firefox displaying:
Transferring data from localhost...
But the thing is, I cannot see any traffic in Firebug's Net panel, and the browser just stays in that state forever.
I am thinking about ways to debug this application, but I cannot see the source code or any of its compiled Java/C++ components, so I reckon HTTP-level network diagnostics are a good starting point.
I have little experience with Fiddler and Wireshark, and I am wondering whether they would give better feedback/statistics at the HTTP network level. I've heard Wireshark is advanced but captures a large volume of traffic, so system admins don't like it very much. At this point, Firebug doesn't really show me enough information.
I need to collect information that I can then forward to the client as proof.

Wireshark, Firebug and Fiddler all do similar things: capture network traffic.
Wireshark captures any kind of network packet. It can show packet details down to the layers below HTTP (HTTP sits at the top of the TCP/IP stack). It does have filters to reduce the noise it captures.
Firebug tracks each request the browser page makes and captures the associated headers and the time taken for each stage of the request (DNS, receiving, sending, ...).
Fiddler works as an HTTP/HTTPS proxy. It captures every HTTP request the computer makes and records everything associated with it. It does allow things like converting POST variables to a table form and editing/replaying requests. By default it doesn't capture localhost traffic in IE; see the FAQ for the workaround.
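If you want to confirm that a proxy-based tool is actually seeing your traffic, a quick sanity check is to point a client at it explicitly. A minimal sketch in Python, assuming Fiddler's default listening address of 127.0.0.1:8888 (Charles also defaults to 8888, mitmproxy to 8080); the target URL is just an example:

```python
import requests  # third-party: pip install requests

# Route one request through the local debugging proxy so it shows up in
# the tool's session list. For HTTPS you must also trust the proxy's
# root certificate, or the TLS handshake will fail.
proxies = {
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
}
resp = requests.get("http://example.com/", proxies=proxies, timeout=30)
print(resp.status_code, resp.elapsed)
```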

The benefit of Wireshark is that it can show you errors in the layers below the HTTP protocol; Fiddler will show you errors in the HTTP protocol itself.
If you think the problem is somewhere in the HTTP request issued by the browser, or you are just looking for more information about what the server is responding with or how long it takes to respond, Fiddler should do.
If you suspect something may be wrong in the TCP/IP connection between your browser and the server (or in the layers below that), go with Wireshark.
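For the layers-below-HTTP case, the first thing worth checking with a hung server like the one in the question is whether a TCP connection even completes. A rough sketch; the host and port are assumptions based on the setup described in the question:

```python
import socket

# Does the server complete a TCP handshake at all? A failure or hang here
# points below HTTP (network or listener); if this succeeds but the HTTP
# response never arrives, the problem is in the application itself.
try:
    with socket.create_connection(("localhost", 80), timeout=5):
        print("TCP handshake completed")
except OSError as exc:
    print(f"TCP-level failure: {exc}")
```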

None of the above, if you are on a Mac. Use Charles Proxy. It's the best network/request information collector that I have ever come across. You can view and edit all outgoing requests, and see the responses from those requests in several forms, depending on the type of the response. It costs $50 for a license, but you can download the trial version and see what you think.
If you're on Windows, then I would just stay with Fiddler.

Fiddler is the winner every time when compared to Charles.
The "Customize Rules" feature of Fiddler is unparalleled in any HTTP debugger. The ability to write code to manipulate HTTP requests and responses on the fly is invaluable to me and the work I do in web development.
There are so many features in Fiddler that Charles just does not have, and likely never will. Fiddler is light-years ahead.

To complement the list, also be aware of mitmproxy (http://mitmproxy.org/).
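mitmproxy's on-the-fly manipulation scripts are plain Python, which makes it the closest open-source counterpart to Fiddler's "Customize Rules" mentioned above. A minimal sketch; the header name and the log line are arbitrary examples:

```python
# Save as debug_addon.py and run: mitmproxy -s debug_addon.py
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Tag every proxied request so it is easy to spot in server logs.
    flow.request.headers["x-debug-session"] = "1"

def response(flow: http.HTTPFlow) -> None:
    # Print one summary line per completed request/response pair.
    print(flow.request.pretty_url, flow.response.status_code)
```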

I use both Charles Proxy and Fiddler for my HTTP/HTTPS level debugging.
Pros of Charles Proxy:
Handles HTTPS better (you get a Charles certificate, which you put in your 'Trusted Authorities' list)
Has more features, like Load/Save Session (especially useful when debugging multiple pages) and Mirror a Website (useful for caching assets and hence faster debugging)
As mentioned by jburgess, handles AMF.
Displays JSON, XML and other kinds of responses in a tree structure, making them easier to read. Displays images in image responses instead of binary data.
Cons of Charles Proxy:
Cost :-)

If you're developing an application that transfers data using AMF (fairly common in a particular set of GIS web APIs I use regularly), Fiddler does not currently provide an AMF decoder that would let you view the binary data in an easily readable format. Charles provides this functionality.

Related

How can web requests be made and go undetected by a packet sniffer tool like Charles?

I am using a third-party (OS X) tool to help me process OFX financial data. It works, but I am interested in knowing what exactly is going on behind the scenes to make it work (the structure of the HTTP requests).
I set up Charles as an SSL proxy for all traffic in the hope that I could observe the requests being made by this tool, but the program runs and Charles gets nothing. No requests show up whatsoever. How is that possible? Is there something I am not understanding about how Charles or other packet-sniffing tools work? What are some ways that web requests could be made that wouldn't show up in a tool like Charles?
Charles is not a packet sniffer; it's a proxy. The app initiating the connection has to "voluntarily" use the proxy for the proxy to see anything. If an app uses a high-level networking API like NSURLConnection, then it will, by virtue of the frameworks, automatically pick up the system-wide proxy settings and use the proxy. If, instead, the app implements its networking with a low-level socket API, it will not go through the proxy unless it specifically re-implements that functionality.
If you want to see everything, you will need a real promiscuous-mode packet sniffer, which Charles is not. Unfortunately, using a "real" packet sniffer will just show you the gibberish going over the wire for SSL connections, so that's probably not what you want either. If an app has "in-housed" its SSL implementation and is not using a properly configured system-wide proxy, sniffing its traffic unencrypted will be considerably harder (you'll probably have to use a debugger or some other runtime-hooking approach).
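The distinction is easy to demonstrate. A sketch, assuming a local proxy listening on 127.0.0.1:8888: the first request is proxy-aware and will show up in Charles or Fiddler; the second talks straight to the server, so only a packet sniffer would see it:

```python
import socket
import requests  # third-party: pip install requests

PROXY = {"http": "http://127.0.0.1:8888"}  # assumed local proxy address

# 1) High-level client configured with a proxy: the proxy sees this.
requests.get("http://example.com/", proxies=PROXY, timeout=10)

# 2) Raw socket: connects directly to the server, bypassing any proxy.
#    Only a promiscuous-mode sniffer (Wireshark, tcpdump) catches this.
with socket.create_connection(("example.com", 80), timeout=10) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    s.recv(4096)
```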

Monitor network activity of specific program

I have a program that I'm trying to reverse engineer.
It gets a specific key by using HTTP GET on some URLs.
I need to figure out the details of how this works.
The good news is that there's an option to perform these requests over an HTTP proxy.
Would anybody know of a program to monitor a specific application's network traffic?
I've tried Wireshark, but it's not giving me enough information (headers, URL path).
After Wireshark, I tried FreeProxy. The problem with FreeProxy is that it only gives headers for around a third of the requests, and it doesn't give the full path either.
Could anyone suggest a better alternative for monitoring the internet activity of my application?
I thought Wireshark was able to capture the full packet with all its content. If so, how can it not give you enough information? Maybe you need to revise your traffic-capture configuration?
It's been a while since I used Wireshark, but if you have trouble capturing full packets, what you can do is use tcpdump to capture to a file, then view the capture file in Wireshark. tcpdump's -s option lets you set the snapshot length so that full packets are captured.
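Scripted, that capture looks something like the following sketch; the interface name and port filter are assumptions, and -s 0 disables snap-length truncation so packets are written whole:

```python
import subprocess

# Capture complete packets (-s 0) to a file that Wireshark can open later.
# Requires root/administrator rights; replace "eth0" with your interface.
subprocess.run([
    "tcpdump", "-i", "eth0", "-s", "0", "-w", "capture.pcap",
    "tcp", "port", "80",
], check=True)
```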
I use Fiddler for all my HTTP traffic monitoring. It is very powerful and displays data at the HTTP layer only. Wireshark will get all of your data, but it displays the details at much lower layers. Fiddler even has the capability to decrypt SSL traffic.
Fiddler installs itself as a proxy, and configures IE and Firefox automatically to use it when it is on. If you are getting too much traffic mixed in, you can install Fiddler on a remote box and point your proxy at that IP address.
I was recommended another program called mitmproxy, which worked perfectly for what I needed. Fiddler also worked, but SSL was giving me problems.

HTTPS header and full URL path

How do I go about looking at an HTTPS header?
We would like to look inside the HTTPS traffic and extract the headers and the full URL path.
Any thoughts on what it takes to do this?
In general, your question doesn't make much sense, because SSL was designed to be an opaque, secure transport that prevents peeping into it.
In reality, if you control one of the sides of the communication, you can take some actions. But these actions depend on many details, such as whether you have access to the client or the server, and what software is used on each side.
Finally, I'd say that this question doesn't look to be programming-related unless you describe your task in more detail.

Are Websockets more secure for communication between web pages?

This might sound really naive but I would really find a descriptive answer helpful.
So, my question is this:
I can use Firebug to look at AJAX requests made from any website I visit. So, am I right in saying that I wouldn't be able to examine the same communication between the client and the server if the website chooses to use WebSockets? In other words, does this make it more secure?
No. Not at all. Just because the browser does not (yet) have a tool to show WebSocket traffic doesn't make it any more secure. You can always run a packet sniffer to monitor the traffic, for example.
No, because there are other ways besides the browser's built-in tools to read your traffic.
Have a try: install and run Wireshark and you will be able to see all the packets you send and receive via WebSockets.
It depends on the application. If you are fully AJAX, without reloading the document for data, then I would think WebSockets would provide better authentication for data requests than a cookie session with regard to connection hijacking. That's assuming you are using SSL, of course.
Never rely on the secrecy of an algorithm, because it only gives you a false sense of security. See Wikipedia: security through obscurity.
Remember that the browser is a program on my computer, and I am the one who has full control over what is sent to you, not my browser.
I guess it's only a matter of time (a few months, IMO) before developer tools such as Firebug provide some fancy tool for browsing the data sent/received over WebSockets.
WebSockets has both an unencrypted mode (ws://) and an encrypted mode (wss://). This is analogous to HTTP and HTTPS. The WebSocket protocol payload is simply UTF-8-encoded text. From a network-sniffing perspective there is no advantage to using WebSockets (use wss:// and HTTPS for everything at all sensitive). From the browser's perspective there is no security benefit to using WebSockets either: anything running in the browser can be examined (and modified) by a sufficiently knowledgeable user. The tools for examining HTTP/AJAX requests just happen to be better right now.
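You can see this for yourself with a small sketch using the third-party websockets package; the echo-server URL is a hypothetical placeholder. Run it while Wireshark captures: over ws:// the frames are readable on the wire, while the wss:// variant would show only TLS records:

```python
import asyncio
import websockets  # third-party: pip install websockets

async def main():
    # ws:// is plaintext on the wire; wss:// wraps the same frames in TLS,
    # exactly as HTTPS does for HTTP.
    async with websockets.connect("ws://echo.example.com/") as ws:  # placeholder URL
        await ws.send("hello")
        print(await ws.recv())

asyncio.run(main())
```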

Is there a good guide to interpreting the Firebug net panel?

I’m using the Net panel in Firebug to evaluate the performance of web pages I’m writing.
Specifically, I’m wondering what the precise meaning is of the stages for each resource that’s downloaded (i.e. DNS Lookup, Connecting, Blocking, Sending, Waiting, Receiving).
But more generally, is there a Firebug guide where I can look this stuff up?
The various stages correspond to the various states of the connection being made for the resource. I don't know of any documents on them, and a quick look around the Firebug network page doesn't show any explanations. There is some documentation in the resources area (wiki) of the Firebug site, though it looks like it's subtly different from what is actually presented in the interface. They seem reasonably obvious to me, but I suppose I could be wrong, too.
DNS lookup - the name of the remote server is being resolved to an IP address
Connecting - a TCP/IP connection is being opened to the remote server
Blocking - the client is waiting for another request to complete (or a thread to become available) before sending the request
Sending - the client is sending data to the remote server
Waiting - the client is waiting on a response from the remote server
Receiving - the client is reading data from the remote server
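You can reproduce most of these stages by hand with a raw HTTP request. A rough sketch (Blocking is a browser-side queueing stage, so it has no equivalent here; the host is just an example):

```python
import socket
import time

host = "example.com"

t0 = time.monotonic()
socket.getaddrinfo(host, 80)                 # DNS lookup
t1 = time.monotonic()
s = socket.create_connection((host, 80))     # Connecting
t2 = time.monotonic()
s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
          b"Connection: close\r\n\r\n")      # Sending
t3 = time.monotonic()
s.recv(4096)                                 # Waiting ends at the first byte
t4 = time.monotonic()
while s.recv(4096):                          # Receiving the rest
    pass
t5 = time.monotonic()
s.close()

print(f"dns={t1 - t0:.3f}s connect={t2 - t1:.3f}s send={t3 - t2:.3f}s "
      f"wait={t4 - t3:.3f}s receive={t5 - t4:.3f}s")
```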
You can read up on HTTP headers.
And for the whole Firebug Net panel you can read this.
Although it doesn’t include an answer to this question, Amy Hoy and Thomas Fuchs’s PDF ebook JavaScript Performance Rocks! has a lot of good information about measuring web page performance using Firebug.
