Charles proxy - content on app not displayed on device - Windows

I managed to successfully install Charles Proxy on my Windows 10 machine, and I've also successfully installed the certificates on my devices. However, when I use the app I want to test, none of the content is returned, even though I can see the URLs being called in Charles. If I use the same app on my Windows machine, the content is returned. Am I missing a setting?

If you're seeing the calls to the URLs you expect, but not the content you're expecting, it may be that SSL Proxying hasn't been enabled for those URLs.
If you're seeing encrypted data instead of your expected content, follow the instructions below to enable SSL Proxying.
But first, something to keep in mind:
Be judicious when enabling SSL Proxying. It lets Charles decrypt and display traffic that would otherwise stay encrypted, which is risky.
Don't enable it for any host that deals with sensitive/private information.
The screenshots I've included appear to show that I'm allowing SSL Proxying for google.com, but this is for illustrative purposes only (I did it that way to avoid sharing any proprietary information from my company's service calls).
Instructions for enabling SSL Proxying:
Select the line for which you'd like decrypted data.
Right-click and select "Enable SSL Proxying".
You may also add hosts manually via this path:
Toolbar > Proxy Settings > SSL Proxying Settings
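If you want a quick way to confirm SSL Proxying is working outside of the app itself, you can route a test request through Charles from a script. Here's a minimal sketch in Python, assuming Charles is listening on its default port 127.0.0.1:8888 and that you've exported the Charles root certificate to a local file (the file name below is just a placeholder):

```python
# Minimal sketch: send an HTTPS request through Charles and confirm it
# gets decrypted. Assumes Charles' default proxy port (8888) and an
# exported root certificate saved as charles-root.pem (placeholder path).
import requests

proxies = {
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
}

resp = requests.get(
    "https://example.com/",        # any host you've enabled SSL Proxying for
    proxies=proxies,
    verify="charles-root.pem",     # trust the Charles root cert for this request
)
print(resp.status_code)            # the decrypted exchange should now appear in Charles
```

If this request shows up decrypted in Charles but your app's traffic doesn't, the issue is likely on the device side (certificate trust or proxy settings) rather than in Charles.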
Please note:
This answer contains the solution I've given to my work peers when they said they "weren't seeing content" (but it was actually just encrypted). If there is NO content at all, this answer may not resolve your issue. I'd have liked to ask for more details on the original question, but my reputation is too low to comment.
Hoping this helps; if not, additional details and/or screenshots of what you're seeing under the request/response may help you get a useful answer.

Related

RabbitMQ: configuring ssl of rabbitmq_management, fail_if_no_peer_cert and fail_if_no_peer_cert parameters

General questions about using the verify and fail_if_no_peer_cert parameters with rabbitmq_management on Windows.
If a client calls the management API over HTTPS, the requests are secured by the certificate that is installed on the server and trusted by the client. In practice, that means the certificate doesn't need to be RabbitMQ-specific; it can be an ordinary HTTPS certificate... is that correct?
In case I want to validate clients as well, must I set verify (to verify_peer) and fail_if_no_peer_cert to true? What is the best practice? I only ever see these parameters explained for the AMQP settings, never for the management plugin.
Actually, my motivation for these questions is just to understand whether I need to deal with this issue at all, because setting fail_if_no_peer_cert to true makes a lot of things much more complicated. For example, you can't simply open the management plugin in a browser; you have to deal with a client certificate.
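For what it's worth, here is a rough sketch of what a management-API client looks like once the broker requires peer verification (verify = verify_peer plus fail_if_no_peer_cert = true). The host, port, credentials, and file paths are all placeholders, and 15671 is just the conventional TLS port for the management plugin:

```python
# Sketch of calling the rabbitmq_management HTTP API over mutual TLS.
# With fail_if_no_peer_cert = true, the client MUST present its own
# certificate; a plain browser request without one will be rejected.
import requests

resp = requests.get(
    "https://rabbit.example.com:15671/api/overview",     # placeholder host/port
    auth=("guest", "guest"),                             # management credentials
    cert=("client_certificate.pem", "client_key.pem"),   # client cert + key (placeholders)
    verify="ca_certificate.pem",                         # CA that signed the server cert
)
print(resp.json().get("rabbitmq_version"))
```

This is exactly the complication the question describes: every client, including a browser, now needs a client certificate issued by a CA the broker trusts.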

Configuring Squid

I'm a beginner with Squid. I want a way to remain anonymous on the net, and I also want to be able to access content on the internet that is filtered. My Windows computer is behind a firewall (filtered); my server (CentOS 5) is not. For example, when I enter http://facebook.com in the browser URL bar, it redirects to an intranet IP which tells me to avoid going to this site!
Now I've installed Squid on the server and traffic is propagated through this server, but the redirection still occurs, so I still can't open filtered sites.
What can I do? A friend of mine said the only way is to use HTTPS, i.e. the connection between the browser (Firefox) and the server must use this protocol. Is that right? And how can I do that?
What's your suggestion? I don't necessarily want to use Squid. Besides, the HTTPS protocol sometimes gets banned or throttled in my country, so I'd prefer the protocol to remain HTTP. I also thought about writing code on the client and server to transform, compress/decompress, and packetize traffic as innocuous-looking binary HTTP packets so it gets through with as much speed and success as possible, but I'm not an expert in this area and for now I'd prefer more straightforward approaches.
I'd appreciate any help/info.
I assume you are located in Iran. I would suggest using Tor if you mainly access websites. The latest release works reasonably well in Iran. It also includes an option to obfuscate traffic so that it is not easily detectable that you are using Tor.
See also this question: https://tor.stackexchange.com/questions/1639/using-tor-in-iran-for-the-first-time-user-guide
An easy way to get the Tor package is using the autoresponder: https://www.torproject.org/projects/gettor.html
In case the website is blocked, it works as follows:
Users can communicate with the GetTor robot by sending messages via email.
Currently, the best-known GetTor email address is gettor@torproject.org.
This should be the most current stable GetTor robot, as it is operated by the Tor Project.
To ask for Tor Browser, a user should send an email to the GetTor robot with one of the following options in the message body:
windows: If the user needs Tor Browser for Windows.
linux: If the user needs Tor Browser for Linux.
osx: If the user needs Tor Browser for Mac OSX.
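As a rough illustration of the email interaction described above, here's a minimal Python sketch. The sender address, SMTP server, and credentials are placeholders you'd replace with your own provider's details:

```python
# Sketch: request Tor Browser for Windows from the GetTor robot by
# sending an email whose body is just "windows".
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.com"        # placeholder sender
msg["To"] = "gettor@torproject.org"
msg["Subject"] = "Tor Browser request"
msg.set_content("windows")             # or "linux" / "osx"

# Placeholder SMTP server and credentials:
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("you@example.com", "app-password")
    server.send_message(msg)
```

The robot replies with download links, which is useful precisely when the torproject.org website itself is blocked.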

See useragent in an https connection?

I have an app that makes an HTTPS connection to a server. Is it possible to use something like Wireshark or Charles Proxy to see just the user agent it's connecting with? I don't want to see any of the actual data, just the user agent, but I'm not sure whether that is encrypted as well (and whether it's worth trying).
Thanks
Is it possible to...
No. The browser first establishes a secure connection with the server, then uses it to transfer all data, including request bodies, the various headers, etc.
Too late for the original inquirer, but the answer is that it may be possible in some cases, depending on application implementation.
You can use Fiddler, and by turning on 'Decrypt HTTPS traffic' you also get visibility into the HTTPS content in some cases.
What Fiddler does (on Windows, at least) is register itself with WinINET as the system proxy. It can also add certificates (this requires your approval when you choose to decrypt HTTPS traffic) and generates certificates on the fly for the accessed domains, thus acting as a man in the middle.
Applications using this infrastructure will be 'exposed' to this MitM. I ran Fiddler alongside a few applications and was able to view HTTPS traffic not only from Office products (winword, powerpoint, outlook) and other MS executables (Searchprotocolhost.exe), but also from some non-Microsoft products such as Apple Software Update and Cisco Jabber.
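If all you care about is the User-Agent header, another option (my suggestion, not one from the answers above) is an intercepting proxy driven by a small script. Here's a sketch as a mitmproxy addon, assuming the app's traffic is routed through the proxy and the device trusts the mitmproxy CA certificate:

```python
# ua_logger.py - run with: mitmproxy -s ua_logger.py
# Logs only the User-Agent of each intercepted request, ignoring bodies.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    ua = flow.request.headers.get("User-Agent", "<no User-Agent>")
    print(f"{flow.request.pretty_host}: {ua}")
```

Note this is still full decryption under the hood (the same MitM approach Fiddler uses); the script merely chooses to look at one header. If the app pins its certificate, this won't work at all.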

If a website doesn't use HTTPS to do user log in, are the users passwords fairly unprotected?

This question looks into whether logging in over HTTPS is important for any website.
Is it true that for many websites, if the login is done over HTTP rather than HTTPS, then anybody along the path can pretty much see the user ID and password easily (for example, by sniffing between a router and the internet connection in an internet cafe)?
If so... do popular frameworks actually use HTTPS by default (or at least as an option), such as Rails 2.3.5 or Django, CakePHP, or .Net?
Yes, any machine on the pathway (that the packets pass through) can simply examine the contents of those packets. All it takes is a capturing proxy or a promiscuous-mode network card with something like Wireshark. Assuming the passwords aren't encrypted in some other way (at a higher level), they will be visible.
I can't answer the second part of your question since I have no knowledge of those particular products, but I would say that the inability to use secure sockets would pretty much make them useless.
Pax is right about passwords that aren't otherwise encrypted being visible.
Still, most sites don't use SSL, and that puts users at a certain degree of risk when accessing those sites from public wifi.
HTTPS isn't a framework-level option; it's something you set up on the web server. With an Apache configuration, for instance, you would enable a properly configured HTTPS, close off HTTP, and install a certificate. The framework has no direct influence on that portion of the deployment.
If the user credentials are submitted via an HTML web form without HTTPS, then it is insecure: the data is submitted in plain text. However, if the website uses HTTP authentication instead, the server can send back a 401 reply (or 407 for proxies) to any request that does not provide valid credentials. A 401/407 is the server's way of asking for credentials, and the reply provides a list of authentication schemes (Digest, NTLM, Negotiate, etc.) that the server supports, which are usually more secure by themselves. The client/browser sends the same request again with the necessary credentials in one of those schemes, and the server then either sends the requested data or sends another 401/407 reply if the credentials are rejected.
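Note that the Basic scheme in particular offers no confidentiality on its own: the credentials are merely base64-encoded, so anyone capturing the traffic can decode them instantly. A quick illustration in Python (the username and password here are made up):

```python
# Basic auth just base64-encodes "user:password"; it is NOT encryption.
import base64

header_value = "Basic " + base64.b64encode(b"alice:s3cret").decode()
print(header_value)                               # Basic YWxpY2U6czNjcmV0

# Anyone sniffing the wire can reverse it trivially:
print(base64.b64decode(header_value.split()[1]))  # b'alice:s3cret'
```

That's why even HTTP authentication should ride over TLS unless the scheme itself provides a challenge-response (as Digest and NTLM do).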

How to build local web proxy without configuring the browsers

How do Netnanny or K9 Web Protection set up a web proxy without configuring the browsers?
How can it be done?
Use WinSock directly, or work at the NDIS or hardware-driver level, and filter at those levels, just like any firewall software does. NDIS is the easier way.
Download this ISO image: http://www.microsoft.com/downloads/en/confirmation.aspx?displaylang=en&FamilyID=36a2630f-5d56-43b5-b996-7633f2ec14ff
It has a bunch of samples and tools to help you build what you want.
After you mount it (or burn it to CD) and install it, go to this folder:
c:\WinDDK\7600.16385.1\src\network\ndis\
I think what you need is a transparent proxy that supports WCCP.
Take a look at squid-cache FAQ page
And the Wikipedia entry for WCCP
With that setup you just need to do some firewall configuration, and all your web traffic will be handled by the transparent proxy; no setup will be needed in your browser.
Netnanny is not a proxy. It is tied to the host machine and browser (and possibly other applications as well). It then filters all incoming and outgoing "content" from the machine/application.
Essentially, Netnanny is a content-control system, as opposed to a destination-control system (a proxy).
The easiest way to divert all traffic for a certain site to some other address is by changing the hosts file on the local machine (e.g. a line like 127.0.0.1 example.com sends that site's traffic to the local host).
You might want to have a look at the explanation here: http://www.fiddlertool.com/fiddler/help/hookup.asp
This is how Fiddler2 manages to insert a proxy between most apps and the internet without modifying the apps (along with plenty of explanation of how to hook things up when the default setup fails). This does not answer how NetNanny/K9 etc. work, though; as noted above, they do a little more and may be a little more intrusive.
I believe you're looking for Browser Helper Objects. These little gizmos capture ALL browser communication, and as such can either remove ads from the HTML (good gizmo), or redirect every second click to a spam site (bad gizmo), or just capture every URL you type and send it home, like all the web toolbars do.
What you want to do is route all outgoing HTTP(S) requests from your LAN through an intercepting proxy (like Squid). This is the setup for a transparent web proxy.
There are different ways to do this, although I've only ever set it up on OpenBSD and Linux, using Squid as the proxy.
At a high level you have a firewall with rules to send all externally bound HTTP traffic to a local Squid server. The Squid server is configured to:
accept all http requests
forward the requests on to the real external hosts
cache the reply
forward the reply back to the requestor on the local lan
You can then add more granular rules in Squid to control access to websites, filter content, etc.
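To make the accept/forward/relay loop above concrete, here's a minimal sketch in Python using only the standard library. It is nowhere near Squid (no caching, no HTTPS, no keep-alive), and the listening port is arbitrary; it only illustrates the data flow your firewall rule would redirect traffic into:

```python
# Minimal forward-proxy sketch: accept an HTTP request, forward it to
# the real host named in the Host header, and relay the reply back.
# Illustration only: no caching, no HTTPS, one request per connection.
import socket
import threading

def handle(client: socket.socket) -> None:
    try:
        request = client.recv(65536)
        host, port = None, 80
        for line in request.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                host = line.split(b":", 1)[1].strip().decode()
                if ":" in host:                    # Host header may carry a port
                    host, p = host.rsplit(":", 1)
                    port = int(p)
                break
        if not host:
            return
        with socket.create_connection((host, port)) as upstream:
            upstream.sendall(request)              # forward the request as-is
            while True:                            # relay the reply to the client
                chunk = upstream.recv(65536)
                if not chunk:
                    break
                client.sendall(chunk)
    finally:
        client.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))                     # arbitrary local port
server.listen(50)
while True:
    conn, _addr = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

In a real transparent setup you wouldn't run something like this; you'd point the firewall redirect at Squid, which handles caching, access rules, and content filtering for you.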
I'm pretty sure you can also get this functionality in various networking gear. I bet F5 has products that do some or all of what I described, and probably Cisco as well. There are probably other proxies out there besides Squid that you can use too.
PS. I have no idea if this is how K9 Web Protection or NetNanny works.
Squid can provide an intercepting proxy for the HTTP and HTTPS ports without configuring the browsers, and it also supports WCCP.
