HttpClient calls - caching

I was wondering if there is a way to find out whether an IP address or DNS name was contacted by an application via API calls.
Let's take this example: you open an app that makes an API call to a specific DNS name or URL, then you close the app. If I already know the URL in question, where should I look for it in Windows? I was thinking there is some sort of cache that remains on the PC until you close it; I tried googling a couple of things but haven't found the right answer yet.
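One place worth checking (a suggestion on my part, not something from the question) is the Windows DNS client cache, which ipconfig /displaydns dumps; if the app resolved the host by name, the entry may still be there until it expires or the cache is flushed. A minimal Python sketch, where example.com stands in for the host you already know:

```python
import subprocess

# Dump the Windows DNS client cache and look for a known hostname.
# Cached entries expire with their TTL, so run this soon after closing the app.
KNOWN_HOST = "example.com"  # placeholder for the host you expect the app to contact

output = subprocess.run(
    ["ipconfig", "/displaydns"],
    capture_output=True, text=True, check=True,
).stdout

if KNOWN_HOST.lower() in output.lower():
    print(f"{KNOWN_HOST} was resolved recently (still in the DNS cache).")
else:
    print(f"{KNOWN_HOST} not found; the entry may have expired, or the app used a raw IP.")
```

PowerShell's Get-DnsClientCache returns the same data in object form. Note that none of this helps if the application connects to a hard-coded IP address without a DNS lookup.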

Related

Reading HTTPS traffic using a proxy server

Context: I have an application which communicates with a server belonging to its owner. The application requests certain information from the server by accessing some URLs. I don't know these URLs, except for a few, but once I know them, I can visit them manually in the browser and obtain that information.
Goal: Figure out the URLs of those requests, i.e., what requests the application makes while I'm using it, so that I can make them manually myself in the future.
Progress:
Since the communication is over HTTPS, reading the packets with Wireshark while using the application was unsuccessful, as they are encrypted.
However, I was able to find where in the application's binary the server URL is located. Thus, I can theoretically redirect the application's requests to any other server. Hence, I thought a good way to receive the unencrypted requests would be to set up a proxy server, point the application at it, and then run the application and capture the results.
Problem: I don't know how to implement this idea in practice, though, and this is where I'd appreciate your help. Ideally, I would be able both to receive (and thus read) the requests made by the application and to forward them to the real server so I can also read the returned information.
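One concrete way to realize this idea (an assumption on my part, not something from the thread) is an intercepting proxy such as mitmproxy: point the application at the proxy, make the system trust mitmproxy's CA certificate so the TLS handshake succeeds, and log every request URL with a small addon script. A sketch of such an addon, saved as log_urls.py:

```python
# log_urls.py -- minimal mitmproxy addon that logs every URL the client requests.
# Run with:  mitmdump -s log_urls.py
# The intercepted application (or the OS) must trust mitmproxy's CA certificate,
# otherwise HTTPS connections will fail or remain unreadable.
from mitmproxy import http


def request(flow: http.HTTPFlow) -> None:
    # Called once per client request; pretty_url includes scheme, host, and path.
    print(flow.request.pretty_url)
```

Whether this works depends on the application honoring the proxy settings (or on redirecting its traffic to the proxy, e.g. by editing the URL in the binary as described above) and on it not using certificate pinning.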

931107 - configuring squid

I'm a complete beginner with squid. I want a way to remain anonymous on the net, and I also want to be able to access content that is filtered. My Windows computer is behind a firewall (filtered); my server (CentOS 5) is not. For example, when I enter http://facebook.com in the browser URL bar, it redirects to an intranet IP that tells me to avoid going to this site!
Now I've installed squid on the server and traffic is routed through it, but the redirection still occurs, so I still can't open filtered sites.
What can I do? A friend of mine said the only way is to use HTTPS, i.e., the connection between the browser (Firefox) and the server must use that protocol. Is that right, and how can I do that?
What's your suggestion? I don't necessarily want to use squid. Besides, HTTPS sometimes gets blocked or throttled in my country, so I'd prefer the protocol to remain HTTP. I also thought about writing code on the client and server to transform, compress/decompress, and packetize the traffic as fake binary HTTP packets so it gets through with as much speed and reliability as possible, but I'm not an expert in this area, and for now I prefer more straightforward approaches.
I appreciate any help/info.
I assume you are located in Iran. I would suggest using Tor if you mainly access websites. The latest release works reasonably well in Iran. It also includes an option to obfuscate traffic so that it is not easily detectable that you are using Tor.
See also this question: https://tor.stackexchange.com/questions/1639/using-tor-in-iran-for-the-first-time-user-guide
An easy way to get the Tor package is to use the autoresponder: https://www.torproject.org/projects/gettor.html
In case the website is blocked, it works as follows:
Users can communicate with the GetTor robot by sending messages via email.
Currently, the best-known GetTor email address is gettor@torproject.org. This should be the most current stable GetTor robot, as it is operated by the Tor Project.
To ask for Tor Browser, a user should send an email to the GetTor robot with one of the following options in the message body:
windows: if the user needs Tor Browser for Windows.
linux: if the user needs Tor Browser for Linux.
osx: if the user needs Tor Browser for Mac OS X.

Detecting when Firefox is showing a Server not found message

I'm using Firefox for a digital signage application, and there are a couple of scenarios that might result in a "Server not found" page:
Network outage on boot
DNS fails to resolve for the homepage
Server (its homepage) fails to respond
The machine boots but the network just isn't ready by the time Firefox has loaded
Browser crashes, process is restarted, but the network is down
In such cases I would like to detect this state and simply kill and restart the process after a minute. Any other tweaks or suggestions, I'm all ears.
You do not need to consider the case where the loaded Web application loses Internet connectivity; I think that scenario is handled by the Web app itself once it has loaded.
I don't want to go down the local httpd or local extension/addon route.
Thank you in advance,
I've discovered a simple solution: overriding Firefox's netError.xhtml to do a simple location.reload every ten seconds.
The source code can be found at https://github.com/Webconverger/iceweasel-webconverger/blob/master/content/netError.xhtml#L410
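If you prefer the kill-and-restart approach described in the question instead of patching netError.xhtml, a watchdog along these lines could work. This is only a sketch; the homepage URL, the process name (firefox), and the presence of a supervisor that relaunches the browser are all assumptions:

```python
#!/usr/bin/env python3
# Watchdog sketch: if the signage homepage is unreachable, kill Firefox and
# rely on a supervisor (systemd, a shell loop, etc.) to restart it.
import subprocess
import time
import urllib.request

HOMEPAGE = "http://example.com/signage"  # placeholder for the real homepage URL
CHECK_INTERVAL = 60                      # seconds between connectivity checks

while True:
    try:
        # Any successful response means DNS resolved and the server answered.
        urllib.request.urlopen(HOMEPAGE, timeout=10)
    except Exception:
        # Network down, DNS failure, or server not responding: restart the browser.
        subprocess.run(["pkill", "firefox"])
    time.sleep(CHECK_INTERVAL)
```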

Seeking info on how to use the VB6 Winsock control, flow of events, etc.

I'm using the MS Winsock control in VB6 and I want to understand things like: when does the server close the connection (triggering the Winsock_Close() event)? And a related question: how do you know when all the data from a POST has been returned?
More info:
I should have mentioned: I've already read the MSDN description, etc., but it doesn't actually explain what's happening. E.g., it explains that the Close() event fires when the server ends the connection, but doesn't explain what would cause the connection to end, whether a broken connection would trigger a Close event, etc.
And none of the MSDN descriptions explain how to know when all the data has arrived. (I suspect it's the Close event firing.)
You might want to try the following walkthrough:
tcp.oflameron.com/
You can find the complete code there.
If you have any questions in particular, please ask here.
Good luck!
- CVS
Using the Winsock Control at http://msdn.microsoft.com/en-us/library/aa733709(VS.60).aspx
MSDN Search of "Winsock control" at http://social.msdn.microsoft.com/Search/en-US?query=Winsock+control&ac=8
Documentation shortcomings
The documentation will not provide the information you are asking for. This is an ActiveX control that allows you to connect computers through TCP/IP protocol stacks.
The information you want concerns how these computers "talk" to each other, i.e., the protocol. That depends entirely on the server and client applications that are communicating. For instance, if I am connecting to the FTP service of another computer, the server will not close the connection until I send the appropriate command or until it detects an idle connection. On the other hand, some services will close the connection on any invalid command; SMTP servers in particular tend to do this to tighten security.
You need to check the documentation of the service you are connecting to. It will tell you how to send commands, the command format, the response codes, how commands are acknowledged, and so on.
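As an illustration of that protocol-dependence (a sketch of plain HTTP, not of your specific service): in HTTP/1.1 the server itself tells you how much body to expect, via a Content-Length header or chunked encoding, so "all the data has arrived" is decided by the protocol rather than by the Close event. The hostname and path below are placeholders:

```python
import socket

# Raw-socket HTTP/1.1 GET showing how the protocol signals "all data received":
# read the headers, then read exactly Content-Length body bytes.
HOST, PATH = "example.com", "/"  # placeholders

request = f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode())

    data = b""
    while b"\r\n\r\n" not in data:           # read until the end of the headers
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    headers, _, body = data.partition(b"\r\n\r\n")

    length = None
    for line in headers.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])

    while length is not None and len(body) < length:
        chunk = sock.recv(4096)               # keep reading until the declared length
        if not chunk:
            break
        body += chunk

print(f"received {len(body)} body bytes")
```

HTTP/1.0-style responses without a Content-Length do signal the end of the body by closing the connection, which is probably why the Close event looked like the answer.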
SAMPLE: VBFTP.EXE: Implementing FTP Using WinInet API from VB at http://support.microsoft.com/kb/175179

Any way to know the URL being fetched?

I wanted to know whether there is any way, programmatically in Win32, to get the URL being fetched by the browser.
For example, just as we have the MIB table, which shows the data sent and received by the Ethernet card, can we get the URL being fetched from the system?
Thanks in advance.
This is an IE-only solution, but if you write a browser helper object, it will be notified before IE navigates to a new URL.
There is no simple way to do this. The main problem you will encounter is that each browser on your system will independently connect to a webserver. That's just a straightforward HTTP connection, usually on port 80. The browser will send the URL in an HTTP request, possibly in multiple TCP packets. So, unless you are going to inspect and reassemble those TCP packets, you're not going to get this information. Even if you did, you'd miss out on the URLs of HTTPS fetches (by design).
An easier solution is to set up a proxy, and hope that the web browser doesn't bypass it.
You could try using WinPcap, which is what Wireshark uses. It lets you put the network interface into promiscuous mode and then look for HTTP traffic, from which you can extract the URLs being requested, no matter which browser is being used.
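A sketch of that sniffing approach using Scapy (which runs on top of WinPcap/Npcap); it only sees plain HTTP on port 80, so HTTPS URLs stay hidden as noted above, and it needs administrator privileges. The parsing is deliberately naive and assumes the request line and Host header arrive in a single packet:

```python
# Sniff plain-HTTP requests and print the URLs being fetched.
# Requires Scapy plus WinPcap/Npcap and administrator privileges.
from scapy.all import Raw, sniff


def show_url(pkt):
    if Raw not in pkt:
        return
    payload = bytes(pkt[Raw].load)
    if not payload.startswith((b"GET ", b"POST ", b"HEAD ")):
        return
    # Request line looks like b"GET /index.html HTTP/1.1".
    path = payload.split(b"\r\n", 1)[0].split(b" ")[1].decode(errors="replace")
    host = ""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode(errors="replace")
    print(f"http://{host}{path}")


# Capture port-80 traffic on the default interface until interrupted.
sniff(filter="tcp port 80", prn=show_url, store=False)
```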
