How does the proxy mechanism work with proxy settings in a browser?

We often see fields like Address and Port in web browser proxy settings. I know that when we use a proxy to visit a page, the web browser requests the page from the proxy server, but what I want to know is how the whole mechanism works. I have observed that many ISPs allow access only to a single IP (that of their own website) after we have exhausted our free data usage. But when we enter the site we want to browse as the proxy URL and then type in the allowed IP, the site gets loaded. How does this work?

In general, your browser simply connects to the proxy address and port instead of whatever IP address the DNS name resolved to. It then makes the web request as normal.
The web proxy reads the headers, uses the "Host" header of HTTP/1.1 to determine where the request is supposed to go, and then makes that request itself, relaying all remaining data in both directions.
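To make that concrete, here is a minimal sketch of what the browser effectively does, written in Python; the proxy address and target site are made up for illustration:

import socket

# Hypothetical proxy and target, for illustration only.
PROXY_HOST, PROXY_PORT = "proxy.example.net", 3128
TARGET = "example.com"

# The browser connects to the proxy, not to whatever IP the DNS name resolves to.
sock = socket.create_connection((PROXY_HOST, PROXY_PORT))

# It then sends an ordinary HTTP request; the proxy looks at the request line
# and the Host header to work out where to forward it.
request = (
    "GET http://" + TARGET + "/ HTTP/1.1\r\n"
    "Host: " + TARGET + "\r\n"
    "Connection: close\r\n"
    "\r\n"
)
sock.sendall(request.encode("ascii"))
print(sock.recv(4096).decode("latin-1"))  # first chunk of the relayed response
sock.close()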
Proxies will typically also do caching so if another person requests the same page from that proxy, it can just return the previous result. (This is simplified -- caching is a complex topic.)
Since the proxy is in complete control of the connection, it can choose to route the request elsewhere, scrape request and reply data, inject other things (like ads), or block you altogether. Use SSL to protect against this.
Some web proxies are "transparent". They reside on a gateway through which all IP traffic must pass and use the machine's networking stack to redirect outgoing connections to port 80 to a local port instead. The proxy then behaves the same as if it had been configured in the browser.
Other proxies, like SOCKS, have a dedicated protocol that allows non-HTTP requests to be made as well.

There are two types of HTTP proxies: reverse proxies and forward proxies.
The web browser uses a forward proxy: it sends all HTTP traffic through the proxy, and the proxy carries that traffic out to the internet. Every HTTP request that leaves your computer is sent to the proxy before going to the target site.
The ISP's block does not apply when you use a proxy because every packet that leaves your machine is addressed to the proxy and not to the target site. The proxy itself may get its internet access through another ISP that has no such blocks.
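As a rough illustration of "everything goes through the proxy first", this is how a Python script would be pointed at a forward proxy (the proxy address here is hypothetical):

import requests

# All requests below are sent to the forward proxy, which carries them
# out to the target sites on the client's behalf.
proxies = {
    "http": "http://proxy.example.net:3128",   # hypothetical proxy
    "https": "http://proxy.example.net:3128",
}
response = requests.get("http://example.com/", proxies=proxies)
print(response.status_code)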

Related

Send the request to Proxy server from Web server

I made a proxy server in Python 3. It listens on port 4444. It basically receives requests from clients and sends them to the server. I want to use it as a firewall for my DVWA server, so I added another piece of functionality to the proxy: before sending a request on to the DVWA server, it validates the input.
But the problem is that clients have to configure the proxy settings in their browser to use my proxy server. Is there any way to use the proxy without configuring the browser settings? Basically, I want to host the proxy server in place of the original web server, so that all traffic goes through the proxy before reaching the webserver.
Thanks in advance...
You don't say whether your Python3 proxy is hosted on the same machine as the DVWA.
Assuming it is, the solution is simple: a reverse-proxy configuration. Your proxy transparently accepts requests and forwards them to your server, which then processes them and sends the responses back via the proxy to the client.
Have your proxy listen on port 80
Have the DVWA listen on a port other than 80 so it doesn't clash (e.g. 8080)
Your proxy, which now receives the requests for the IP/hostname that would otherwise go to the DVWA, then forwards them as usual (a minimal sketch of this setup follows below).
The client/web browser is none the wiser that anything has changed. No settings need changing.
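For illustration, a very stripped-down version of that reverse-proxy setup in Python 3 could look like the following; the backend address and the validation hook are assumptions, and your existing proxy code will of course differ:

import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed backend: the DVWA moved off port 80 so the proxy can take its place.
BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._forward()

    def do_POST(self):
        self._forward()

    def _forward(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else None
        # ...validate self.path and body here before forwarding...
        conn = http.client.HTTPConnection(BACKEND_HOST, BACKEND_PORT)
        conn.request(self.command, self.path, body=body, headers=dict(self.headers))
        upstream = conn.getresponse()
        data = upstream.read()
        self.send_response(upstream.status)
        for name, value in upstream.getheaders():
            if name.lower() not in ("transfer-encoding", "connection"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(data)

# Listening on port 80 usually needs elevated privileges.
HTTPServer(("", 80), ProxyHandler).serve_forever()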
That's the best case scenario, given the information provided in your question. Unfortunately, I can't give any alternative solutions without knowing the network layout, where the machines reside, and the intent of the project. Some things to consider:
do you have a proper separation of concerns for this middleware you're building?
what is the purpose of the proxy?
is it for debugging/observing traffic?
are you actually trying to build a Web Application Firewall?

DNS solution for Dante SOCKS proxy

I am trying to build a SOCKS solution for a forward proxy. I am using the Dante SOCKS proxy, as I have heard that big companies like Google use it as a forward proxy solution.
On the SOCKS server, I am allowing access based on FQDNs, like google.com:443.
Now the problem is that when the client constructs the packet, it resolves google.com, gets X.X.X.X, and sends a connect request to the SOCKS server. When the server receives the packet and reconstructs it to send out to the internet, it does its own DNS resolution; if the server gets Y.Y.Y.Y as the answer, it refuses the client's request, because the destination IP in the client's request is different from the IP the server resolved.
There is a workaround in the Dante client that tells the client to put in a dummy destination address (0.0.0.1) and send the request to the server, which then processes it properly. However, that creates a problem with internal domains, because after switching to that DNS resolution method every request goes through the Dante server :(
Please let me know:
if there is any solution that would help me maintain DNS record expiry DC-wide, e.g. google.com resolves to X.X.X.X and I can resolve that same IP address on hundreds of DNS clients, and if the record changes, it immediately changes/expires on the clients;
or any other proxy/SOCKS solution that is transparent to applications and works as a forward proxy.
I went ahead with this solution, in case anyone is curious to see it.
I used the PowerDNS Auth Server with the Pipe backend. Requests land on the PowerDNS server for resolution; it passes all the data to the Pipe backend script over the Pipe ABI, and the script analyses the request and checks whether the name is present in its cache/memory map. On a cache hit it responds with the cached DNS records; otherwise it uses a DNS resolver to resolve the query the way a resolver normally would.
PowerDNS versions lower than 4.1 support the Pipe backend plus a resolver. That way, the request first lands at the Pipe backend script; if the script has no cached entries it does not respond (or responds blank), and PowerDNS then resolves it with the resolver server specified in the configuration. With version 4.1 and above, however, the resolver part was removed from the PowerDNS Auth server, so you need to handle that behaviour in the Pipe backend script yourself.
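For anyone wanting a starting point, here is a heavily simplified sketch of a Pipe backend script speaking ABI version 1; the cached record below is a placeholder, and the real script described above also falls back to an upstream resolver and manages record expiry:

#!/usr/bin/env python3
import sys

# Placeholder "cache"; the real script keeps this in shared state and
# refreshes it from an upstream resolver.
CACHE = {"google.com": "203.0.113.10"}

def main():
    if not sys.stdin.readline().startswith("HELO"):
        print("FAIL", flush=True)
        return
    print("OK\tpipe backend sketch", flush=True)
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        if parts[0] != "Q" or len(parts) < 6:
            print("END", flush=True)
            continue
        qname, qclass, qtype, qid = parts[1], parts[2], parts[3], parts[4]
        ip = CACHE.get(qname.lower())
        if ip and qtype in ("A", "ANY"):
            # DATA <qname> <qclass> <qtype> <ttl> <id> <content>
            print(f"DATA\t{qname}\t{qclass}\tA\t60\t{qid}\t{ip}", flush=True)
        print("END", flush=True)

if __name__ == "__main__":
    main()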
It depends on your client. Firefox, for example, sends the hostname to the SOCKS proxy without resolving it. You can confirm that with Wireshark.
PS: this assumes you are using a SOCKS5/4a proxy. SOCKS4 does not support hostnames. Ref: https://en.wikipedia.org/wiki/SOCKS#SOCKS4a
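The same remote-DNS behaviour can be reproduced from code. A small sketch using the PySocks library, with a made-up proxy address; rdns=True asks the proxy to resolve the name, mirroring Firefox's "Proxy DNS when using SOCKS v5" setting:

import socks  # pip install PySocks

s = socks.socksocket()
# rdns=True: hand the hostname to the SOCKS5 proxy instead of resolving it locally.
s.set_proxy(socks.SOCKS5, "dante.example.net", 1080, rdns=True)  # hypothetical proxy
s.connect(("google.com", 443))  # the unresolved hostname goes to the proxy
s.close()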

In Fiddler, is it possible to spoof the client IP address?

In our application's Production environment, when we call the Navigate operation on our C# WebBrowser control, we first POST the authentication details and a redirect URL to an authentication server. This server authenticates and sends back an HTTP 302 response, which prompts the WebBrowser control to redirect to another server. Because the client IP address has changed by the time the redirect is performed, a fingerprint monitor masking the target URL sends us a challenge. We then forward the cookies and other data we received from the authentication server.
Now, the problem is that when we debug this in our non-prod environment, the client IP remains unchanged, so the monitor issues no challenge and we cannot test our changes, which ensure that all the right authentication information is forwarded from the earlier authentication server's response.
Is it possible to do this sort of client IP address spoofing between redirects so that we can test our code? I was using Fiddler for this and, as far as I can see, there are no properties that can be modified; the Session variable "x-clientip" is read-only.
How does your server determine the IP address of the client? If it looks at, for instance, an X-Forwarded-For request header, Fiddler can easily change that.
If not, no, Fiddler does not itself have some magical way to make traffic originate from a different IP address. If your machine has multiple NICs, Fiddler can direct the second request to egress via a specific IP address using the X-EgressIP flag. Or you can have Fiddler direct a given request through a different proxy (say, another Fiddler instance) running on a different machine that has a different IP address; use the X-OverrideGateway flag to do that.
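If the server does key off a header such as X-Forwarded-For, you can also verify that outside Fiddler with a quick script; the URL, IP, and form fields below are placeholders for your own environment:

import requests

# "Spoof" the client IP only in the sense of the forwarded-for header;
# the real source IP of the connection is unchanged.
resp = requests.post(
    "https://auth.example.com/login",                 # placeholder URL
    headers={"X-Forwarded-For": "203.0.113.50"},      # pretend client IP
    data={"user": "test", "redirect": "https://app.example.com/"},
)
print(resp.status_code, resp.headers.get("Location"))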

How can a web page send a message to the local network

Our web application has a button that is supposed to send data to a server on the local network that in turn prints something on a printer.
So far it was easy: The button triggered an AJAX POST request to http://printerserver/print.php with a token, that page connected to the web application to verify the token and get the data to print and then printed.
However, we are now delivering our web application via HTTPS (and I would rather not go back to HTTP for this), and newer versions of Chrome and Firefox no longer make the request to the HTTP address; they don't even send the preflight request to check the CORS headers.
Now, what is a modern alternative to cross-protocol XHR? Do WebSockets suffer from the same problem? (A Google search did not make clear what the current state is here.) Can I use TCP sockets already? I would rather not switch to GET requests either, because the action is not idempotent and that might have practical implications with preloading and caching.
I can change the application on the printerserver in any way (so I could replace it with NodeJS or something) but I cannot change the users' browsers (to trust a self-signed certificate for printerserver for example).
You could store the print requests on the webserver in a queue and make the printserver periodically poll for requests to print.
If that isn't possible I would setup a tunnel or VPN between the webserver and printserver networks. That way you can make the print request from the webserver on the server-side instead of the client. If you use curl, there are flags to ignore invalid SSL certificates etc. (I still suspect it's nicer to introduce a queue anyway, so the print requests aren't blocking).
If the webserver can make an SSH connection to something on the network the printserver is on, you could do something like: ssh params user@host 'some curl command here'.
A third option I can think of: if the printserver can bind to, for example, a subdomain of the webserver's domain, like print.somedomain.com, you may be able to make it trusted via the somedomain.com certificate. IIRC you have to create a CSR (Certificate Signing Request) from the printserver certificate and sign it with the somedomain.com certificate. Perhaps it doesn't even need to be a subdomain for this per se, but maybe that is a requirement for the browser to accept it client-side.
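A rough sketch of the polling idea from the first suggestion, assuming a hypothetical /print-queue endpoint on the webapp and a shared token; the printserver pulls jobs over HTTPS, so the browser never needs to reach the printserver at all:

import time
import requests

QUEUE_URL = "https://webapp.example.com/print-queue"  # hypothetical endpoint

def send_to_printer(data):
    # hand the job to the actual printing code here
    print("printing:", data)

while True:
    jobs = requests.get(QUEUE_URL, params={"token": "shared-secret"}, timeout=10).json()
    for job in jobs:
        send_to_printer(job["data"])
    time.sleep(5)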
The easiest way is to add a route to the webapp that does nothing more than relay the request to the print server. So make your AJAX POST request to https://myapp.com/print, and the server-side code powering that makes a request to http://printerserver/print.php, with the exact same POST content it received itself. As @dnozay said, this is commonly called a reverse proxy. Yes, to do that you'll have to reconfigure your printserver to accept (authenticated) requests from the webserver.
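A minimal sketch of such a relay route, using Flask purely as an example stack (the route name and printserver URL are taken from the discussion; authentication is left out):

import requests
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/print", methods=["POST"])
def relay_print():
    # Forward the browser's POST body to the internal, plain-HTTP printserver.
    upstream = requests.post(
        "http://printerserver/print.php",
        data=request.get_data(),
        headers={"Content-Type": request.content_type or "application/octet-stream"},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)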
Alternatively, you could switch the printserver to https and directly call it from the client.
Note that an insecure (http) web-socket connection on a secure (https) page probably won't work either. And for good reason: generally it's a bad idea to mislead people by making insecure connections from what appears to them to be a secure page.
The server hosting the HTTPS webapp can reverse proxy the print server, but since the printer is local to the user, this may not work.
The print server should have the correct CORS headers
Access-Control-Allow-Origin: *
or:
Access-Control-Allow-Origin: https://www.example.com
However there are pitfalls with using the wildcard.
From what I understand from the question, printserver is not accessible from the web application so the reverse proxy solution won't work here.
The browser's cross-origin policy restricts you from making requests from the browser to the printserver.
If you wish to communicate with the printserver from an HTTPS page, you will need the printserver to expose print.php over HTTPS too.
You could create a DNS A record, as a subdomain of your web application's domain, that resolves to the internal address of your printserver.
With those steps in place you should be able to update your printserver page to respond with permissive CORS headers, which the browser should then respect. I don't think the browser will even issue CORS requests across different protocol schemes (HTTPS vs HTTP) or to internal domains without a TLD.
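Putting those pieces together, the printserver side could look roughly like this sketch (Flask used only for illustration; the subdomain, certificate paths, and allowed origin are assumptions):

from flask import Flask, Response, request

app = Flask(__name__)
ALLOWED_ORIGIN = "https://www.example.com"  # the HTTPS web application

CORS_HEADERS = {
    "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
    "Access-Control-Allow-Methods": "POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
}

@app.route("/print", methods=["POST", "OPTIONS"])
def print_endpoint():
    if request.method == "OPTIONS":           # CORS preflight
        return Response(status=204, headers=CORS_HEADERS)
    # ...verify the token and hand the job to the printer here...
    return Response("queued", status=200, headers=CORS_HEADERS)

if __name__ == "__main__":
    # Served over HTTPS, e.g. on print.somedomain.com with a valid certificate.
    app.run(host="0.0.0.0", port=443, ssl_context=("cert.pem", "key.pem"))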

Play Framework HTTPS Proxy

I have a Play! application which needs to use both HTTP and HTTPS. The application is running behind a proxy server (using Apache) that forwards web requests to the Play application.
The proxy is using one port for HTTP requests and another port that is intended for HTTPS requests. Note that the ports on the proxy are not the same ports as the ones used by the Play application (this is due to provider restrictions!).
The Play application is using the "standard" ports: 9000 for HTTP and 9443 for HTTPS. The proxy receives HTTP requests on port 8080 and forwards them to Play's port 9000. The proxy receives HTTPS requests on port 8090 and forwards them to Play's port 9443.
My problem is that when I use the secure() method to make pages load over HTTPS in Play, Play's logic causes the app to attempt to use 9443 as the HTTPS port. This causes the request to be lost, because the proxy is using a different port.
The same appears to happen when I want to switch from HTTPS back to HTTP. I cannot seem to make the system go to the port used by the proxy.
Somehow I need to make the system go to the ports known to the proxy server, without screwing up my routes. Is there some way to do this?
Thanks in advance for any help/insights.
I have found my own "answer", though it is somewhat of a kludge.
It turns out that, based on what I can ascertain from the documentation, there really is no way to tell Play to switch between ports when a Play application is behind a proxy. This is because, while Play does recognize the port that a request comes in on, it has no way of telling which proxy port it should use when switching between secure and unsecured ports. It knows, for example, that an HTTP request comes through proxy port 8080, and it knows that subsequent requests to its port 9000 will come from that proxy port. What it does not know is to switch to another proxy port when someone attempts to use HTTPS to access its port 9443. Consequently, if you have a page like
http://toproxy:8080/links that has one or more links that use the secure() method to activate HTTPS, then Play will resolve the link to https://toproxy:8080 -- despite the fact that the proxy server may want to use port 8090 for HTTPS requests. Because proxy port 8080 is forwarded to Play's port 9000, use of that port for HTTPS requests always fails. This is true in Play 2.0 as well as Play 1.X.
I believe that Play needs some standard configuration parameter that can tell it to map proxy ports to its HTTP and HTTPS ports. That way, when it is behind a proxy server a developer can use the secure() method and Play will be able to resolve the URL to the correct proxy port. This capability should be available in 1.X as well as Version 2.
Until someone actually implements this (I might if I ever get the time, but with all that I am committed to do, people shouldn't hold their breath), my kludge is simply to use the redirect() method to switch between the HTTP and HTTPS proxy ports. The redirect() method apparently allows us to enter full URLs, so I simply call the full URL of the page that I switch requests on.
For example: in the aforementioned page http://toproxy:8080/links, I may have a link to a login page that I want to protect using HTTPS. To do this I create two actions: one for the redirect to the proxy HTTPS port (for this example, call it gotoLogin()) and another for actually rendering the login page (for this example, call it loginPage() and give it a route of /loginpage).
In gotoLogin() I redirect to loginPage in the following manner:
redirect("https://toproxy:8090/loginpage");
This makes the request come in on proxy port 8090, which is forwarded to Play's port 9443.
When the user properly logs in, my authentication action simply uses another redirect() call:
redirect("http://toproxy:8080/destination_page");
This causes Play to go back to the proper proxy port for unsecured requests.
This is not the best solution for this problem. There is a possibility that the proxy server can be somehow configured to make the proper switch between HTTP and HTTPS ports, but investigating this may take some time for someone not an expert in proxy server configuration (which describes me!). Of course, the best solution would be to implement in Play the kind of proxy port handling that I have previously described.
But this solution works and can be adapted to many different situations. Unless someone comes up with another better solution, this solution is the one I am using.
Play should recognise the port the request is coming from and use that port. It is possible your Apache config is not set up correctly.
Have you put the following line in your apache config?
ProxyPreserveHost on
You may also need XForwardedSupport=127.0.0.1 depending on your server configuration.
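For reference, the relevant part of the Apache configuration for the HTTP side might look roughly like this (ports taken from the question; the certificate setup and the HTTPS vhost on 8090 are omitted):

<VirtualHost *:8080>
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>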
