How to determine uniqueness of clients from http requests? - proxy

I notice that when HTTP requests are made from clients through a proxy server, the IP address of the requests is always that of the proxy. So if many clients from a huge corporation with a proxy server access a web site, I cannot tell whether the requests come from unique clients or not. Is there any way to determine uniqueness of clients if the HTTP requests come through a proxy? I know that the MAC address is not included in the HTTP request, so I have just about ruled that out.

The simplest way would be to set a cookie on the response, and check it in the request. If it's there, then you've seen that client before (and you could include some identification in the cookie). Of course, this relies on the clients being cookie-aware and the user not having disabled cookies (or clearing them manually).
There's also the issue of some clients which may be cookie-aware, but will effectively start from scratch each time - for instance, if someone's running a program to scrape your site, it will probably start with a fresh cookie jar each time, no matter how you set the cookie.

Provide a cookie to each new user with a GUID. You can track that and even include the GUID in your server logs.
We do this with our public web server to track "unique paths" through our site.
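A minimal sketch of that approach, assuming a Java servlet container (the cookie name client-id and the year-long lifetime are illustrative choices, not anything standard):

```java
import java.io.IOException;
import java.util.UUID;
import java.util.logging.Logger;
import javax.servlet.*;
import javax.servlet.http.*;

// Gives every new client a GUID cookie and logs the GUID on each request,
// so "unique clients" behind a shared proxy IP can be told apart in the logs.
public class ClientIdFilter implements Filter {
    private static final String COOKIE_NAME = "client-id"; // illustrative name
    private static final Logger LOG = Logger.getLogger(ClientIdFilter.class.getName());

    @Override
    public void init(FilterConfig config) {}

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;

        String clientId = null;
        if (req.getCookies() != null) {
            for (Cookie c : req.getCookies()) {
                if (COOKIE_NAME.equals(c.getName())) {
                    clientId = c.getValue(); // returning client (as far as cookies can tell)
                }
            }
        }
        if (clientId == null) {
            clientId = UUID.randomUUID().toString(); // first visit, or cookies were cleared
            Cookie cookie = new Cookie(COOKIE_NAME, clientId);
            cookie.setMaxAge(60 * 60 * 24 * 365); // keep it for a year
            cookie.setPath("/");
            resp.addCookie(cookie);
        }
        LOG.info("client " + clientId + " requested " + req.getRequestURI());
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {}
}
```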

Related

If requests are sent through JMeter, GlassFish clustering is not distributing them to different servers

I have set up the application server as a cluster in GlassFish. When I send requests through JMeter, all of them hit only one server; I expected the requests to be distributed across the servers in the cluster. When I send the requests manually, the clustering works. Please help me sort out this issue.
There could be different clustering load balancing mechanisms, as far as I can see from the GlassFish Server High Availability Administration Guide:
Cookie Method
The Loadbalancer Plug-In uses a separate cookie to record the route information. The HTTP client (typically, the web browser) must support cookies to use the cookie based method. If the HTTP client is unable to accept cookies, the plug-in uses the following method.
Explicit URL Rewriting
The sticky information is appended to the URL. This method works even if the HTTP client does not support cookies. To implement explicit URL rewriting, the application developer must use HttpResponse.encodeURL() and encodeRedirectURL() calls to ensure that any URLs in the application have the session information appended to them.
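For the URL-rewriting case, the application code would look roughly like this (a sketch against the standard servlet API; the /app/cart and /app/checkout paths are made up):

```java
import java.io.IOException;
import javax.servlet.http.*;

// Explicit URL rewriting: every URL the application emits goes through
// encodeURL()/encodeRedirectURL() so the session/route id is appended
// when the client does not accept cookies.
public class CartLinkServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        String cartUrl = resp.encodeURL("/app/cart"); // appends ;jsessionid=... if the client refuses cookies
        resp.getWriter().println("<a href=\"" + cartUrl + "\">View cart</a>");
        // For redirects, the equivalent call is:
        // resp.sendRedirect(resp.encodeRedirectURL("/app/checkout"));
    }
}
```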
So depending on your load balancer configuration you need to either:
define different cookies in the HTTP Cookie Manager, or
make sure different threads send requests to different URLs, e.g. via the HTTP URL Re-writing Modifier.
In any case it is recommended to add a DNS Cache Manager so that each virtual user resolves the underlying IP address of the application under test on its own.

How can a web page send a message to the local network

Our web application has a button that is supposed to send data to a server on the local network that in turn prints something on a printer.
So far it was easy: The button triggered an AJAX POST request to http://printerserver/print.php with a token, that page connected to the web application to verify the token and get the data to print and then printed.
However, we are now delivering our web application via HTTPS (and I would rather not go back to HTTP for this), and newer versions of Chrome and Firefox no longer make the request to the HTTP address; they don't even send the preflight request to check the CORS headers.
Now, what is a modern alternative to the cross-protocol XHR? Do Websockets suffer from the same problem? (A Google search did not make clear what is the current state here.) Can I use TCP Sockets already? I would rather not switch to GET requests either, because the action is not idempotent and it might have practical implications with preloading and caching.
I can change the application on the printerserver in any way (so I could replace it with NodeJS or something) but I cannot change the users' browsers (to trust a self-signed certificate for printerserver for example).
You could store the print requests on the webserver in a queue and make the printserver periodically poll for requests to print.
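A rough sketch of the polling side, assuming the web application exposed a queue endpoint such as the hypothetical https://myapp.example.com/print-queue/next (Java 11+ HttpClient):

```java
import java.net.URI;
import java.net.http.*;
import java.time.Duration;

// Print server side: periodically poll the web application for queued print jobs.
// The /print-queue/next endpoint is made up; assume it returns 204 when the queue is empty.
public class PrintQueuePoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest poll = HttpRequest.newBuilder(URI.create("https://myapp.example.com/print-queue/next"))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
        while (true) {
            HttpResponse<String> resp = client.send(poll, HttpResponse.BodyHandlers.ofString());
            if (resp.statusCode() == 200) {
                sendToPrinter(resp.body()); // hand the job to the local printer
            }
            Thread.sleep(5_000); // poll every few seconds
        }
    }

    private static void sendToPrinter(String job) {
        System.out.println("printing: " + job); // placeholder for the real printing code
    }
}
```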
If that isn't possible I would set up a tunnel or VPN between the webserver and printserver networks. That way you can make the print request from the webserver on the server side instead of from the client. If you use curl, there are flags to ignore invalid SSL certificates, etc. (I still suspect it's nicer to introduce a queue anyway, so the print requests aren't blocking.)
If the webserver can make an SSH connection to something on the network the printserver is on, you could do something like: ssh <params> user@host <some curl command here>.
A third option I can think of: if the printserver can be reached at, for example, a subdomain of the webserver's domain, like print.somedomain.com, you may be able to make it trusted via the somedomain.com certificate. IIRC you have to create a CSR (Certificate Signing Request) from the printserver certificate and sign it with the somedomain.com certificate. Perhaps it doesn't even need to be a subdomain for this per se, but maybe that's a requirement for the browser to do it client-side.
The easiest way is to add a route to the webapp that does nothing more than relay the request to the print server. So make your AJAX POST request to https://myapp.com/print, and the server-side code powering that makes a request to http://printerserver/print.php, with the exact same POST content it received itself. As @dnozay said, this is commonly called a reverse proxy. Yes, to do that you'll have to reconfigure your printserver to accept (authenticated) requests from the webserver.
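A minimal sketch of such a relay route, assuming a Java servlet on the webapp side and the internal print server address from the question:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.*;
import javax.servlet.http.*;

// Relay endpoint: the browser POSTs to https://myapp.com/print (same origin),
// and the webapp forwards the body to the internal print server over plain HTTP.
public class PrintRelayServlet extends HttpServlet {
    private static final String PRINT_SERVER = "http://printerserver/print.php"; // internal address
    private final HttpClient client = HttpClient.newHttpClient();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] body = req.getInputStream().readAllBytes();
        String contentType = req.getContentType() != null
                ? req.getContentType() : "application/x-www-form-urlencoded";
        HttpRequest forward = HttpRequest.newBuilder(URI.create(PRINT_SERVER))
                .header("Content-Type", contentType)
                .POST(HttpRequest.BodyPublishers.ofByteArray(body))
                .build();
        try {
            HttpResponse<String> printResp = client.send(forward, HttpResponse.BodyHandlers.ofString());
            resp.setStatus(printResp.statusCode());
            resp.getWriter().write(printResp.body());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            resp.sendError(HttpServletResponse.SC_BAD_GATEWAY, "print server unreachable");
        }
    }
}
```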
Alternatively, you could switch the printserver to https and directly call it from the client.
Note that an insecure (http) web-socket connection on a secure (https) page probably won't work either. And for good reason: generally it's a bad idea to mislead people by making insecure connections from what appears to them to be a secure page.
The server hosting the https webapp can reverse proxy the print server, but since the printer is local to the user, this may not work.
The print server should have the correct CORS headers
Access-Control-Allow-Origin: *
or:
Access-Control-Allow-Origin: https://www.example.com
However there are pitfalls with using the wildcard.
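As a sketch, a servlet filter that sends those headers and answers the preflight might look like this (the origin value is an example; adapt it to whatever actually serves print.php):

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

// Adds the CORS headers on every response and answers the OPTIONS preflight.
public class CorsFilter implements Filter {
    private static final String ALLOWED_ORIGIN = "https://www.example.com"; // the web app's origin

    @Override
    public void init(FilterConfig config) {}

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;
        resp.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
        resp.setHeader("Access-Control-Allow-Methods", "POST, OPTIONS");
        resp.setHeader("Access-Control-Allow-Headers", "Content-Type");
        if ("OPTIONS".equals(req.getMethod())) {
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT); // preflight: headers only, no body
            return;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {}
}
```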
From what I understand from the question, printserver is not accessible from the web application so the reverse proxy solution won't work here.
You are restricted from making requests from the browser to the printserver by cross-origin-policy.
If you wish to communicate with the printserver from an HTTPS page, you will need the printserver to expose print.php over HTTPS too.
You could create a DNS A record as a subdomain of your web application that resolves to the internal address of your printserver.
With those steps in place you should be able to update your printserver page to respond with permissive CORS headers which the browser should then respect. I don't think the browser will even issue CORS requests across different protocol schemes (HTTPS vs HTTP) or to internal domains, without a TLD.

Sharing/Proxying CodeIgniter Sessions across multiple CI instances on the same domain

I've got the following setup:
Site1 (Core) at Core.example.com
Site2 (Work1) at Work1.example.com
Site3 (Work2) at Work2.example.com
etc... I'll just use Work1 in the discussion, but the problem applies to all the Work sites.
The idea is that Core is used for logins, payments, account management, etc and the Work sites offer functionality which is sufficiently different to justify separate CI instances/Dbs/etc.
This works relatively well in that Core can set cookies which are picked up by the other sites.
The issue I've got is that I want to allow eg Work1 to make calls to Core on behalf of the user/as the user - for things like updating user account details, getting a list of services available to the user, etc.
I'm currently trying to do this via CURL. If I read the Core session cookie in the HTTP request made by the client to Work1 and inject it into the CURL request from Work1 to Core, Core doesn't accept it as a valid session cookie. I'm not sure if this is due to differing IP addresses (Client vs Work1) or something else.
Unfortunately, I need Work1 to have its own database so sharing a DB is not a viable option. That said, I've used the same encryption key across the sites so can decrypt/parse cookies (or anything else) as required on any site.
Can someone please suggest how I can convince Core that a request from Work1 carrying the user's Core session cookie is in fact from the user?
I was eventually able to work around this by reading the cookie and injecting it into the CURL requests. I also had to handle session cookie updates retrieved via CURL, re-wrap them and pass them back to the client.
I also extended the IP verification to check that the session IP is either the client IP for the current request, or my Work1 site's IP address where the HTTPS request also carries a MySite-OriginalIp header (which is attached to all my CURL requests) whose value matches the session.
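The forwarding pattern, reduced to its essentials (sketched here with Java's HttpClient purely for illustration; the original setup uses PHP/CURL, and the cookie name core_session and the Core endpoint URL are made up):

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.*;
import javax.servlet.http.*;

// Work1 side: call Core on behalf of the logged-in user by forwarding the
// user's Core session cookie and recording the user's real IP in a header.
public class CoreClient {
    private static final String CORE_URL = "https://Core.example.com/account/services"; // example endpoint
    private final HttpClient http = HttpClient.newHttpClient();

    public String fetchServicesForUser(HttpServletRequest clientRequest)
            throws IOException, InterruptedException {
        String coreSession = null;
        if (clientRequest.getCookies() != null) {
            for (Cookie c : clientRequest.getCookies()) {
                if ("core_session".equals(c.getName())) { // whatever the Core session cookie is called
                    coreSession = c.getValue();
                }
            }
        }
        if (coreSession == null) {
            throw new IllegalStateException("client has no Core session cookie");
        }
        HttpRequest req = HttpRequest.newBuilder(URI.create(CORE_URL))
                .header("Cookie", "core_session=" + coreSession)
                .header("MySite-OriginalIp", clientRequest.getRemoteAddr()) // lets Core verify the real client IP
                .GET()
                .build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        // If Core rotated the session cookie, it would come back in Set-Cookie and
        // would need to be re-wrapped and passed on to the client, as described above.
        return resp.body();
    }
}
```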
There are a number of other security enhancements, tweaks and sanity checks that are required to get this going in a robust and reliable way without compromising security - Too many to detail here.

If a website doesn't use HTTPS for user login, are the users' passwords fairly unprotected?

This question tries to look into whether doing HTTPS log in is very important for any website.
Is it true that for many websites, if the login is done through HTTP but not HTTPS, then anybody can pretty much see the userID and password easily along the internet highway (or by looking between a router and the internet connection in an Internet Cafe)?
If so... do popular frameworks actually use HTTPS by default (or at least as an option), such as Rails 2.3.5 or Django, CakePHP, or .Net?
Yes, any machine on the pathway (that the packets pass through) can just examine the contents of those packets. All it takes is a capturing proxy or a promiscuous-mode network card with something like Wireshark. Assuming the passwords aren't encrypted in some other way (at a higher level), they will be visible.
I can't answer the second part of your question since I have no knowledge of those particular products but I would say that the inability to use secure sockets would pretty much make them useless.
Pax is right about passwords that aren't otherwise encrypted being visible.
Still, most sites don't use SSL, and it does put users at a certain degree of risk when they access sites from public wifi.
HTTPS isn't a framework-level option; it's something you do when you set up the webserver. With an Apache configuration, for instance, you would enable a properly configured HTTPS site, close off HTTP and install a certificate. The framework doesn't have a direct influence on that portion of the deployment.
If the user credentials are submitted via an HTML web form without HTTPS, then it is insecure; the data is submitted in plain text. However, if the website uses HTTP authentication instead, then the server can send back a 401 reply (or 407 for proxies) to any request that does not provide valid credentials. 401/407 is the server's way of asking for credentials, and the reply provides a list of authentication schemes (Digest, NTLM, Negotiate, etc.) that the server supports, which are usually more secure by themselves. The client/browser sends the same request again with the necessary credentials in one of those schemes, and the server either sends the requested data or sends another 401/407 reply if the credentials are rejected.
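The challenge/response mechanics, in their simplest form, look roughly like this (a sketch using the Basic scheme purely to show the 401 handshake; Basic itself still exposes the credentials over plain HTTP, which is why the stronger schemes listed above matter):

```java
import java.io.IOException;
import java.util.Base64;
import javax.servlet.http.*;

// Sketch of the HTTP authentication handshake: no credentials -> 401 with a
// WWW-Authenticate challenge; credentials present -> verify and serve, or re-challenge.
public class ProtectedServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String auth = req.getHeader("Authorization");
        if (auth == null || !auth.startsWith("Basic ")) {
            resp.setHeader("WWW-Authenticate", "Basic realm=\"example\"");
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED); // 401: please authenticate
            return;
        }
        String decoded = new String(Base64.getDecoder().decode(auth.substring("Basic ".length())));
        // decoded is "user:password"; a real server would check it against its user store.
        if ("alice:secret".equals(decoded)) {
            resp.getWriter().println("hello, authenticated user");
        } else {
            resp.setHeader("WWW-Authenticate", "Basic realm=\"example\"");
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED); // rejected: challenge again
        }
    }
}
```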

Preventing man in the middle attack while using https

I am writing a little app similar to Omegle. I have an HTTP server written in Java and a client which is an HTML document. The main way of communication is by HTTP requests (long polling).
I've implemented some sort of security by using the HTTPS protocol, and I have a securityid for every client that connects to the server. When the client connects, the server gives it a securityid which the client must always send back when it makes a request.
I am afraid of a man-in-the-middle attack here. Do you have any suggestions for how I could protect the app from such an attack?
Note that this app is built for theoretical purposes; it won't ever be used in practice, so your solutions don't have to be practical.
HTTPS does not only do encryption, but also authentication of the server. When a client connects, the server shows it has a valid and trusted certificate for its domain. This certificate cannot simply be spoofed or replayed by a man-in-the-middle.
Simply enabling HTTPS is not good enough because the web brings too many complications.
For one thing, make sure you set the secure flag on the cookies, or else they can be stolen.
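For example, in a Java servlet environment the session/securityid cookie would be issued something like this (a sketch; the cookie name is taken from the question):

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Issue the securityid cookie so it is never sent over plain HTTP
// and cannot be read from page JavaScript.
public final class SecureCookies {
    public static void addSessionCookie(HttpServletResponse response, String securityId) {
        Cookie cookie = new Cookie("securityid", securityId);
        cookie.setSecure(true);   // only ever sent over HTTPS
        cookie.setHttpOnly(true); // mitigates theft via XSS
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```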
It's also a good idea to ensure users only access the site by typing https://<yourdomain> in the address bar; this is the only way to ensure an HTTPS session is made with a valid certificate. When you type https://<yourdomain>, the browser will refuse to let you on the site unless the server provides a valid certificate for <yourdomain>.
If you just type <yourdomain> without https:// in front, the browser won't care what happens. This has two implications I can think of off the top of my head:
The attacker redirects you to some Unicode domain with a similar name (i.e. one that looks the same but has a different binary string, and is thus a different domain) and then provides a valid certificate for that domain (since he owns it); the user probably wouldn't notice this...
The attacker could emulate the server, but without HTTPS. He would make his own secured connection to the real server and become a cleartext proxy between you and the server; he can now capture all your traffic and do anything he wants, because he owns your session.
