Sharing/Proxying CodeIgniter Sessions across multiple CI instances on the same domain

I've got the following setup:
Site1 (Core) at Core.example.com
Site2 (Work1) at Work1.example.com
Site3 (Work2) at Work2.example.com
etc. I'll just use Work1 in the discussion, but the problem applies to all the Work sites.
The idea is that Core is used for logins, payments, account management, etc and the Work sites offer functionality which is sufficiently different to justify separate CI instances/Dbs/etc.
This works relatively well, in that Core can set cookies which are picked up by the other sites.
The issue I've got is that I want to allow eg Work1 to make calls to Core on behalf of the user/as the user - for things like updating user account details, getting a list of services available to the user, etc.
I'm currently trying to do this via CURL. If I read the Core session cookie in the HTTP request made by the client to Work1 and inject it into the CURL request from Work1 to Core, Core doesn't accept it as a valid session cookie. I'm not sure if this is due to differing IP addresses (Client vs Work1) or something else.
Unfortunately, I need Work1 to have its own database so sharing a DB is not a viable option. That said, I've used the same encryption key across the sites so can decrypt/parse cookies (or anything else) as required on any site.
Can someone please suggest how I can convince Core that a request from Work1 with the user's Core session cookie is in fact from the user?

I was eventually able to work around this by reading the cookie and injecting it into the cURL requests. I also had to handle session cookie updates returned via cURL, re-wrap them, and pass them back to the client.
I also extended the IP verification to check that the session IP is either the client IP for the current request, or my Work1 site's IP address where the HTTPS request carries a MySite-OriginalIp header (which is attached to all my cURL requests) whose value matches the session.
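For illustration, here's a minimal PHP sketch of the Work1 side of that workaround. The cookie name (ci_session), the endpoint path, and the exact header handling are assumptions; adapt them to your own session configuration.

```php
<?php
// Hedged sketch: call Core as the user from Work1, forwarding the user's
// Core session cookie and their original IP. Assumes CodeIgniter's default
// session cookie name, "ci_session".

function call_core_as_user(string $endpoint, array $post_data): string
{
    // Read the Core session cookie from the client's request to Work1.
    $core_session = $_COOKIE['ci_session'] ?? '';

    $ch = curl_init('https://Core.example.com/' . $endpoint);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query($post_data),
        // Re-inject the user's session cookie so Core sees their session.
        CURLOPT_COOKIE         => 'ci_session=' . $core_session,
        // Tell Core who the real client is, so the extended IP check
        // (Work1's IP + MySite-OriginalIp header) can pass.
        CURLOPT_HTTPHEADER     => ['MySite-OriginalIp: ' . $_SERVER['REMOTE_ADDR']],
        // Include response headers so Set-Cookie updates can be re-wrapped
        // and passed back to the client.
        CURLOPT_HEADER         => true,
    ]);
    $response = curl_exec($ch);
    curl_close($ch);
    return (string) $response;
}
```

On the Core side, the session is then accepted when the request comes from Work1's IP and the MySite-OriginalIp header matches the IP stored in the session.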
There are a number of other security enhancements, tweaks and sanity checks that are required to get this going in a robust and reliable way without compromising security - Too many to detail here.

Related

How can a web page send a message to the local network

Our web application has a button that is supposed to send data to a server on the local network that in turn prints something on a printer.
So far it was easy: The button triggered an AJAX POST request to http://printerserver/print.php with a token, that page connected to the web application to verify the token and get the data to print and then printed.
However, we are now delivering our web application via HTTPS (and I would rather not go back to HTTP for this), and newer versions of Chrome and Firefox don't make the request to the HTTP address anymore; they don't even send the preflight request to check CORS headers.
Now, what is a modern alternative to the cross-protocol XHR? Do Websockets suffer from the same problem? (A Google search did not make clear what is the current state here.) Can I use TCP Sockets already? I would rather not switch to GET requests either, because the action is not idempotent and it might have practical implications with preloading and caching.
I can change the application on the printerserver in any way (so I could replace it with NodeJS or something) but I cannot change the users' browsers (to trust a self-signed certificate for printerserver for example).
You could store the print requests on the webserver in a queue and make the printserver periodically poll for requests to print.
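A minimal sketch of that polling loop on the print server, assuming a hypothetical /print-queue.php endpoint on the webserver that returns pending jobs as JSON:

```php
<?php
// Hedged sketch: the print server polls the web application for queued
// print jobs. The endpoint URL, token, and JSON shape are assumptions.

$queue_url = 'https://myapp.example.com/print-queue.php?token=SECRET';

while (true) {
    $ch = curl_init($queue_url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);

    foreach (json_decode((string) $body, true) ?: [] as $job) {
        print_locally($job['data']);
    }

    sleep(5); // poll every few seconds; tune to taste
}

function print_locally(string $data): void
{
    // Stand-in for whatever print.php does today, e.g. writing raw
    // data to a printer device.
    file_put_contents('/dev/usb/lp0', $data);
}
```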
If that isn't possible, I would set up a tunnel or VPN between the webserver and printserver networks. That way you can make the print request from the webserver on the server side instead of from the client. If you use curl, there are flags to ignore invalid SSL certificates, etc. (I still suspect it's nicer to introduce a queue anyway, so the print requests aren't blocking.)
If the webserver can make an SSH connection to something on the network where the printserver is, you could do something like: ssh <options> user@host 'some curl command here'.
A third option I can think of: if the printserver can bind to, for example, a subdomain of the webserver's domain (like print.somedomain.com), you may be able to make it trusted via the somedomain.com certificate. IIRC you create a CSR (Certificate Signing Request) on the printserver and have it signed against the somedomain.com certificate. Perhaps it doesn't even need to be a subdomain per se, but maybe that's a requirement for the browser to trust it client-side.
The easiest way is to add a route to the webapp that does nothing more than relay the request to the print server. So make your AJAX POST request to https://myapp.com/print, and the server-side code powering that route makes a request to http://printerserver/print.php with the exact same POST content it received itself. As @dnozay said, this is commonly called a reverse proxy. Yes, to do that you'll have to reconfigure your printserver to accept (authenticated) requests from the webserver.
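A minimal sketch of such a relay route in PHP; the X-Relay-Token header stands in for whatever authentication you add between the webapp and the print server:

```php
<?php
// Hedged sketch: server-side code behind https://myapp.com/print that
// forwards the POST body unchanged to the internal print server.

$payload = file_get_contents('php://input');

$ch = curl_init('http://printerserver/print.php');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: ' . ($_SERVER['CONTENT_TYPE'] ?? 'application/x-www-form-urlencoded'),
        'X-Relay-Token: SECRET', // assumed shared secret; see note above
    ],
]);
echo curl_exec($ch);
curl_close($ch);
```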
Alternatively, you could switch the printserver to https and directly call it from the client.
Note that an insecure (http) WebSocket connection on a secure (https) page probably won't work either. And for good reason: generally it's a bad idea to mislead people by making insecure connections from what appears to them to be a secure page.
The server hosting the HTTPS webapp can reverse proxy the print server, but since the printer is local to the user, this may not work.
The print server should send the correct CORS headers:
Access-Control-Allow-Origin: *
or:
Access-Control-Allow-Origin: https://www.example.com
However, there are pitfalls with using the wildcard (for one, it cannot be combined with credentialed requests).
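Assuming the scheme mismatch is also solved (the print server reachable over HTTPS), a sketch of what print.php might send:

```php
<?php
// Hedged sketch: CORS headers for print.php. Prefer the explicit origin
// over the wildcard, and answer the preflight (OPTIONS) request.

header('Access-Control-Allow-Origin: https://www.example.com');
header('Access-Control-Allow-Methods: POST, OPTIONS');
header('Access-Control-Allow-Headers: Content-Type');

if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    exit; // preflight handled; no body needed
}

// ... existing print logic continues here ...
```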
From what I understand from the question, printserver is not accessible from the web application so the reverse proxy solution won't work here.
You are restricted from making requests from the browser to the printserver by the browser's cross-origin policy.
If you wish to communicate with the printserver from an HTTPS page, you will need the printserver to expose print.php over HTTPS too.
You could create a DNS A record as a subdomain of your web application that resolves to the internal address of your printserver.
With those steps in place you should be able to update your printserver page to respond with permissive CORS headers which the browser should then respect. I don't think the browser will even issue CORS requests across different protocol schemes (HTTPS vs HTTP) or to internal domains, without a TLD.

If a website doesn't use HTTPS to do user log in, are the users passwords fairly unprotected?

This question looks into whether doing login over HTTPS is important for any website.
Is it true that for many websites, if the login is done through HTTP but not HTTPS, then anybody can pretty much see the userID and password easily along the internet highway (or by looking between a router and the internet connection in an Internet Cafe)?
If so... do popular frameworks actually use HTTPS by default (or at least as an option), such as Rails 2.3.5 or Django, CakePHP, or .Net?
Yes, any machine on the pathway (that the packets pass through) can just examine the contents of those packets. All it takes is a capturing proxy or a promiscuous-mode network card with something like Wireshark. Assuming that the passwords aren't encrypted in some other way (at a higher level), they will be visible.
I can't answer the second part of your question since I have no knowledge of those particular products but I would say that the inability to use secure sockets would pretty much make them useless.
Pax is right about passwords that aren't otherwise encrypted being visible.
Still, most sites don't use SSL, and it does put users at a certain degree of risk when accessing sites from public Wi-Fi.
HTTPS isn't a framework-level option; it's something you do when you set up the webserver. With an Apache configuration, for instance, you would enable a properly configured HTTPS virtual host, close HTTP, and install a certificate. The framework wouldn't have a direct influence on that portion of the release.
If the user credentials are submitted via an HTML web form without HTTPS, then it is insecure: the data is submitted in plain text. However, if the website uses HTTP authentication instead, the server can send back a 401 reply (or 407 for proxies) to any request that does not provide valid credentials. 401/407 is the server's way of asking for credentials, and the reply provides a list of authentication schemes (Digest, NTLM, Negotiate, etc.) that the server supports, which are usually more secure by themselves. The client/browser sends the same request again with the necessary credentials in one of those schemes, and the server then either sends the requested data or sends another 401/407 reply if the credentials are rejected.
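For illustration, a minimal PHP sketch of that 401 challenge/response flow using Basic auth, the simplest of those schemes (note that Basic by itself still sends credentials essentially in the clear, so it illustrates the flow rather than strong security; the hard-coded credentials are purely illustrative):

```php
<?php
// Hedged sketch of HTTP authentication: reply 401 with a challenge until
// the client supplies valid credentials. Works with Basic auth under
// typical Apache + mod_php setups.

$user = $_SERVER['PHP_AUTH_USER'] ?? null;
$pass = $_SERVER['PHP_AUTH_PW'] ?? null;

if ($user !== 'alice' || $pass !== 's3cret') {
    // No (or bad) credentials: ask the client to authenticate.
    header('WWW-Authenticate: Basic realm="Members"');
    header('HTTP/1.1 401 Unauthorized');
    exit('Authentication required');
}

echo 'Protected content for ' . htmlspecialchars($user);
```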

Authenticating a client-side web service request in a cached environment

We're building a set of external web services to be consumed client-side (using jquery/AJAX) by visitors to our site. The web services need to be publicly available but we'd like to limit access to site visitors.
Importantly, the site in question sits behind a CDN and we cache page content for 24 hours; AJAX requests would preferably be cached as well but I'm conscious doing so will limit our authentication options. Our visitors access the site and services anonymously.
What are some standard "patterns" for authenticating client requests? I'm not dealing with confidential data per se, but I do want to deter other users/sites from hijacking these services for liability (think data distribution) and performance reasons.
I'm thinking of a shared secret that's refreshed daily and used site-wide by all clients; any web service request would include the secret. Pretty basic but are there other, better ways for the service to detect the caller's origin in a manner that can't be spoofed?
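A minimal sketch of that daily shared-secret idea, assuming a long random server-side key; deriving the token from the date keeps validation stateless and makes the rotation automatic:

```php
<?php
// Hedged sketch: a site-wide token that rotates every 24 hours.
// $server_key is an assumed secret known only to the servers.

$server_key = 'long-random-server-side-key';

function daily_token(string $key): string
{
    return hash_hmac('sha256', gmdate('Y-m-d'), $key);
}

// Embed daily_token($server_key) in the (cached) page so clients can send
// it with each AJAX call; validate it with a constant-time comparison.
$valid = hash_equals(daily_token($server_key), $_GET['token'] ?? '');
```

Since anyone who loads a page gets the current token, this deters casual hijacking rather than a determined attacker, which matches the stated goal.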
If the threat to your web service is someone automating the client calls, you can implement rate limiting on the server side. As you rightly mentioned, the client can be required to provide a key with each request. Alternatively, if only mortals are going to interact with the web service, you could implement a Human Interaction Proof such as a captcha.
One thing to make sure of is that the "key" used by clients is issued in a controlled manner. I once came across a system which gave away unlimited keys; that made the automation controls ineffective, as an attacker could request as many keys as desired and make unlimited calls.
If you are limiting by IP address, make sure you throttle requests on the network part of the address (A.B.C.x), since the host part (x) can change (when users are behind proxies). If your clients are anonymous, the closest thing to an "identifier" is indeed the address.
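For example, a hedged sketch of throttling on the network part of the address, using APCu as an assumed counter store and made-up limits:

```php
<?php
// Hedged sketch: rate-limit per /24 network (A.B.C.x) per minute.

$network = implode('.', array_slice(explode('.', $_SERVER['REMOTE_ADDR']), 0, 3));
$bucket  = 'rate:' . $network . ':' . floor(time() / 60); // per-minute bucket

$count = apcu_fetch($bucket) ?: 0;
if ($count >= 100) {               // allow 100 requests/minute per network
    http_response_code(429);
    exit('Too many requests');
}
apcu_store($bucket, $count + 1, 120); // let old buckets expire
```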

Difference between session affinity and sticky session?

What is the difference between session affinity and sticky session in context of load balancing servers?
I've seen those terms used interchangeably, but there are different ways of implementing it:
1. Send a cookie on the first response and then look for it on subsequent ones. The cookie says which real server to send to. Bad if you have to support cookie-less browsers. (A toy sketch follows this list.)
2. Partition based on the requester's IP address. Bad if it isn't static or if many clients come in through the same proxy.
3. If you authenticate users, partition based on user name (it has to be an HTTP-supported authentication mode to do this).
4. Don't require state: let clients hit any server (send state to the client and have them send it back). This is not a sticky session; it's a way to avoid needing one.
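Here's a toy PHP sketch of option 1, purely to illustrate the mechanism; real load balancers (HAProxy, nginx, ELB) do this in their configuration, and the SERVERID cookie name and server pool are made up:

```php
<?php
// Hedged toy sketch of cookie-based stickiness from the balancer's view.

$pool = ['10.0.0.1', '10.0.0.2', '10.0.0.3'];

// Reuse the server named in the cookie if it's valid, otherwise pick one.
$server = $_COOKIE['SERVERID'] ?? '';
if (!in_array($server, $pool, true)) {
    $server = $pool[array_rand($pool)];
}

// The first response sets the cookie; subsequent requests stick to $server.
setcookie('SERVERID', $server);

// ... forward the request to $server ...
```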
I would suspect that sticky might refer to the cookie way, and that affinity might refer to #2 and #3 in some contexts, but that's not how I have seen it used (or use it myself).
As I've always heard the terms used in a load-balancing scenario, they are interchangeable. Both mean that once a session is started, the same server serves all requests for that session.
Sticky session means that when a request comes into a site from a client, all further requests go to the same server the initial client request accessed. I believe session affinity is a synonym for sticky session.
They are the same.
Both mean that when coming in to the load balancer, the request will be directed to the server that served the first request (and has the session).
Sticky session means routing the requests of a particular session to the same physical machine that served the first request for that session.
This article clarifies the question for me and discusses other types of load balancer persistence.
Dave's Thoughts: Load balancer persistence (sticky sessions)
The difference is explained in this article:
https://www.haproxy.com/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
Main part from this link:
Affinity: this is when we use information from a layer below the application layer to tie a client's requests to a single server; the client's IP address is used in this case. The IP address may change during the same session, and the connection may then switch to a different server.
Persistence: this is when we use application-layer information to stick a client to a single server. In this case, the load balancer injects a cookie into the response and uses the same cookie in subsequent requests to route to the same server.
Sticky session: a session maintained by persistence.
The main advantage of persistence over affinity is that it is much more accurate, but sometimes persistence is not doable (when the client doesn't allow cookies, e.g. a cookie-less browser), so we must rely on affinity.
Using persistence, we mean that we’re 100% sure that a user will get redirected to a single server.
Using affinity, we mean that the user may be redirected to the same server…
They are synonyms; there is no difference at all.
Sticky session / session affinity: the affinity between a user's session and the server to which its requests are sent is retained.

How to determine uniqueness of clients from http requests?

I notice that when HTTP requests are made from clients through a proxy server, the IP address of the requests is always that of the proxy. So if many clients from a huge corporation with a proxy server access a web site, I cannot tell whether the requests are from unique clients or not. Is there any way to determine uniqueness of clients if the HTTP requests come through a proxy? I know that the MAC address is not included in the HTTP request, so I have just about ruled that out.
The simplest way would be to set a cookie on the response, and check it in the request. If it's there, then you've seen that client before (and you could include some identification in the cookie). Of course, this relies on the clients being cookie-aware and the user not having disabled cookies (or clearing them manually).
There's also the issue of some clients which may be cookie-aware, but will effectively start from scratch each time - for instance, if someone's running a program to scrape your site, it will probably start with a fresh cookie jar each time, no matter how you set the cookie.
Provide a cookie to each new user with a GUID. You can track that and even include the GUID in your server logs.
We do this with our public web server to track "unique paths" through our site.
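A minimal sketch of that GUID-cookie approach; the cookie name, lifetime, and the use of a response header for logging are assumptions:

```php
<?php
// Hedged sketch: tag each new visitor with a random 128-bit identifier.

if (empty($_COOKIE['visitor_id'])) {
    $guid = bin2hex(random_bytes(16));
    setcookie('visitor_id', $guid, time() + 86400 * 365); // one year
} else {
    $guid = $_COOKIE['visitor_id'];
}

// Expose the GUID so it can be captured in the server's access logs.
header('X-Visitor-Id: ' . $guid);
```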
