How to set up 2 identical Shibboleth SPs on 2 redundant servers - shibboleth

For availability purposes, I have a redundant setup with 2 fronts and 2 backs.
Each front hosts a web server, serving the same pages.
Each front runs an instance of Shibboleth SP, redirecting to the same IdP.
Both fronts are behind a load balancer exposing a single public address. The load balancer has session affinity set on the Shibboleth cookie.
On the first connection, the user is not yet authenticated, and the Shibboleth SP redirects to the ADFS with a relay state.
After authentication, the ADFS redirects to the LB public address.
The problem is that there is no Shibboleth cookie yet at that point. Can the redirection be handled by either instance of Shibboleth SP?
If not, how should 2 redundant instances of Shibboleth SP, as described, be managed properly?
Thanks!

ADFS redirects the user back to the LB address, which passes the SAMLResponse along to whichever node it selects, at which point that SP (either one) will see a valid SAMLResponse and issue a session cookie. If the user then gets routed to the other SP node, that cookie won't be seen as valid unless both SPs share a common session store, such as a database, and SSO will kick in again. Usually session stickiness would be pegged to the user's IP so that they always (or almost always) get directed to the same SP instance... and on the off chance their affinity changes, they'll still have a valid IdP session and shouldn't see the login page.
A lot of this depends upon your application and how that's built, too... see: https://wiki.shibboleth.net/confluence/display/SP3/Clustering... TL;DR: avoid clustering the SP by keeping it on a single entry point since it's lightweight (problematic, but what I'd usually recommend), or live with sharing a session DB (which has a lot of its own problems).
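One way to make the application less dependent on which SP node handles a given request is to establish the app's own session from whatever identity the SP passes through on the first authenticated request. A minimal sketch, assuming the SP runs as a web-server module in front of a Java servlet app and exposes the authenticated user as REMOTE_USER (header and attribute names vary per deployment):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class ShibLoginServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // REMOTE_USER is populated by the SP module when it is configured to do so
        String user = req.getRemoteUser();
        if (user == null || user.isEmpty()) {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // From here on, the app's own session cookie carries the login,
        // so later requests no longer depend on hitting the same SP node.
        HttpSession session = req.getSession(true);
        session.setAttribute("principal", user);
        resp.sendRedirect(req.getContextPath() + "/");
    }
}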

Related

Are sticky sessions different from cookie-based sessions?

Session management in cloud environments is available in many flavours across Microsoft Azure, Amazon Web Services, and private clouds. I was trying to work out which session management technique would fit best in all of these cloud environments.
I have gone through many sites but could not decide which is the most suitable in all cases. I read somewhere that sticky sessions are also one of the options for session management. So I am looking for an answer to whether sticky sessions are different from cookie-based session management.
If they are, how are they used?
Thanks
Ravi
With sticky sessions, the session stays on the server that received the first request, and every subsequent request is served by that same server. Cookie-based sessions, on the other hand, keep the data on the client machine in the browser, so a request can be served by any server that is available.
Yes, sticky sessions are different from cookie-based sessions.
Sticky sessions are handled by the load balancer: it picks the session identifier out of the client's request and passes the request to the same server that handled the first one. E.g. while loading a website the first request goes to server A and the session gets stored on server A; when the next request comes from that user it is sent to the same server, i.e. server A, irrespective of how many servers are present in the farm.
Cookie-based sessions, by contrast, are stored on the client machine and added to each new request, so they can be read and handled by any server in the farm, irrespective of which server generated and stored the session at first login.
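To make the contrast concrete in servlet terms, here is a rough sketch (the cookie name and the encodeCart helper are made up; a real cookie-based session value must be signed or encrypted):

// Server-side session: the data lives on the server that created it, so the
// load balancer must keep sending the user there (sticky) unless sessions are replicated.
HttpSession serverSession = request.getSession(true);
serverSession.setAttribute("cart", cart);

// Cookie-based session: the data travels with the browser, so any server in the
// farm can read it. encodeCart() is a hypothetical helper that serializes and signs the value.
Cookie cartCookie = new Cookie("CART", encodeCart(cart));
cartCookie.setPath("/");
response.addCookie(cartCookie);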

Why is sharing sessions to implement SSO not good?

Why is sharing sessions to implement SSO not good? I'm learning about SSO systems.
Consider this scenario:
Assume that all HTTP requests to the business services require login, so each business service needs to verify whether a request is logged in by asking the SSO service (CAS or SAML). If there are 10 business services and each service handles 1k req/s, the SSO service gets 10k req/s. It's hard to imagine the SSO service holding up under that.
So maybe the business services could cache the login-token verification. But when a user logs out, the SSO service needs to remove the verification info, and the business services need to remove their cached copies as well; the SSO service would have to tell the business services that someone has logged out, which I think is too complicated. So why not have all services share the login-token verification info, with the SSO service writing and the business services reading? That reminds me of sharing the session to implement SSO, and I thought I could share the login-token verification info via a Redis cluster. But I have heard that sharing sessions is not good. Why?
Whether the SSO server can handle that many requests depends on your deployment of it. There are very large deployments of CAS that handle hundreds of thousands of requests. It varies.
In general, the SSO session is entirely separate from your application session. Once you have logged onto the application via SSO, you have established a session for that application that will last for as long as you configure it to last. When it expires, your application may decide to authenticate against your SSO server again. If the SSO server still has an SSO session, it will simply re-issue the appropriate data and your app will recreate the session. If not, it will challenge the user for credentials, whatever they may be, and do the same thing again.
Session concerns of the application are entirely yours and the application's. The SSO server should never get involved. If your application has a requirement to share sessions because it's clustered, then you should share sessions. Nobody said it's a bad idea. However, you generally want to make your application as stateless as possible, since that will make clustered deployments easier.
When you log out from your application, your app session is gone, but the SSO session may still exist. As a result, you will get right back into the app because there is no need to provide credentials. If you wish, you could log out of the app AND your SSO server.
If you have other applications logged in via SSO and you wish to log out of all of them by logging out of one, that is called SLO. Your SSO server will need to reach out to every app that it has created a ticket for and ask it to log out. Or you could destroy the shared sessions for all apps, assuming they are all part of the same suite.
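As a rough illustration of that separation (not tied to any particular CAS or SAML client library; the SSO URL and attribute name are placeholders): the app checks its own session first and only goes back to the SSO server when it has none, and invalidating that local session is an app logout, not an SSO logout.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class AppSessionFilter implements Filter {
    public void doFilter(ServletRequest sreq, ServletResponse sresp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) sreq;
        HttpServletResponse resp = (HttpServletResponse) sresp;
        HttpSession session = req.getSession(false);
        if (session != null && session.getAttribute("principal") != null) {
            chain.doFilter(sreq, sresp); // valid app session: no SSO round trip needed
            return;
        }
        // No app session: hand the user to the SSO server (placeholder URL).
        // While the SSO session is still alive, this redirect comes straight back
        // without showing a login page.
        resp.sendRedirect("https://sso.example.org/login?service=" + req.getRequestURL());
    }
    public void init(FilterConfig cfg) {}
    public void destroy() {}
}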

Glassfish Cluster Session Problems On Amazon EC2 Using Elastic Load Balancer

First this app works perfectly fine in a non-clustered environment.
The problem we have occurs when the ELB routes first to one server in the cluster during a session and then to a second server: the second server can't find the session. For example:
An iOS app passes a login call to a GlassFish 4 server cluster (we're using OAuth/Facebook tokens, so no GlassFish security realms).
The Amazon Elastic Load Balancer (ELB) sends it to server 1.
The session is authenticated, the user is logged in, and a session cookie is passed back to the app.
Immediately the app sends another request which needs authentication (is this a valid session?).
The ELB decides to send the request to server 2.
In our authentication servlet filter, server 2 can't find a session with the ID passed in with the cookie.
The servlet says the user is not authenticated and the call fails.
Our code is pretty typical for finding the session (if no session immediately return fail):
HttpSession session = req.getSession(false); // do not create a new session
if (session == null) {
    log("no session on this node; treating request as unauthenticated"); // log and return fail
    resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    return;
}
log("session found; request is authenticated"); // log and continue
If the second call gets routed to the same server as the login, the second call works fine. Whenever a call (be it the second, third, fourth, whatever) goes to the second server, authentication fails because it can't find the session on the second server.
I'm looking to see if anyone has encountered something like this and how you have resolved the issue. Is it better to use sticky sessions on the ELB, or is an Apache web server using JK or AJP a better choice?
Two potential issues off the top of my head:
Have you specified <distributable/> in your web.xml?
Could be a multicast issue. EC2 does not support multicast, which is what GlassFish uses by default. Check out this stackoverflow thread that discusses the topic, including non-multicast clustering.
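One more thing worth checking alongside <distributable/>: session replication only works if everything placed into the session is serializable. A small illustrative sketch (LoginInfo is a made-up class standing in for whatever your filter stores at login):

// With <distributable/>, the container replicates session state between cluster
// nodes, so every object stored in the session must implement Serializable.
public class LoginInfo implements java.io.Serializable {
    private static final long serialVersionUID = 1L;
    private final String userId;
    public LoginInfo(String userId) { this.userId = userId; }
    public String getUserId() { return userId; }
}

// In the login code, after the Facebook/OAuth token has been validated:
req.getSession(true).setAttribute("loginInfo", new LoginInfo(userId));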

Single Sign-On between 2 different platforms but on the same domain

I'm in the process of rearranging our web-based systems so that users will be able to log on to our systems through a SharePoint front end. Our single sign-on server is an Oracle SSO server that authenticates against the same domain as the SharePoint server does, but these are currently 2 separate logins.
What I'm looking for is to configure this scenario:
A user logs in to the SharePoint site, authenticating against Active Directory through the TMG. This gives the user access to the SharePoint site, and this is all standard OOTB functionality. The user should then be able to navigate into our other systems without re-logging in (because the SSO is configured for external authentication against the same AD and therefore uses the same user base).
So basically the users currently have to log in twice with the same domain\user + password. I would like the SSO server to be able to read the cookie that was established at the first login and use that instead of presenting the SSO login screen all over again.
Is it possible to share such a cookie between 2 different platforms on the same domain?
I have implemented a Kerberos "zero sign-on" approach for the Oracle SSO server, but this only works as long as the user comes from a computer inside our domain. When the user logs on from the outside world (www), he will be prompted to log in to SharePoint first, and then to the Oracle SSO.
I basically need the Oracle SSO server to somehow read the SharePoint cookie that was established. Does this make sense?
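For what it's worth, whether two platforms can see the same cookie comes down to the cookie's Domain (and Path) attributes: a cookie scoped to the common parent domain is sent to every host under it. A hedged sketch of the mechanism (host names, the cookie name, and the token format are made up, and whether Oracle SSO can be taught to validate such a token is a separate question):

// A cookie scoped to the shared parent domain is sent to both
// sharepoint.example.com and sso.example.com; a host-only cookie is not.
Cookie shared = new Cookie("SHARED_AUTH_TOKEN", token); // token format is an assumption
shared.setDomain(".example.com"); // parent domain covering both platforms
shared.setPath("/");
shared.setSecure(true);
shared.setHttpOnly(true);
response.addCookie(shared);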

Is it possible to connect to a site through a proxy and then disconnect in the same session?

I was wondering if it is possible to log into a site with the normal login form (take Facebook, for example) through a proxy server. Once logged in, can a person disconnect from the proxy and use their normal ISP connection to access the members area of the site without logging in again?
Thanks!
It depends on how the site manages state. If sessions are tied to a particular remote host, then no. Otherwise, there's nothing preventing it. Typically a session is managed via cookies or something similar that the browser sends with every request. Thus whether or not the proxy is there is irrelevant to the maintenance of the session state.
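To make the "tied to a particular remote host" case concrete, this is roughly what such a check looks like server-side (a sketch; the attribute name is made up). A site doing this would drop your session when you leave the proxy, because the observed IP changes; a site relying on the cookie alone would not notice:

// Sketch of a server that pins a session to the IP address it was created from.
HttpSession session = request.getSession(true);
String boundIp = (String) session.getAttribute("boundIp");
if (boundIp == null) {
    session.setAttribute("boundIp", request.getRemoteAddr()); // first request: remember the IP
} else if (!boundIp.equals(request.getRemoteAddr())) {
    session.invalidate(); // IP changed (e.g. the proxy was dropped): session is rejected
    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    return;
}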
