Sharing security context across multiple glassfish servers - session

My application is distributed in multiple components (Web Applications).
The components are deployed on different glassfish servers.
Each Glassfish server is running on a different host.
I'm using the provided Security Realm for authentication.
Is there a way for a user who is already authenticated on server X to avoid authenticating again on server Y (single sign-on)?
I was looking into session replication. But if I understand clustering correctly, that would mean deploying the applications to the whole cluster (every instance). What I need is a physically distributed solution.
My reason for this setup is not load balancing or high availability; it is a customer requirement.
Any ideas or workarounds? Thanks!

This is an area where products like Oracle Access Manager come in, providing single sign-on across multiple services. Oracle GlassFish Server (the commercial product that includes GlassFish Server Control features) has a JSR 196 JAAS provider for Oracle Access Manager. Check out the How-To document on setting it up.
Hope this helps.
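If a full access-management product is not an option, one common workaround is to have server X issue a signed, expiring token after login that server Y verifies instead of re-authenticating. Below is a minimal sketch of that idea using an HMAC over a shared secret; the class and method names are illustrative, not part of any GlassFish API, and a production setup would also need secure secret distribution and token transport (e.g. over HTTPS).

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a shared-secret SSO token: server X issues it after
// authenticating the user; server Y verifies it instead of asking
// the user to log in again.
public class SsoToken {
    private static final String ALG = "HmacSHA256";

    // Issue a token of the form "username:expiryMillis:signature".
    public static String issue(String user, long expiryMillis, byte[] sharedSecret) throws Exception {
        String payload = user + ":" + expiryMillis;
        return payload + ":" + sign(payload, sharedSecret);
    }

    // Verify signature and expiry; return the username, or null if invalid.
    public static String verify(String token, long nowMillis, byte[] sharedSecret) throws Exception {
        String[] parts = token.split(":");
        if (parts.length != 3) return null;
        String payload = parts[0] + ":" + parts[1];
        if (!sign(payload, sharedSecret).equals(parts[2])) return null; // tampered
        if (Long.parseLong(parts[1]) < nowMillis) return null;          // expired
        return parts[0];
    }

    private static String sign(String payload, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance(ALG);
        mac.init(new SecretKeySpec(secret, ALG));
        // URL-safe Base64 contains no ':' so the token splits cleanly.
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}
```

Each server that trusts the shared secret can then accept the token, which gives single sign-on without replicating sessions across hosts.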

Related

ColdFusion 2018 Standard 2 node cluster with J2EE shared sessions for failover

Why do we want this setup?
We would like to have a Blue/Green zero downtime setup for our CF2018 App.
We currently have a basic CF Server Install (IIS + CF2018) in one server that connects to another Server for the DB (we are using CF2018 Standard).
Our app uses J2EE sessions.
There are posts that explain how to use the External Session Storage feature included in CF (Redis), but that won't work with J2EE sessions; the CF admin interface won't allow it.
How can I set up two servers in a cluster (behind a load balancer) with J2EE session failover by using CF2018 Standard Edition?
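ColdFusion 2018 runs on an embedded Tomcat, so one approach that is sometimes discussed is enabling Tomcat's own in-memory session replication in the underlying server.xml rather than CF's External Session Storage. This is only a sketch, assuming you can edit the embedded Tomcat configuration; whether Adobe supports this on CF2018 Standard should be verified for your installation.

```xml
<!-- Sketch: Tomcat session replication between the two nodes.
     Placed inside the <Engine> element of the embedded server.xml.
     The multicast address/port are Tomcat's documented defaults. -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4" port="45564"/>
  </Channel>
</Cluster>
```

The application's web.xml also needs a `<distributable/>` element for Tomcat to replicate its sessions, and all session attributes must be serializable.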

Backing Services as attached resources

I was looking at the 12-factor app principles and saw this statement. I believe it means that the application must be able to connect to any backing service, such as a database or message broker, irrespective of what they are. How does this differ from the traditional way of connecting? For example, in my microservice I defined the database and Kafka broker as user-provided services in Cloud Foundry. That just provides the connection parameters as VCAP_SERVICES variables; I still have code to connect to a database and to a Kafka broker, which are entirely different. What does this statement signify, and how does it differ from what we do in a non-cloud environment?
As stated in the article at https://12factor.net/backing-services:
A backing service is any service the app consumes over the network as part of its normal operation. Examples include datastores (such as MySQL or CouchDB), messaging/queueing systems (such as RabbitMQ or Beanstalkd), SMTP services for outbound email (such as Postfix), and caching systems (such as Memcached).
A microservice can connect to any backing service irrespective of the platform. In PCF, you bind services to your microservice in order to connect. In other cloud environments, you can point to any backing service, such as AWS RDS or other services provided by the platform.
The real difference is this:
Backing services like the database are traditionally managed by the same systems administrators as the app’s runtime deploy. In addition to these locally-managed services, the app may also have services provided and managed by third parties. Resources can be attached and detached to deploys at will. For example, if the app’s database is misbehaving due to a hardware issue, the app’s administrator might spin up a new database server restored from a recent backup. The current production database could be detached, and the new database attached – all without any code changes.
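In code, the "attached resource" idea means the app reads a resource locator from configuration and never hard-codes which database it talks to, so swapping databases is a config change, not a code change. A minimal sketch (names and the example URL are illustrative):

```java
import java.net.URI;

// Sketch of an "attached resource": the connection details come from
// config (an environment variable, or VCAP_SERVICES in Cloud Foundry),
// and the app only ever sees a URL it can parse.
public class BackingService {
    public final String host;
    public final int port;
    public final String database;

    public BackingService(String url) {
        // e.g. "mysql://db1.example.com:3306/orders"
        URI u = URI.create(url);
        this.host = u.getHost();
        this.port = u.getPort();
        this.database = u.getPath().replaceFirst("/", "");
    }

    // Resolve from an environment variable, falling back to a default
    // (useful for local development).
    public static BackingService fromEnv(String var, String fallback) {
        String url = System.getenv(var);
        return new BackingService(url != null ? url : fallback);
    }
}
```

If the database misbehaves, the operator points the variable at a restored replacement and restarts the app; no code changes, exactly as the quoted passage describes.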

IBM Worklight session control with back-end server

I've got a question about IBM Worklight Server; thanks in advance for any comments on this subject.
Workflow:
User --> WorkLight Server --> Back-end Server
Scenario:
(1) Designed a mobile application with IBM Worklight Studio and deployed it to the Worklight server.
(2) The Worklight server must be the first entry point (user authentication against LDAP via the Worklight server).
(3) The mobile application is designed for downloading/uploading huge files (10 MB to 1 GB) to the back-end server behind the Worklight server (please refer to the workflow above).
Question(s):
(1) How can session sharing be done between the Worklight server and the back-end server?
(2) If session sharing cannot be done, what is the safest way for the mobile application to download/upload files to the back-end server, given that the application and the back-end server do not know it is the same authenticated transaction if they skip the Worklight server?
(3) Did I misunderstand anything about the Worklight server architecture? As far as I know, the Worklight server is just a kind of gateway and presentation layer for deploying a mobile application with its framework. Heavy and complex computation logic should be handled by another back-end server. As long as the Worklight server's network I/O is wide enough for file transfer, it shouldn't be a problem for the Worklight server to act as a gateway in this case. File transfer shouldn't burden the CPU with computation, as data simply flows in and out from the user's device to the back-end server (with Worklight as a middleman).
Thank you and sorry for the lengthy question.
(1) How can session sharing be done between the Worklight server and the back-end server?
Can I make the assumption that you will be using a worklight adapter in order to do your file transfers? In that case, a "session" between the adapter and the client will be created. To be a bit more specific, if I have a global variable stored in my adapter and I modify the variable based on the adapter call from the client, the state of the variable will be maintained upon subsequent requests, and the state of that variable will only be visible to the calling client. (Just a note, this is not always true in clustered environments, where the client may be calling adapters on separate worklight servers)
(2) If session sharing cannot be done, what is the safest way for the mobile application to download/upload files to the back-end server, given that the application and the back-end server do not know it is the same authenticated transaction if they skip the Worklight server?
As stated above, this can be achieved through adapters. Since adapters can be protected by authentication, the server knows that the client is calling within the same authenticated transaction.
(3) Did I misunderstand anything about the Worklight server architecture? As far as I know, the Worklight server is just a kind of gateway and presentation layer for deploying a mobile application with its framework. Heavy and complex computation logic should be handled by another back-end server. As long as the Worklight server's network I/O is wide enough for file transfer, it shouldn't be a problem for the Worklight server to act as a gateway in this case. File transfer shouldn't burden the CPU with computation, as data simply flows in and out from the user's device to the back-end server (with Worklight as a middleman).
You hit the nail on the head. I haven't personally done any performance testing with a high amount of computations on the worklight server, but if it is being used simply as a passthrough then you should be fine. The worklight server has been known to be able to process a relatively high amount of concurrent adapter calls, so I believe you will be fine with your setup.

web server on top of app server

Suppose we are given a scenario where a web server is put in front of an application server. In what ways can this web server help:
1) with a non-distributed enterprise application (EAR)?
2) with a distributed enterprise application (EAR)?
3) with a web application (WAR)? I think in this case we do not require an app server.
In a multi-tier architecture, the app server provides the business-rules and data-access layers. This way, different front-ends can be used against the one app server, e.g. web, mobile, or native.

Microsoft SQL Server on a VPS for hosting multiple client databases - Is this the right way to go?

Good morning,
I have found that many of my customers have MS Access already installed on their PCs. Although Access is very limited as a data store, I have found that it is great for deploying low-cost front-ends for entry level customers.
I want to start renting a VPS, so I can host customer databases using Microsoft SQL Server 2008, which they can access using a locally stored Access front-end. I do have a few questions though:
In order to access the remotely hosted databases, and use the security features, would the VPS need to be set up as a domain controller, using AD DS? If I am hosting multiple customer databases, this is not an option.
What I envisage is being able to set up a simple MS Access front end, to access a MS SQL Server database on my VPS. For security, I would want the database to use the Windows account on the client machine to authenticate, and also to provide basic data change tracking.
Is this possible? Or, will I need to set up a server for each client and have it configured as a domain controller, etc?
You can have many databases on the same server, so you do not need to set up a separate domain controller for each client. Only the connection strings will be different.
You can use SSL when establishing the connection with the remote server to make the process more secure. You could also create a few web services to work with the data (CRUD operations); this would also make things more manageable.
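To make "only the connection strings will be different" concrete: with SQL Server authentication (rather than Windows/AD authentication, which would need that domain controller), each client gets its own database and login on the one instance. A small sketch, shown as a JDBC-style string for illustration; an Access front end would use the equivalent ODBC settings, and the host and credential names here are placeholders.

```java
// Sketch: one SQL Server instance on the VPS, one database per client,
// SQL Server authentication instead of Windows/AD authentication.
// Each client differs only in database name and credentials.
public class ClientConnections {
    private final String host;

    public ClientConnections(String host) {
        this.host = host;
    }

    public String connectionString(String database, String user) {
        return "jdbc:sqlserver://" + host
             + ";databaseName=" + database
             + ";user=" + user
             + ";encrypt=true"; // TLS to the remote VPS
    }
}
```

Keeping the differences in configuration like this means onboarding a new client is just a new database plus a new entry, with no per-client server setup.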
take care :)
