IBM Worklight session control with back-end server

I have a question about IBM Worklight Server; thanks in advance for any comments on this subject.
Workflow:
User --> WorkLight Server --> Back-end Server
Scenario:
(1) Designed a mobile application with IBM Worklight Studio and deployed it to the Worklight server.
(2) The Worklight server must be the first entry point (user authentication against LDAP via the Worklight server).
(3) The mobile application is designed for downloading/uploading large files (10 MB to 1 GB) to the back-end server behind the Worklight server (see the workflow above).
Question(s):
(1) How can session sharing be done between the Worklight server and the back-end server?
(2) If session sharing cannot be done, what is the safest way for the mobile application to download/upload files to the back-end server, given that the application and back-end server cannot tell it is the same authenticated transaction once the Worklight server is skipped?
(3) Have I misunderstood anything about the Worklight server architecture? As far as I know, the Worklight server is essentially a gateway and presentation layer for deploying mobile applications built with its framework, and heavy or complex computation should be handled by another back-end server. As long as the Worklight server's network I/O is wide enough for file transfer, it should not be a problem for it to act as a gateway in this case; file transfer should not burden the CPU, since data simply passes through from the user's device to the back-end server (with Worklight as a middleman).
Thank you and sorry for the lengthy question.

(1) How can session sharing be done between the Worklight server and the back-end server?
Can I assume that you will be using a Worklight adapter to do your file transfers? In that case, a "session" between the adapter and the client will be created. To be a bit more specific, if I have a global variable stored in my adapter and I modify that variable based on the adapter call from the client, its state will be maintained across subsequent requests, and that state will only be visible to the calling client. (Just a note: this is not always true in clustered environments, where the client may be calling adapters on separate Worklight servers.)
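For illustration, here is a minimal sketch of that pattern in a JavaScript HTTP adapter; the procedure and variable names are hypothetical, and the actual transfer to the back end would typically go through WL.Server.invokeHttp or a similar adapter call.

```javascript
// Hypothetical adapter-side procedures. The variable below is held per client
// session by the Worklight server, so its value survives across calls from the
// same authenticated client (but not necessarily across members of a cluster).
var transferState = {};

function startTransfer(fileName, totalChunks) {
    transferState = { file: fileName, expected: totalChunks, received: 0 };
    return { ok: true };
}

function uploadChunk(chunkIndex, chunkData) {
    // Forward the chunk to the back-end server here, e.g. with WL.Server.invokeHttp(...)
    transferState.received++;
    return { progress: transferState.received + '/' + transferState.expected };
}
```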
(2) If session sharing cannot be done, what is the safest way for the mobile application to download/upload files to the back-end server, given that the application and back-end server cannot tell it is the same authenticated transaction once the Worklight server is skipped?
As stated above, this can be achieved through adapters. Since adapters can be protected by authentication, the adapter knows that the client is calling within the same authenticated transaction.
(3) Have I misunderstood anything about the Worklight server architecture? As far as I know, the Worklight server is essentially a gateway and presentation layer for deploying mobile applications built with its framework, and heavy or complex computation should be handled by another back-end server. As long as the Worklight server's network I/O is wide enough for file transfer, it should not be a problem for it to act as a gateway in this case; file transfer should not burden the CPU, since data simply passes through from the user's device to the back-end server (with Worklight as a middleman).
You hit the nail on the head. I haven't personally done performance testing with a high amount of computation on the Worklight server, but if it is being used simply as a pass-through then you should be fine. The Worklight server is known to handle a relatively high number of concurrent adapter calls, so I believe your setup will work.


Using SSL/TLS slows down the performance of a Java-based client-server application

I am working on a trading application where we get streaming rates from the server within seconds or even milliseconds. The server and database run on the same Linux environment. The client is accessible from any location using JNLP/Java Web Start: the user downloads the JNLP file from a provided URL, launches the client application on their system, and connects directly to the server regardless of where the server is located.
The problem is that when we are not using SSL, the application runs just fine, but with SSL a slowness is observed that drastically reduces the performance of the client application. However, the same thing is not observed when the client and server are both running on the same machine (Windows). Does anyone have an idea of possible reasons for this slowness?

Backing Services as attached resources

I was looking at the 12-factor app principles and saw this statement. I believe it says that the application must treat any backing service, such as a database or message broker, as an attached resource and connect to it irrespective of what it is. How does this differ from the traditional way of connecting? For example, in my microservice I defined the database and the Kafka broker as user-provided services in Cloud Foundry. That just provides the connection parameters as VCAP service variables; I still have code to connect to the database and to the Kafka broker, which are entirely different. What does this statement signify, and how does it differ from what we do in a non-cloud environment?
As stated in the article at https://12factor.net/backing-services:
A backing service is any service the app consumes over the network as part of its normal operation. Examples include datastores (such as MySQL or CouchDB), messaging/queueing systems (such as RabbitMQ or Beanstalkd), SMTP services for outbound email (such as Postfix), and caching systems (such as Memcached).
A microservice can connect to any backing service irrespective of the platform. In PCF, you bind services to your microservice in order to connect. In other cloud environments, you can point to any backing service, such as AWS RDS or other services provided by the platform.
The real difference is this:
Backing services like the database are traditionally managed by the same systems administrators as the app’s runtime deploy. In addition to these locally-managed services, the app may also have services provided and managed by third parties. Resources can be attached and detached to deploys at will. For example, if the app’s database is misbehaving due to a hardware issue, the app’s administrator might spin up a new database server restored from a recent backup. The current production database could be detached, and the new database attached – all without any code changes.
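To make the "attached resource" idea concrete, here is a minimal sketch (Node.js, hypothetical variable names): the code only knows a locator and credentials read from its environment, so swapping the attached database or broker is a configuration change, not a code change.

```javascript
// The app consumes each backing service through a URL/credentials taken from
// the environment; nothing in the code is tied to a particular instance.
const databaseUrl = process.env.DATABASE_URL;      // e.g. mysql://user:pass@db-host:3306/app
const brokerUrl   = process.env.KAFKA_BROKER_URL;  // e.g. kafka-broker.internal:9092

// In Cloud Foundry the same information arrives through VCAP_SERVICES:
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');

// Detaching a misbehaving database and attaching a restored one then only means
// redeploying with a different DATABASE_URL / service binding, not editing code.
```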

Automatically publish internal web application

I have written a web application that is typically installed internally by customers (based on IIS/MS SQL Server).
When a customer wants to provide external access to the application, we offer the following supported scenarios:
Publish the application in their DMZ (pretty standard deployment).
Use our own platform where we host the application in our own cloud infrastructure for them.
However, because I have more and more customers who misunderstand the requirements for publishing an internal application, I would like to add a "one-click" way of providing that service.
My idea is to have a reverse proxy installed on the customer's web server that connects to a cloud server we control. When the application starts, it connects to our server, authenticates, and maintains the connection. When users want to use the application, they use a URL that directs them to our server (say https://myapp.mycompany.org/CustomerID or https://CustomerID.myapp.mycompany.org). The server then looks up the list of reverse-proxy connections to find the one matching the customer ID and, if found, uses that connection to relay the end-user connection (see the sketch below).
In essence, that is the same thing as what Azure Application proxy or TeamViewer do, only without the need for using Azure AD or TeamViewer.
Is there an existing framework I can use for building such a service? I know I can write it on my own, but that is quite a large development effort.
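For what it's worth, here is a minimal sketch of the connection lookup described above (Node.js with the 'ws' package; all names are hypothetical). It only shows the registry and the routing by customer ID; a real implementation would also have to correlate request/response pairs, stream bodies, and authenticate the agents.

```javascript
const http = require('http');
const WebSocket = require('ws');   // npm package 'ws'

const agents = new Map();          // customerId -> persistent WebSocket from the on-premises agent

const server = http.createServer((req, res) => {
  // e.g. customerid.myapp.mycompany.org -> "customerid"
  const customerId = (req.headers.host || '').split('.')[0];
  const agent = agents.get(customerId);
  if (!agent) {
    res.writeHead(502);
    return res.end('No tunnel registered for ' + customerId);
  }
  // A real relay would correlate request/response pairs and stream bodies;
  // here we only forward the request line to show the lookup.
  agent.send(JSON.stringify({ method: req.method, url: req.url }));
  res.writeHead(202);
  res.end('Relayed to ' + customerId);
});

// On-premises agents connect outbound, e.g. wss://relay.mycompany.org/?customer=acme
new WebSocket.Server({ server }).on('connection', (ws, req) => {
  const customerId = new URL(req.url, 'http://relay').searchParams.get('customer');
  agents.set(customerId, ws);
  ws.on('close', () => agents.delete(customerId));
});

server.listen(8080);
```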

Web is external and a client-side application; how to make an AJAX call reach the internal app server

We have this architecture:
Web Server: Web Application is deployed (html, javascript, css)
Application Server: WebApi is deployed
The problem is, I cannot make an AJAX request reach the application server because it is behind a firewall.
The web application is supposed to be publicly available to internet users.
What changes should we make to get this to work?
Should we move our web application to the application server? But then how would it be accessible on the internet?
Thanks in advance for suggestions/advice.
You're going to have to put an exception in the firewall for the address of your web server... that way your web server can access the API but nothing else can (well, not quite nothing else: other things on that web server can too, but that can easily be solved by hosting your web app on its own dedicated web server).
If your web application makes direct calls to the Web API endpoint (e.g. it is a single-page application that uses a client-side JavaScript framework like AngularJS and/or it makes AJAX calls to your application server's address), there is no way for your clients to access your API unless you allow public access to your application server.
That's because your client resides inside your users' web browsers.
You would have to allow incoming connections from the internet to your application server in your firewall.
Well, it all depends on how you look at things and how distributed your application should be (criteria like load and security).
In general, the Web API might be just one more client (from your application server's perspective).
On the other hand, in a robust/distributed system, you would have the Web API only as an endpoint (controllers, mappers and the like) that your mobile/AJAX clients send requests to, and the Web API would then communicate with the application server (where your business logic is).
Having the Web API communicate with the DB directly is not a good idea, because as you add clients to the application server (MVC, Web API, services, etc.) you end up with as many DB access points as you have clients. So it becomes a code-maintenance problem, plus your view tier is now aware of the DB.
Ideally, you want the application server as the tier where all your business logic lives; it is the one that all your clients target (MVC web app, Web API, desktop, services, etc.) and the one that should communicate with your DAL. You can then set firewall rules on your application server to allow incoming traffic only from trusted sources (your other servers) instead of from the whole internet (AJAX).
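As a minimal sketch of that setup (hypothetical host names; Node.js/Express used purely for illustration, with Node 18+ assumed for the built-in fetch): the publicly reachable web server exposes /api/* and relays each call to the internal application server, so the firewall only has to allow the web server's address rather than the whole internet.

```javascript
// Minimal relay sketch: browsers call the public web server, which forwards
// each request to the internal application server behind the firewall.
const express = require('express');
const app = express();

// Only reachable from inside the network / from the web server's address.
const INTERNAL_API = 'http://app-server.internal:8080';

app.use('/api', express.text({ type: '*/*' }), async (req, res) => {
  try {
    const upstream = await fetch(INTERNAL_API + req.originalUrl, {
      method: req.method,
      headers: { 'content-type': req.headers['content-type'] || 'application/json' },
      body: ['GET', 'HEAD'].includes(req.method) ? undefined : req.body,
    });
    res.status(upstream.status).send(await upstream.text());
  } catch (err) {
    res.status(502).send('Application server unreachable');
  }
});

app.listen(3000);
```

With this arrangement the browser only ever talks to the public web server, and the application server never has to be exposed to the internet.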

Sharing security context across multiple glassfish servers

My application is distributed in multiple components (Web Applications).
The components are deployed on different glassfish servers.
Each Glassfish server is running on a different host.
I'm using the provided Security Realm for authentication.
Is there a way, that a user that is already authenticated on server x, doesn't need to authenticate again on server y (single-sign-on)?
I was looking into session replication. But if I understand clustering correctly, this means I would have to deploy the applications to the whole cluster (each instance). What I need is a physically distributed solution.
My reason for this setup is not load balancing or high availability. This is a customer demand.
Any ideas or workarounds? Thanks!
This is an area where products like Oracle Access Manager come in, for single sign-on across multiple services. Oracle GlassFish Server (the commercial product that includes GlassFish Server Control features) has a JSR 196 JAAS provider for Oracle Access Manager. Check out the How-To document on setting it up.
Hope this helps.
