Application-level caching of XACML authorization details from WSO2 IDP

We are working on an application where we will create and store XACML policies in a WSO2 server for authorization.
We are looking for the best way to authorize a user whenever they try to access anything in the application, and we are not sure how much of a performance impact this approach will have.
One way to deal with this is to fetch all of the user's details from the IDP at login time, so we can cache them at the application level and do not have to make a trip to the WSO2 IDP every time the user performs an action. This may slow down login, but from there on the rest of the application experience will be fast.
We just wanted to confirm: is this the correct approach? Is there any issue with this design, or is there a better way?

I think this is not the correct approach, especially when we are talking about attribute-based access control (ABAC) and the attributes change frequently.
Also, when you are doing policy evaluation it is better to let the PIP fetch the required attributes instead of the application sending all attributes, and furthermore you can use caching on the WSO2 IS side for XACML policy decisions or attributes.
Apart from that, for better performance you may implement your PEP as Thrift-based. We did the same implementation and ran a successful load test for one of our most heavily used applications. A minimal sketch of a remote-decision PEP follows the list below.
I would not recommend caching on the application side, for the following reasons:
You still have to make a round trip for policy evaluation even if you cache attributes locally in the application.
Caching attributes locally inside the application defeats the purpose if the same policy is to be used by other applications in the future.
Letting the PIP fetch the required attributes on the WSO2 side is recommended, as it eases new application integrations: you do not need to worry about fetching attributes for every new application you integrate.
Caching can be done centrally on the WSO2 IS server instead of applying a cache at each application level.
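To illustrate, here is a minimal sketch of a PEP that delegates each decision to the PDP instead of caching attributes locally. It assumes WSO2 IS exposes an XACML-JSON decision endpoint; the URL, credentials, attribute ids, and response handling below are placeholder assumptions, not the exact WSO2 API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hypothetical PEP: ask the remote PDP whether `subject` may perform `action` on `resource`.
public class EntitlementClient {

    private static final String PDP_URL =
            "https://idp.example.com/api/identity/entitlement/decision/pdp"; // assumed endpoint

    private final HttpClient http = HttpClient.newHttpClient();

    public boolean isPermitted(String subject, String action, String resource) throws Exception {
        // XACML request in the JSON profile; the PIP on the server side resolves everything else.
        String body = """
                {"Request": {
                  "AccessSubject": [{"Attribute": [{"AttributeId": "subject-id", "Value": "%s"}]}],
                  "Action":        [{"Attribute": [{"AttributeId": "action-id",  "Value": "%s"}]}],
                  "Resource":      [{"Attribute": [{"AttributeId": "resource-id","Value": "%s"}]}]
                }}""".formatted(subject, action, resource);

        HttpRequest request = HttpRequest.newBuilder(URI.create(PDP_URL))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " +
                        Base64.getEncoder().encodeToString("user:password".getBytes())) // placeholder
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body().contains("\"Decision\":\"Permit\""); // naive parsing for brevity
    }
}
```

Because the application only ships the subject, action, and resource ids, the PIP on the WSO2 side stays free to fetch any further attributes (roles, department, and so on), and caching of decisions or attributes can be enabled centrally at the server.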
P.S. These are my personal views and opinions; they may not be perfect or the best fit for every requirement and business need.

Related

Why should rate-limiting logic be placed in application code rather than the web server?

I am exploring putting rate-limiting functionality on REST APIs that are developed using Spring Boot.
After going through many articles, I came to understand that the best place for rate-limiting functionality is in the application code rather than on the web server.
My question is: how do you decide which functionality should go where? Since it is monitoring incoming calls and has nothing to do with business logic, the ideal place would seem to be the web server.
Technically the web server could do the job, but in practice a web server does not necessarily have all the needed information, it is not specialized for API consumption, and it may also make this feature much harder to test.
Some practical reasons why the web-server side could be a bad choice:
Developers do not necessarily have the HTTP web server's configuration available locally.
You want to write unit and integration tests to check that the rate limitations are applied as specified. Creating a configuration for automated testing is much simpler in the scope of your Java application than with a configuration file defined on a web server.
Web servers reason in terms of HTTP request-response, not in terms of services.
Rate limitations may be applied according to the IP, but not only that: the username, the user roles, and the type of service may all influence the limitations. It is not certain you could get all of these easily from an HTTP server; roles, for example, are stored on the application side or in a database.
A better option is to implement these mechanisms with specific, specialized classes or configuration files, which simplifies reading, maintaining, and testing them.
As you mention Spring Boot in your tags, that and that should interest you.
I recommend spring-cloud-gateway's rate limiter.
You could also separate this functionality from your business logic by using filters, as sketched below:
https://www.baeldung.com/spring-boot-add-filter
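To make the filter approach concrete, here is a minimal sketch of a fixed-window rate limiter as a Spring Boot filter. It assumes Spring Boot 3 (the jakarta servlet API); the class name, the limit of 100 requests per minute, and keying on the client IP are illustrative choices, not a prescription.

```java
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import java.io.IOException;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Naive fixed-window limiter: at most MAX_REQUESTS per client IP per minute.
@Component
public class RateLimitFilter extends OncePerRequestFilter {

    private static final int MAX_REQUESTS = 100;

    private record Window(long minute, AtomicInteger count) {}

    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String clientKey = request.getRemoteAddr(); // could also key on username or role
        long nowMinute = Instant.now().getEpochSecond() / 60;

        // Reset the window when a new minute starts, otherwise reuse the counter.
        Window window = windows.compute(clientKey, (k, w) ->
                (w == null || w.minute() != nowMinute)
                        ? new Window(nowMinute, new AtomicInteger())
                        : w);

        if (window.count().incrementAndGet() > MAX_REQUESTS) {
            response.setStatus(429); // Too Many Requests
            response.getWriter().write("Rate limit exceeded");
            return;
        }
        chain.doFilter(request, response);
    }
}
```

Being plain application code, this is trivially unit-testable, and the key can draw on anything the application knows (user, role, service type), which is exactly the argument made above. For production, a tested library such as Bucket4j or spring-cloud-gateway's Redis-backed limiter is preferable, since those also work across multiple instances.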

Authorisation in microservices - how to approach domain object or entity level access control using ACL?

I am currently building a microservices-based system on Java Spring Cloud. Some microservices use PostgreSQL and some MongoDB. REST and JMS are used for communication. The plan is to use SSO and OAuth2 for authentication.
The challenge I am facing is that authorisation has to be done at the domain object/entity level, which means some kind of ACL (Access Control List) is needed. The best practice for this kind of architecture is to avoid this and have coarse-grained security, probably at the application/service layer in every microservice, but unfortunately that is not possible here.
My final idea is to use Spring Security ACL and keep the ACL tables in a database shared between all microservices. The database would be accessed only by Spring infrastructure or through the Spring API. The DB schema looks stable and is unlikely to change. In this case I would knowingly break the rule about not sharing a database between microservices. (A rough sketch of what this looks like in code follows.)
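For reference, a minimal sketch of the Spring Security ACL approach; the Document domain class and principal name are hypothetical, while the standard ACL tables (acl_class, acl_sid, acl_object_identity, acl_entry) would live in the shared database:

```java
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.acls.domain.BasePermission;
import org.springframework.security.acls.domain.ObjectIdentityImpl;
import org.springframework.security.acls.domain.PrincipalSid;
import org.springframework.security.acls.model.MutableAcl;
import org.springframework.security.acls.model.MutableAclService;

public class DocumentPermissions {

    private final MutableAclService aclService; // backed by the shared ACL tables

    public DocumentPermissions(MutableAclService aclService) {
        this.aclService = aclService;
    }

    // Grant READ on a single domain object to a single user.
    public void grantRead(Long documentId, String username) {
        MutableAcl acl = aclService.createAcl(
                new ObjectIdentityImpl(Document.class, documentId));
        acl.insertAce(acl.getEntries().size(), BasePermission.READ,
                new PrincipalSid(username), true);
        aclService.updateAcl(acl);
    }

    // Enforcement: Spring evaluates the ACL entry before the method runs.
    @PreAuthorize("hasPermission(#documentId, 'com.example.Document', 'READ')")
    public Document load(Long documentId) {
        return null; // fetch from the owning microservice's datastore
    }

    public record Document(Long id) {} // hypothetical domain class
}
```

Every microservice that enforces permissions this way needs read access to the shared ACL tables, which is exactly the trade-off weighed below.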
I considered several distributed alternatives but rejected them:
One microservice owning the ACL and accessed over REST. The problem is too many HTTP calls and the resulting performance degradation. I would also have to extend Spring Security ACL to replace database access with REST calls.
An ACL in every microservice for its own entities. This sounds quite reasonable, but imagine read models of entities synchronised to other microservices, or the same entity existing in different bounded contexts (different microservices). The ACLs can become really unmanageable and a source of errors.
One microservice owning the ACL tables, synchronised to other microservices as read models. The problem is that Spring Security ACL has no support for MongoDB. I have seen some custom solutions on GitHub, and yes, it is doable. But when creating a new entity I would have to create a record in the microservice that owns the ACL and then have it asynchronously synchronised as a read model to the microservice owning the entity. That does not sound like an easy solution.
URL-based access control at the API gateway. I would have to modify Spring Security ACL somehow, the API gateway would have to know too much about the other services, and the granularity of access control would be bound to the granularity of the REST API. Maybe I cannot imagine all the consequences and other problems this approach would bring.
So the shared-database solution I mentioned is my favourite. It was actually the first one I disqualified, because it is a "shared" database, but after going through the possibilities it seems to me the only one that would work. There is some additional complexity if I want caching, because a distributed cache would be needed.
I would really appreciate some advice and opinions on how to approach this architecture, because it is tricky and a lot of things can go wrong here.
Many thanks,
Lukas
I don't have a full and clear picture of your authorization requirements, so I'm assuming a correlation between authenticated users and domain object/entity permissions.
One option to consider is to define user attributes corresponding to your domain object/entity permissions and implement an attribute-based access control (ABAC) policy.
The attributes are tied to and stored with the user's identity in your repository, and retrieved when performing authentication.
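A toy illustration of that idea, with hypothetical attribute names; a real ABAC engine (XACML, OPA, and the like) would evaluate richer rules over subject, resource, and environment attributes:

```java
import java.util.Map;
import java.util.Set;

public class AbacExample {

    // Attributes loaded from the identity repository at authentication time.
    record AuthenticatedUser(String id, Map<String, Set<String>> attributes) {}

    // Policy: a user may edit a project if their "editor-of" attribute lists it.
    static boolean canEdit(AuthenticatedUser user, String projectId) {
        return user.attributes()
                .getOrDefault("editor-of", Set.of())
                .contains(projectId);
    }

    public static void main(String[] args) {
        var alice = new AuthenticatedUser("alice",
                Map.of("editor-of", Set.of("project-42")));
        System.out.println(canEdit(alice, "project-42")); // true
        System.out.println(canEdit(alice, "project-7"));  // false
    }
}
```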
I think that nowadays a Google Zanzibar-based approach would be best suited for this.
While it does tie services closer to each other, because every ACL-related request must talk to the Zanzibar service to evaluate permissions, Google's paper on Zanzibar describes really well how they solved the problems of latency and eventual consistency (the "new enemy" problem).
This is pretty much the "shared database" approach, but with a problem-specific way of storing the data.
OSS implementations exist; see SpiceDB (which supports CockroachDB as a backend) or Ory Keto, for example.
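The core of the Zanzibar model is the relation tuple "user U has relation R on object O". A deliberately simplified in-memory illustration of a permission check follows; real implementations such as SpiceDB and Keto also resolve indirect relations (group membership, "editors are viewers" rewrites) and use consistency tokens:

```java
import java.util.Set;

public class ZanzibarToy {

    // A relation tuple, e.g. ("doc:readme", "viewer", "user:alice").
    record Tuple(String object, String relation, String user) {}

    // Direct-tuple check only; production systems walk the relation graph.
    static boolean check(Set<Tuple> tuples, String object, String relation, String user) {
        return tuples.contains(new Tuple(object, relation, user));
    }

    public static void main(String[] args) {
        Set<Tuple> tuples = Set.of(new Tuple("doc:readme", "viewer", "user:alice"));
        System.out.println(check(tuples, "doc:readme", "viewer", "user:alice")); // true
        System.out.println(check(tuples, "doc:readme", "viewer", "user:bob"));   // false
    }
}
```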
A shared DB is the best option, with two data sources: read-only (RO) and read-write (RW). RO is for regular lookups and RW is for creating and modifying ACLs. You can also consider storing the ACL in an index server for faster lookups. Finally, for speed, structure the ACL data in an easily accessible form so that fewer round trips are needed; ACL-based data access has this caveat. In a microservices approach, the way to access data subject to an ACL is to fetch the data first and then filter it based on the ACL.

Keep state between webapi calls

I need to send a piece of information to a user via a Web API only once per session; I used to do this in ASMX by storing a variable in the session.
Since I can't use sessions in Web API, how can I do this?
Started as a comment, but ended up being too long...
ASP.NET Web API is mainly used to create HTTP services and, as Microsoft claims, is an ideal platform for building RESTful applications on the .NET Framework. Such services are meant to be stateless, so what you're trying to do technically goes against a pretty fundamental design goal. Having said that, things are not as clear-cut as they seem, and there is some (almost religious) debate over whether a REST service should be stateless or allow some degree of state.
The following SO questions might give you some help and/or direction about achieving what you want:
ASP.NET Web API session or something?
If REST applications are supposed to be stateless, how do you manage sessions?
How to manage state in REST
Also, the following StrathWeb article gives some additional advice (with a code example) and links to other sources of information:
http://www.strathweb.com/2012/11/adding-session-support-to-asp-net-web-api/
In a project I'm currently working on, I have to store some state information for token-based user authentication and, since I have access to a database, I use a table to store the information I need. Technically speaking, and certainly for some people, I'm breaking the rules. But it works for me and, at the end of the day, you have a job to do and may not always have the time to do things 100% "correctly", so you have to be pragmatic in your approach.
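Applying the same idea to the original question, the "send once per session" flag can live in a small table keyed by a session or token id instead of in server-side session state. A language-agnostic sketch (shown in Java/JDBC for brevity; the table name, column, and PostgreSQL `ON CONFLICT` syntax are assumptions):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OncePerSession {

    // Returns true only the first time it is called for a given session/token id.
    // Relies on a unique constraint on notified_sessions.session_id.
    static boolean markNotified(Connection db, String sessionId) throws SQLException {
        try (PreparedStatement insert = db.prepareStatement(
                "INSERT INTO notified_sessions (session_id) VALUES (?) ON CONFLICT DO NOTHING")) {
            insert.setString(1, sessionId);
            return insert.executeUpdate() == 1; // one row inserted => first call wins
        }
    }
}
```

The caller sends the one-time information only when markNotified returns true; the unique constraint makes the check race-free without any in-memory session.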

using magento apis for ecommerce website

I am a beginner in Magento and am working on creating a website using it. I have noticed that Magento has a good number of APIs that expose all of the functionality I would need to create an ecommerce website. So I would like to use Magento's APIs to fetch data but develop the UI separately, without any dependencies on Magento. I have found a lot of references that develop the website via Magento theming, but none where the UI is developed in a separate MVC and uses Magento purely as a service layer. Are there any problems/issues with my approach?
Edit: I have gained a lot of clarity on the DB performance issue in the APIs and how external caching can alleviate it, but I still don't understand the underwhelming use of Magento as a service layer (i.e. fuelling a website purely through Magento's APIs). Are there any other gotchas?
Here is how we overcame slowness in the Magento APIs:
Created a web-service provider in J2EE/Spring MVC that acts as a proxy between Magento and end users.
The provider exposes pretty much all the APIs that Magento has, but also supports JSON over REST alongside SOAP and RPC.
It uses a document-oriented database (MongoDB) to store a snapshot of the product catalog.
It serves data straight from MongoDB, so no expensive SQL queries are run per request.
To avoid stale-cache issues, we created a hook in the Magento admin that pushes data into MongoDB whenever data changes in Magento.
This might sound like overkill to some, but we have been able to achieve pretty high throughput without any slowness.
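A minimal sketch of the read side of such a proxy with Spring MVC and Spring Data MongoDB; the endpoint, field, and collection names are assumptions:

```java
import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Serves catalog reads from the MongoDB snapshot instead of calling Magento.
@RestController
public class CatalogProxyController {

    private final MongoTemplate mongo;

    public CatalogProxyController(MongoTemplate mongo) {
        this.mongo = mongo;
    }

    @GetMapping("/products/{sku}")
    public ResponseEntity<Document> product(@PathVariable String sku) {
        Document product = mongo.findOne(
                Query.query(Criteria.where("sku").is(sku)),
                Document.class, "catalog_snapshot"); // collection name assumed
        return product != null ? ResponseEntity.ok(product)
                               : ResponseEntity.notFound().build();
    }
}
```

The Magento-side hook then overwrites the matching snapshot document whenever a product changes, which is what keeps this cache from going stale.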
The Magento APIs are slow; you would encounter serious performance issues trying to run a site off them.
Due to the complex nature of the EAV model, you may also find it difficult to manage products through the API alone.
Are there any particular concerns you have about using Magento's own frontend? It is daunting at first, but once you understand the layout system it is actually very powerful and customisable.
Technically it is possible to run a site only through the API.
The issue you might face is a practical one: instead of spending your time trying to learn all the API calls, you could learn how to implement your current UI in Magento.
The advantage of this approach is that you will also better understand how Magento works internally, allowing you to leverage its functionality for your unique business needs.
Another issue is that when using the APIs you have a little less control over how things are processed and calculated, whereas when working in Magento itself you have a lot of control over the specifics.
I regularly see "session expiration" issues when accessing Magento's API, through both SOAP and XMLRPC. All my calls require exception handling to avoid halting execution. I imagine that alone would create a nightmare when building everything on top of the API.
The best answer you're going to get is to load-test the API before you start coding. Log the tests extensively and look for errors. If you see errors on a regular basis, that should answer your question. Even if you find documentation that says it's okay to do what you're attempting, you're still going to have to tune the API to work properly under the load required to run the store.
It will be good to know what you're up against before sinking hours into development.

How to provision OSGi services per client

We are developing a web application (let's call it an image bank) for which we have identified the following needs:
The application caters to customers, each of which consists of a set of users.
A new customer can be created dynamically, and a customer manages its users.
Customers have different feature sets, which can be changed dynamically.
Customers can develop their own features and have them deployed.
The application is homogeneous and has a current version, but version upgrades can still be handled per customer.
The application should be managed as a whole, and customers share the resources, which should be easy to scale.
Question: Should we build this on a standard OSGi framework, or would we be better off using one of the emerging application frameworks (Virgo, Aries, or the upcoming OSGi standard)?
More background and some initial thoughts:
We're building a web app which we envision will soon have hundreds of customers (companies) with hundreds of users each (employees); otherwise, why bother ;). We want to make it modular, hence OSGi. In the future, customers themselves might develop and plug in components to their application, so we need customer isolation. We might also want different customers to get different feature sets.
What is the "correct" way to provide different service implementations to different clients of an application when the clients share the same bundles?
We could use the app-server approach (we've looked at Virgo) and load each bundle once for each customer into their own "app". However, that doesn't feel like embracing OSGi: we're not hosting a multitude of applications, and 99% of the services will share the same implementation for all customers. Also, we want to manage (configure, monitor, etc.) the application as one.
Each service could be registered (properly configured) once per customer along with some "customer token" property. It's a bit messy and would have to be handled with the extender pattern or perhaps a ManagedServiceFactory. Also, before registering the customer-A version of a service, one would need to acquire the A-version of each of its dependencies.
The "current" customer will be known to each request and can be bound to the thread. It's a bit of a mess having to supply a customer token each time you search for a service, and it makes it hard to use component frameworks like Blueprint. To get around the problem, we could use service hooks to proxy each registered service type and let the proxy dispatch to the right instance according to the current customer (thread).
Beginning our whole OSGi experience by implementing the workaround (hack?) above really feels like an indication that we're on the wrong path. So what should we do? Go back to Virgo? Try something similar to what's outlined above? Something completely different?!
ps. Thanks for reading all the way down here! ;)
There are a couple of aspects to a solution:
First of all, you need to find a way to configure the different customers you have. Building a solution on top of ConfigurationAdmin makes sense here, because then you can leverage the existing OSGi standard as much as possible. The reason you might want to build something on top is that ConfigurationAdmin allows you to configure each individual service, but you might want to add a layer on top so you can more conveniently configure your whole application (the assembly of bundles) in one go. Such a configuration can then be translated into the individual configurations of the services.
Adding a property to services that have customer-specific implementations makes a lot of sense. You can set them up using a ManagedServiceFactory, and the property makes it easy to look up the service for the right customer using a filter. You can even define a fallback scenario where you first look for a customer-specific service and then for a generic one (because not all services will be customer-specific). Since you need to explicitly add such filters to your dependencies, I'd recommend taking an existing dependency-management solution and extending it for your use case, so that dependencies automatically add the right customer-specific filters without you having to specify them by hand. I realize I might have to go into more detail here; just let me know. A small sketch of the property-plus-filter idea follows.
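Here is that idea in the plain OSGi API; the service interface, property name, and customer id are hypothetical:

```java
import java.util.Collection;
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

public class CustomerServices {

    interface ReportService {} // hypothetical service interface

    // Register a customer-specific implementation with a marker property.
    static void registerForCustomer(BundleContext context, ReportService impl, String customer) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("customer", customer);
        context.registerService(ReportService.class, impl, props);
    }

    // Look up the customer-specific service; fall back to a generic one if none exists.
    static ReportService lookup(BundleContext context, String customer)
            throws InvalidSyntaxException {
        Collection<ServiceReference<ReportService>> refs =
                context.getServiceReferences(ReportService.class, "(customer=" + customer + ")");
        if (refs.isEmpty()) {
            refs = context.getServiceReferences(ReportService.class, "(!(customer=*))");
        }
        return refs.isEmpty() ? null : context.getService(refs.iterator().next());
    }
}
```

A ManagedServiceFactory would create and register one such instance per configuration, so adding a customer becomes just a matter of adding a configuration.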
The next question then is how to keep track of the customer "context" within your application. Traditionally there are only a few options here, with a thread-local context being the most common. Binding threads to customers does tend to limit your implementation options, though: in general it probably means you have to prohibit developers from creating threads themselves, and it becomes hard to off-load certain tasks to pools of worker threads. It gets even worse if you ever decide to use Remote Services, as that means you will completely lose the context.
So, for passing the customer identification from one component to another, I personally prefer a solution where you:
Determine the customer ID as soon as the request comes in (for example, in your HTTP servlet).
Explicitly pass that ID down the chain of service dependencies.
Only use solutions like thread locals within the borders of a single bundle, for example when a third-party library inside your bundle needs one to keep track of the customer. A sketch of the explicit style follows this list.
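A sketch of that explicit style, with hypothetical names (the header carrying the customer id is an assumption):

```java
import javax.servlet.http.HttpServletRequest;

public class ExplicitContextExample {

    // Immutable context object handed explicitly down the call chain.
    record CustomerContext(String customerId) {}

    interface OrderService {
        void listOrders(CustomerContext ctx);
    }

    // The entry point determines the customer once, then passes it on explicitly;
    // no ThreadLocal is involved, so worker threads and remote calls are unaffected.
    void handle(HttpServletRequest request, OrderService orders) {
        CustomerContext ctx =
                new CustomerContext(request.getHeader("X-Customer-Id")); // header name assumed
        orders.listOrders(ctx);
    }
}
```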
I've been thinking about this same issue (I think) for some time now, and would like your opinions on the following analogy.
Consider a series of web applications where you provide access control using a single sign-on (SSO) infrastructure. The user authenticates once against the SSO server and, when a request comes in, the target web application asks the SSO server whether the user is (still) authenticated and determines itself whether the user is authorized. The authorization information might also be provided by the SSO server.
Now think of your application bundles as mini-applications. Although they're not web applications, would it still not make sense to have some sort of SSO bundle that uses SSO techniques to perform authentication and provide authorization information? Every application bundle would have to be developed or configured to use the SSO bundle to validate the authentication (SSO token), and to validate authorization by asking the SSO bundle whether the user is allowed to access that application bundle.
The SSO bundle maintains some sort of session repository and also provides user properties, e.g. information identifying the data repository (of some sort) of this user. This way you also wouldn't pass around a (meaningful) "customer service token", but rather a cryptic SSO token that is supplied and managed by the SSO bundle.
Please note that Virgo is an OSGi container based on Equinox, so if you don't want to use any Virgo-specific features, you don't have to. However, you'll get lots of benefits if you do use Virgo, even for a basic OSGi application. It sounds, though, like you want web support, which comes out of the box with Virgo Web Server and will save you the trouble of cobbling it together yourself.
Full disclosure: I lead the Virgo project.
