Authorisation in microservices - how to approach domain object or entity level access control using ACL?

I am currently building a microservices-based system on Java Spring Cloud. Some microservices use PostgreSQL and some MongoDB. REST and JMS are used for communication. The plan is to use SSO and OAuth2 for authentication.
The challenge I am facing is that authorisation has to be done on the domain object/entity level, which means some kind of ACL (Access Control List) is needed. The best practice for this kind of architecture is to avoid fine-grained checks like this and keep security coarse-grained, probably at the application/service layer of every microservice, but unfortunately that is not possible here.
My final idea is to use Spring Security ACL and keep the ACL tables in a database shared between all microservices. The database would be accessed only by Spring infrastructure or through the Spring API. The DB schema looks stable and is unlikely to change. In this case I would simply break the rule about not sharing a database between microservices.
I considered several kinds of distributed solutions but rejected them:
One microservice owning the ACL, accessed over REST - the problem is too many HTTP calls and the resulting performance degradation. I would also have to extend Spring Security ACL to replace DB access with REST calls.
An ACL in every microservice for its own entities - sounds quite reasonable, but imagine read models of entities synchronised to other microservices, or the same entity existing in different bounded contexts (different microservices). The ACLs could become really unmanageable and a source of errors.
One microservice owning the ACL tables, synchronised to the other microservices as read models. The problem is that Spring Security ACL has no support for MongoDB. I have seen some custom solutions on GitHub, so yes, it is doable. But when creating a new entity I would have to create a record in the microservice that owns the ACL and wait for it to be asynchronously synchronised as a read model to the microservice owning the entity. It does not sound like an easy solution.
URL-based access control on the API gateway. I would still have to modify Spring Security ACL somehow, the API gateway would have to know too much about the other services, and the granularity of access control would be bound to the granularity of the REST API. Maybe I cannot even imagine all the consequences and other problems this approach would bring.
Finally, the shared-DB solution I mentioned is my favourite. It was actually the first one I disqualified, because of the "shared" database. But after going through the possibilities it seems to me it is the only one that would work. There is some additional complexity if I want caching, because a distributed cache would be needed.
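To make it concrete, the kind of method-level check I have in mind with Spring Security ACL would look roughly like this (a minimal sketch: the Contract type is made up, and it assumes an AclPermissionEvaluator is wired into method security):

```java
import org.springframework.security.access.prepost.PostAuthorize;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

// Sketch: hasPermission(...) routes to AclPermissionEvaluator, which reads the
// shared ACL tables (ACL_SID, ACL_CLASS, ACL_OBJECT_IDENTITY, ACL_ENTRY).
@Service
public class ContractService {

    @PostAuthorize("hasPermission(returnObject, 'READ')")
    public Contract findContract(Long id) {
        return loadFromOwnDatabase(id); // this service's own PostgreSQL/MongoDB
    }

    @PreAuthorize("hasPermission(#contract, 'WRITE')")
    public void updateContract(Contract contract) {
        // apply changes...
    }

    private Contract loadFromOwnDatabase(Long id) {
        return new Contract(id);
    }
}

record Contract(Long id) {} // made-up domain type for illustration
```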
I could really use some advice and opinions on how to approach this architecture, because it is really tricky and a lot of things can go wrong here.
Many thanks,
Lukas

I don't have a full and clear picture of your authorization requirements.
I'm assuming a correlation between authenticated users and domain object/entity permissions.
One option to consider is to define user attributes corresponding to your domain object/entity permissions, and implement an Attribute-based Access Control (ABAC) policy.
The attributes are tied to and stored with the user's identity in your repository, and retrieved when performing authentication.
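A minimal sketch of that idea, assuming the attributes are loaded as granted authorities in a made-up "type:id:action" format during authentication, so a custom PermissionEvaluator can check them without further lookups:

```java
import java.io.Serializable;

import org.springframework.security.access.PermissionEvaluator;
import org.springframework.security.core.Authentication;

// Sketch only: permissions such as "invoice:42:READ" are assumed to have been
// attached to the Authentication at login, so every check is purely local.
public class AttributePermissionEvaluator implements PermissionEvaluator {

    @Override
    public boolean hasPermission(Authentication auth, Object target, Object permission) {
        return false; // object-based variant omitted in this sketch
    }

    @Override
    public boolean hasPermission(Authentication auth, Serializable targetId,
                                 String targetType, Object permission) {
        String required = targetType.toLowerCase() + ":" + targetId + ":" + permission;
        return auth.getAuthorities().stream()
                   .anyMatch(granted -> granted.getAuthority().equals(required));
    }
}
```

With method security enabled, a service method could then be guarded with @PreAuthorize("hasPermission(#id, 'invoice', 'READ')").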

I think nowadays a Google Zanzibar-based approach would be best suited for this.
While it ties services closer to each other - because every ACL-related request must talk to the Zanzibar service to evaluate permissions - Google's paper on Zanzibar describes really well how they solved the problems of latency and eventual consistency (the "new enemy" problem).
This is pretty much the "shared database" approach, but with a problem-specific way of storing the data.
OSS implementations exist: see SpiceDB (which supports CockroachDB as a backend) or Ory Keto, for example.
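To illustrate the model, here is a toy sketch of Zanzibar's relation tuples (this is not the API of SpiceDB or Ory Keto, just the core idea):

```java
import java.util.HashSet;
import java.util.Set;

// Zanzibar stores permissions as relation tuples "object#relation@subject";
// a check asks whether such a tuple exists. Userset-rewrite rules (e.g.
// "editors are also viewers") and the consistency machinery are omitted.
public final class RelationTupleStore {

    public record Tuple(String object, String relation, String subject) {}

    private final Set<Tuple> tuples = new HashSet<>();

    public void write(String object, String relation, String subject) {
        tuples.add(new Tuple(object, relation, subject));
    }

    // e.g. check("doc:readme", "viewer", "user:lukas")
    public boolean check(String object, String relation, String subject) {
        return tuples.contains(new Tuple(object, relation, subject));
    }
}
```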

A shared DB is the best option, with two data sources: a read-only (RO) one for regular lookups and a read-write (RW) one for creating and modifying ACL entries. You could also store the ACL in an index server for faster lookups. One last note on performance: model the ACL data in an easily accessible fashion so that fewer round trips are needed; ACL-based data access in particular has this caveat. In a microservices approach, the usual way to access data subject to an ACL is to first fetch the data and then filter it based on the ACL.
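A minimal sketch of the two data sources in Spring Boot (the property prefixes are made up; URLs, credentials, etc. would live under them in the configuration):

```java
import javax.sql.DataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: one read-only pool for ACL lookups, one read-write pool for
// creating and modifying ACL entries.
@Configuration
public class AclDataSourceConfig {

    @Bean
    @ConfigurationProperties("acl.datasource.ro")
    public DataSource aclReadOnlyDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("acl.datasource.rw")
    public DataSource aclReadWriteDataSource() {
        return DataSourceBuilder.create().build();
    }
}
```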

Related

Prepare audit events based on domain models

I have an application which acts as a proxy between different systems, without its own database. There are a few use cases covered by the application:
Display data from specific system or systems
Store data to specific system or systems
Actually, this application has its own front-end and back-end (a Spring Boot and Angular stack). The back-end is responsible for getting/putting data from/to the external systems, and the front-end communicates only with the back-end and knows nothing about the external systems. The back-end also follows a hexagonal architecture and has its own defined domain models.
Currently there is a requirement to cover auditing for business use cases related to the application. For instance, if a user goes to some feature of the application and makes some changes there, it should be audited.
I've googled this topic but I only found entity-based auditing like this: https://docs.spring.io/spring-data/jpa/docs/1.7.0.DATAJPA-580-SNAPSHOT/reference/html/auditing.html. For my case I would need something similar, but based on domain models rather than on entities.
Could you please recommend some direction to cover this? Which library (or similar) can be used to take the state of a domain model and prepare audit events from it? I've found something like this https://logging.apache.org/log4j-audit/latest/gettingStarted.html, but I am really not sure if it is the right way to go.
I would say you can build your own auditing strategy based on events.
Let us take the example you gave: "if a user goes to some feature of the application and makes some changes there, it should be audited".
I assume you have a service that handles these requests from a REST API or something similar. That same service would not only communicate with the external systems but would also publish an event with, say, information about the user and the performed changes (here you can rely on Redis, for example, but there are other options like RabbitMQ or even Kafka, depending on how reliable you want your auditing feature to be).
Then you would have another component of your app listening for these events and storing them in a database (I guess that is the purpose). Or you could even have a separate microservice just for this, depending on how complex the auditing system is meant to be.
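A minimal sketch of that flow using Spring's in-process events (all names here are made up; the listener body is where you would persist the event or hand it to Redis/RabbitMQ/Kafka):

```java
import java.time.Instant;

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// The audit event carries the domain model's state, not a JPA entity.
record AuditEvent(String user, String action, Object domainModel, Instant at) {}

@Service
class DocumentService {

    private final ApplicationEventPublisher events;

    DocumentService(ApplicationEventPublisher events) {
        this.events = events;
    }

    public void updateDocument(String user, Object document) {
        // ... call the external system first ...
        events.publishEvent(new AuditEvent(user, "DOCUMENT_UPDATED", document, Instant.now()));
    }
}

@Component
class AuditListener {

    @EventListener
    public void on(AuditEvent event) {
        // persist the event, or forward it to a broker / dedicated audit service
    }
}
```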
If you want something more "magical" and automated, you can take a look at the Spring Data auditing code to see how it is implemented, but you might end up building an over-engineered solution.

What is a well-documented caching strategy pattern for a microservice architecture dealing with legacy systems?

I'm building a microservices architecture that should deal with:
Direct database access
Calls to external legacy services
I can think of 2 caching strategies, but can't figure out which is best, considering that I will not have control over what other people do across the layers.
Caching at application level (@Cacheable)
I only provide a caching feature that everyone can use, enforcing spring.cache.redis.key-prefix to be the microservice name to limit key conflicts (a minimal sketch follows the CONS list below).
PRO: most flexible way
CONS:
No control over the cache except its maximum size: people can just create new cache entries
No control over cache invalidation: we don't know what kind of data is actually stored, so if, for example, a legacy system needs to be reloaded, we cannot evict the right cache keys
Possible redundancy: as caching happens at the application layer, different microservices may cache the same data. While I have control over the database (one MS should own its own DB, or at least a subset of tables), I can't guarantee anything about the legacy SOAP layer
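A minimal sketch of what strategy 1 looks like for a consumer, assuming @EnableCaching is configured (the service and cache names are made up):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Any team can cache a legacy lookup this way; the enforced
// spring.cache.redis.key-prefix keeps keys from colliding across services,
// but nothing prevents redundant or stale entries.
@Service
public class LegacyCustomerService {

    @Cacheable(cacheNames = "legacy-customers")
    public String findCustomer(String id) {
        return callLegacySystem(id);
    }

    private String callLegacySystem(String id) {
        return "customer-" + id; // placeholder for the expensive SOAP call
    }
}
```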
Caching at service layer (connectors)
I don't provide a caching feature, but I provide custom SOAP connectors that will/won't cache responses based on a configuration I provide (it could also be a blacklist/whitelist).
PROS:
cache is controlled
easy to invalidate
CONS:
need to update the connectors each time a cache policy changes
dependency between development and architecture
edit: I need suggestions about the theoretical approach, not about a specific technology.
I suppose you should build different microservices (APIs) to deal with different sets of responsibilities. For example, you could have one microservice that deals with the legacy systems and another that deals with the database. For these two microservices to communicate, you can use a message-broker architecture like Apache Kafka (Hazelcast being cost-effective, or RabbitMQ).
Communication between these two microservices can be event-driven as well.
Once you decide this, you can finalize where to place your cache.
You will need to place the cache at the application level, not the service level, if there is a UI where you are showing these values.
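For instance, a minimal event-driven sketch with spring-kafka (the topic and group id are made up):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// The database-owning microservice reacts to change events published by the
// legacy-facing microservice, refreshing its state and evicting stale caches.
@Component
public class LegacyChangeListener {

    @KafkaListener(topics = "legacy-changes", groupId = "db-service")
    public void onLegacyChange(String payload) {
        // update local state and evict the affected cache entries
    }
}
```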

How should I design my Spring Microservice?

I am trying to create a microservice architecture for a hobby project and I am confused about some decisions. Can you please help me, as I have never worked with microservices before?
One of my requirements is that my AngularJS GUI will need to show some drop-downs or lists of values (for example, a list of countries). These can be fetched using a microservice REST call, but where should the values come from? Can I fetch them from my Config Server, or should they come from a database? If the latter, should each microservice have its own database for lookup values, or can it be a common one?
How would server-side validation work in this case? There will certainly be a microservice call the GUI makes for validation, but should the validation service be a common microservice for all use cases/screens, should there be one per GUI page, or should the CRUD microservice be reused for validation as well?
How do I deal with a use case where the back-end is not a database but a web-service call? Will I still need some local DB to maintain state between these calls (especially to handle the scenario where the web-service call fails) and finally pass the status on to the GUI?
First of all, there is no single way to design a microservice; one has to choose according to the use case and project requirements.
Can I keep these in a Config Server, or should they come from a database?
Again, it depends upon the use case and requirements. However, because every MS should have its own DB anyway, you can use the DB even if the countries are only names. And if they have relationships with City/State, then you should definitely use the DB.
If DB, should each of the microservices have their own DB for lookup values, or can it be a common one?
No, IMO multiple MS should not depend on a single DB, because if that DB fails then all the MS will fail, which should not happen. Each MS should be able to work alone without depending on another DB or MS.
Should the validation service be a common microservice for all use cases/screens?
Same as point 2
How do I deal with a use-case where the backend is not a database call but another web-service call? Will I need some local DB still to maintain state between these calls and finally pass on the status to the GUI?
If you are using HTTP then you should not save the state of any request. If you want to forward the request to another MS, you can use a Feign client, which provides a very good way to call REST APIs plus other important features like load balancing.
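For example, a minimal Feign client sketch, assuming spring-cloud-starter-openfeign and @EnableFeignClients on the application class (the service name and endpoint are made up):

```java
import java.util.List;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;

// Calls are resolved through service discovery and load-balanced; no state
// is kept between requests.
@FeignClient(name = "country-service")
public interface CountryClient {

    @GetMapping("/countries")
    List<String> getCountries();
}
```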
Microservice architecture is simple: each task is divided into a separate service (e.g. a Spring Boot application).
For example, in every application there will be a login function, a registration function, and so on; each of these will be a separate service in a microservice architecture.
1. You can store them in a database, since it is easy to add more values in the future. You can maintain a separate DB per microservice or a single DB with a separate collection or table for each microservice.
2. By validation, do you mean who can use which microservice (role-based access)?
3. I think you have to use a local DB.
Microservices are a collection of loosely coupled services. For example, if you are creating an e-commerce application, user management can be a service, order management can be a service, and refund & chargeback management can be another service. Each of these services can be further divided into smaller units; let's call them API endpoints. For example, user management can have login as one endpoint and signup as another.
If you want to leverage the power of microservice architecture in its true sense, here is what I would suggest. For the above example, create 3 Spring Boot applications, one for each service. The first thing you should do after this is establish trust between those applications; I would prefer JWTs for that. After that, everything is a piece of cake. Here are the answers you are looking for:
You should ideally use a database, as opposed to keeping the values in the config server, for fetching the list of countries, so that you need not recompile your code every time a new country is added.
You can easily restrict access using @PreAuthorize if role-based access is what you are referring to (see the sketch after this list).
You can use OkHttp or any other HTTP client in this use case, and you certainly need not maintain any local DB. However, you can cache the output of the web-service call if that is a requirement.
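A minimal sketch of point 2, assuming method security is enabled via @EnableMethodSecurity (the endpoint and role are made up):

```java
import java.util.List;

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CountryAdminController {

    // Only callers whose token grants ROLE_ADMIN reach the method body.
    @PreAuthorize("hasRole('ADMIN')")
    @GetMapping("/admin/countries")
    public List<String> manageCountries() {
        return List.of("DE", "FR", "IN");
    }
}
```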
P.S.: Establishing trust between microservices can be a complex task if you don't understand all the intricacies. In that case, I would recommend going ahead with a single Spring Boot application, which is a monolithic architecture. I would still recommend JWTs, though.

How to share a database connection between microservices in Spring Cloud

How can I share a database connection among Spring Cloud microservices? If there are many microservices, can I use the same DB connection, or should I use a DB connection per microservice?
In my opinion, what you've asked for is impossible, simply because each microservice is a dedicated process that runs inside its own JVM (probably on more than one server). When you create a connection to the database (assuming you use a connection pool), it is always at the level of a single JVM.
I understand that chances are you meant something different, but I had to include this because it directly answers your question.
Now, you can share the same database between microservices (the same schema, tables, etc.) so that each JVM has its own set of open connections (in accordance with its connection-pool settings).
However, this is a really bad practice - you don't want to share databases between microservices. The reason is the cost of change: if you (as the maintainer of microservice A) decide to, say, alter one of the tables, all the other microservices will have to accommodate the change, and that is not a trivial thing to do.
So a better approach is to have one service with "sole responsibility" for your data in a given domain. All the other services contact this service and ask for the required data through well-established APIs that should never be broken. In this approach the cost of change is much "cheaper", since only this "data service" has to change, and in a way that doesn't break the existing APIs.
Now, regarding the database-connection question: you will usually have more than one JVM running the same microservice (such as the data microservice), so it's not that you share connections between them, but rather that you share the same way of working with the database (because, after all, it's the same code).
When dealing with a microservice architecture, you usually have a distributed system.
Most microservices that communicate with each other are not on the same machine, instance, or container. Communication between them is most commonly done via HTTP, though there are many other ways.
I would suggest designing microservices around a single concern of your application. For example, in your case you could have a "persistence microservice" responsible for data-persistence operations on one or more types of data store. It could deal with relational DBs, NoSQL, file storage, etc. Then, via REST endpoints, you can expose the persistence functionality to the microservices that deal with business logic.
A very easy way to build a REST service like this is with the help of the Spring Data REST project.
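For instance, a sketch assuming spring-boot-starter-data-rest (the PurchaseOrder entity is made up):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@Entity
class PurchaseOrder {
    @Id
    @GeneratedValue
    Long id;
    String description;
}

// Spring Data REST generates paged CRUD endpoints under /orders
// from this interface alone - no controller code needed.
@RepositoryRestResource(path = "orders")
interface PurchaseOrderRepository extends PagingAndSortingRepository<PurchaseOrder, Long> {
}
```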
To answer your actual question, I'm not aware of any way to share actual connections between processes. Beyond that, having many microservices running on the same instance is not a good practice most of the time.
Microservices are very popular these days and everybody is trying to transition to them. My advice would be to make sure you don't over-engineer your project.
I hope I didn't misunderstand your question, but to be fair it is a little vague. If you could provide a longer, more detailed description of your architecture and use case, I could suggest more tools/frameworks to help achieve your cloudy goals.
First and most important: your microservice should be responsible for handling all data in a given business domain/bounded context. So the question is: why do you need to share a database connection between microservices, and isn't this a sign you went too far in slicing your system? A microservice is a tool, and the word "micro" may be a bit misleading :)
For more reading I would suggest e.g. https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries (don't worry, it's general enough to be applicable to Spring as well).

Application level caching of XACML Authorization Details from WSO2 IDP

We are working on an application where we will create and store XACML policies in a WSO2 server for authorization.
We are looking for the best way to authorise a user whenever he tries to access anything in the application, and we are not sure how much of a performance impact this will have.
One way we can deal with this: when the user logs in, fetch all his details from the IDP so we can cache them at the application level and do not have to make a round trip to the WSO2 IDP each time the user performs an action. It may make login slower, but from there on the rest of the application experience will be fast.
We just want to confirm: is this the correct approach? Is there any issue with this design, or is there a better way?
I think it is not the correct approach, especially when we are talking about attribute-based access control (ABAC) and the attributes change frequently.
Also, when doing policy evaluation it is better to let the PIP fetch the required attributes instead of the application sending all of them; furthermore, you can use caching on the WSO2 IS side for XACML policy decisions and attributes.
Apart from that, for better performance you can implement your PEP as Thrift-based. We did exactly that and ran successful load tests for one of our most heavily used applications.
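To make the PEP/PDP split concrete, here is a conceptual sketch (not the actual WSO2 API; the interface is made up):

```java
// The application-side PEP forwards every decision to the PDP (e.g. WSO2 IS
// over Thrift or SOAP); the PDP's PIPs resolve whatever attributes the policy
// needs, and any caching happens centrally on the PDP side.
interface PolicyDecisionPoint {
    boolean isPermitted(String subject, String resource, String action);
}

public final class EntitlementGuard {

    private final PolicyDecisionPoint pdp;

    public EntitlementGuard(PolicyDecisionPoint pdp) {
        this.pdp = pdp;
    }

    public void check(String user, String resource, String action) {
        // No attribute caching here - attributes stay fresh at the PDP.
        if (!pdp.isPermitted(user, resource, action)) {
            throw new SecurityException("Denied by XACML policy");
        }
    }
}
```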
I would not recommend caching on the application side, for the following reasons:
You have to make a round trip for policy evaluation anyway, even if you cache attributes locally in the application.
Caching attributes locally inside the application defeats the purpose if the same policy is to be used by other applications in the future.
Letting the PIP fetch the required attributes on the WSO2 side is recommended, as it eases new application integrations: you need not worry about fetching attributes for each new integration.
Caching can be done centrally on the WSO2 IS server instead of applying a cache at each application.
P.S. - These are my personal views and opinions; they may not be a perfect or best fit for every requirement and business need.
