Fine-grained access control in ASP.NET Web API

This is more of an architectural question. Let's assume we have different types of users logging into a system, and we have a 'customer' entity. Depending on the permissions of the user, I may want to return different sub-sets of 'customer' properties. I also might want to allow edits to only certain properties.
Any suggestions on what path to go down? Here are the options I've thought of thus far:
For each permission level, extend the model, and return the furthest descendant based upon the user's permissions. On the input side, accept the furthest descendant and cast it only to the descendant appropriate for the user's permissions. (Seems like a lot of implicit logic; doesn't seem very elegant.)
Create different methods (cluttered API, implies more functionality than I might want to expose)
Any other suggestions?
Thanks

What you're describing is a clear-cut use case for XACML. XACML is the eXtensible Access Control Markup Language. It lets you define fine-grained access control using attributes (about the user, the resource, the environment...).
It's policy-based which means you can write things like:
users can view customer records that are in the same region as the user
users can edit customer records they are directly assigned to
auditors can view customer records for the entire business unit except for sensitive fields
There are several XACML engines out there (WSO2, Heras AF for Java; Axiomatics for .NET).
I've developed quite a few ASP .NET web apps in the .NET 4.0 framework and managed to apply authorization at the presentation tier, the WCF tier, and the data tier. Feel free to ping me for additional information.
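To make the response-shaping side concrete, here is a minimal Web API sketch (my own illustration, not XACML itself: the Customer/CustomerDto shapes and the "permission" claim are hypothetical, and in a real deployment the decision would come from the policy engine rather than hard-coded claim checks):

```csharp
using System.Security.Claims;
using System.Web.Http;

// Sketch of a policy enforcement point in a controller: the response is
// shaped per caller instead of per model class.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Region { get; set; }
    public decimal CreditLimit { get; set; }
}

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Region { get; set; }
    public decimal? CreditLimit { get; set; } // sensitive: null unless permitted
}

public class CustomersController : ApiController
{
    public CustomerDto Get(int id)
    {
        Customer customer = LoadCustomer(id); // data access elided
        var principal = User as ClaimsPrincipal;

        var dto = new CustomerDto
        {
            Id = customer.Id,
            Name = customer.Name,
            Region = customer.Region
        };

        // Only include sensitive fields when the caller's attributes allow it.
        if (principal != null && principal.HasClaim("permission", "customer.finance.view"))
            dto.CreditLimit = customer.CreditLimit;

        return dto;
    }

    private Customer LoadCustomer(int id)
    {
        // Placeholder for repository/DB access.
        return new Customer { Id = id, Name = "Acme", Region = "EU", CreditLimit = 10000m };
    }
}
```

The same idea applies on the input side: bind the request to the full model, but only copy across the fields the caller is permitted to edit.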

Related

Attribute Based Access Control (ABAC) in a microservices architecture for lists of resources

I am investigating options to build a system to provide "Entity Access Control" across a microservices based architecture to restrict access to certain data based on the requesting user. A full Role Based Access Control (RBAC) system has already been implemented to restrict certain actions (based on API endpoints), however nothing has been implemented to restrict those actions against one data entity over another. Hence a desire for an Attribute Based Access Control (ABAC) system.
Given the requirement that the system be fit for purpose, and my own priority of keeping security logic in a single location per best practice, I decided to create an externalised "Entity Access Control" (EAC) API.
The end result of my design was something similar to an architecture diagram I have seen floating around (I think from axiomatics.com).
The problem is that the whole thing falls over the moment you start talking about an API that responds with a list of results.
E.g. a /api/customers endpoint on a Customers API that takes parameters such as a query filter, sort, order, and limit/offset values to facilitate pagination, and returns a list of customers to a front end. How do you then also provide ABAC on each of these entities in a microservices landscape?
Terrible solutions to the above problem tested so far:
Get the first page of results, send all of those to the EAC API, drop the rejected ones from the response, fetch more customers from the DB, check those... and repeat until you either fill a page of results or run out of customers in the DB (sketched in code below). Testing showed that for 14,000 records (absolutely within reason in my situation) it would take 30 seconds to return an API response to someone with zero permission to view any customers.
On every request to the all-customers endpoint, send a single request to the EAC API asking for every customer available to the original requesting user. Testing showed that for 14,000 records the response payload would be over half a megabyte for someone with permission to view all customers. I could split it into multiple requests, but then you are just trading payload size for request spam, and the performance penalty doesn't go anywhere.
Give up on the ability to view multiple records in a list. This totally breaks the API's usefulness for customer needs.
Store all the data and logic required to perform the ABAC controls in each API. This is fraught with danger and basically guaranteed to fail in a way that is beyond my risk appetite considering the domain I am working within.
Note: I tested with 14,000 records just because that's the benchmark of our current data. It is entirely feasible that a single API could serve 100,000 or 1M records, so anything that involves iterating over the whole data set, or transferring it over the wire, is entirely unsustainable.
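For reference, the first approach above amounts to something like the following sketch (ICustomerStore and IEacApiClient are placeholder names, not a real client library), which shows why it degrades to a full table scan plus one authorization round-trip per row:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class Customer
{
    public int Id { get; set; }
}

public interface ICustomerStore // hypothetical data access
{
    Task<List<Customer>> GetCustomersAsync(int offset, int count);
}

public interface IEacApiClient // hypothetical EAC API client
{
    Task<bool> IsPermittedAsync(string userId, string action, int customerId);
}

// Sketch of the flawed fetch-filter-repeat pagination. Worst case (a caller
// who can see nothing) this walks the entire table.
public class CustomerListService
{
    private readonly ICustomerStore _db;
    private readonly IEacApiClient _eac;

    public CustomerListService(ICustomerStore db, IEacApiClient eac)
    {
        _db = db;
        _eac = eac;
    }

    public async Task<List<Customer>> GetPageAsync(string userId, int pageSize)
    {
        var page = new List<Customer>();
        int offset = 0;

        while (page.Count < pageSize)
        {
            List<Customer> batch = await _db.GetCustomersAsync(offset, pageSize);
            if (batch.Count == 0)
                break; // ran out of rows in the DB

            foreach (Customer candidate in batch)
            {
                if (page.Count == pageSize)
                    break;

                // One network round-trip per entity: this is what kills it.
                if (await _eac.IsPermittedAsync(userId, "customer.view", candidate.Id))
                    page.Add(candidate);
            }

            offset += batch.Count;
        }

        return page;
    }
}
```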
So, here lies the question: how do you implement an externalised ABAC system in a microservices architecture (as per the diagram) whilst also being able to service requests that respond with multiple entities, with query filter, sort, order, and limit/offset values to facilitate pagination?
After dozens of hours of research, it was decided that this is an entirely unsolvable problem and is simply a side effect of microservices (and more importantly, segregated entity storage).
If you want the benefits of a maintainable (as in a single piece of externalised infrastructure) entity-level attribute access control system, a monolithic approach to entity storage is required. You cannot simultaneously reap the benefits of microservices.

Monolithic Web API to microservice design

We have a monolithic Web API layer in our application with a hundred endpoints. I am trying to break it into microservices using Azure Service Fabric.
When we break them into multiple services, we may end up having duplicate code.
Example: let's say we have Account Services to create an account, and a Payment service to apply payments to transactions.
In this case, both services need the Customer class/domain. Account Services probably needs an exhaustive Customer with full details, but the Payment service might only need a lightweight one.
The question is: do we need to duplicate domain entities and other layers like this? Doesn't that create more maintenance issues?
If we do, we end up copying code across different services; if we don't, we end up keeping one monolithic service, the same as the existing Web API.
Any thoughts on this?
Secondly, we have some cases that use transactions today. If we separate them, is there a good design for recording failures and rolling back without trying too hard to maintain distributed transactions?
Breaking a monolith up into proper microservices with appropriate boundaries for your domain is certainly more of an art than a science. The prerequisite to taking on such a task is a thorough understanding of your domain and the interactions within it, and you won't get it right the first time. One of the points that Evans makes in his book on Domain-Driven Design is that for any sufficiently complex domain, the domain model continually evolves, because your understanding of the domain is continually evolving; you will understand it a little better tomorrow than you do today. That said, don't be afraid to start when you have an understanding that is "good enough", and be willing to adapt/evolve your model.
I don't know your domain, but it sounds to me like you need to first figure out in which bounded context Customer primarily belongs. Yes, you want to minimize duplication of domain logic, and though it may not fit completely and neatly into a single service, the more you make one service take primary responsibility for accessing, persisting, manipulating, validating, and ensuring the integrity of a Customer, the better off you'll be.
From your question, I see two possibilities:
The Account Services bounded context is the primary stakeholder in Customer, and Customer has non-trivial ties to other Account Services entities and services. It's difficult to draw clear boundaries around a Customer in isolation. In this case, Customer belongs in the Account Services bounded context.
Customer is an independent enough concept to merit its own microservice. A Customer can stand alone. In this case, Customer belongs in its own bounded context.
In either case, great care should be taken to ensure that the Customer-specific domain logic stays centralized in the Customer microservice behind strong boundaries. Other services might use Customer, or perhaps a light-weight (even read-only) CustomerView, but their interactions should go through the Customer service to the extent that they can.
In your question, you indicate that the Payments bounded context will need access to Customer, but it might just need a lightweight version. It should communicate with the Customer service to get that lightweight object. If, during payment processing, you need to update the Customer's billing address, for example, Payments should call into the Customer microservice telling it to update its billing address. Payments need not know anything about how to update a Customer's billing address beyond that single API call; any domain logic, validation, firing of domain events, etc. that needs to happen as part of that operation is contained within the Customer microservice.
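As a sketch of what that boundary can look like from the Payments side (the endpoint paths and CustomerView shape here are illustrative, not a prescribed API; ReadAsAsync/PutAsJsonAsync come from the Microsoft.AspNet.WebApi.Client package):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: Payments never manipulates Customer state directly. It asks the
// Customer service for a read-only view and tells it to perform updates;
// validation, domain events, etc. stay inside the Customer service.
public class CustomerView
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string BillingAddress { get; set; }
}

public class CustomerServiceClient
{
    private readonly HttpClient _http =
        new HttpClient { BaseAddress = new Uri("http://customer-service/") };

    public async Task<CustomerView> GetViewAsync(Guid customerId)
    {
        HttpResponseMessage response =
            await _http.GetAsync("api/customers/" + customerId + "/view");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsAsync<CustomerView>();
    }

    public async Task UpdateBillingAddressAsync(Guid customerId, string newAddress)
    {
        // A single API call; Payments knows nothing about how the update happens.
        HttpResponseMessage response =
            await _http.PutAsJsonAsync("api/customers/" + customerId + "/billing-address", newAddress);
        response.EnsureSuccessStatusCode();
    }
}
```

The point of this shape is that nothing about Customer invariants leaks into Payments; if the billing-address rules change, only the Customer service changes.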
Regarding your second question: it's true that atomic transactions become more complex/difficult in a distributed architecture. Do some reading on the Saga pattern: https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/. Also, Jimmy Bogard is currently in the midst of a blog series called "Life Beyond Distributed Transactions: An Apostate's Implementation" that may offer some good insights.
Hope this helps!

Microservices: model sharing between bounded contexts

I am currently building a microservices-based application developed with the MEAN stack and am running into several situations where I need to share models between bounded contexts.
As an example, I have a User service that handles the registration process as well as login (generating a JWT), logout, etc. I also have a File service which handles the uploading of profile pics and other images the user happens to upload. Additionally, I have a Friends service that keeps track of the associations between members.
Currently, I am adding the GUID of the user from the table used by the User service, as well as the first, middle, and last name fields, to the File table and the Friend table. This way I can query these fields whenever I need them in the other services (Friend and File) without needing to make REST calls every time the information is needed.
Here is the caveat:
The downside seems to be that I have to notify the File and Friend services (I chose Seneca with RabbitMQ) whenever a user updates their information in the User table.
1) Should I be worried about the services getting too chatty?
2) Could this lead to any performance issues if a lot of updates take place within an hour, say?
3) In trying to isolate boundaries, I just am not seeing another way of pulling this off. What is the recommended approach to solving this issue, and am I on the right track?
It's a trade-off. I would personally not store the user details alongside the user identifier in the dependent services. But neither would I query the User service to get this information. What you probably need is some kind of read-model for the system as a whole, which can store this data in a way that is optimized for your particular needs (reporting, displaying together on a web page, etc.).
The read-model is a pattern which is popular in the event-driven architecture space. There is a really good article that talks about these kinds of questions (in two parts):
https://www.infoq.com/articles/microservices-aggregates-events-cqrs-part-1-richardson
https://www.infoq.com/articles/microservices-aggregates-events-cqrs-part-2-richardson
Many common questions about microservices seem to be largely around the decomposition of a domain model, and how to overcome situations where requirements such as querying resist that decomposition. This article spells the options out clearly. Definitely worth the time to read.
In your specific case, it would mean that the File and Friends services would only need to store the primary key for the user. However, all services should publish state changes which can then be aggregated into a read-model.
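As a rough sketch of that flow (the event and store names are illustrative): the User service publishes a change event, and a subscriber keeps the denormalized read-model current, so the dependent services never store or re-query user names themselves:

```csharp
using System;
using System.Threading.Tasks;

// Sketch: published by the User service whenever a user's details change.
public class UserDetailsChanged
{
    public Guid UserId { get; set; }
    public string FirstName { get; set; }
    public string MiddleName { get; set; }
    public string LastName { get; set; }
}

public interface IReadModelStore // hypothetical read-model persistence
{
    Task UpdateUserNameAsync(Guid userId, string first, string middle, string last);
}

public class UserReadModelUpdater
{
    private readonly IReadModelStore _store;

    public UserReadModelUpdater(IReadModelStore store)
    {
        _store = store;
    }

    // Invoked by the message bus (RabbitMQ, Kafka, ...) for each event.
    public Task Handle(UserDetailsChanged evt)
    {
        return _store.UpdateUserNameAsync(evt.UserId, evt.FirstName, evt.MiddleName, evt.LastName);
    }
}
```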
If you are worried about a high volume of messages and high TPS (for example, 100,000 TPS for producing and consuming events), I suggest that instead of RabbitMQ you use Apache Kafka or NATS (the Go version; NATS also has a Ruby version) in order to support that volume of messages per second.
Regarding database design, you should design each microservice around business capabilities and bounded contexts, according to domain-driven design (DDD). Because, unlike SOA, each microservice should have its own database, you should not worry about normalization: you may have to repeat many structures, fields, and tables across microservices in order to keep them decoupled from each other and able to work independently, which raises availability and enables scalability.
You can also use the event sourcing + CQRS technique, or transaction log tailing, to circumvent 2PC (two-phase commit), which is not recommended when implementing microservices. These let you exchange events between your microservices and update state with eventual consistency, in line with the CAP theorem.
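For instance, publishing a state-change event with the Confluent.Kafka .NET client might look like this (a sketch only; the topic name and event payload are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;       // Confluent.Kafka NuGet package
using Newtonsoft.Json;

// Sketch: publish a state-change event to Kafka so other microservices can
// update their own state with eventual consistency.
public class UserEventPublisher
{
    private readonly IProducer<Null, string> _producer =
        new ProducerBuilder<Null, string>(
            new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

    public async Task PublishUserUpdatedAsync(Guid userId, string firstName, string lastName)
    {
        string payload = JsonConvert.SerializeObject(
            new { userId, firstName, lastName, occurredAt = DateTime.UtcNow });

        // Consumers (File, Friends, read-model updaters...) subscribe to this topic.
        await _producer.ProduceAsync("user-events",
            new Message<Null, string> { Value = payload });
    }
}
```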

How to detect relationships using Microsoft Cognitive services?

Microsoft Cognitive Services offers a wide variety of capabilities for extracting information from natural language. However, I am not able to find how to use them to detect "relationships" where e.g. two (or more) specific "entities" are involved.
For example, detecting company acquisitions / merging.
These could be expressed in News articles as
"Company 1" has announced to acquire "Company2".
Certainly, there are several approaches to address that need, some that include entity detection first (e.g. Company1 and Company2 being companies) and then the relation (e.g. acquire ...).
Other approaches involve first identifying the "action" (acquire) and then, through grammatical analysis, finding which is the "actor" and which the "object" of the action.
Machine-learning approaches to semantic relation extraction have also been developed, to avoid requiring humans to craft formal relation rules.
I would like to know if/how this use case can be handled with Microsoft Cognitive Services.
Thank you
It depends on the tech used to examine the response from the API: https://dev.projectoxford.ai/docs/services
I use jQuery to parse the JSON response (via WebClient in ASP.NET code-behind) from the LUIS/Cognitive Services API (I am not using the Bot Framework). I have a rules engine that I can configure for clients and save, so that when the page loads, functions fire based on the parsed JSON response. The rules engine includes various condition functions like contains, begins with, is, etc., so I can test the user's query for specific entities or virtually anything else in it. It really comes down to && or || JavaScript functions...
For example, if intent=product in the JSON response, I show a shopping cart widget. Or if entity=coffee black OR entity=double double, it triggers a widget to inject into the chat window (SHOW Shopping Cart). In short, you either handle the AND/OR via the Bot Framework or via your tech of choice.
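As a server-side illustration of that kind of rule check (a sketch only: the JSON shape follows the general LUIS response format, and the widget actions are placeholders):

```csharp
using System;
using System.Linq;
using Newtonsoft.Json.Linq;

// Sketch: parse a LUIS-style JSON response and fire simple AND/OR rules
// against the detected intent and entities.
public static class LuisRules
{
    public static void Apply(string json)
    {
        JObject response = JObject.Parse(json);

        string topIntent = (string)response.SelectToken("topScoringIntent.intent");
        var entities = (response["entities"] ?? new JArray())
            .Select(e => (string)e["entity"])
            .ToList();

        // Rule: intent match.
        if (topIntent == "product")
            Console.WriteLine("show shopping-cart widget");

        // Rule: OR across entities.
        if (entities.Contains("coffee black") || entities.Contains("double double"))
            Console.WriteLine("inject SHOW Shopping Cart widget into the chat window");
    }
}
```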

Domain driven design is confusing

1) What are BLL services? What's the difference between them and Service Layer services? What goes in domain services and what goes in the service layer?
2) How would I refactor the BLL model to give it behavior? A Post entity holds a collection of feedbacks, which already makes it possible to add another Feedback through feedbacks.Add(feedback). Obviously there are no calculations in a plain blog application. Should I define a method to add a Feedback inside the Post entity? Or should that behavior be maintained by a corresponding service?
3) Should I use the Unit of Work (and UnitOfWork-Repositories) pattern as described in http://www.amazon.com/Professional-ASP-NET-Design-Patterns-Millett/dp/0470292784, or would it be enough to use the NHibernate ISession?
1) Business Layer and Service Layer are actually synonyms. The 'official' DDD term is an Application Layer.
The role of an Application Layer is to coordinate work between Domain Services and the Domain Model. This could mean, for example, that an Application function first loads an entity through a Repository and then calls a method on the entity that will do the actual work.
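A minimal sketch of that coordination (IPostRepository is a hypothetical interface; the Post entity itself is sketched under point 2 below):

```csharp
// Sketch: the Application layer service loads the aggregate through a
// Repository and delegates the actual work to it.
public interface IPostRepository
{
    Post GetById(int id);
    void Save(Post post);
}

public class PostApplicationService
{
    private readonly IPostRepository _posts;

    public PostApplicationService(IPostRepository posts)
    {
        _posts = posts;
    }

    public void AddFeedback(int postId, Feedback feedback)
    {
        Post post = _posts.GetById(postId); // load through the Repository
        post.AddFeedback(feedback);         // the entity does the actual work
        _posts.Save(post);                  // persist the change
    }
}
```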
2) Sometimes when your application is mostly data-driven, building a full featured Domain Model can seem like overkill. However, in my opinion, when you get used to a Domain Model it's the only way you want to go.
In the Post and Feedback case, you want an AddFeedback(Feedback) method from the beginning, because it leads to less coupling (you don't have to know whether the Feedback items are stored in a List or in a Hashtable, for example) and it offers you a nice extension point. What if you ever want to add a check that no more than 10 Feedback items are allowed? If you have an AddFeedback method, you can easily add the check at one single point.
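For example, a sketch of such an entity (the 10-item limit is just the hypothetical rule mentioned above):

```csharp
using System;
using System.Collections.Generic;

public class Feedback
{
    public string Author { get; set; }
    public string Text { get; set; }
}

// Sketch: the Post entity owns its Feedback collection, and AddFeedback is
// the single point where invariants can later be enforced.
public class Post
{
    private readonly List<Feedback> _feedbacks = new List<Feedback>();

    // Callers can read the feedback but cannot bypass AddFeedback.
    public IEnumerable<Feedback> Feedbacks
    {
        get { return _feedbacks.AsReadOnly(); }
    }

    public void AddFeedback(Feedback feedback)
    {
        if (feedback == null)
            throw new ArgumentNullException("feedback");

        // The extension point: e.g. allow no more than 10 feedback items.
        if (_feedbacks.Count >= 10)
            throw new InvalidOperationException("A post cannot have more than 10 feedback items.");

        _feedbacks.Add(feedback);
    }
}
```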
3) The UnitOfWork and Repository patterns are a fundamental part of DDD. I'm no NHibernate expert, but it's always a good idea to hide infrastructure-specific details behind an interface. This will reduce coupling and improve testability.
I suggest you first read the DDD book or its short version to get a basic comprehension of the building blocks of DDD. There's no such thing as a BLL-Service or a Service Layer Service. In DDD you've got:
the Domain layer (the heart of your software where the domain objects reside)
the Application layer (orchestrates your application)
the Infrastructure layer (for persistence, message sending...)
the Presentation layer.
There can be Services in all these layers. A Service is just there to provide behaviour to a number of other objects; it has no state. For instance, a Domain layer Service is where you'd put cohesive business behaviour that does not belong in any particular domain entity and/or is required by many other objects. The inputs and outputs of the operations it provides would typically be domain objects.
Anyway, whenever an operation seems to fit perfectly into an entity from a domain perspective (such as adding feedback to a post, which translates into Post.AddFeedback() or Post.Feedbacks.Add()), I always go for that rather than adding a Service that would only scatter the behaviour in different places and gradually lead to an anemic domain model. There can be exceptions, like when adding feedback to a post requires making connections between many different objects, but that is obviously not the case here.
You don't need a unit-of-work pattern on top of the NHibernate session; see:
Why would I use the Unit of Work pattern on top of an NHibernate session?
Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF
