It's a code design question :)
I have a DelegatingHandler which takes the http request header and validates the API-key. Pretty common task I guess. In my controller I call my business logic and pass along all business-relevant information. However now I'm challenged with the task to change behavior inside my business logic (separate assemblies) depending on certain api-keys.
Various possible solutions come to my mind...
Change business logic method signatures to ask for an api-key, too.
public void SomeUseCase(Entity1 e1, Entity2 e2, string apiKey);
Use HttpContext.Current to access the current request context. However, I read somewhere that using HttpContext restricts my hosting options to IIS. Is there any better-suited option for that?
var request = HttpContext.Current.Request; // next extract header information
Use Sessions (don't really want to go that road...)
What's your opinion on that topic?
I'd go for #1, although I don't like the idea of mixing environmental stuff into business logic methods. But depending on your point of view you might argue the api-key is in fact logic-relevant.
Update #1:
I'm using a DelegatingHandler to validate the apiKey, and once it is validated I add it to the request's Properties collection.
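Roughly like this (a simplified sketch; the header name, the property key and the IsValid check are placeholders for what I actually use):
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class ApiKeyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> values;
        if (!request.Headers.TryGetValues("X-ApiKey", out values) || !IsValid(values.First()))
        {
            return request.CreateResponse(HttpStatusCode.Unauthorized);
        }

        // make the validated key available further down the pipeline
        request.Properties["ApiKey"] = values.First();
        return await base.SendAsync(request, cancellationToken);
    }

    private static bool IsValid(string apiKey)
    {
        return !string.IsNullOrEmpty(apiKey); // stand-in for the real lookup
    }
}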
The part in question is how the "api-key", or RegisteredIdentifier, is passed along to the business logic layer. Right now I am passing the object (e.g. IRegisteredIdentifier) as a parameter to the business logic classes' constructors. I understand there is no more elegant way to solve this(?). I thought about changing the method signatures, but I'm not sure whether that's interface pollution or not. Some methods need to work with the api-key, most don't. Experience tells me that the number will more likely grow than drop :) So keeping a reference to it in my BL classes seems to be a good choice.
Thank you for your answers - I think all of them are part of my solution. I'm new to Stack Overflow... but as far as I can see, I cannot rate answers yet. Rest assured I'm still thankful :)
I would suggest two different options.
Promote the value into a custom HTTP header (e.g. something like mycompany-api-key: XXXX ). This makes your delegating handler work more like a standard HTTP intermediary. This would be handy if you ever hand off your request to some secondary internal server.
Put the api-key into the request.Properties dictionary. The idea of the Properties dictionary is to provide a place to put custom meta-information about the request.
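Reading it back out later in the pipeline is straightforward; for example in a controller (SomeController and the "ApiKey" key are placeholders, Entity1 is taken from your example):
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class SomeController : ApiController
{
    public HttpResponseMessage Post(Entity1 e1)
    {
        object apiKey;
        Request.Properties.TryGetValue("ApiKey", out apiKey);

        // hand the key to the business logic together with the business data
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}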
HTTP works hard to make sure authentication/authorization is an orthogonal concern to the actual request, which is why I would try to keep it out of the action signature.
I would go for option 1.
But you could introduce the entity RegisteredIdentifier (Enterprise Patterns and MDA by Jim Arlow and Ila Neustadt) in your business logic.
The api-key can be converted to a RegisteredIdentifier.
RegisteredIdentifier id = new RegisteredIdentifier(apiKey);
public void SomeUseCase(Entity1 e1, Entity2 e2, RegisteredIdentifier id);
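A minimal sketch of what such a value object could look like (the exact shape is up to you):
using System;

public class RegisteredIdentifier
{
    private readonly string key;

    public RegisteredIdentifier(string apiKey)
    {
        if (string.IsNullOrWhiteSpace(apiKey))
            throw new ArgumentException("An API key is required.", "apiKey");
        key = apiKey;
    }

    public string Value { get { return key; } }
}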
The business logic layer has a dependency on the API key. So I would suggest:
interface IApiKeyProvider
{
    string ApiKey { get; }
}
...then have your BLL require that an object implementing that interface is supplied to it (via a constructor, a setup method, or even each method that requires it).
In the future it might not be just one API key. The key point is that this makes explicit that the BLL depends on something, and defines a contract for that something.
Real-world example:
Then, in your DI container (Ninject etc.), bind your own ConfigFileApiKeyProvider (or whatever) implementation to that interface, in the "place" (layer) that DOES have the API key. So the app that calls the BLL decides how the API key is supplied.
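For illustration, a sketch of what that might look like, assuming the key lives in appSettings and Ninject is the container (both are only examples):
using System.Configuration;
using Ninject;

public class ConfigFileApiKeyProvider : IApiKeyProvider
{
    // reads the key from appSettings; any other source would do
    public string ApiKey
    {
        get { return ConfigurationManager.AppSettings["ApiKey"]; }
    }
}

public static class CompositionRoot
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IApiKeyProvider>().To<ConfigFileApiKeyProvider>();
        return kernel;
    }
}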
Edit: I misunderstood the part about this being a "how-to-do-it-over-HTTP" question and not a code architecture/code design question. So:
HTTP header is the way to go in terms of transport
As a newcomer to Spring I would like to know the actual difference between:
@PostMapping
@PutMapping
@PatchMapping
My understanding is that PUT is for updates, but then we have to get the element by its id and then save() it. Similarly, the same save() method is used for POST, which automatically replaces the record by its identifier (primary key). In my application I am able to use all three of these methods interchangeably.
What is the point of having PATCH, POST, PUT types when we use repository save methods for all?
HTTP method tokens are used to define request semantics in such a way that general purpose components (browsers, reverse proxies, etc) can exploit the information to do intelligent things.
The easiest of these is that PUT has idempotent semantics; if an http response is lost, a general purpose component knows that it may autonomously retry sending the request. This in turn gives you a bit of extra reliability over an unreliable network, "for free".
The fact that your origin server uses the same persistence mechanism for each is an implementation detail, something deliberately hidden behind the "uniform interface".
The difference between PATCH and POST is subtle; PATCH gives you an unambiguous way to designate that the enclosed entity is a patch document, and offers a mechanism for discovering which patch document formats are understood by the origin server, neither of which you get from POST alone.
What's less clear, at least to me, is whether PATCH semantics allow an intermediate component to do something intelligent with a request - in other words, do the additional constraints (relative to POST) allow intermediaries to do anything interesting?
As best I can tell, the semantics of a PATCH request are more specific, but not actionably more specific -- certainly not as obviously as we have in the case of safe or idempotent request semantics.
POST is for creating a brand new object.
PUT will replace all of an object's properties in one go.
Leaving a property empty will empty the value in the datastore.
PATCH does a partial update of an object.
You can send it just the properties which should be updated.
A PATCH request with all object properties included will have the same effect as a PUT request. But they are not the same.
The HTTP methods are a convention that is not specific to Spring; they are a main pillar of REST API design.
They make sure the intent of a request is clear, and that both the provider and the consumer are in agreement about the end result.
Kind of like the pedals or gear shift in our cars. It's a lot easier when they all work the same.
Switching them up could lead to a lot of accidents.
For us as developers, it means we can expect most REST APIs to behave in a similar way, assuming an API is implemented according to or reasonably close to the specification.
POST/PUT/PATCH may look alike but there are subtle differences.
As you mention the PUT and PATCH methods require some kind of ID of the object to be updated.
Imagine a combined POST/PUT/PATCH endpoint that receives a request with an object in which some of the properties are omitted. How should the API react?
Update only the received properties.
Update the entire object, emptying the omitted properties.
Attempt to create a new object.
How is the consumer of the endpoint to know which of the three actions the server took?
This is where the HTTP method and specification/convention help determine the appropriate course of action.
Spring may provide a save method that can handle creation, full updates and partial updates alike. But this is not necessarily the case for other frameworks in Java or other languages.
Also, your application may be simple enough to handle POST/PUT/PATCH in the same controller method right now.
But over time as your application grows more complex, the separation of concerns makes your code a lot cleaner, more readable and maintainable.
Right now I can't get the concept behind Spring Data REST when it comes to complex aggregate roots. If I understand Domain-Driven Design correctly (which is AFAIK the base principle behind Spring Data?), you only expose aggregate roots through repositories.
Let's say I have two classes Post and Comment. Both are entities, and Post has a @OneToMany List<Comment> comments.
Since Post is obviously the aggregate root, I'd like to access it through a PostRepository. If I create @RepositoryRestResource public interface PostRepository extends CrudRepository<Post, Long>, REST access to Post works fine.
However, comments are rendered inline and are not exposed as a sub-resource like /posts/{post}/comments. That only happens if I introduce a CommentRepository (which I shouldn't do if I want to stick to DDD).
So how do you use Spring Data REST properly with complex domain objects? Let's say you have to check that all comments together do not contain more than X characters. That would clearly be an invariant handled by the Post aggregate root. Where would you place the logic for Post.addComment()? How do you expose other classes as sub-resources so I can access /posts/{post}/comments/{comment} without introducing unnecessary repositories?
For starters, if there is some constraint on Comment, then I would put that constraint in the constructor call. That way, you don't depend on any external validation frameworks or mechanisms to enforce your requirements. If you are driven to setter-based solutions (such as via Jackson), then you can ALSO put those constraints in the setter.
This way, Post doesn't have to worry about enforcing constraints on Comment.
Additionally, if you use Spring Data REST and only define a PostRepository, then, since the lifecycle of the comments is jointly linked to the aggregate root Post, the flow should be:
Get a Post and its collection of Comment objects.
Append your new Comment to the collection.
PUT the new Post and its updated collection of Comment objects to that resource.
Worried about collisions? That's what conditional operations are for, using standard HTTP headers. If you add a @Version-based attribute to your Post domain object, then every time a given Post is updated with a new Comment, the version will increase.
When you GET the resource, Spring Data REST will include an ETag header.
That way, your PUT can be made conditional with an HTTP If-Match: <etag> header. If someone else has updated the entity, you'll get back a 412 status code, indicating you should refresh and try again.
NOTE: These conditional operations work for PUT, PATCH, and DELETE calls.
Is there a Web API equivalent to the MVC ActionMethodSelectorAttribute?
My specific purpose is this: I have, for example, a ResourceController and when I POST to the controller, I'd like to be able to receive a single resource (Resource) or a list (IEnumerable<Resource>).
I was hoping that creating two methods with different parameters would cause the deserialization process to do some evaluation, but this doesn't seem to be the case (and frankly, I don't think it is realistic to do efficiently, given content negotiation combined with the fact that many data formats, like JSON, make it difficult to infer the data type). So I originally had:
public HttpResponseMessage Post(Resource resource) {...}
public HttpResponseMessage Post(IEnumerable<Resource> resources) {...}
...but this gets the "multiple actions" error. So I investigated how to annotate my methods and came across ActionMethodSelectorAttribute but also discovered this is only for MVC routing and not Web API.
So... without requiring a different path for POSTing multiple resources vs. one (which isn't the end of the world), what would I do to differentiate?
My thoughts along the ActionMethodSelectorAttribute were to require a query parameter specifying multiple, which I suppose is no different than a different path. So, I think I just eliminated my current need to do this, but I would still like to know if there is an equivalent ActionMethodSelectorAttribute for Web API :)
I haven't seen a replacement for that attribute (there is an IActionMethodSelector interface, but it is internal to the DLL). One option (although it seems like it might be overdoing it) is to replace the IHttpActionSelector implementation that is used.
But changing gears slightly: why not always expect an IEnumerable<Resource>? My first guess is that the collection method (the one that takes IEnumerable<Resource>) would simply loop and call the single-value (just Resource) logic anyway.
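In other words, something like this (a sketch only; SaveOne stands in for whatever your single-resource logic is):
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class ResourceController : ApiController
{
    public HttpResponseMessage Post(IEnumerable<Resource> resources)
    {
        foreach (var resource in resources)
        {
            SaveOne(resource); // the logic you would otherwise have put in Post(Resource)
        }
        return Request.CreateResponse(HttpStatusCode.Created);
    }

    private void SaveOne(Resource resource)
    {
        // persist a single resource
    }
}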
A few general questions to those who are well-versed in developing web-based applications.
Question 1:
How do you avoid the problem of "dependency carrying"? From what I understand, the first point of object retrieval should happen in your controller's action method. From there, you can use a variety of models, classes, services, and components that can require certain objects.
How do you avoid the need to pass an object to another just because an object it uses requires it? I'd like to avoid going to the database/cache to get the data again, but I also don't want to create functions that require a ton of parameters. Should the controller action be the place where you create every object that you'll eventually need for the request?
Question 2:
What data do you store in the session? My understanding is that you should generally only store things like user id, email address, name, and access permissions.
What if you have data that needs to be analyzed for every request when a user is logged in? Should you store the entire user object in the cache versus the session?
Question 3:
Do you place your data-retrieval methods in the model itself or in a separate object that gets the data and returns a model? What are the advantages to this approach?
Question 4:
If your site is driven by a user id, how do you unit test your code base? Is this why you should have all of your data-retrieval methods in a centralized place so you can override it in your unit tests?
Question 5:
Generally speaking, do you unit test your controllers? I have heard many say that it's a difficult and even a bad practice. What is your opinion of it? What exactly do you test within your controllers?
Any other tidbits of information that you'd like to share regarding best practices are welcome! I'm always willing to learn more.
How do you avoid the problem of "dependency carrying"?
Good object-oriented design of a BaseController superclass can handle a lot of the heavy lifting of instantiating commonly used objects, etc. Using composite types to share data across calls is not an uncommon practice either. E.g. creating some context object unique to your application within the controller, to share information among processes, isn't a terrible idea.
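A rough sketch of that idea (the AppRequestContext type and what goes into it are invented purely for illustration):
using System.Web.Mvc;

public class AppRequestContext
{
    public string UserName { get; set; }
    // ...whatever else is commonly needed per request
}

public abstract class BaseController : Controller
{
    private AppRequestContext appContext;

    // built once per request and shared by every action in derived controllers
    protected AppRequestContext AppContext
    {
        get { return appContext ?? (appContext = new AppRequestContext { UserName = User.Identity.Name }); }
    }
}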
What data do you store in the session?
As few things as is humanly possible.
If there is some data-intensive operation which requires a lot of overhead to process AND is required quite often by the application, it is a suitable candidate for session storage. And yes, storing information such as the user id and other personalization data is not a bad practice for session state. Generally, though, cookies are the preferred mechanism for personalization. Always remember to never, ever trust the content of cookies, i.e. properly validate what's read before trusting it.
Do you place your data-retrieval methods in the model itself or in a separate object that gets the data and returns a model?
I prefer to use the Repository pattern for my models. The model itself usually contains simple business rule validations etc while the Repository hits a Business Object for results and transformations/manipulations. There are a lot of Patterns and ORM tools out in the market and this is a heavily debated topic so it sometimes just comes down to familiarity with tools etc...
What are the advantages to this approach?
The advantage I see with the Repository pattern is that the dumber your models are, the easier they are to modify. If they are representatives of a business object (such as a web service or data table), changes to those underlying objects are sufficiently abstracted from the presentation logic that is my MVC application. If I implemented all the logic to load the model within the model itself, I would be violating separation of concerns. Again though, this is all very subjective.
If your site is driven by a user id, how do you unit test your code base?
It is highly advisable to use Dependency Injection whenever possible in code. Some IoC containers take care of this rather efficiently and, once understood, greatly improve your overall architecture and design. That being said, the user context itself should be exposed via some known interface that can then be "mocked" in your application. In your test harness you can then mock any user you wish, and all dependent objects won't know the difference because they will simply be looking at an interface.
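For example, something along these lines (the interface and names are invented to illustrate the point):
public interface IUserContext
{
    int UserId { get; }
}

// the real implementation would read the id from the authenticated request;
// in a unit test you simply supply a fake:
public class FakeUserContext : IUserContext
{
    public int UserId { get { return 42; } }
}
Any class that takes an IUserContext dependency can now be unit tested with whatever user you like, without a real request.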
Generally speaking, do you unit test your controllers?
Absolutely. Since controllers are expected to return known content types, with the proper testing tools we can mock the HttpContext information, call the action method and check that the results match our expectations. Sometimes this means looking only at HTTP status codes when the result is some massive HTML document, but in the case of a JSON response we can readily see that the action method returns the expected information for each scenario.
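As a small example of that style of test (NUnit here; AccountController, FakeUserService and the Details action are placeholders for your own types):
using System.Web.Mvc;
using NUnit.Framework;

[TestFixture]
public class AccountControllerTests
{
    [Test]
    public void Details_returns_json_for_a_known_user()
    {
        var controller = new AccountController(new FakeUserService());

        var result = controller.Details(42) as JsonResult;

        Assert.IsNotNull(result);
        Assert.IsNotNull(result.Data);
    }
}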
What exactly do you test within your controllers?
Any and all publicly declared members of your controller should be tested thoroughly.
Long question, longer answer. Hope this helps anyone and please just take this all as my own opinion. A lot of these questions are religious debates and you're always safe just practicing proper Object Oriented Design, SOLID, Interface Programming, DRY etc...
Regarding dependency explosion, the book Dependency Injection in .NET (which is excellent) explains that too many dependencies reveals that your controller is taking on too much responsibility, i.e. is violating the single responsibility principle. Some of that responsibility should be abstracted behind aggregates that perform multiple operations.
Basically, your controller should be dumb. If it needs that many dependencies to do its job, it's doing too much! It should just take user input (e.g. URLs, query strings, or POST data) and pass along that data, in the appropriate format, to your service layer.
Example, drawn from the book
We start with an OrderService with dependencies on OrderRepository, IMessageService, IBillingSystem, IInventoryManagement, and ILocationService. It's not a controller, but the same principle applies.
We notice that ILocationService and IInventoryManagement are both really implementation details of an order fulfillment algorithm (use the location service to find the closest warehouse, then manage its inventory). So we abstract them into IOrderFulfillment, and a concrete implementation LocationOrderFulfillment that uses IInventoryManagement and ILocationService. This is cool, because we have hidden some details away from our OrderService and furthermore brought to light an important domain concept: order fulfillment. We could implement this domain concept in a non-location-based way now, without having to change OrderService, since it only depends on the interface.
Next we notice that IMessageService, IBillingSystem, and our new IOrderFulfillment abstractions are all really used in the same way: they are notified about the order. So we create an INotificationService, and make MessageNotification a concrete implementation of both INotificationService and IMessageService. Similarly for BillingNotification and OrderFulfillmentNotification.
Now here's the trick: we create a new CompositeNotificationService, which implements INotificationService and delegates to various "child" INotificationService instances. The concrete instance we use to solve our original problem will delegate in particular to MessageNotification, BillingNotification, and OrderFulfillmentNotification. But if we wish to notify more systems, we don't have to go edit our controller: we just have to implement our particular CompositeNotificationService differently.
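In code, the composite might look roughly like this (the OrderAdded method name and the Order type are assumptions for illustration, not the book's exact code):
using System.Collections.Generic;

public interface INotificationService
{
    void OrderAdded(Order order);
}

public class CompositeNotificationService : INotificationService
{
    private readonly IEnumerable<INotificationService> services;

    public CompositeNotificationService(IEnumerable<INotificationService> services)
    {
        this.services = services;
    }

    public void OrderAdded(Order order)
    {
        // simply forward the notification to every child service
        foreach (var service in services)
        {
            service.OrderAdded(order);
        }
    }
}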
Our OrderService now depends only on OrderRepository and INotificationService, which is much more reasonable! It has two constructor parameters instead of 5, and most importantly, it takes on almost no responsibility for figuring out what to do.
Let's say I have an order which got updated on the UI with some values (which may or may not be valid to save).
1. How do we validate the changes made? Should the DTO which carries the order back to the service layer be validated for completeness?
2. Once the validation is complete, how does the service return the validation errors? Do we compose a ResponseDTO object and return it, like
ResponseDTO saveOrder(OrderDTO);
3. How do we update the domain entity Order? Should the DTO assembler take care of updating the order entity with the latest changes?
Imagine a typical tiered approach: ASP.NET on the web server, WCF on the application server.
When the order form is updated with data on the web and saved, the WCF service receives an OrderDTO.
Now how do we update the Order from the DTO? Do we use an assembler to update the domain object with the changes from the DTO? Something like:
class OrderDTOAssembler {
    // copy the changed values from the DTO onto the domain entity
    public void UpdateDomainObject(Order order, OrderDTO dto) { /* ... */ }
}
I will try to answer some of your questions based on my experience and how I would approach your problem.
First, I would not let the DTO conduct any validation, but plain POCO DTOs usually have properties with specific data types, so some kind of validation happens implicitly. I mean, you have to supply an integer for the street number and a string for the street name, etc.
Second, as you point out, let an OrderDTOAssembler convert from OrderDTO to Order and vice versa. This is done in the application layer.
Third, I would use visitor-pattern validation in a Domain-Driven Design, like the example. The OrderService will use an IOrderRepository to save/update the order. But using the visitor-validation approach, the OrderService will call Order.ValidatePersistance (see link in example - this is an extension method implemented in the infrastructure layer since it has "db knowledge") to check that its state is valid. If it is, then we call IOrderRepository.Save(order).
Lastly, if Order.ValidatePersistance fails, we get one or more BrokenRules messages. These should be returned to the client in a ResponseDTO, and the client can then act on the messages and take action. The problem here may be that you would need a specific ResponseOrderDTO, but maybe (just came up with this now) all your ResponseDTOs could inherit from a ResponseBaseDTO class that exposes the necessary properties for delivering BrokenRule messages.
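As a sketch of that last idea (the names are only suggestions):
using System.Collections.Generic;

public abstract class ResponseBaseDTO
{
    // messages produced by Order.ValidatePersistance (or any other validation)
    public IList<string> BrokenRules { get; set; }

    public bool Success
    {
        get { return BrokenRules == null || BrokenRules.Count == 0; }
    }
}

public class OrderResponseDTO : ResponseBaseDTO
{
    public long OrderId { get; set; }
}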
I hope you find my thoughts useful, and good luck.