using apicontroller vs odata EntitySetController [closed] - asp.net-web-api

I just started learning about ASP.NET Web API and I have several things that are still unclear to me:
Why should I use EntitySetController, which inherits from ODataController, instead of ApiController?
Why is EF frequently mentioned in the context of OData? I know it "represents" an entity, but I don't see why the two are connected. The first is in the service layer and EF is the model.
I have read and understood a lot of the literature written about the subject, yet I missed when it is the best practice.
Thanks a lot,
David

Why should I use EntitySetController, which inherits from ODataController, instead of ApiController?
I agree that it is confusing and that documentation seems to be lacking (at least when I had the same question as you). The way I put my feelings at ease was by simply reading the code. I encourage you to do the same, as it really is very short (concentrate on the EntitySetController class and its helpers); it shouldn't take more than 5-10 minutes tops (promise), and you won't have any questions after.
The short story is that it eliminates some boilerplate for the common cases (but continue reading if you want more context and an opinion).
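To make "eliminates some boilerplate" concrete, here is a minimal sketch of the same read-only endpoint written both ways, roughly as it looked with the Web API OData bits of that era (the Product/StoreContext types are mine, not from any official sample):

    using System.Data.Entity;
    using System.Linq;
    using System.Net;
    using System.Web.Http;
    using System.Web.Http.OData;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class StoreContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
    }

    // Plain ApiController: you hand-roll the lookup and the 404 yourself.
    public class ProductsController : ApiController
    {
        private readonly StoreContext _db = new StoreContext();

        public Product GetProduct(int id)
        {
            Product product = _db.Products.Find(id);
            if (product == null)
                throw new HttpResponseException(HttpStatusCode.NotFound);
            return product;
        }
    }

    // EntitySetController: override two members and the base class wires up
    // OData routing, key parsing, 404s and the OData response formats for you.
    public class ProductsODataController : EntitySetController<Product, int>
    {
        private readonly StoreContext _db = new StoreContext();

        [Queryable] // lets clients use $filter, $orderby, $top, ...
        public override IQueryable<Product> Get()
        {
            return _db.Products;
        }

        protected override Product GetEntityByKey(int key)
        {
            return _db.Products.Find(key); // null => 404, handled by the base class
        }
    }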
Why is EF frequently mentioned in the context of OData? I know it "represents" an entity, but I don't see why the two are connected. The first is in the service layer and EF is the model.
This one confused me endlessly too, until I gave up and looked at the origins of OData: WCF Data Services (previously ADO.NET Data Services) and the OData specifications (the hint was that OData Core protocol versions are still specified with a header called "DataServicesVersion"). There you can find that OData uses EDM, the Entity Data Model, which is the same model specification used by EF, serialized in the same format EF uses: CSDL (Conceptual Schema Definition Language). This is no coincidence: WCF Data Services has prime support for EF, and although it doesn't require it, one could say that its design was based on it.
Note that WCF Data Services was, until recently, the flagship implementation of OData.
Something that is potentially of high interest (at least it was to me): When using EF with ASP.NET Web API and OData extensions, there is no way (as far as I know) to share the model between the two.
You may skip to the next bullet point for the next answer if you didn't find this interesting.
For example, when using EF in a Code-First setup, you will typically build your model based largely on code conventions and the EF System.Data.Entity.DbModelBuilder ("fluent API"). You will then use the System.Web.Http.OData.Builder.ODataConventionModelBuilder, which will do pretty much exactly the same thing to construct the OData model, and arrive at pretty much exactly the same result. In the past, I managed to dig up some random notes from a random meeting of either the EF team or the Web API team which mentioned this briefly, and as far as I can remember (I can't find this document anymore), there were no plans to improve the situation. Thus, there are now two different and incompatible implementations of EDM.
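Here is a hedged sketch of that duplication, reusing the Product entity from the sketch above: the same CLR type is inspected twice, by two unrelated convention engines, producing two separate EDM instances:

    using System.Data.Entity;                // EF's DbModelBuilder
    using System.Web.Http.OData.Builder;     // the OData extensions' model builder
    using Microsoft.Data.Edm;                // EdmLib's EDM types

    public static class TwoModelsOneType
    {
        // EF inspects Product with its conventions plus this fluent configuration...
        public static void ConfigureEf(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Product>().HasKey(p => p.Id);
        }

        // ...and the OData extensions inspect the very same CLR type with their
        // own conventions, producing a second, EdmLib-based model.
        public static IEdmModel BuildODataModel()
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Product>("Products");
            return builder.GetEdmModel();
        }
    }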
I admit I didn't take the time to go through the code extensively to verify this properly, but I know that Web API + OData extensions depend on EdmLib (which provides Microsoft.Data.Edm, initially developed for WCF Data Services), while EF does not, and instead uses its own System.Data.Entity.Edm implementation. I also know that their convention-based model builders are different, as explained above. It becomes ridiculous when you use EF in a DB-First setup: you get a serialized EDM model in CSDL format in the EDMX file, and the OData extensions go on and generate their own serialized CSDL at runtime from the CLR code (using separate code conventions), code which was itself generated by EF from the initial CSDL via T4 templates. Head spinning yet?
Update: This was largely improved a little under two weeks ago (July 19th), sorry I missed that. (Thanks RaghuRam Nadiminti.) I haven't reviewed the patch, but from the sample code it seems that the way it works is that one must serialize the model into CSDL using the EF EDMX serializer, then deserialize it using the EdmLib parser for the OData extensions to use. It still feels a little bit like a hack in EF Code-First setups (at least the CLR code is only analyzed once, but I would prefer it if both components used the same in-memory model to begin with). A shortcut can probably be taken when using Model-First or Database-First scenarios, however, by deserializing the EDMX file generated by VS directly. In this last scenario it actually feels less like a hack, but again, a single model would be best. I don't know whether EF could switch to using EdmLib or whether EdmLib could switch to EF's EDM model; both projects are really strong now, and the blockers are probably not just technical issues. The ASP.NET team unfortunately can't do much about it, AFAICT.
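If I read the sample code right, the workaround amounts to something like this (a sketch with my own glue method; EdmxWriter and EdmxReader are the actual EF and EdmLib entry points):

    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;  // EdmxWriter (EF)
    using System.IO;
    using System.Xml;
    using Microsoft.Data.Edm;                 // IEdmModel (EdmLib)
    using Microsoft.Data.Edm.Csdl;            // EdmxReader (EdmLib)

    public static class EdmBridge
    {
        // Serialize the EF model to EDMX/CSDL, then re-parse it with EdmLib so
        // the OData extensions can consume it: two models, one source of truth.
        public static IEdmModel GetEdmModel(DbContext context)
        {
            using (var stream = new MemoryStream())
            {
                using (var writer = XmlWriter.Create(stream))
                {
                    EdmxWriter.WriteEdmx(context, writer);
                }
                stream.Position = 0;
                using (var reader = XmlReader.Create(stream))
                {
                    return EdmxReader.Parse(reader);
                }
            }
        }
    }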
Update: Randomly stumbled upon those meeting notes again. They were indeed from the EF team and indicate that they don't plan to work on EdmLib.
However, I now believe this is all a good thing. The reason is that if they close all the gaps, remove all the boilerplate, and make everything right, they'll essentially end up where WCF Data Services is: a fully integrated solution where the programmer injects code into the pipeline via "Interceptors". To me, the only reason to go there is open source requirements, but even then, I think it's more reasonable to try and advocate for an open-source WCF-DS instead.
The question now becomes: "But what is Web API + OData extensions good for, then?". Well, it's a good fit when you really do want two different models for your data store and your web service. It's a good fit when the "interceptor" design is not flexible enough for you to translate between the two models.
Update: As of March 27th 2014, it's official, they are going to try to close those gaps, deprecating WCF Data Services in the process. Very early talks mention a "handler" to do this, most likely an ASP.NET HTTP handler (see comments on the announcement). It looks like very little planning has gone into this, as they're still brainstorming ideas to make ASP.NET Web API fill the use-cases of WCF Data Services. I mentioned those use-cases above, in a comment to the announcement and in this thread (started a few days before the announcement).
Many other people expressed close to identical concerns (again, see linked discussions), so it's good to see that I haven't been dreaming all this up.
There is some disbelief that ASP.NET Web API can be turned into something useful for the Data Services use-cases in a reasonable time, so some people suggested that MSFT reconsider their decision. The question of whether to use ASP.NET for open source requirements is also moot: WCF Data Services will soon be open-sourced if all goes "well", though not thanks to any advocacy efforts. (It's just a source dump, it's unknown if anyone would maintain it at this point.)
From what I can gather, everything points to a budget cut, and some people talk about it being the result of a company-wide "refocusing", though all of this should be taken with a grain of salt.
These things aside, there is now a possibility that, with time, a new solution emerges - one even better than WCF Data Services or Web API when it comes to OData APIs. Although it looks a bit chaotic right now, the MSFT OData team did receive quite a bit of feedback from its customers relatively early, so there's hope (especially if the future solution, should there be one, is itself open-sourced). The transition is probably going to be painful, but be sure to watch discussions around this in the future.
I'm not sure I'll take the time to update this post anymore; I just wanted to highlight that things regarding Web API and Data Services are about to change a lot, since this answer is still being upvoted from time to time.
Update: RESTier (announcement) seems to be the result.
And finally, my (personal) opinion: OData, despite being technically a RESTful HTTP-based protocol, is very, very, very data-oriented. This is absolutely fine (we can define a lot of different types of interfaces with HTTP) and I, for one, find all the ServiceStack vs OData debates irrelevant (I believe they operate at different layers in our current, common architectures). What I find worrying is people trying to make an OData-based API act like a behavior-centric (or "process-oriented", or "ServiceStack"-like) API. To me, OData URI conventions and resource representation formats (Atom and JSON) together replace SQL, WCF Data Services "Query Interceptors" and "Change Interceptors" replace DBMS triggers, and OData Actions replace DBMS stored procedures. With this perspective, you immediately see that if the domain logic you need to put behind your OData API is too complex or not very data-oriented, you're gonna end up with convoluted "Actions" that don't respect REST principles, and entities that don't feel right. If you treat your OData API as a pure data layer, you're fine. You can stack a service on top of it just like you would put a "service layer" on top of a SQL database.
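To illustrate the last leg of that analogy, declaring an OData Action (the API's "stored procedure") with the Web API model builder looks roughly like this, reusing the Product type from the earlier sketches; the "Rate" action and its parameter are invented for illustration:

    using System.Web.Http.OData.Builder;
    using Microsoft.Data.Edm;

    public static class ActionModelConfig
    {
        public static IEdmModel Build()
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Product>("Products");

            // An OData Action plays the role a stored procedure plays in a DBMS:
            // a named, non-CRUD operation exposed alongside the data itself.
            ActionConfiguration rate = builder.Entity<Product>().Action("Rate");
            rate.Parameter<int>("Rating");

            return builder.GetEdmModel();
        }
    }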
And thus, I'm not sure Web API + OData extensions is that great anymore. If you need fundamentally different models, it's likely that your application isn't too data-oriented (except if you're simply merging models from various sources or something), and OData is thus not a good fit. This is a sign you should at least consider Web API alone (with either SQL or OData below) or something like ServiceStack.
For better or for worse, JavaScript clients can't talk SQL to a remote server. Maybe in the future via browser APIs, or maybe via variants of WebSockets, but right now, OData is the closest thing to a remote data layer anyone is going to get for rich JS clients that have thin or no server-side logic. OData is used by other types of clients, of course, but I would say that it's especially useful on the client-side web platform, where things like Breeze.js or JayData are to OData what the Entity Framework is to SQL.
I have read and understood a lot of the literature written about the subject, yet I missed when it is the best practice.
Don't worry, I looked around, but I don't think anybody really knows what they're doing. Just pretend like everybody else while you make sense of this mess.

Use EntitySetController if you want to create an OData endpoint. Use ApiController if you want to return generic JSON or XML, or some other format (e.g., using a custom formatter).
In Web API, EF and OData are not necessarily connected. You can write an OData endpoint that does not use EF. A lot of the Web API tutorials use EF, because EF code-first is relatively easy to show in a tutorial. :-)
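For what it's worth, here is a minimal sketch of an OData endpoint with no EF anywhere in sight (entity type and data invented), just to underline that last point:

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Http.OData;

    public class Color
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // An OData entity set served straight from memory; EF never enters the picture.
    public class ColorsController : EntitySetController<Color, int>
    {
        private static readonly List<Color> Colors = new List<Color>
        {
            new Color { Id = 1, Name = "Red" },
            new Color { Id = 2, Name = "Green" },
        };

        public override IQueryable<Color> Get()
        {
            return Colors.AsQueryable();
        }

        protected override Color GetEntityByKey(int key)
        {
            return Colors.FirstOrDefault(c => c.Id == key);
        }
    }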

Related

Spring HATEOAS: Practicable for a microservice architecture?

I know this question was already asked but I could not find a satisfying answer.
I started to dive deeper into building a real RESTful API and I like its constraint of using links for decoupling. So I built my first service (with Java/Spring) and it works well (although I struggled a bit with finding the right format, but that's another question). After this first step I thought about my real-world use case: microservices. Highly decoupled individual services. So I revisited my previous scenario and came upon some problems and doubts.
SCENARIO:
My setup consists of a reverse proxy (Traefik, which acts as service discovery and API gateway) and two microservices. In addition, there is an OpenID Connect security layer. My services are a Player service and a Team service.
So after auth I have an access token with the userId, and I am able to call player/userId to get the player information and teams?playerId=userId to get all the teams of the player.
In both responses, I would link to the opposite service: player/userId would link to teams?playerId=userId and vice versa.
QUESTION:
I haven't found a solution besides linking via a hardcoded URL. But this comes with so many downsides that I can't imagine it is the solution used in real-world applications. I mean, just imagine your API is a bit more advanced and you have to link to 10 resources. If something changes, you have to refactor and redeploy them all.
Besides the synchronization problem, how do you handle state in such a case? I mean, REST is all about state transfer. So I won't offer the link from the player to the Team service if the player is in no team. Of course, I can add the team IDs as an attribute to the player to decide whether to include the link or not. But this again increases coupling between the services.
The more I dive in, the more obstacles I find, and I'm about to just stay with my Spring REST Docs and neglect the core of REST, which is a pity to me.
Practicable for a microservice architecture?
Fielding, 2000
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
Fielding 2008
REST is intended for long-lived network-based applications that span multiple organizations.
It is not immediately clear to me that "microservices" are going to fall into the sweet spot of "the web". We're not, as a rule, trying to communicate with a microservice that is controlled by another company; we often don't get a lot of benefit out of caching, or code on demand, or the other REST architectural constraints. How important is it to us that we can use general-purpose components to exchange information between different microservices within our solution? And so on.
If something changes, you have to refactor and redeploy them all.
Yes; and if that's going to be a problem for us, then we need to invest more work up front to define a stable interface between the two. (The fact that we are using "links" isn't special in that regard - if these two things are going to talk to each other, then they are going to need to speak a common language; if that common language needs to evolve over time (likely) then you need to build those capabilities into it).
If you want change over time, then you have to plan for it.
If you want backwards/forwards compatibility, then you have to plan for it.
Your identifiers don't need to be static - there are lots of possible ways of deferring the definition of an identifier; the most obvious being that you can use another identifier to look up the identifier you want, or the formula for calculating it, or whatever.
Think about how Google works - the links they use change all the time, but it doesn't matter, because the protocol (refresh your bookmarked search form, enter your text in "the" one field, click the button) hasn't changed in 20 years. The interface is stable (even though the underlying spellings of the identifiers are not) and that's enough.

Moving from JSF/Spring to REST API + Angular

There is a project that is built using JSF with Spring Integration.
See https://www.tutorialspoint.com/jsf/jsf_spring_integration.htm to get an idea.
JSP is used for the HTML templates. Managed beans (part of JSF) make use of Spring beans as managed properties, which in turn drive business logic. The goal is to rip apart this project and split it into a RESTful service and an Angular front end.
What is the best way to do this without rewriting everything? Which components can I get rid of, and which can be re-used? If I use Spring Boot for building the REST API, can I re-use the Spring beans?
Edit: I am new to most of these technologies.
Exposing your domain model through REST should be relatively straightforward using Spring/JPA, whatever. You should learn about DTOs, especially as they relate to "lazy initialization" problems under Hibernate/JPA/Spring Data, etc.
Secondly, understand the concept of views into the domain model. E.g., shipping looks at the database differently than marketing does. Same database, different "facades" or business layers with different sets of DTOs.
Conceptually, reproducing a JSF front end in Angular is something that is both "the same thing" and "completely different" at the same time. The key difference, IMHO, will be the JavaScript concepts and paradigms underlying Angular/React/Vue or whatever you want to use on the Front End.
Consider that an AngularJS/React/Vue front end might be better off running on top of Node.js in a separate container or server, and might have different databases that it accesses on its own, such as loyalty points or currency conversion, etc. Don't be afraid to let the front-end folks "be" the application instead of the back-end folks. On the back end, try not to lose information. For example, if a customer adds 3 items, then changes 1, then places the order, that's 3 separate pieces of information, not 1 order. This is important for business analytics and customer service, which are business-facing services as opposed to client-facing services.
As a Java developer, I tend to feel Angular/JS developers do a completely different and non-overlapping job from mine. I feel the same way about HTML/CSS folks. As such, I don't recommend you try being both; you will stretch yourself too thin. However, a good working knowledge on a smaller project, such as you are suggesting, is certainly useful.
Welcome to SO. Your post will probably be closed/ignored for being too broad, etc. Very specific questions and answers are what this site is about. GL.

Organizing an application in layers

I'm developing a part of an application, named A. The application I want to plug my DLL into, called application B, is in VB6, and my code is in VB.NET. (Application B will in time be converted to VB.NET.) My main question is: what is the best way to organize my code (application A)?
I want to split application A into layers (Service, Business, Data Access), so it will be easy to integrate application A into B when B is converted to VB.NET. I also want to learn about topics like layered architecture, patterns, dependency inversion, Entity Framework and so on. Although my application (A) is small, I want to organize my code in the best way.
The application I'm working on (A) uses web services for authenticating users and for sending a schema to an organization. The user selects a menu point in application B, and then some functions in my application A are called.
In application A I have a schema class auto-generated from an XSD schema. I fill this schema object with data and serialize the object via a memory stream (is it a good solution to use a memory stream? I don't have to save the data yet), wrap the XML inside a CDATA block, and assign the CDATA block, as a string, to a string property of a web service.
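Roughly like this sketch (written in C# for brevity, with a placeholder for the generated class; my real code is VB.NET):

    using System.IO;
    using System.Text;
    using System.Xml.Serialization;

    // Placeholder for the class auto-generated from the XSD.
    public class SchemaDocument
    {
        public string Payload { get; set; }
    }

    public static class SchemaHelper
    {
        // Serialize the schema object to a string via a MemoryStream (no file
        // needed yet), then wrap the XML in a CDATA block for the web service's
        // string property.
        public static string ToCDataBlock(SchemaDocument document)
        {
            var serializer = new XmlSerializer(typeof(SchemaDocument));
            using (var stream = new MemoryStream())
            {
                serializer.Serialize(stream, document);
                string xml = Encoding.UTF8.GetString(stream.ToArray());
                return "<![CDATA[" + xml + "]]>";
            }
        }
    }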
I am also using Entity framework for database communication (to learn how this is done for the future work with application B). I have two entities in my .edmx, User and Payer.
I also want to use the repository pattern (is this a good choice?) to make a façade between the DAL and the BLL.
My application has functions for generating the schema (filling the schema object with data), GetSchemaContent, GetSchemaInformation, GenerateCDATABlock, WriteToTextFile, MemoryStreamToString, EncryptData, and some functions that use web services, like SendSchema, AuthenticateUser, GetAvailableServices and so on.
I'm not sure where I should put it all.
I think I have to have some interfaces like IRepository, ISchema (a contract for the auto-generated schema class - how can I do this?), ICryptoManager, IFileManager and so on, and classes that implement the interfaces.
My DAL will be the Entity Framework, and I want a repository façade in my BLL (IRepository, UserRepository, PayerRepository) and classes for management (like the ones I have mentioned above) holding functions like WriteToFile, EncryptData, and so on.
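Something like this sketch is what I picture (in C# for brevity; User is one of my entities, and MyEntities stands in for my generated context):

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    // Placeholder for the User entity generated from the .edmx.
    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class MyEntities : DbContext
    {
        public DbSet<User> Users { get; set; }
    }

    public interface IRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> GetAll();
        void Add(T entity);
        void Remove(T entity);
    }

    // EF-backed implementation behind the facade; the BLL sees only IRepository<T>.
    public class UserRepository : IRepository<User>
    {
        private readonly MyEntities _context;

        public UserRepository(MyEntities context) { _context = context; }

        public User GetById(int id) { return _context.Users.Find(id); }
        public IEnumerable<User> GetAll() { return _context.Users.ToList(); }
        public void Add(User entity) { _context.Users.Add(entity); }
        public void Remove(User entity) { _context.Users.Remove(entity); }
    }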
Is this a good solution (do I need a service layer? all my GUI is in application B), and how can I organize my layers, interfaces, classes and functions in Visual Studio?
Thanks in advance.
This is one heck of a question, thought I might try to chip away at a few parts for you so there's less for the next guy to answer...
For application B (VB6) to call application/assemblies A, I'm going to assume you're exposing the relevant parts of App A as COM components, using ComVisibleAttribute and similar, much as described in this article. I only know of one other way (WCF over COM), but I've never tried it myself.
Splitting your solution(s) into various tiers and layers is a very subjective/debatable topic, and will always come down to a combination of personal preference, business requirements, time available, etc. However, regardless of the depth of your tiers and layers, it is good to understand the how and the why.
To get you started, here's a couple articles:
Wikipedia's general overview on "Multitier Architectures"
MSDN's very own "Building an N-Tier Application in .Net"
Inversion of Control is also a very good pattern to get into right now; with ever-increasing (and brilliant!) resources becoming available on the .NET platform, it's definitely worth investing some time to learn.
Although I haven't explored the full extent of IoC, I do love dependency injection (a type of IoC, if I understand correctly, though people seem to muddle the IoC/DI terms quite a lot). My personal preference for DI right now is the open-source Ninject project, which has plenty of resources online and a reasonable wiki section talking you through the various aspects.
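To give you a quick taste (a sketch only; ICryptoManager is borrowed from your question, and the wiring is illustrative rather than canonical):

    using Ninject;
    using Ninject.Modules;

    public interface ICryptoManager
    {
        string Encrypt(string data);
    }

    public class CryptoManager : ICryptoManager
    {
        public string Encrypt(string data) { return data; /* placeholder */ }
    }

    // Declare your bindings in a module...
    public class AppModule : NinjectModule
    {
        public override void Load()
        {
            Bind<ICryptoManager>().To<CryptoManager>();
            // Bind<IRepository<User>>().To<UserRepository>(); and so on.
        }
    }

    // ...then let the kernel resolve whole object graphs at the composition root.
    public static class CompositionRoot
    {
        public static void Main()
        {
            using (var kernel = new StandardKernel(new AppModule()))
            {
                ICryptoManager crypto = kernel.Get<ICryptoManager>();
            }
        }
    }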
There are many more takes on DI and IoC, so I don't want to even attempt to provide you a comprehensive list for fear of being flamed for missing out somebody's favourite. Just have a search, see which you like the look of and have a play with it. Make sure to try a couple if you have the time.
Again, the Repository Pattern - often complemented well by the Unit of Work Pattern are also great topics to mull over for hours. I've seen a lot of good examples out on the inter-webs, and as many bad examples. My only advice here is to try it for yourself... see what works for you, develop a version of the patterns that suits you best and try to keep things consistent for maintainability.
For organising all these tiers and layers in VS, I recommend trying to keep all your independent tiers/layers in their own Solution Folders (r-click the Solution, Add New Solution Folder), or in some cases (larger projects) their own solutions, preferably with an automated build service to update dependent projects with up-to-date assemblies as required. Again, a broad subject and totally down to personal preference. Just keep an eye out for potential circular references when designing your application.
So, I'm afraid that doesn't even slightly answer your question, but hopefully provides you with some resources to check out and a few hours of reading.
Good luck!

web development - MVC and its limitations

MVC sets up clear distinction between Model, View and Controller.
For the model, nowadays, web frameworks provide the ability to map the model directly to database entities (ORM), which, IMHO, ends up causing performance issues at runtime due to direct database I/O.
The thing is, if that's really the case, why is ORM for the model so popular, and why does every web framework want to support it, natively or not?
For a web site with a huge amount of traffic, it definitely won't work. But what's the workaround? Connecting directly to the database is definitely not a wise solution here.
What's your question?
Is it a good idea to use direct db access from webpages?
A: No.
Is it a good idea to use ORM's?
A: Debatable: see How can I design a Java web application without an ORM and without embedded SQL
Is it a good idea to use MVC model?
A: Yes - it has nothing to do with "Direct" database access - it's about separating your application logic from your model and your display. (Put simply).
And the rationale for not putting database logic inside webpages has nothing to do with performance - it's about security, maintainability, etc. Calling a stored procedure from a webpage is likely to be MORE performant than using an ORM, but it's bad because the performance gain is negligible and the cons are significant.
As to workaround: if you mean how do you hook up a database to a web application...?
The simplest way is to use something like Entity Framework or LINQ to SQL with your Model - there are plenty of examples of this in tutorials on the web.
A better method, IMO, is to have a separate services layer (which may be WCF-based), and have all the database access inside that, with DTOs transferring the data to your web application, which has its own ViewModel.
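A bare-bones illustration of that shape, with all names invented (this is the entity -> DTO -> ViewModel chain, not any particular framework's API):

    // Entity: what the ORM maps to the database (stays inside the services layer).
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string PasswordHash { get; set; } // never leaves the services layer
    }

    // DTO: what crosses the wire to the web application.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // ViewModel: what the web application shapes for its views.
    public class CustomerViewModel
    {
        public string DisplayName { get; set; }
    }

    public static class CustomerMappings
    {
        public static CustomerDto ToDto(Customer c)
        {
            return new CustomerDto { Id = c.Id, Name = c.Name };
        }

        public static CustomerViewModel ToViewModel(CustomerDto dto)
        {
            return new CustomerViewModel { DisplayName = dto.Name };
        }
    }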
MVC is not about ORM but about separation of display logic and business logic. There is no reason your exposed model needs to be identical to your database model, and there are many reasons to ensure that the exposed model closely matches what is to be displayed.
The other part of the solution, to scale well, would be to implement caching in the controller and to be able to distribute load across several instances.
I think @BonyT has given a good answer (and I've voted for it :) ), I'd just add that:
"web frameworks provide the ability to map the model directly to database entities (ORM), which, IMHO, ends up causing performance issues at runtime due to direct database I/O"
Even if this is true, using an ORM solves a lot of problems by keeping the model easy to update and to translate back and forth between the database and the application. Solving a performance hit by buying extra web servers or cloud instances is much cheaper than having to buy extra developers or extra hours in development to solve things other people have already written ORMs to do for you.

Where is MVC a bad thing?

I've been reading through a couple of questions on here and various articles on MVC, and I can see how it can even be applied to event-intensive GUI applications like a paint app.
Can anyone cite a situation where MVC might be a bad thing and its use ill-advised?
EDIT: I'm specifically talking about GUI applications here!
I tried MVC in my network kernel driver. The patch was rejected.
I think you're looking at it kind of backwards. The point is not to see where you can apply a pattern like MVC, the point is to learn the patterns and recognize when the problem you are trying to solve can naturally be solved by applying the pattern. So if your problem space can be naturally divided into model, view and controller then it is a good candidate for MVC. If you can't easily see which parts of your design fall into the three categories, it may not be the appropriate pattern.
MVC makes sense for web applications.
In web applications, you process some data (on SA: writing questions, adding comments, changing user info), you have state (logged in user), you don't have many different pages, but a lot of different content to fit into those pages. One Question page vs. a million questions.
For making a CMS, for example, MVC is useless. You don't have any models, no controllers, just pages of text with decorations and menus. The problem is no longer processing data - the problem now is serving that text content properly.
Though a CMS admin would build on top of MVC just fine; it's just the user-facing part that wouldn't.
For web services, you'd better use REST which, I believe, is a distinct paradigm.
WebDAV application wouldn't benefit greatly from MVC, either.
The caveat on Ruby for web programming is that Rails is better suited for building web applications. I've seen many projects attempt to create a WebDAV server or a content management system (CMS) with Rails and fail miserably. While you can do a CMS in Rails, there are much more efficient technologies for the task, such as Drupal and Django. In fact, I'd say if you're looking at a Java portal development effort, you should evaluate Drupal and Django for the task instead.
Anything where you want to drop in 3rd party components will make it tough to work in the MVC pattern. A good example of this is a CMS.
Each component you get will have its "own" controller objects, and you won't be able to share "control" of model -> UI passing.
I don't necessarily know that MVC is ever really a bad idea for a GUI app. But there are alternatives that are arguably better (and also arguably worse depending on whose opinion you're asking). The most common is MVP. See here for an explanation: Everything You Wanted To Know About MVC and MVP But Were Afraid To Ask.
Although I suppose it might be a bad idea to use MVC if you're using a framework or otherwise interacting with software that wasn't designed with MVC in mind.
In other words, it's a lot like comparing programming languages. There's usually not many tasks that one can say that one is better than the other for. It usually boils down to programmer preference, availability of libraries, and the team's experience.
MVC shouldn't be used in applications where performance is critical. I don't know if this still applies with the increase in computing power, but one example is a call center application. If you can save 0.5 seconds per call entering and updating information, those savings add up over time. To get the last bit of performance out of your app, you should use a desktop app instead of a web app and have it talk directly to the database.
When is it a bad thing? Wherever there is another code structure that would better fit your project.
There are countless projects where MVC wouldn't "fit", but I don't see how a list of them would be of any benefit.
If MVC fits, use it; if not, use something else.
MVC and ORM are a joke... they are only appropriate when your app is not a database app, or when you want to keep the app database-agnostic. If you're using an RDBMS that supports stored procedures, then that's the only way to go. Stored procs are the preferred approach for experienced application developers. MVC and ORM are only promoted by companies trying to sell products or services related to those technologies (e.g. Microsoft trying to sell VS). Stop wasting your time learning Java and C#; focus instead on what really matters, JavaScript and SQL.
