Exposing webapi to third party - asp.net-web-api

How would a third-party client who has no knowledge of my DTOs (the objects the web service returns or takes as parameters) use my API methods? Do I need to expose my DTOs somehow?

Documentation is your friend here. Publish some docs showing what the DTOs should be. If you know your clients, you could create packages that contain the proper DTOs. We did this for our .NET clients: we published a portable class library to NuGet so any of these clients could download the package and use them. However, we have since stopped, because this may overwhelm the client app developer. For example, say you have 100 DTOs, but a simple client app really only needs 5 of them. By including the package, there are now so many options that it might be confusing to know which DTOs to actually use, and this can lead to the client app doing more than it should. We like to keep our client apps lean by only using the DTOs they need. Yes, there is a little duplication of DTO definitions.
On the flip side, if you went the package route, you could essentially build up an SDK for using your API. You'll see Microsoft do this a lot to tame the complexity of areas such as Azure Storage or Azure Service Bus. All of these have backing REST APIs, but the SDK ensures the API is used as designed and possibly in the most optimized way.

Related

Should Microservices be reusable?

By "reusable" I do NOT mean sharing domain-specific models.
I mean: should a microservice created for one application be reusable in another application?
Is it sufficient if they are reusable within an application?
What is the best way to decouple microservices?
From my point of view, as soon as a microservice calls another microservice, they are tightly coupled, meaning it cannot easily (without modifications) be extracted and put into another microservice application that does not have the same service it refers to.
To decouple them, in my opinion, there are the following ways:
1. Microservice A talks to microservice B through a standard contract, e.g. a specific protocol (see the sketch after the example below).
2. Another microservice C acts as a gateway: it asks microservice B for the data and passes it as input to microservice A.
A concrete example for option 2:
Coupled:
client -> API Gateway -> UserProfileService -> Authorization Service
Decoupled:
client -> API Gateway -> Authorization Service -> API Gateway -> UserProfileService
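And a minimal sketch of option 1's standard contract, in Java for illustration (the AuthorizationClient interface, the /authorizations path, and all other names here are hypothetical): microservice A depends only on a contract, so extracting it into another application means wiring in that application's implementation rather than modifying A.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The standard contract: any application can supply its own implementation.
interface AuthorizationClient {
    boolean isAuthorized(String userId, String action);
}

// One implementation that talks to a concrete authorization service over HTTP,
// addressed through the gateway rather than a hard-wired peer.
class HttpAuthorizationClient implements AuthorizationClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    HttpAuthorizationClient(String baseUrl) { this.baseUrl = baseUrl; }

    @Override
    public boolean isAuthorized(String userId, String action) {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/authorizations/" + userId + "?action=" + action))
                    .GET()
                    .build();
            return http.send(request, HttpResponse.BodyHandlers.ofString())
                       .statusCode() == 200;
        } catch (Exception e) {
            throw new IllegalStateException("authorization check failed", e);
        }
    }
}

// Microservice A: coupled only to the contract, which is injected.
class UserProfileService {
    private final AuthorizationClient authorization;

    UserProfileService(AuthorizationClient authorization) {
        this.authorization = authorization;
    }

    String loadProfile(String userId) {
        if (!authorization.isAuthorized(userId, "profile:read")) {
            throw new SecurityException("not authorized");
        }
        return "profile of " + userId; // placeholder for the real lookup
    }
}
```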
Am I right in assuming that this all boils down to the goal of the microservice, and that there is no right or wrong?
Are there any other strategies I'm missing to decouple a microservice?
I think the responses you're likely to get will represent opinions more than answers, but I'll go ahead and give mine!
The literature for microservices has long said, "decouple, decouple, decouple", but frankly I don't find this to be reality. When someone has created a useful API that would empower the functions of your own (auth, payments, and obviously databases come to mind), is it wrong to suggest that those need to be run alongside yours? Most people don't go through complicated, logic-filled gateways in order to make payments via Stripe or send text messages via Twilio, so why should privately hosted APIs be any different?
It is great to design your own service to be a reusable, easily consumable/deployable component. That shouldn't mean it can't have dependencies, but rather that we should be mindful of the bloat those dependencies introduce. This mindfulness is something devs should practice whenever they introduce dependencies, regardless of whether they are app packages or dependent services/APIs.
Disclosure: I build and run a framework/platform, Architect.io, to help cloud-native teams collaborate and build upon each other's services. I've seen first-hand how companies like Facebook use similar tactics to enable service re-use and consumption, and wanted to build a microservices dependency resolver for the general public.
It completely depends on what microservice you are building. For example, say you are building an email notification service: that can be reused by different applications. Now say you are building a recommendation system: it's very specific to a single application, and it hardly makes sense to design it so that it can be reused in different applications.
Choose according to the context. There is no right way. It all depends on the application.

Do Laravel and Vue always use RESTful APIs to communicate?

After coding for a couple of years, I have integrated many different software services into the applications I was working on, using the API documentation the software owners provided. I thought that was all I needed to know about APIs: that they're just a way to make two pieces of software communicate with each other.
But now I've been given the task of creating an application. I won't go into detail; let's just say it needs to implement CRUD operations and should use Vue on the front end and Laravel on the back end. The task description mentions that I should use a REST API for triggering those operations. And that's the part that confuses me!
I have never created an application from scratch; I've only worked on already stable applications, fixing bugs and implementing new functionalities (and I guess that's what the job looks like for most people who work in big companies today). That's why I thought those two frameworks (Vue and Laravel) came with REST APIs already implemented, since they can communicate with each other.
Why am I specifically asked to use a REST API to trigger those operations? Is there any way other than using an API to make the front end communicate with the back end (given that I'm already using these frameworks)? If not, do they want me to create my own REST API for communication rather than use one already provided by the frameworks? I'm confused: why did they mention using a REST API as if it weren't the default option, something that shouldn't even be in question, just the expected behavior?
why did they mention using a REST API as if it weren't the default option
For many years, offering a backend API for JavaScript frontend consumption was not the default option. Traditional "round trip" applications use a form that submits to the server with a full page refresh, and I'd hazard a guess that most web applications in production today still work like that.
With the advent of Vue, React, Angular, etc., there is an expectation that fetching and sending data is done via APIs in AJAX operations. This gives applications a more seamless feel, and they're faster, since only a relatively small amount of data needs to be sent or received.
In small Laravel/Vue applications, the frontend and backend are often in the same repo, and are deployed together as a single unit. However, as the size and complexity of an application increases, there is value in splitting up these pieces into microservices, which can be deployed separately, without tricky system dependencies complicating the deployment pipeline and sign-off process. Using an API lends itself well to that approach.
Indeed, as the backend grows, the API is often not one service but several, split by process area (e.g. user, sign-up, checkout, dashboard, etc.).
Do Laravel and Vue always use ... APIs to communicate?
So, to answer your main question, you don't have to use APIs/AJAX with Vue and Laravel. You can still use standard HTTP forms and redraw the whole screen if you want.
Do Laravel and Vue always use RESTful APIs to communicate? [my emphasis]
Another way of interpreting the question is that perhaps you have received instructions from someone who was differentiating a REST API from a different kind of API. On the web, GraphQL is becoming more popular. Server-to-server, SOAP (XML) used to be very common, and is still in use in many enterprises.
First of all, the gap is not going to be filled "ASAP", because it requires domain knowledge that you are missing. And yes, a RESTful API is the best way unless you want multi-dimensional communication across multiple platforms.

What is the simplest way to convert a WCF Data Services OData provider to Web API?

I'm currently looking at the feasibility of converting our current WCF Data Services OData provider to Web API OData.
I'm just a little confused by the OData implementation for Web API. With WCF Data Services, it sits on top of our ADO.NET entity model, which exposes a bunch of tables from the SQL Server backend; i.e. you give WCFDS the ADO model and then you have access to all of the tables through the standard OData syntax.
With Web API, from all my reading so far, do we create a controller or separate actions for every table/object that we want to expose? Am I missing something? Is there a way for the OData Web API controller to just expose the entire model from the ADO data model? Having to create an action for every table would be a mess and overkill.
Currently, if we need to add a table, we just map it in the EDMX and WCFDS automatically exposes it, since it's mapped to the entire context of the model.
Generating the model(s)
You can:
Use the convention model builder from ASP.NET Web API. This generates a different model than what EF's own convention model builder produces: an EdmLib IEdmModel. See this question though if you're using model-first or database-first. This method seems really backwards, and it is, but it mostly works.
Serialize the EF model and rebuild it as an IEdmModel (see this question). Again, this is really inefficient. If you're using model-first or database-first, you'll want to just deserialize the EDMX file to build the IEdmModel. It still produces a different model internally, but at least CSDL is a more stable format than CLR code conventions, so you'll probably have fewer surprises than you'd get when using two different convention-based model builders.
The reason for this is that ASP.NET Web API OData extensions use EdmLib, while EF uses its own code, and there is no plan to make them work together. Maybe you'll find this rant useful if you're curious.
Working on the API
Once you've generated the model from a single source (so that you can work on your model from a single place), you'll indeed have to create a controller per entity, basically. The point of Web API is not to build things automagically, but to offer flexibility to the developer. The EntitySetController helps reduce redundancy, but it won't offer everything out of the box.
Taking a step back
In the rant mentioned above, I also talk about the difference between a service-layer API and a data-layer API. ASP.NET Web API is better suited for services, while OData makes services awkward. On the other hand, OData makes data access a breeze (essentially being like a RESTful SQL) and by virtue of being directly attached to the data model, can automate a lot of things as you saw with WCF Data Services. ASP.NET Web API with OData extensions sits in the middle, and its value is not universally agreed upon (using certain bits of OData URI syntax on service APIs is certainly useful though).
Don't get too hyped up by the recent buzz around ASP.NET Web API, it and WCF Data Services are very different beasts and run on different layers in your design. Indeed, in a multi-tier architecture, you could very well see a service API built using ASP.NET Web API sitting on top of an OData API built using WCF Data Services.
My advice is think carefully about what you're trying to build, and depending on the answer, either choose ASP.NET Web API and embrace the fact that the API you expose will be very different from a data-centric OData API, or stick with WCF Data Services.
A possible plan
You can find a lot of material about service-layer APIs on the web by searching for terms like "non-CRUD web/RESTful/hypermedia API" or by comparing products like ServiceStack, which advocate for less data-oriented APIs.
If you're still unsure about the nature of your project, prototype it.
If you end up with a bunch of essentially identical controllers with Web API, each mapped to exactly one entity, then your API is heavily data-oriented. Go with WCF Data Services.
If you end up with a lot of OData Actions and awkward entities with WCF Data Services, then you need more domain logic on the server-side of the API, and data-orientation doesn't offer you enough. Go with Web API. A good rule of thumb here is to treat OData actions just like you treat stored procedures in a SQL DBMS. Actually, treat any OData server as a DBMS, because that's what they are. If you wouldn't put it behind a SQL interface, don't put it behind an OData interface.
Important (Update)
It was announced on March 27th, 2014 that WCF Data Services would be discontinued by Microsoft in favor of ASP.NET Web API. To handle the "data-layer" use cases I've described here, Microsoft has said that it is planning to extend ASP.NET Web API. Some community efforts are also underway. WCF Data Services will also be open-sourced at some point, so it's not impossible that a new maintainer will take over, though it's uncertain.

How to provision OSGi services per client

We are developing a web application (let's call it an image bank) for which we have identified the following needs:
The application caters to customers, each consisting of a set of users.
A new customer can be created dynamically, and a customer manages its users.
Customers have different feature sets, which can be changed dynamically.
Customers can develop their own features and have them deployed.
The application is homogeneous and has a current version, but upgrading customers to a new version can still be handled individually.
The application should be managed as a whole, and customers share the resources, which should be easy to scale.
Question: Should we build this on a standard OSGi framework, or would we be better off using one of the emerging application frameworks (Virgo, Aries, or the upcoming OSGi standard)?
More background and some initial thoughts:
We're building a web app which we envision will soon have hundreds of customers (companies) with hundreds of users each (employees), otherwise why bother ;). We want to make it modular, hence OSGi. In the future, customers themselves might develop and plug in components to their application, so we need customer isolation. We also might want different customers to get different feature sets.
What's the "correct" way to provide different service implementations to different clients of an application when different clients share the same bundles?
We could use the app-server approach (we've looked at Virgo) and load each bundle once for each customer into their own "app". However, that doesn't feel like embracing OSGi. We're not hosting a multitude of applications; 99% of the services will share the same implementation for all customers. Also, we want to manage (configure, monitor, etc.) the application as one.
Each service could be registered (properly configured) once for each customer, along with some "customer token" property. It's a bit messy and would have to be handled with an extender pattern, or perhaps a ManagedServiceFactory? Also, before registering a service for customer A, one would need to acquire the A-version of each of its dependencies.
The "current" customer will be known to each request and can be bound to the thread. It's a bit of a mess having to supply a customer-token each time you search for a service. It makes it hard to use component frameworks like blueprint. To get around the problem we could use service hooks to proxy each registered service type and let the proxy dispatch to the right instance according to current customer (thread).
Beginning our whole OSGi experience by implementing the workaround (hack?) above really feels like an indication we're on the wrong path. So what should we do? Go back to Virgo? Try something similar to what's outlined above? Something completely different?!
P.S. Thanks for reading all the way down here! ;)
There are a couple of aspects to a solution:
First of all, you need to find a way to configure the different customers you have. Building a solution on top of ConfigurationAdmin makes sense here, because then you can leverage the existing OSGi standard as much as possible. The reason you might want to build something on top is that ConfigurationAdmin allows you to configure each individual service, but you might want to add a layer on top so you can more conveniently configure your whole application (the assembly of bundles) in one go. Such a configuration can then be translated into the individual configurations of the services.
Adding a property to services that have customer-specific implementations makes a lot of sense. You can set them up using a ManagedServiceFactory, and the property makes it easy to look up the service for the right customer using a filter. You can even define a fallback scenario where you look for either a customer-specific service or a generic one (because not all services will necessarily be customer-specific). Since you need to explicitly add such filters to your dependencies, I'd recommend taking an existing dependency management solution and extending it for your specific use case, so dependencies automatically add the right customer-specific filters without you having to specify them by hand. I realize I might have to go into more detail here, just let me know...
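A minimal sketch of that setup (Java, OSGi 4.3+ generics-based API; the ReportService interface, the factory name, and the filter values are all hypothetical). The factory registers one service instance per customer configuration, and consumers look it up with a filter, falling back to a generic instance:

```java
import java.util.Collection;
import java.util.Dictionary;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedServiceFactory;

// Hypothetical customer-specific service.
interface ReportService {
    String render(String reportId);
}

public class ReportServiceFactory implements ManagedServiceFactory {

    private final BundleContext context;
    private final Map<String, ServiceRegistration<ReportService>> registrations =
            new ConcurrentHashMap<>();

    public ReportServiceFactory(BundleContext context) {
        this.context = context;
    }

    @Override
    public String getName() {
        return "Per-customer report services"; // descriptive name; the factory
                                               // PID is given when registering
                                               // this factory itself
    }

    // ConfigurationAdmin calls this once per factory configuration,
    // i.e. once per customer.
    @Override
    public void updated(String pid, Dictionary<String, ?> config) throws ConfigurationException {
        String customer = (String) config.get("customer");
        if (customer == null) {
            throw new ConfigurationException("customer", "missing");
        }
        deleted(pid); // drop any previous registration for this configuration
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("customer", customer); // the property lookup filters match on
        registrations.put(pid, context.registerService(
                ReportService.class,
                reportId -> "report " + reportId + " for " + customer,
                props));
    }

    @Override
    public void deleted(String pid) {
        ServiceRegistration<ReportService> reg = registrations.remove(pid);
        if (reg != null) {
            reg.unregister();
        }
    }

    // Filtered lookup with the fallback described above: prefer the customer's
    // own instance, otherwise a generic one carrying no customer property.
    static Collection<ServiceReference<ReportService>> lookup(
            BundleContext context, String customer) throws InvalidSyntaxException {
        Collection<ServiceReference<ReportService>> refs =
                context.getServiceReferences(ReportService.class, "(customer=" + customer + ")");
        return refs.isEmpty()
                ? context.getServiceReferences(ReportService.class, "(!(customer=*))")
                : refs;
    }
}
```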
The next question then is how to keep track of the customer "context" within your application. Traditionally there are only a few options here, with a thread-local context being the most used one. Binding threads to customers does tend to limit your implementation options though, as it generally means you have to prohibit developers from creating threads themselves, and it's hard to offload certain tasks to pools of worker threads. It gets even worse if you ever decide to use Remote Services, as that means you will completely lose the context.
So, for passing on the customer identification from one component to another, I personally prefer a solution (sketched in code after this list) where:
As soon as the request comes in (for example in your HTTP servlet) somehow determine the customer ID.
Explicitly pass on that ID down the chain of service dependencies.
Only use solutions like thread locals within the borders of a single bundle, for example when a third-party library inside your bundle needs them to keep track of the customer.
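A sketch of that flow (hypothetical names; it assumes the customer ID can be derived from a per-customer host name):

```java
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical downstream service: the customer ID is an ordinary argument.
interface ImageStore {
    byte[] load(String customerId, String imageId);
}

public class ImageServlet extends HttpServlet {

    private final transient ImageStore store;

    public ImageServlet(ImageStore store) {
        this.store = store;
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // 1. Determine the customer ID as soon as the request comes in,
        //    e.g. from a per-customer host name like acme.imagebank.example.com.
        String customerId = req.getServerName().split("\\.")[0];

        // 2. Pass the ID explicitly down the chain of service dependencies;
        //    no thread local, so worker threads and remote services keep working.
        byte[] image = store.load(customerId, req.getParameter("id"));
        resp.setContentType("image/png");
        resp.getOutputStream().write(image);
    }
}
```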
I've been thinking about this same issue (I think) for some time now, and would like your opinions on the following analogy.
Consider a series of web applications where you provide access control using a single sign-on (SSO) infrastructure. The user authenticates once with the SSO server, and, when a request comes in, the target web application asks the SSO server whether the user is (still) authenticated and determines itself whether the user is authorized. The authorization information might also be provided by the SSO server.
Now think of your application bundles as mini-applications. Although they're not web applications, would it not make sense to have some sort of SSO bundle that uses SSO techniques to do authentication and provide authorization information? Every application bundle would have to be developed or configured to use the SSO bundle to validate the authentication (SSO token), and to check authorization by asking the SSO bundle whether the user is allowed to access this application bundle.
The SSO bundle maintains some sort of session repository, and also provides user properties, e.g. information identifying the data repository (of some sort) of this user. This way you also wouldn't pass around a (meaningful) "customer service token", but rather a cryptic SSO token that is supplied and managed by the SSO bundle.
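As a sketch, the SSO bundle's contract might be no more than a service interface like the following (a hypothetical API, not an existing one); application bundles would depend only on it, mirroring how web applications consult an SSO server:

```java
import java.util.Map;
import java.util.Optional;

public interface SsoService {

    // Validates the opaque SSO token; returns the session's user ID if the
    // user is (still) authenticated.
    Optional<String> authenticatedUser(String ssoToken);

    // Asks whether the token's user may access the given application bundle.
    boolean isAuthorized(String ssoToken, String applicationBundleId);

    // Properties kept in the session repository, e.g. which data repository
    // belongs to this user.
    Map<String, Object> userProperties(String ssoToken);
}
```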
Please note that Virgo is an OSGi container based on Equinox, so if you don't want to use any Virgo-specific features, you don't have to. However, you'll get lots of benefits if you do use Virgo, even for a basic OSGi application. It sounds, though, like you want web support, which comes out of the box with Virgo Web Server and will save you the trouble of cobbling it together yourself.
Full disclosure: I lead the Virgo project.

Should I build a REST backend for GWT application

I am planning a new application and have been experimenting with GWT as a possible frontend. The design question I am facing is this.
Should I use
Option A: GWT-RPC and build the app quickly
Option B: Build a REST backend using Spring MVC 3.0, with all the great @Controller, @Service, @Repository annotations, and build a client-side library to talk to the backend using the GWT overlay features and the GWT RequestBuilder?
I am interested in all the pros and cons, and in people's experiences with this type of design.
Ask yourself the question: "Will I need to reuse the server-side interface with a non-GWT front-end?"
If the answer is "no, I'll just have a GWT client": You can use GWT-RPC, and take advantage of the fact that you can use your Java objects both on the server and the client-side. This can also make the communication a bit more efficient, at least when used with <inherits name="com.google.gwt.user.RemoteServiceObfuscateTypeNames" />, which shortens the type names to small numeric values. You'll also get the advantage of better error handling (using Exceptions), type safety, etc.
If the answer is "yes, I'll make my service accessible for multiple kinds of front-ends": You can use REST with JSON (or XML), which can also be understood by non-GWT clients. In addition to switching clients, this would also allow you to switch to a different server implementation (maybe non-Java) in the future more easily. The disadvantage is, that you'll probably have to write wrappers (JavaScript Overlay Types) or transformation code on the GWT client side to build nice Java objects from the JSON objects. You'll have to be especially careful when you deploy a new version of the service, which brings us back to the lack of type safety.
The third option of course would be to build both. I'd choose this option, if the public REST interface should be different from the GWT-RPC interface anyway - maybe providing just a subset of easy to use services.
You can do both if you also use the RestyGWT project. It will make calling REST-based JSON resources as easy as using GWT-RPC. Plus, you can typically reuse the same request/response DTOs from the server side on the client side.
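A sketch of the RestyGWT approach (the resource path and DTO here are hypothetical; RestService and MethodCallback are RestyGWT's actual extension points):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

import org.fusesource.restygwt.client.MethodCallback;
import org.fusesource.restygwt.client.RestService;

// The same hypothetical DTO as on the server side; RestyGWT handles the JSON.
class Profile {
    public String name;
}

// JAX-RS style annotations on a client-side interface.
@Path("/api/profiles")
interface ProfileResource extends RestService {

    @GET
    @Path("/{userId}")
    void load(@PathParam("userId") String userId, MethodCallback<Profile> callback);
}

// Client-side usage:
//   ProfileResource resource = GWT.create(ProfileResource.class);
//   resource.load("42", callback);
```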
We ran into the same issue when we created the Spiffy UI Framework. We chose REST and I would never go back. I'd even say GWT-RPC is a GWT Anti-pattern.
REST is a good idea even if you never intend to expose your REST endpoints. Creating a REST API will make your UI faster, your API better, and your entire application more maintainable.
I would say build a REST backend. In my last project we started out with GWT-RPC for the first few months, since we wanted fast bootstrapping. Later on, when we needed the REST API, the refactoring was so expensive that we ended up with two backend APIs (REST and RPC).
If you build a proper REST backend, and deserialization infrastructure on the client side (to transform the JSON/XML into GWT Java objects), then the benefit of RPC is almost nothing.
Another sometimes forgotten advantage of the REST approach is that it's more natural to the browser running the client. RPC is a proprietary protocol, where all the requests use POST. You can benefit from client-side caching when reading resources in the standard way.
Answering ams' comments:
Regarding the RPC protocol: last time I "sniffed" it with Firebug it didn't look like JSON, so I can't say. But even if it is JSON-based, it still uses only the HTTP POST method to communicate with the server, so my point about caching still stands: the browser won't cache POST requests.
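To make the caching point concrete, here's a sketch using GWT's RequestBuilder against a hypothetical endpoint; because it's a plain GET, the browser's HTTP cache can serve repeated reads, which it will never do for the POSTs GWT-RPC issues:

```java
import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;

class ProfileReader {

    void read(String userId) throws RequestException {
        RequestBuilder builder =
                new RequestBuilder(RequestBuilder.GET, "/api/profiles/" + userId);
        builder.sendRequest(null, new RequestCallback() {
            @Override
            public void onResponseReceived(Request request, Response response) {
                String json = response.getText(); // parse with an overlay type, etc.
            }

            @Override
            public void onError(Request request, Throwable exception) {
                // handle transport failure
            }
        });
    }
}
```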
Regarding the retrospective and what could have been done better: writing the RPC service in a resource-oriented architecture could make a later port to REST easier. Remember that in REST one usually exposes resources with the basic CRUD operations; if you focus on that approach when writing the RPC service, you should be fine.
The REST architectural style promotes inspectable messages (which aids debugging and security), API evolution, multiple platforms, simple interfaces, failure recovery, high scalability, and (optionally) extensible systems via code on demand. It trades per-interaction performance for overall network efficiency. It reduces the server's control over consistent application behavior.
The "RPC style" (as we speak of it in opposition to REST) promotes platform uniformity, interface variability, code generation (and thereby the ability to pretend the network doesn't exist, but see the Fallacies), and customized interactions. It trades overall network efficiency for high per-interaction performance. It increases the server's control over consistent application behavior.
If your application desires the former qualities, use the REST style. If it desires the latter, use the RPC style.
If you're planning on using Hibernate/JPA on the server side and sending the resulting POJOs with relational data in them to the client (i.e. an Employee object with a collection of Phones), definitely go with the REST implementation.
I started my GWT project a month ago using GWT-RPC. All was well until I tried to serialize an object from the underlying database with a one-to-many relationship in it, and got the dreaded:
com.google.gwt.user.client.rpc.SerializationException: Type 'org.hibernate.collection.PersistentList' was not included in the set of types which can be serialized by this SerializationPolicy
If you encounter this and want to stay with GWT RPC you will have to use something like:
GWT Request Factory (www.gwtproject.org/doc/latest/DevGuideRequestFactory.html) - which forces you to write 3+ classes/interfaces per POJO you want to share with the client. OUCH!
Gilead (sourceforge.net/projects/gilead/) - which appears to be a dead project.
I'm now using RestyGWT. The switch was fairly painless, and my POJOs serialize without issue.
I would say that this depends on the scope of your total application. If your backend is to be used by other clients, needs to be extensible, etc., then create a separate module using REST. If the backend is to be used by only this client, then go for the GWT-RPC solution.
