Spring HATEOAS: Practicable for a microservice architecture?

I know this question has already been asked, but I could not find a satisfying answer.
I started to dive deeper into building a real RESTful API and I like its constraint of using links for decoupling. So I built my first service (with Java / Spring) and it works well (although I struggled a bit with finding the right format, but that's another question). After this first step I thought about my real-world use case: microservices. Highly decoupled individual services. So I mapped my previous scenario onto that architecture, and I ran into some problems and doubts.
SCENARIO:
My setup consists of a reverse proxy (Traefik, which acts as service discovery and API gateway) and two microservices. In addition, there is an OpenID Connect security layer. My services are a Player service and a Team service.
So after authentication I have an access token containing the userId, and I can call player/userId to get the player information and teams?playerId=userId to get all the teams of the player.
In my opinion, each response should link to the opposite service: player/userId would link to teams?playerId=userId and vice versa.
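To make it concrete, here is a rough sketch of such a cross-link in my Player service (assuming Spring HATEOAS 1.x; the controller, the Player type and the hardcoded team-service URL are illustrative, not my real code):

```java
import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.Link;
import org.springframework.web.bind.annotation.*;

@RestController
class PlayerController {

    // The cross-service link has to point at the other service somehow;
    // here it is a hardcoded base URL, which is exactly the problem below.
    private static final String TEAM_SERVICE_URL = "https://api.example.com/teams";

    @GetMapping("/player/{userId}")
    EntityModel<Player> player(@PathVariable String userId) {
        Player player = loadPlayer(userId); // real lookup omitted
        EntityModel<Player> resource = EntityModel.of(player,
                Link.of("https://api.example.com/player/" + userId).withSelfRel());
        // Only offer the link when the state allows it (player is in a team).
        if (player.teamCount() > 0) {
            resource.add(Link.of(TEAM_SERVICE_URL + "?playerId=" + userId).withRel("teams"));
        }
        return resource;
    }

    private Player loadPlayer(String userId) { return new Player(userId, 1); }
}

record Player(String userId, int teamCount) {}
```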
QUESTION:
I haven't found a solution other than linking via a hardcoded URL. But this comes with so many drawbacks that I can't imagine it being used in real-world applications. Just imagine your API is a bit more advanced and you have to link to 10 resources: if something changes, you have to refactor and redeploy them all.
Besides the synchronization problem, how do you handle state in such a case? REST is all about state transfer, so I won't offer the link from the player to the Team service if the player is in no team. Of course I can add the team IDs as an attribute of the player to decide whether to include the link or not, but this again increases coupling between the services.
The more I dive in, the more obstacles I find, and I'm about to just stay with my Spring REST Docs and neglect the core of REST, which is a pity to me.

Practicable for a microservice architecture?
Fielding, 2000
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
Fielding, 2008
REST is intended for long-lived network-based applications that span multiple organizations.
It is not immediately clear to me that "microservices" are going to fall into the sweet spot of "the web". We're not, as a rule, trying to communicate with a microservice that is controlled by another company; we often don't get much benefit out of caching, or code on demand, or the other REST architectural constraints. How important is it to us that we can use general-purpose components to exchange information between different microservices within our solution? And so on.
If something changes, you have refactor and redeploy them all.
Yes; and if that's going to be a problem for us, then we need to invest more work up front to define a stable interface between the two. (The fact that we are using "links" isn't special in that regard - if these two things are going to talk to each other, then they are going to need to speak a common language; if that common language needs to evolve over time (likely) then you need to build those capabilities into it).
If you want change over time, then you have to plan for it.
If you want backwards/forwards compatibility, then you have to plan for it.
Your identifiers don't need to be static - there are lots of possible ways of deferring the definition of an identifier; the most obvious being that you can use another identifier to look up the identifier you want, or the formula for calculating it, or whatever.
Think about how Google works - the links they use change all the time, but it doesn't matter because the protocol (refresh your bookmarked search form, enter your text in "the" one field, click the button) hasn't changed in 20 years. The interface is stable (even though the underlying spellings of the identifiers are not) and that's enough.
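As a sketch of that "look it up" idea in Spring HATEOAS terms (the entry URI and the "teams" rel are assumptions, not something from your API): the client hardcodes only the entry point and discovers the rest at runtime.

```java
import java.net.URI;
import org.springframework.hateoas.Link;
import org.springframework.hateoas.MediaTypes;
import org.springframework.hateoas.client.Traverson;

class TeamsLinkLookup {
    Link teamsLink(String userId) {
        // Only the entry point is fixed; the "teams" URI is discovered at
        // runtime, so the server can change it without breaking this client.
        Traverson traverson = new Traverson(
                URI.create("https://api.example.com/player/" + userId),
                MediaTypes.HAL_JSON);
        return traverson.follow("teams").asLink();
    }
}
```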

Related

System API in MuleSoft

I have a requirement to persist some data in a table (single table). The data is coming from the UI. Do I need to write just the system API and persist the data, or do I need to write both process and system APIs? I don't see a use for a process API in this case. Please suggest. Is it always necessary to access the system API through a process API, or can the system API be invoked without a process API as well?
I would recommend a fine-grained approach to this. We should follow it through the experience layer even though we do not have much customization of the data.
In short: an experience-layer API directly calling the system-layer API (if there is no orchestration/data conversion/formatting needed) - there is a small sketch of this at the end of this answer.
Why do we need a system API and an experience API? A couple of points:
First, the system API should stay closely attached to the underlying system, so that if the system changes in the future, the change does not impact any of the clients.
Secondly, an upper layer gives us the ability to attach different SLAs, policies, logging and lots more to different clients. Even if you have a single client right now, it's better to architect for the future. Reuse is the key advantage of these APIs.
Please check Pattern 2 in this document
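As a minimal illustration of the "no process layer" case above (shown in Java rather than Mule, with all names assumed), the experience API can be a thin pass-through that still leaves a seam for per-client policies later:

```java
import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;

// Hypothetical experience-layer endpoint that delegates straight to the
// system API because no orchestration or transformation is needed.
@RestController
class ExperienceApiController {

    private final RestTemplate http = new RestTemplate();

    @PostMapping("/experience/records")
    String persistRecord(@RequestBody String record) {
        // Thin pass-through today; SLAs, policies and logging per client
        // can be added at this seam tomorrow.
        return http.postForObject("http://system-api/records", record, String.class);
    }
}
```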
That is a question for the enterprise architect in your organisation. In this case, the process API would probably be a simple proxy for the system API, but that might not always be the case in future. Also, it is sometimes useful to follow a standard architectural pattern even if it creates some spurious complexity in the implementation. As always, there are design trade-offs and the answer will depend on factors that cannot be known by people outside of your organisation.

Moving from JSF/Spring to Rest API + Angular

There is a project that is built using JSF with Spring Integration.
See https://www.tutorialspoint.com/jsf/jsf_spring_integration.htm to get an idea.
JSP is used for the HTML templates. Managed beans (part of JSF) make use of Spring beans as managed properties, which in turn drive the business logic. The goal is to rip this project apart and split it into a RESTful service and an Angular front end.
What is the best way to do this without rewriting everything? Which components can I get rid of, and which can be reused? If I use Spring Boot to build the REST API, can I reuse the Spring beans?
Edit: I am new to most of these technologies.
Exposing your domain model through REST should be relatively straightforward using Spring/JPA or similar. You should learn about DTOs, especially as they relate to "lazy initialization" problems under Hibernate/JPA/Spring Data, etc.
Secondly, understand the concept of views into the domain model. E.g., shipping looks at the database differently than marketing: same database, different "facades" or business layers with different sets of DTOs.
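To make the "views" point concrete, here is a bare-bones sketch (all names invented for illustration): two DTOs projecting the same domain object for different departments, with the fields copied out while the persistence session is still open, which also sidesteps the lazy-initialization problem mentioned above.

```java
// Stand-in domain entity; in a real project this would be a JPA-mapped class.
class Order {
    String id = "o-1";
    String shippingAddress = "1 Main St";
    int itemCount = 3;
    String customerSegment = "premium";
    double total = 59.90;
}

// Same database row, two different projections.
record ShippingOrderDto(String orderId, String address, int parcels) {}
record MarketingOrderDto(String orderId, String segment, double totalValue) {}

class OrderViews {
    static ShippingOrderDto shippingView(Order o) {
        return new ShippingOrderDto(o.id, o.shippingAddress, o.itemCount);
    }
    static MarketingOrderDto marketingView(Order o) {
        return new MarketingOrderDto(o.id, o.customerSegment, o.total);
    }
}
```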
Conceptually, reproducing a JSF front end in Angular is something that is both "the same thing" and "completely different" at the same time. The key difference, IMHO, will be the JavaScript concepts and paradigms underlying Angular/React/Vue or whatever you want to use on the Front End.
Consider that an AngularJS/React/Vue front end might be better off running on top of node.js in a separate container or server, and might have different databases that it accesses on its own such as loyalty points or currency conversion, etc. Don't be afraid to let the frontend folks "be" the application instead of the backend folks. On the backend, try not to lose information. For example, if a customer adds 3 items, then changes 1, then places the order, that's 3 separate pieces of information, not 1 order. This is important for business analytics and customer service, which are business facing services as opposed to client facing services.
As a Java developer I tend to feel Angular/JS developers do a completely different and non-overlapping job than I do. I feel the same way about HTML/CSS folks. As such, I don't recommend you try being both; you will stretch yourself too thin. However, a good working knowledge gained on a smaller project, such as you are suggesting, is certainly useful.
Welcome to SO. Your post will probably be closed/ignored for being too broad, etc. Very specific questions and answers are what this site is about. GL.

Is it possible to do Hypermedia Driven RESTFul service in a microservices world?

Let's say that we are creating a ticket processing system. Say there are two distinct bounded contexts within this domain:
Cancelling a Ticket
Changing a Ticket
From what I understand, those two can be two different microservices, without having to know about each other. What a ticket is to a Cancel service could be completely different from what a ticket is to a Change service.
From a REST API design perspective, I have read a lot about using hypermedia and letting the client discover resources by including relevant operations as links within the REST response (Stefan Tilkov's talk). If that's true, when my Change service returns a response, it makes sense to include a link to the Cancel service, which the client can use to perform a cancel. How can I achieve this when Cancel and Change are two different microservices which are not aware of each other? Or are my bounded contexts wrong?
Are we losing these hypermedia linking capabilities (or do they become harder to provide) when we use microservices in general?
Thanks
Kay
In HATEOAS, URIs are discoverable (not documented) so that they can be changed. That is, unless they are the very entry points into your system (Cool URIs, the only ones that may be hard-coded by clients) - and you shouldn't have too many of those if you want the ability to evolve the rest of your system's URI structure in the future. This is in fact one of the most useful features of REST.
The remaining non-Cool URIs can change over time, and your API documentation should spell out that they must be discovered at runtime through hypermedia traversal.
That being said, in your scenario that link would be a Cool URI, not one relative to the current API (because it may reside on a different machine/domain, etc.). Unless you're using some discovery tool, you're going to have to hardcode that link, and thus lose the benefit of discoverability.
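One pragmatic middle ground (a sketch, not something from the REST literature): keep the cross-service entry point configurable rather than compiled in, so at least redeployments aren't required when it moves. The property name and classes here are assumptions.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.Link;
import org.springframework.web.bind.annotation.*;

@RestController
class ChangeTicketController {

    // Effectively a Cool URI for the Cancel service: a stable entry point,
    // resolved from configuration at deploy time instead of being hardcoded.
    @Value("${cancel-service.base-uri:https://tickets.example.com/cancellations}")
    private String cancelServiceBaseUri;

    @PutMapping("/changes/{ticketId}")
    EntityModel<String> changeTicket(@PathVariable String ticketId) {
        return EntityModel.of("changed",
                Link.of(cancelServiceBaseUri + "/" + ticketId).withRel("cancel"));
    }
}
```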

Organizing application in layers

I’m developing a part of an application, named A. The application I want to plug my DLL into, application B, is in VB6, and my code is in VB.NET. (Application B will in time be converted to VB.NET.) My main question is: what is the best way to organize my code (application A)?
I want to split application A into layers (service, business, data access), so it will be easy to integrate application A into B when B is converted to VB.NET. I also want to learn about topics like layered architecture, patterns, dependency inversion, Entity Framework and so on. Although my application (A) is small, I want to organize my code in the best way.
The application I’m working on (A) uses web services for authenticating users and for sending a schema to an organization. The user selects a menu point in application B, and then some functions in my application A are called.
In application A I have a schema class auto-generated from an XSD schema. I fill this schema object with data, serialize the object to a memory stream (is it a good solution to use a memory stream? I don't have to save the data yet), wrap the XML inside a CDATA block, and return the CDATA block as a string, which is assigned to a string property of a web service.
I am also using Entity Framework for database communication (to learn how this is done for the future work on application B). I have two entities in my .edmx: User and Payer.
I also want to use the repository pattern (is this a good choice?) to make a façade between the DAL and the BLL.
My application has functions for GeneratingSchema (filling the schema object with data), GetSchemaContent, GetSchemaInformation, GenerateCDATABlock, WriteToTextFile, MemoryStreamToString, EncryptData and some functions that use web services, like SendShema, AuthenticateUser, GetAvalibelServises and so on.
I’m not sure where I should put it all?
I think I have to have some interfaces like IRepository, ISchema (a contract for the auto-generated schema class - how can I do this?), ICryptoManager, IFileManager and so on, and classes that implement the interfaces.
My DAL will be the Entity Framework, and I want a repository façade in my BLL (IRepository, UserRepository, PayerRepository) and classes for management (like the classes I have mentioned above) holding functions like WriteToFile, EncryptData and so on.
Is this a good solution (do I need a service layer, given that all my GUI is in application B), and how can I organize my layers, interfaces, classes and functions in Visual Studio?
Thanks in advance.
This is one heck of a question; I thought I might try to chip away at a few parts for you so there's less for the next guy to answer...
For application B (VB6) to call application/assemblies A, I'm going to assume you're exposing the relevant parts of app A as COM components, using ComVisibleAttribute and similar, much like described in this article. I only know of one other way (WCF over COM) but I've never tried it myself.
Splitting your solution(s) into various tiers and layers is a very subjective/debatable topic, and will always come down to a combination of personal preference, business requirements, time available, etc. However, regardless of the depth of your tiers and layers, it is good to understand the how and the why.
To get you started, here are a couple of articles:
Wikipedia's general overview on "Multitier Architectures"
MSDN's very own "Building an N-Tier Application in .Net"
Inversion of Control is also a very good pattern to get into right now; with ever-increasing (and brilliant!) resources becoming available on the .NET platform, it's definitely worth investing some time to learn.
Although I haven't explored the full extent of IoC, I do love dependency injection (a type of IoC, if I understand correctly, though people seem to muddle the IoC/DI terms quite a lot). My personal preference for DI right now is the open source Ninject project, which has plenty of resources online and a reasonable wiki section talking you through the various aspects.
There are many more takes on DI and IoC, so I don't want to even attempt to provide you a comprehensive list for fear of being flamed for missing out somebody's favourite. Just have a search, see which you like the look of and have a play with it. Make sure to try a couple if you have the time.
Again, the Repository Pattern - often complemented well by the Unit of Work Pattern - is a great topic to mull over for hours. I've seen a lot of good examples out on the inter-webs, and as many bad ones. My only advice here is to try it for yourself: see what works for you, develop a version of the patterns that suits you best, and try to keep things consistent for maintainability.
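For orientation only, here is the bare shape of the two patterns (a sketch in Java, since the ideas are language-neutral; your VB.NET version would look similar):

```java
// A repository abstracts the data access for one aggregate/entity type.
interface Repository<T, ID> {
    T findById(ID id);
    void add(T entity);
    void remove(T entity);
}

// A unit of work tracks changes across repositories and commits them atomically.
interface UnitOfWork extends AutoCloseable {
    <T, ID> Repository<T, ID> repository(Class<T> type);
    void commit();
    @Override void close();
}
```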
For organising all these tiers and layers in VS, I recommend trying to keep all your independent tiers/layers in their own Solution Folders (r-click the Solution, Add New Solution Folder), or in some cases (larger projects) their own solutions, preferably with an automated build service to update dependent projects with up-to-date assemblies as required. Again, a broad subject and totally down to personal preference. Just keep an eye out when designing your application for potential circular references.
So, I'm afraid that doesn't even slightly answer your question, but hopefully provides you with some resources to check out and a few hours of reading.
Good luck!

Choosing between performance and ease of use in WS API

We're in the process of creating a new API for our product, which will be exposed via web services. We have an internal debate about whether the API should be as easy as possible to use (at the price of making more calls) or as efficient as possible (rendering it harder to use). For example, here are two issues that came up:
Should we manage session info on the server, or should we pass that info back to the user, and expect him to send it back to us when needed? (Please ignore the security implications of a session)
Should we combine calls that are likely to be consecutive, in order to save the time spent on the round trip in between, even if they don't really share the same logical functionality?
Basically, the desktop people are in favor of a clear, easy to use API, while the internet people would like to make it as efficient as possible. This is the first public API we're providing, and we need a strategy.
Personally, I'm in favor of making the API as usable as possible. Other components of the system are probably going to have a much larger effect on the performance, and hard-to-use APIs are much more error-prone. But I'm a desktop programmer…
So, what should be our strategy? What are the common practices when creating such an API?
What are typical usage scenarios? Will the latency of your API determine the UI responsiveness? How big will the performance trade-off be? It's hard to suggest anything without knowing your circumstances.
But
My guess is that passing session info to the client will scale better. In-proc session management won't allow you to share state between service instances, and managing sessions in a DB will make your services more complex. Anyway, this all depends on your bandwidth/memory/computational capabilities.
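As a sketch of the "pass it back to the user" variant (the HMAC scheme and names here are illustrative, not a prescription): the server hands the client an opaque, signed token, and any service instance can validate it later without shared state.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class StatelessSession {
    // In production the key would come from configuration, not a literal.
    private final SecretKeySpec key =
            new SecretKeySpec("change-me".getBytes(StandardCharsets.UTF_8), "HmacSHA256");

    // Encode the session state and sign it so the client cannot tamper with it.
    String issue(String state) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        String sig = Base64.getUrlEncoder().encodeToString(
                mac.doFinal(state.getBytes(StandardCharsets.UTF_8)));
        return Base64.getUrlEncoder()
                .encodeToString(state.getBytes(StandardCharsets.UTF_8)) + "." + sig;
    }

    // Any instance can verify by recomputing the signature; no shared storage.
    boolean verify(String token) throws Exception {
        String[] parts = token.split("\\.");
        String state = new String(
                Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        return issue(state).equals(token);
    }
}
```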
I would start from the most granular operations and only provide composite methods when a performance problem becomes obvious.
IMO, the best approach is to have the simplest possible API for the people who will have to use it, while giving those people deep customization options to make it efficient, even if that is harder to use.
But simplicity for users > all.
I would recommend you create your web service following the RESTful model and that means stateless. Efficiency is more critical than ease of use. You can always create a framework on top of the API later that eases implementation headaches.
You're debating a circular argument. This is all usability (ease of use).
You're likely going to factor in a bit of both aspects - as they both influence user performance.
It's a case of (i) the extent of customisable features versus (ii) efficiency in manipulating those features. There will be an intersection between the two.
I'd say provide a simple API that puts control into the users' hands - this is the primary purpose of a CMS. The more detailed aspects you could combine initially, introducing them as added control later on.
This way you will manage users' learning curve of the system so as (i) not to bombard them with excessive options initially and (ii) to allow them to adopt your system quickly at first. Then extend your users' control (system functionality from a user perspective) later on by making more of the API features available.
Another good tip would be to ask your users up front and as you go.
My 2 cents:
First, start with a very granular low-level API that gives high performance.
Then create an easy-to-use high-level API that is "composed" of the above low-level API.
This way clients can customize their behavior as they want. Clients can start by using the high-level API, but if high performance is needed for certain user actions, they can fall back to the faster-but-harder low-level API on a case-by-case basis.
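A tiny sketch of that layering (all names invented): the high-level call is just a composition of granular low-level calls, so clients can drop down a level when the extra round trips start to hurt.

```java
// Granular, high-performance building blocks.
interface LowLevelApi {
    byte[] readBlock(String id, int offset, int length);
    void writeBlock(String id, int offset, byte[] data);
}

// Convenience layer composed from the low-level calls.
class HighLevelApi {
    private final LowLevelApi api;

    HighLevelApi(LowLevelApi api) { this.api = api; }

    // Easy to use, but pays for the extra round trips internally.
    void copy(String fromId, String toId, int length) {
        byte[] data = api.readBlock(fromId, 0, length);
        api.writeBlock(toId, 0, data);
    }
}
```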
One more important point to consider in service design: try to keep services as stateless as possible. The advantage of stateless web services is that they can be easily distributed using network load balancing.
