Closed. This question is opinion-based. It is not currently accepting answers. Closed 1 year ago.
I was talking to someone recently who said they are skipping the development of HATEOAS REST endpoints entirely in favor of GraphQL. So I'm curious: what are the criteria for deciding when to use GraphQL vs. HATEOAS, or is GraphQL simply a better choice in general for an API Gateway / Edge Server architecture?
The pros and cons of each are:
GraphQL
Pro:
provides fine control of returned data in avoiding unneeded traffic
eliminates needing to go back to the well over and over for attached / "follow-on" data
following from the above, it allows the software designer to deliver excellent performance by reducing latency: each query specifies everything it needs, and the GraphQL implementation can assemble and deliver it in a single client<->server round trip
possibility of slow deprecations instead of versioned APIs
it's a query language
introspection is built-in
Con:
does not deal with caching (although there are now libraries that will take care of this)
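To make the latency point concrete, here is a small sketch (all data and endpoints hypothetical) contrasting the REST client's one-request-per-linked-resource flow with the single GraphQL round trip that states the whole shape up front:

```python
# Sketch (hypothetical data and endpoints): the "back to the well" problem.
# A REST client makes one request per linked resource; a GraphQL client
# states the whole shape up front and gets it in a single round trip.

POSTS = {1: {"title": "Hello", "author_id": 10, "comment_ids": [100, 101]}}
AUTHORS = {10: {"name": "Ada"}}
COMMENTS = {100: {"text": "Nice"}, 101: {"text": "+1"}}

def rest_fetch_post_page(post_id):
    """Simulated REST flow: counts how many server round trips are needed."""
    requests = 1                      # GET /posts/1
    post = POSTS[post_id]
    requests += 1                     # GET /authors/10
    author = AUTHORS[post["author_id"]]
    comments = []
    for cid in post["comment_ids"]:   # GET /comments/100, /comments/101, ...
        requests += 1
        comments.append(COMMENTS[cid])
    return requests

# The equivalent GraphQL query is one request, whatever the nesting depth:
GRAPHQL_QUERY = """
{ post(id: 1) { title author { name } comments { text } } }
"""

print(rest_fetch_post_page(1))  # 4 round trips vs 1 for GraphQL
```

The gap widens with every nested relationship: each extra level of "follow-on" data adds a round trip per item on the REST side, while the GraphQL query just grows by a few lines.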
HATEOAS / REST
Pro:
caching is a well-understood matter
relatively mature and well-understood
lots of infrastructure (e.g. CDNs) to spread the load
very suitable for microservices
file uploads are possible
Con:
the "back to the well" problem
not as rigidly specified
each implementation of server and client(s) must make its own decisions
querying is not standard
Conclusions
One interesting comparison is that people use GraphQL as a frontend for REST APIs, but no-one in their right mind would consider doing the converse. If you go for a federated / microservices design, so one GraphQL server fronts for others, they can use a common specification of the API between the frontend and the microservices; this is less certainly true if the microservices are REST.
I think that so long as you have in mind the right questions to ask, GraphQL is going to be an important part of a well-designed system. Whether to skip HATEOAS entirely is, unfortunately, "it depends".
Sources
My own experience, plus Phil Sturgeon's GraphQL vs REST: Overview
I love that Ed posted a link to my overview, but there's another article that I believe to be more relevant than that one.
The representation of state is completely different between the two.
https://blog.apisyouwonthate.com/representing-state-in-rest-and-graphql-9194b291d127
GraphQL is entirely unable to offer a series of "next steps" in a meaningful and standardized way, other than maybe shoving an array of strings containing potentially relevant mutations that you should try to hit up.
Even if you do that, it certainly cannot help you communicate with other HTTP APIs, which is a real shame.
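For contrast, here is a sketch of what that "next steps" communication looks like in a HATEOAS response (the resource and link relations are hypothetical, loosely in the HAL style): the server advertises the valid state transitions as links, and the client discovers them at runtime instead of hard-coding them.

```python
# Sketch (hypothetical resource, HAL-style "_links"): a HATEOAS response
# carries the valid next steps, so the client follows what the server offers
# rather than hard-coding which transitions exist.

order = {
    "id": 42,
    "status": "unpaid",
    "_links": {
        "self":   {"href": "/orders/42"},
        "pay":    {"href": "/orders/42/payment", "method": "PUT"},
        "cancel": {"href": "/orders/42", "method": "DELETE"},
    },
}

def available_actions(resource):
    """The client only acts on links the server actually offered."""
    return sorted(k for k in resource["_links"] if k != "self")

print(available_actions(order))  # ['cancel', 'pay']
```

Once the order is paid, the server simply stops including the "pay" link, and a well-behaved client stops offering that action; there is no standardized GraphQL equivalent of this.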
Anyway, it's all that article! :)
Closed. This question is opinion-based. It is not currently accepting answers. Closed 1 year ago.
What is the recommended approach for deciding which technology to use when creating microservices?
ex: all 50 microservices running on the .NET platform, using SQL Server as the DB for each one of them
OR
Mix and match between different technologies
ex: 15 Spring-based microservices with MongoDB, 15 .NET microservices with SQL Server, 20 NodeJS microservices with Redis
Microservice with different technology
I know this will again come down to which technologies the developers are familiar with, but all I am looking to know is which approach you would take if you had more than 50 microservices.
It really depends on the role of each microservice. If all of them are REST APIs with a pretty similar functionality (but completely different scope), then it would be helpful to use the same tech stack, because:
You can optimize your development workflows
You get more homogeneity across your entire system, which translates into a number of benefits down the road (identify/fix issues faster, optimize resource usage, etc.).
However, if you have some microservices which have different constraints in terms of performance (or consistency, or any other vector), you can use a different tech stack just for that one. The architectural model of microservices allows that - it doesn't matter what's behind a microservice as long as it exposes an API that can be used by other microservices.
TL;DR - if you have strong reasons to use different tech stacks for some microservices, you should do it, but keep in mind that it doesn't come without a cost.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 1 year ago.
After doing rigorous research and analysis, I finally arrived at a point that is confusing me: is "microservices" a design pattern or an architecture?
Some say it's a pattern that evolved as a solution to monolithic applications, and hence a design pattern.
And some are sure it's an architecture, which speaks to development, management, scalability, autonomy, and being full-stack.
Any thoughts or suggestions that would clear this up for me are welcome.
Microservices can best be described as an architectural style. Besides architectural decisions, the style also includes organizational and process-relevant considerations.
The architectural elements include:
Componentizing by business concern.
Strict decoupling in terms of persistence.
Well defined interfacing and communication.
Aim for smaller service sizes.
The organizational elements include:
Team organization around components (Conway's Law).
Team size limitations (two-pizza team).
The process-relevant elements include:
Less centralized governance.
Smaller, more frequent releases.
Higher degree of freedom for technology decisions.
Product oriented development (agile, MVP, lean, etc).
For more details I recommend reading the articles from Martin Fowler.
I would describe it as a software architectural style that requires functional decomposition of an application.
Usually, it involves breaking a monolithic application down into multiple smaller services, each deployed in its own archive, and then composed as a single application using standard lightweight communication, such as REST over HTTP or some async communication (of course, at some point microservices are written from scratch).
The term "micro" in microservices is no indication of the lines of code in the service; it only indicates that the scope is limited to a single functionality.
Each service is fully autonomous and full-stack. Thus, changing a service's implementation has no impact on other services, as they communicate using well-defined interfaces. There are several advantages to such an application, but it's not a free lunch and requires a significant effort in NoOps.
It's important that each service have the properties of:
Single purpose — each service should focus on one single purpose and do it well.
Loose coupling — services know little about each other. A change to one service should not require changing the others. Communication between services should happen only through public service interfaces.
High cohesion — each service encapsulates all related behaviors and data together. If we need to build a new feature, all the changes should be localized to just one single service.
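The loose-coupling property above can be sketched in a few lines (service names and methods are hypothetical): one service depends only on another's public interface, never on its private state.

```python
# Sketch (hypothetical services): loose coupling means a service is used only
# through its public interface; callers never reach into its internals.

class BillingService:
    def __init__(self):
        self._ledger = {}            # private state: other services never see it

    def charge(self, customer_id, amount):
        """Public interface: the only way other services interact with billing."""
        self._ledger[customer_id] = self._ledger.get(customer_id, 0) + amount
        return {"customer": customer_id, "charged": amount}

class OrderService:
    def __init__(self, billing):
        self.billing = billing       # depends on the interface, not the ledger

    def place_order(self, customer_id, total):
        receipt = self.billing.charge(customer_id, total)
        return {"order_for": customer_id, "receipt": receipt}

print(OrderService(BillingService()).place_order("c1", 25))
```

Because OrderService touches only `charge(...)`, BillingService can change its ledger representation, its database, or even its implementation language without requiring any change to its consumers.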
I have a question about microservice implementation. Right now I am using an API gateway to process all GET requests to my individual services, and using Kafka to handle asynchronous POST, PUT, and DELETE requests. Is this a good way of handling requests in a microservice architecture?
Your question is too unspecific to give a good answer. What counts as a good architecture totally depends on the details of your use cases. Are you serving web pages, streaming media, amassing data for analysis, or something completely different? We would also need to know your requirements in terms of concurrency, consistency, and scalability. What are the constraints for budget/size of development teams, ease of development, dev skills, etc.?
For example the decisions you have taken may be considered good if you have strong requirements for a highly scalable input of large data sets and very frequent data collection as well as the team to support it. But it may be considered bad if you have a small team only and are trying to get a quick and cheap MVP for a new service that has limited scalability requirements (because the complexity of the solution slows down your development unnecessarily).
It may be good because the development team is familiar with those technologies and can effectively develop with those. Or it may be bad because your team does not know anything about those and the investment in learning those will not be justifiable by long term gains.
Don't forget that one of the ideas of the microservices architectural style is that each service can be owned by a distinct team that makes its own decisions about what technology to use for implementation (for whatever reason: ease of development, business reasons, etc.). So in other words, the microservices style embraces the old wisdom that architecture follows organization.
Here is a link to a recommended further read.
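For reference, the pattern the question describes can be sketched like this (all names hypothetical, with a plain in-memory queue standing in for Kafka): the gateway answers GETs synchronously and pushes writes onto a topic for services to consume asynchronously.

```python
# Sketch of the pattern from the question (hypothetical names): the gateway
# serves GETs synchronously and enqueues POST/PUT/DELETE onto a queue
# (standing in for a Kafka topic) for downstream services to consume.

from collections import deque

class Gateway:
    def __init__(self, read_services):
        self.read_services = read_services   # service name -> read handler
        self.queue = deque()                 # stands in for a Kafka topic

    def handle(self, method, service, payload=None):
        if method == "GET":
            return self.read_services[service](payload)   # synchronous read
        self.queue.append((method, service, payload))     # asynchronous write
        return {"status": "accepted"}                     # 202-style response

gw = Gateway({"users": lambda q: {"user": q}})
print(gw.handle("GET", "users", "alice"))          # {'user': 'alice'}
print(gw.handle("POST", "users", {"name": "bob"})) # {'status': 'accepted'}
print(len(gw.queue))                               # 1 pending write
```

Note the tradeoff baked into the sketch: the caller gets only an "accepted" acknowledgement for writes, so clients must tolerate eventual consistency and you need a way to surface failures that happen after the message is queued.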
Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
I'm learning the Spring Framework, and I want to build an application whose architecture will be good enough. For example, my application will be some kind of social network. I'm using Spring Boot for this web application.
Is this architecture correct? I mean scalability, future code support, etc. What are the advantages and disadvantages? I want to use a REST API and microservices. 1 page = 1 controller = 1 service.
1 service, 1 controller, 1 page is not a good thing to limit yourself to. You'll find a page may use a whole bunch of different services. Imagine if your Facebook profile were one controller: it would be gigantically large and impossible to maintain. Just break things down as logically as you can. Sometimes it may make sense to have a page which uses multiple controllers; sometimes you could have a controller which handles multiple pages, so you don't have 30 really small controllers. I would say if you have a complex page you'll need multiple controllers, and if you have a lot of very simple pages, one controller may handle many of them.
Can I also suggest you don't break things up when you don't need to? All the microservices you're planning can just be components in your application. Otherwise you will find you have a massive overhead of maintaining code which just forwards and receives HTTP requests. This could also cost you an extremely valuable tool: transactions! You will lose transactions, and this could lead to inconsistencies in maintaining data. Keep in mind you're just one person. I have been trying to finish a webapp which is 95% done, and I'm spending 8 hours a day after work on it, working till 2am. Do yourself a favor and don't create more work for yourself.
I agree with most points of Snickers3192's answer. Microservices is not something you should plan up front, your application should exist first, a monolith is fine for the beginning. Martin Fowler has written a good piece about the Microservices yes or no question. Once your app grows and you see the need for either parts of your application being scaled separately or teams needing to be able to develop independently, then you've got a business case for Microservices (and as you'll see from Fowler's article, you must also be ready to support such an architecture). Right now it's overengineering.
That said: If you start with a monolith and plan to evolve to Microservices later, then you need to pay attention to your dependency tree. Different parts of your application will need to access each other, and that's fine, but make sure you don't introduce circular dependencies, otherwise extracting Microservices later will be a nightmare. Ideally, you can identify service interfaces that you will use, and you implement them locally now, but may later implement them by calling a Rest API.
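That service-interface idea can be sketched as follows (interface and service names are hypothetical): the contract is defined once, implemented in-process today, and swapped for an HTTP-backed implementation when the microservice is extracted — callers never change.

```python
# Sketch (hypothetical names): define the service interface now, implement it
# in-process, and swap in an HTTP-backed implementation when extracting a
# microservice later. Code that depends on the interface never changes.

from abc import ABC, abstractmethod

class InventoryApi(ABC):
    @abstractmethod
    def stock_level(self, sku: str) -> int: ...

class LocalInventory(InventoryApi):
    """Monolith-era implementation: a direct in-process call."""
    def __init__(self, levels):
        self.levels = levels
    def stock_level(self, sku):
        return self.levels.get(sku, 0)

class RemoteInventory(InventoryApi):
    """Microservice-era implementation: same contract, now over HTTP."""
    def __init__(self, http_get):
        self.http_get = http_get     # injected HTTP client function
    def stock_level(self, sku):
        return int(self.http_get(f"/inventory/{sku}"))

def can_ship(inventory: InventoryApi, sku, qty):
    """Caller code: written once against the interface."""
    return inventory.stock_level(sku) >= qty

print(can_ship(LocalInventory({"A1": 5}), "A1", 3))  # True
```

The same discipline is what keeps the dependency tree clean: if `can_ship` only sees `InventoryApi`, inventory can never sneak a dependency back onto its callers.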
The pattern you suggest (1 service for 1 controller) maps to the Backends for Frontends pattern, which can be a good idea, depending on how complex your web site is. If you have many UI components that are shared between controllers, then you'll probably want to embrace another approach, e.g. Big Pipe. But it does make sense to have one controller that bundles everything a given page needs to know and delegates it to the upstream services, independent of whether all of this is on the same machine or in a Microservice architecture.
Lastly: if you do go with Microservices, pay attention to resilience. Use a circuit breaker like Hystrix or an event-driven architecture, otherwise one dying service can take down the entire architecture.
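To illustrate the resilience point, here is a minimal circuit-breaker sketch (deliberately simplified, and not the Hystrix API): after a threshold of consecutive failures the breaker opens and fails fast with a fallback, instead of letting callers pile up behind a dying service.

```python
# Minimal circuit-breaker sketch (hypothetical, not the Hystrix API): after a
# threshold of consecutive failures the breaker opens and serves a fallback
# immediately rather than calling the failing service again.

class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn, fallback):
        if self.open:
            return fallback()        # fail fast: don't touch the service at all
        try:
            result = fn()
            self.failures = 0        # any success resets the failure count
            return result
        except Exception:
            self.failures += 1
            return fallback()

breaker = CircuitBreaker(threshold=2)

def dying_service():
    raise ConnectionError("service down")

for _ in range(3):
    print(breaker.call(dying_service, fallback=lambda: "cached response"))
print(breaker.open)  # True: further calls skip the service entirely
```

A production breaker (Hystrix, resilience4j, etc.) adds a half-open state that periodically retries the service so the circuit can close again once it recovers; that is omitted here for brevity.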
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
What would be a good way for Microservices .NET to communicate with each other? Would a peer to peer communication be better (for performance) using NETMQ (port of ZeroMQ) or would it be better via a Bus (NServiceBus or RhinoBus)?
Also would you break up your data access layer into microservices too?
-Indu
A Service Bus-based design allows your application to leverage the decoupling middleware design pattern. You have explicit control in terms of how each Microservice communicates. You can also throttle traffic. However, it really depends on your requirements. Please refer to this tutorial on building and testing Microservices in .NET (C#).
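The decoupling that a bus buys you can be sketched in a few lines (a toy in-memory bus, not the NServiceBus or NetMQ API): publishers know only the message type, never which services consume it, which is the contrast with point-to-point peer communication.

```python
# Toy in-memory bus sketch (hypothetical, not the NServiceBus/NetMQ API):
# publishers emit a message type; they never know who the subscribers are.

from collections import defaultdict

class Bus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, message_type, handler):
        self.handlers[message_type].append(handler)

    def publish(self, message_type, payload):
        # Every subscriber gets the message; adding one needs no sender change.
        return [h(payload) for h in self.handlers[message_type]]

bus = Bus()
bus.subscribe("OrderPlaced", lambda o: f"billing charged order {o['id']}")
bus.subscribe("OrderPlaced", lambda o: f"shipping queued order {o['id']}")

print(bus.publish("OrderPlaced", {"id": 7}))
```

With peer-to-peer messaging the order service would have to know about both billing and shipping endpoints; with the bus, a new subscriber (say, analytics) is added without touching the publisher. That is the decoupling-middleware tradeoff in miniature.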
We are starting down this same path. Like all new hot new methodologies, you must be careful that you are actually achieving the benefits of using a Microservices approach.
We have evaluated Azure Service Fabric as one possibility. As a place to host your applications it seems quite promising. There is also an impressive API if you want your applications to tightly integrate with the environment. This integration could likely answer your questions. The caveat is that the API is still in flux (it's improving) and documentation is scarce. It also feels a bit like vendor lock-in.
To keep things simple, we have started out by letting our microservices be simple stateless applications that communicate via REST. The endpoints are well-documented and contain a contract version number as part of the URI. We intend to introduce more sophisticated ways of interaction later as the need arises (i.e., performance).
To answer your question about "data access layer", my opinion would be that each microservice should persist state in whatever way is best for that service to do so. The actual storage is private to the microservices and other services may only use that data through its public API.
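Those two conventions — a contract version in the URI and storage kept private behind the public API — can be sketched together (routes and handler names are hypothetical):

```python
# Sketch (hypothetical routes): the contract version lives in the URI, so a
# breaking change ships as /v2 while /v1 keeps serving old consumers. The
# service's storage stays private; consumers only ever see these endpoints.

ROUTES = {
    ("GET", "/v1/customers/{id}"): "get_customer_v1",
    ("GET", "/v2/customers/{id}"): "get_customer_v2",  # breaking change => new version
}

def resolve(method, path_template):
    """Map an incoming request to a handler; unknown versions fail loudly."""
    handler = ROUTES.get((method, path_template))
    if handler is None:
        raise LookupError("404: unknown route or retired contract version")
    return handler

print(resolve("GET", "/v1/customers/{id}"))  # get_customer_v1
```

The point of the versioned URI is that the service can change its private persistence (or its whole stack) freely, as long as each published `/vN` contract keeps behaving as documented until it is retired.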
We've recently open sourced our .NET microservices framework, that covers a couple of the needed patterns for microservices. I recommend at least taking a look to understand what is needed when you go into this kind of architecture.
https://github.com/gigya/microdot