I've been reading a lot of articles that state microservices enable CI/CD. However, the articles don't explain how or why this is the case. It seems that you could continuously deploy a monolith as well once all of its automated tests pass.
Thank you!
There are many aspects of this.
"It seems that you could continuously deploy a monolith as well once all of its automated tests pass."
A monolith is typically stateful, e.g. sessions are used for short-lived user state, whereas a modern microservice architecture typically follows the Twelve-Factor App principles and is typically deployed on e.g. Kubernetes or another cloud environment. Apps following the Twelve-Factor App principles and apps on Kubernetes are stateless, i.e. all user state must be handled outside the app. See https://12factor.net/processes
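As a concrete illustration of statelessness, here is a minimal Java sketch that keeps user state outside the app in Redis (via the Jedis client; the host name and the shopping-cart example are assumptions), so any instance can serve any request and instances can be added or removed freely:

```java
import java.util.List;
import redis.clients.jedis.Jedis;

// Instead of keeping the user's session in the app's memory, read and
// write it in an external store (here Redis), so the app process itself
// holds no state between requests.
public class ExternalSessionStore {

    private final Jedis redis = new Jedis("redis-host", 6379); // hypothetical host

    public void saveCartItem(String sessionId, String itemId) {
        // Append the item to the user's cart, kept outside the app process.
        redis.rpush("cart:" + sessionId, itemId);
        // Expire the session data after 30 minutes of inactivity.
        redis.expire("cart:" + sessionId, 1800);
    }

    public List<String> loadCart(String sessionId) {
        // Any app instance can rebuild the user state from Redis.
        return redis.lrange("cart:" + sessionId, 0, -1);
    }
}
```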
With stateless apps, it is much easier to scale out to more instances, e.g. 5 instances of an app, and it is also easy to scale down to fewer instances, e.g. 2 instances.
When the app is stateless and runs in multiple instances, doing a "rolling deployment", i.e. updating one instance at a time from version 1 to version 2, is an easy process and built-in functionality in Kubernetes.
With all the above features in place, it is now much easier to implement Continuous Deployment than it was with large stateful monolithic apps.
Related
I realize the benefits of a workflow engine, such as easy-to-understand communication, easy waiting, parallelism, and compensating actions, all with an informative graphical model. The concept is great and more manageable than a dogmatic event-driven architecture with no central coordinator and no specified flow.
We are currently using a legacy workflow engine to orchestrate microservices in the insurance business. Over time, chunks of business logic and little helper scripts have crept into the process model, which is not a developer-friendly solution to maintain and test to continuous-integration standards. The lack of available expertise and future support is also a huge risk from a project-management perspective.
I played around with Camunda and Activiti, but immediately faced compatibility issues with Spring Boot 3 and a lack of up-to-date examples and general knowledge outside a relatively small user community. This gives me a bad feeling that we would be drowning in the same swamp we are in now.
We planned to design our own Java-based orchestrator, which just invokes specified microservices in a specified order when the process is started or a user task is completed. The orchestrator will also handle monitoring and versioning of the process flow. It is up to the microservices to validate their business context and halt the process by raising user tasks if necessary. When a user task is completed, the orchestrator restarts the whole process from the beginning with all tasks cleared. It is the responsibility of the microservices to no-op when their work was already done in a previous run. Eventually, the process will reach its end and finish. This solution would be a good balance of modern DX and coordinated process management.
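To make that concrete, here is a minimal Java sketch of the restart-and-no-op loop described above (all names are hypothetical):

```java
import java.util.List;

// Each microservice is wrapped as a step that can detect its own completion.
interface OrchestratorStep {
    boolean isAlreadyDone(String processId); // the service checks its own state
    StepResult execute(String processId);    // must be safe to call repeatedly
}

enum StepResult { COMPLETED, WAITING_FOR_USER_TASK }

class Orchestrator {
    private final List<OrchestratorStep> steps;

    Orchestrator(List<OrchestratorStep> steps) {
        this.steps = steps;
    }

    // Called on process start and again whenever a user task completes.
    void run(String processId) {
        for (OrchestratorStep step : steps) {
            if (step.isAlreadyDone(processId)) {
                continue; // idempotent no-op: work was done in a previous run
            }
            if (step.execute(processId) == StepResult.WAITING_FOR_USER_TASK) {
                return; // halt; a completed user task will trigger a fresh run
            }
        }
        // All steps completed: the process has reached its end.
    }
}
```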
Is there an example of, or a name for, such an idempotent orchestrated architecture?
You only get into the challenge of aligning dependencies between your services and the process engine (and other components) if you tightly couple the orchestration engine with the services. That happened to me many times in the past, too. If you separate the engine (called a remote process engine in Camunda 7, and the only architecture in Camunda 8), then you are not influenced by its dependencies. Try for instance the Camunda RUN distribution and the external task pattern, or C8 SaaS, to get to a cleaner, decoupled architecture. See Bernd Ruecker's reasoning here.
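For illustration, here is a minimal Java sketch of the external task pattern against a remote Camunda 7 engine (such as Camunda RUN), using the official external task client; the topic name and REST URL are assumptions:

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class PaymentWorker {
    public static void main(String[] args) {
        // The engine runs in its own process; this worker only talks to its
        // REST API, so your service keeps its own dependencies (e.g. Spring
        // Boot 3) independent of the engine's.
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .asyncResponseTimeout(10000) // long polling
                .build();

        client.subscribe("charge-payment") // hypothetical topic
                .lockDuration(20000)
                .handler((externalTask, externalTaskService) -> {
                    // Business logic runs here, in your own process.
                    externalTaskService.complete(externalTask);
                })
                .open();
    }
}
```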
Details will depend on your specific requirements, but I would definitely advise anyone against building a homegrown solution. There are enough options on the market, and the days when you had to build your own are over. Requirements grow over time. There are security vulnerabilities to be aware of and to fix, etc. High maintenance, no market for resources, no synergies; you would need to maintain proprietary knowledge in the company and could not achieve the same level of quality and feature richness as a more broadly used solution can. For a list of options see for instance Bernd Ruecker's articles. Among the available options I would personally prefer an orchestrator which uses a graphical process modelling approach based on the BPMN 2 standard. It helps clarity, knowledge transfer, and business-IT alignment, and the standard is a vendor-independent skill set.
There is no need to build your own. Use the temporal.io open-source project. Besides the Java SDK, it supports Go, TypeScript/JavaScript, Python, and PHP.
The project was started at Uber in 2016. Hundreds of companies use it for mission-critical applications.
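For illustration, a minimal Temporal workflow in the Java SDK might look like the sketch below; the claim-processing activity names are hypothetical:

```java
import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityOptions;
import io.temporal.workflow.Workflow;
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;
import java.time.Duration;

// Activities are the calls out to your microservices (hypothetical names).
@ActivityInterface
interface ClaimActivities {
    void validateClaim(String claimId);
    void payClaim(String claimId);
}

@WorkflowInterface
interface ClaimWorkflow {
    @WorkflowMethod
    void processClaim(String claimId);
}

class ClaimWorkflowImpl implements ClaimWorkflow {
    private final ClaimActivities activities = Workflow.newActivityStub(
            ClaimActivities.class,
            ActivityOptions.newBuilder()
                    .setStartToCloseTimeout(Duration.ofSeconds(30))
                    .build());

    @Override
    public void processClaim(String claimId) {
        // Each step is retried and its progress persisted by the Temporal
        // engine, so the workflow code stays free of orchestration plumbing.
        activities.validateClaim(claimId);
        activities.payClaim(claimId);
    }
}
```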
When implementing a microservice architecture and keeping services really small, you soon have many services, let's say 100 for simplicity. If you deploy each service to its own AWS nano instance, this costs ~$500/month, a rather hefty sum for a smaller project or a hobby developer. What options do I have to reduce this price while still being able to have many services?
I thought about putting multiple services on one nano instance (maybe dockerized). I can comfortably fit ~5 services on one nano instance, so the price would be 5 times lower. The problem I have with this is that I have to manage a lot of things, and it doesn't seem to scale well. Is there a better way, or alternatively a web service that does this for me?
Microservices as a tool
One thing you may want to think about is whether microservices are the right architecture for a small project with low traffic.
Microservices architecture is a tool to solve, for example, high-traffic challenges; with low traffic, a monolith may be a more cost-effective approach. Microservices also come at a cost: complexity across the board (design, deployment, service discovery, and inter-service relations).
Keep in mind that your microservices shouldn't be too small; as per best practices, each should cover a single domain (https://martinfowler.com/articles/microservices.html). Don't split a business domain into multiple microservices just for the sake of having microservices (unless this is a training project where you want to learn the tools of a microservices architecture).
I am not sure how large a solution would need to be to face the challenge of 100 microservices, but maybe you should review their design and make sure they are not too small :)
Nice and short article about this topic - Microservice Architectures: What They Are and Why You Should Use Them.
Lambda
Microservices aside, as @Ashan suggested, for low ongoing cost you may want to look at a functional/lambda architecture and the Serverless Framework. Again, there is complexity (since you go one level deeper in separating your deployment packages than with microservices), which is partially addressed by the Serverless Framework, but you get tools like AWS Lambda/Azure Functions/Google Cloud Functions to run your functions as a service and pay per use (real use, not a reservation as with EC2).
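As an illustration, a minimal AWS Lambda handler in Java looks like the sketch below (the greeting example and input shape are assumptions); you pay per invocation rather than for an always-on instance:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// One small function deployed on its own; AWS invokes handleRequest
// per event and bills only for the execution time.
public class GreetingHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        String name = event.getOrDefault("name", "world");
        return "Hello, " + name + "!";
    }
}
```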
Microservices with Docker and AWS ECS
If you want to stick with microservices, look into Docker and Amazon ECS (EC2 Container Service). This will allow you to use AWS EC2 instances effectively to run multiple microservices. You may want to put an Application Load Balancer in front of AWS ECS to manage the traffic.
The AWS serverless stack will give you the lowest total cost of ownership for a microservices project.
It mainly involves AWS API Gateway and Lambda, where you pay per use (Opex) rather than paying up front for capacity (Capex).
To anyone with real-world experience breaking a monolith into separate modules and services:
I am asking this question having already read the MonolithFirst blog entry by Martin Fowler. When taking a monolith and breaking it into microservices, the "size" element of the equation is the one I ponder over the most. Specifically, how do you approach breaking a monolithic application (we're talking 2001: A Space Odyssey; as in, it is that old and that large) into microservices without getting overly fine-grained or staying too monolithic? The end goal is creating separate modules that can be upgraded and scaled independently.
What are some recommended best practices based on personal experience of breaking a monolith into microservices?
The rule of thumb is to break the monolith along bounded contexts. The most common way of defining a bounded context is by business unit (BU). For example, the module that does the actual payment is usually a separate BU.
The second thing to consider is the overhead microservices bring. You should analyse the hardware, monitoring, and infra pieces before completely breaking up the service. What I have seen is people carving smaller microservices out of the monolith instead of writing, say, 10 new services at once and deprecating the monolith.
My advice would be to take an incremental approach. Take the first BU that is actively being worked on out of the monolith. This will also give a good learning curve for the whole team.
You should clearly distinguish the sub-domain areas (bounded contexts) of your domain.
Usually (if everything is fine with your architecture) you already have some separate components in your monolithic application, each responsible for one sub-domain. These components interact with each other in one process (in the monolithic application), and you should think about how to put them into separate processes. Of course, you will need to do a lot of refactoring as you move parts of the monolith to microservices one by one.
Always remember that every microservice is responsible for some sub-domain.
I strongly recommend learning Domain-Driven Design:
Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans
Implementing Domain-Driven Design by Vaughn Vernon
Also learn the CQRS pattern.
At the beginning, you should also decide how your microservices will interact with each other.
There are several options:
Direct calls from one service to another
Send messages through some dispatcher service, which hides from the client service the knowledge of where the called (destination) services are located. This approach is similar to how a proxy server like NGINX works.
Interact through some messaging bus (middleware), like RabbitMQ
You can combine these options; for example, query requests can be processed through the dispatcher service, while commands and events go through the message bus.
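As a concrete illustration of the message-bus option, here is a minimal Java sketch of publishing an event to RabbitMQ with its official client (the queue name and payload are assumptions):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class OrderEventPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // hypothetical broker host
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Durable queue, so queued events survive a broker restart.
            channel.queueDeclare("order-created", true, false, false, null);
            // The publisher does not know or care which service consumes this.
            channel.basicPublish("", "order-created", null,
                    "{\"orderId\": 42}".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```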
From my experience, the biggest problem will be moving away from the single database that monolithic applications usually use.
In addition, some good practices:
Put each microservice in its own repository; this prevents one microservice from directly using another's code.
You also get faster checkouts and builds of each microservice on CI.
Interactions with any service should occur only through its public contracts.
Aim for each microservice to have its own database.
As an example, consider the sub-domains (bounded contexts) of a tourism-industry application; each bounded context can be served by its own microservice.
We also started our journey some time back, and I started writing a blog series about exactly this: https://dzone.com/articles/how-i-started-my-journey-in-micro-services-and-how
Basically, what I understood is that to break my problem into different microservices, I needed a design framework, which Domain-Driven Design provides (see Domain-Driven Design Distilled by Vaughn Vernon).
Then, to implement the design (using CQRS, Event Sourcing, and so on), I needed a framework that provides all of the above support.
I found Lagom good for this (Eventuate and Spring Microservices are some other choices).
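To show what event sourcing (one half of the CQRS/ES combination mentioned above) boils down to without any framework, here is a minimal plain-Java sketch; all the names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// State is rebuilt by replaying events rather than read from a table row.
class AccountOpened { final String id; AccountOpened(String id) { this.id = id; } }
class MoneyDeposited { final long amount; MoneyDeposited(long amount) { this.amount = amount; } }

class Account {
    private String id;
    private long balance;
    private final List<Object> pendingEvents = new ArrayList<>();

    // Command side: validate, then record an event.
    void deposit(long amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        apply(new MoneyDeposited(amount));
    }

    // Rebuild the aggregate by replaying its full history.
    static Account replay(List<Object> history) {
        Account account = new Account();
        history.forEach(account::mutate);
        return account;
    }

    private void apply(Object event) {
        mutate(event);
        pendingEvents.add(event); // to be persisted to the event store
    }

    private void mutate(Object event) {
        if (event instanceof AccountOpened) id = ((AccountOpened) event).id;
        else if (event instanceof MoneyDeposited) balance += ((MoneyDeposited) event).amount;
    }
}
```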
Sample Microservices Domain analysis using Domain Driven Design by Microsoft: https://learn.microsoft.com/en-us/azure/architecture/microservices/domain-analysis
One more analysis is: http://cqrs.nu/tutorial/cs/01-design
After reading up on Domain-Driven Design, I think Lagom and the links above will help you build an end-to-end application. If you still have any doubts, please ask :)
What are advantages and disadvantages of microservices and monolithic architecture?
When should you choose a microservice architecture, and when a monolithic one?
This is a very important question, because quite a few people get lured by all the buzz around microservices, and there are tradeoffs to consider. So, what are the benefits and challenges of microservices (compared with the monolithic model)?
Benefits
Deployability: more agility to roll out new versions of a service due to shorter build+test+deploy cycles. Also, flexibility to employ service-specific security, replication, persistence, and monitoring configurations.
Reliability: a microservice fault affects that microservice alone and its consumers, whereas in the monolithic model a service fault may bring down the entire monolith.
Availability: rolling out a new version of a microservice requires little downtime, whereas rolling out a new version of a service in the monolith requires a typically slower restart of the entire monolith.
Scalability: each microservice can be scaled independently using pools, clusters, grids. The deployment characteristics make microservices a great match for the elasticity of the cloud.
Modifiability: more flexibility to use new frameworks, libraries, datasources, and other resources. Also, microservices are loosely-coupled, modular components only accessible via their contracts, and hence less prone to turn into a big ball of mud.
Management: the application development effort is divided across teams that are smaller and work more independently.
Design autonomy: the team has the freedom to employ different technologies, frameworks, and patterns to design and implement each microservice, and can change and redeploy each microservice independently.
Challenges
Deployability: there are far more deployment units, so there are more complex jobs, scripts, transfer areas, and config files to oversee for deployment. (For that reason, continuous delivery and DevOps are highly desirable for microservice projects.)
Performance: services more likely need to communicate over the network, whereas services within the monolith may benefit from local calls. (For that reason, the design should avoid "chatty" microservices.)
Modifiability: changes to the contract are more likely to impact consumers deployed elsewhere, whereas in the monolithic model consumers are more likely to be within the monolith and will be rolled out in lockstep with the service. Also, mechanisms to improve autonomy, such as eventual consistency and asynchronous calls, add complexity to microservices.
Testability: integration tests are harder to set up and run because they may span different microservices on different runtime environments.
Management: the effort to manage operations increases because there are more runtime components, log files, and point-to-point interactions to oversee.
Memory use: several classes and libraries are often replicated in each microservice bundle and the overall memory footprint increases.
Runtime autonomy: in the monolith the overall business logic is collocated; with microservices it is spread across microservices. So, all else being equal, it's more likely that a microservice will interact with other microservices over the network, and that interaction decreases autonomy. If the interaction between microservices involves changing data, the need for a transactional boundary further compromises autonomy. The good news is that, to avoid runtime autonomy issues, we can employ techniques such as eventual consistency, event-driven architecture, CQRS, caching (data replication), and aligning microservices with DDD bounded contexts. These techniques are not inherent to microservices, but have been suggested by virtually every author I've read.
Once we understand these tradeoffs, there's one more thing we need in order to answer the other question: which is better, microservices or monolith? We need to know the non-functional requirements (quality attribute requirements) of the application. Once you understand how important performance is versus scalability, for example, you can weigh the tradeoffs and make an educated design decision.
While I'm relatively new to the microservices world, I'll try to answer your question as completely as possible.
When you use the microservices architecture, you get increased decoupling and separation of concerns, since you are literally splitting up your application.
As a result, your codebase will be easier to manage (each application is independent of the others in staying up and running). Therefore, if you do this right, it will be easier to add new features to your application in the future. With a monolithic architecture, this can become very hard once your application is big (and you can assume it will be at some point in time).
Deploying the application is also easier, since you build the independent microservices separately and deploy them on separate servers. This means you can build and deploy services whenever you like, without having to rebuild the rest of your application.
Since the different services are small and deployed separately, it's obviously easier to scale them, with the advantage that you can scale specific services of your application (with a monolith you scale the complete "thing", even if it's just one specific part of the application that is under excessive load).
However, for applications that are not intended to become too big to manage, it is better to stick with the monolithic architecture, since the microservices architecture involves some serious difficulties. I stated that it is easier to deploy microservices, but this is only true in comparison with big monoliths. With microservices you have the added complexity of distributing the services to different servers in different locations, and you need to find a way to manage all of that. Building microservices will help you in the long run if your application gets big, but for smaller applications it is just easier to stay monolithic.
@Luxo is spot on. I'd just like to offer a slight variation and bring in the organizational perspective. Not only do microservices allow the applications to be decoupled, they may also help on an organizational level. The organization, for example, would be able to divide into multiple teams, each of which develops and owns the set of microservices that the team provides.
For example, in larger shops like Amazon, you might have a personalization team, an e-commerce team, an infrastructure services team, etc. If you'd like to get into microservices, Amazon is a very good example. Jeff Bezos made it a mandate that teams communicate with another team's services whenever they need access to shared functionality. See here for a brief description.
In addition, engineers from Etsy and Netflix once had a small debate on Twitter about microservices vs. monolith. The debate is a little less technical but can offer a few insights as well.
I'm porting a huge application to Windows Azure. It will have a web service frontend and a processing backend. So far I thought I would use web roles for servicing client requests and worker roles for backend processing.
Managing two kinds of roles seems problematic: I'll need to decide how to scale each of them, and I'll also need several (at least two) instances of each to ensure reasonable fault tolerance, which will slightly increase operational costs. Also, in my application, client requests are rather lightweight while backend processing is heavyweight, so I'd expect backend processing to consume far more processing power than servicing client requests.
This is why I'm thinking of using web roles for everything: just spawn threads and do both request servicing and backend processing in each instance. This will make the role more complicated but will, I guess, simplify management. I'd have more instances of a uniform role and better fault tolerance.
Is it a good idea to reuse web roles for backend processing? What drawbacks should I expect?
Sounds like you already have a pretty good idea of what to think about when using multiple roles:
Cost for 2 instances to meet SLA (although some background tasks really don't need SLA if the end user doesn't see the impact)
Separate scale units
However: If you run everything in one role, then everything scales together. If, say, you have an administrative web site on port 8000, you might have difficulty reaching it if your user base is slamming the main site on port 80 with traffic.
I blogged about combining web and worker roles, here, which goes into a bit more detail along what we're discussing here. Also, as of some time in March, the restriction of 5 endpoints per role was lifted - see my blog post here for just how far you can push endpoints now. Having this less-restrictive endpoint model really opens up new possibilities for single-role deployments.
From what I understand, you are asking whether it makes sense to consolidate service layers so that you only have to deal with a single layer. At a high level, I think that makes sense. The simpler the better, as long as it's not so simple that you can't meet your primary objectives.
If your primary objective is performance, and the calls to your services are inline (meaning the caller is waiting for an answer), then consolidating the layers may help you achieve greater performance, because you avoid the network-latency overhead of additional physical layers. You can use the Task Parallel Library (TPL) to implement your threading logic.
If your primary objective is scalability, and the calls to your services are out-of-band (meaning the caller implements a fire-and-forget pattern), then using processing queues and worker roles may make more sense. One of the tenets of cloud computing is loosely coupled services. While you have more maintenance work, you also have more flexibility to grow your layers independently. Your worker roles could also use the TPL mentioned above, so that you can deploy them on larger VMs (say with 4 or 8 CPUs), which would keep the number of deployed instances to a minimum.
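To illustrate the fire-and-forget pattern, here is a minimal sketch (in Java with an in-memory queue, purely as a language-agnostic analogue; on Azure the queue would be a Storage or Service Bus queue and the consumer a worker role):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FireAndForget {
    private static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // The "worker role": drains the queue at its own pace.
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    String job = queue.take(); // blocks until work arrives
                    System.out.println("processing " + job);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        worker.setDaemon(true);
        worker.start();

        // The "web role": hands off the job and answers the caller immediately.
        queue.add("resize-image-123"); // hypothetical job id
        System.out.println("request accepted");

        Thread.sleep(500); // give the demo worker a moment before exiting
    }
}
```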
My 2 cents. :)
I would suggest you develop them as separate roles, a web role and a worker role, and then just combine them into a single web role.
This way, you can easily convert them back to truly separate roles in the future, if needed.
For more details:
http://www.31a2ba2a-b718-11dc-8314-0800200c9a66.com/2010/12/how-to-combine-worker-and-web-role-in.html
http://wely-lau.net/2011/02/25/combining-web-and-worker-role-by-utilizing-worker-role-concept/