Migrate from monolith to microservice architecture

We are in the initial stages of designing a microservice architecture for my client, migrating from their standard monolith app that sits on 4 JBoss servers in their own data center. Is microservice architecture targeted only at cloud-based deployment? Can I deploy a microservice on-premises on production-ready Tomcat/JBoss? Is that a good fit?

Sure you can.
Microservice architecture is a concept of having many small interacting components, each performing a well-defined part of the work, and performing it well.
It's an extension of the Linux way and of the concept of decoupling components.
In your case you can split your application into several smaller services, each with its own development and deployment cycle and a well-defined API.
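To illustrate what one of those well-defined APIs might look like on an on-premises JBoss/WildFly deployment, here is a minimal sketch of a single extracted service exposed as a JAX-RS resource. The OrderResource name, the /orders path, and the Order DTO are hypothetical, chosen only for illustration.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // One small service extracted from the monolith; other services only ever
    // see this explicit API, never its internal classes or database tables.
    @Path("/orders")
    public class OrderResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Order getOrder(@PathParam("id") long id) {
            // In a real service this would delegate to the order domain logic.
            return new Order(id, "PENDING");
        }
    }

    // Simple DTO returned over the API.
    class Order {
        public long id;
        public String status;

        Order(long id, String status) {
            this.id = id;
            this.status = status;
        }
    }

Deployed as its own WAR, a service like this can be developed, versioned, and released independently of the rest of the monolith.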

Is microservice architecture targeted only at cloud-based deployment?
No, it is an architectural style for application development. The basic idea of microservices is to split a complex application's functionality into small functions, to reduce complexity and get good performance.
There are a few things you need to consider before moving to microservices.
1. Scale of your application
If your application contains a large number of complex functions, it is better to go with microservices: separate them and deploy them independently, which makes changes and maintenance easier.
2. Performance of the application
If some application functions need a lot of computing power, you can allocate separate hardware resources to them once you implement them as microservices.
3. Deployment and maintenance
If you use microservices, you can deploy and maintain each service separately without affecting the other services.
4. Data migration
If your database contains many table relationships, it will be a little difficult to split it into per-function databases (each microservice should have its own DB). So as a first step, keep the DB monolithic and separate the functions into services; then start to refactor the DB.
5. Calling each service
Keep the front-end application clean and logic-free: wrap your microservices with an API gateway and publish all the services as one service.
6. Application security
Since each service runs separately, there is no need for session tracking; use JWT (OAuth2) API security.
7. Multiple services and transactions
If you need to handle one business function with more than one service, you need to check that each service's part of the work completes correctly (e.g. DB operations, rollbacks), so you need to develop a transaction handler.
Implementing microservices
There is no specific technology stack for it, but plenty of free technologies are available, for example:
Java with Spring Boot for the microservices (with its built-in Tomcat server) - a minimal sketch follows the note below
Zuul and Eureka for the API gateway and service discovery
OAuth 2 and JWT for security
Note
There is no fixed way to implement microservices; use the right technology stack to get good performance, keep each service to a small business function, and it doesn't matter whether you host on cloud or local servers.
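To make the Spring Boot option above concrete, here is a minimal sketch, assuming the spring-boot-starter-web dependency; it runs on the embedded Tomcat server and exposes one small, well-defined endpoint. The service name and the /api/greetings path are hypothetical.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    // A self-contained Spring Boot microservice: one small business function,
    // packaged with its own embedded Tomcat, deployable on-premises or in the cloud.
    @SpringBootApplication
    @RestController
    public class GreetingServiceApplication {

        public static void main(String[] args) {
            SpringApplication.run(GreetingServiceApplication.class, args);
        }

        // The small, well-defined API of this single service.
        @GetMapping("/api/greetings")
        public String greet() {
            return "Hello from the greeting microservice";
        }
    }

Assuming the project uses the Spring Boot Maven plugin, it can be started with mvn spring-boot:run; each such service gets its own build, deployment cycle, and API.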

There are definitely no limitations on whether you deploy your microservices on local, physical servers or in the cloud. Both approaches are valid, but they come with different advantages and disadvantages.
With local/physical servers, you will have:
bigger operations overhead (it is better if you have good DevOps people on your team)
manual scaling (when you experience bigger traffic, you need to manually fire up new instances, or use some management tool for this)
manual fault detection - if a server goes down (this depends on your/the company's server environment), someone will need to fix it "manually"
lower cost (a friend of mine buys old server instances on Amazon and runs their semi-microservice architecture on them; he calculated that they achieve quite big savings this way)
With cloud infrastructure, you get some of the advantages below (in contrast to the disadvantages above):
less operations overhead (the cloud takes care of most of the operations)
flexible scaling (when your traffic goes up, the cloud can automatically fire up new instances; when it goes down, it will shut down instances)
error/fault handling - if a problem occurs in the cloud, you do not need to worry about fixing it yourself
I did not mention all the advantages and disadvantages of the two approaches, as it also depends on the project (will it receive different traffic at different times of day, does it need to keep data locally or can it live in a foreign country in the cloud, ...).

Related

How to scale a Spring Boot app to keep the same performance?

My question is theoretical (I am not asking about the steps for scaling) and is about keeping the same performance.
For example, our web site (Spring Boot based) is visited by 100 people per day, and after a year it starts getting 1,000,000 visits per day. In this situation, I have the following basic ideas, but need to know more and whether these ideas are good or bad:
Using cloud services
Load balancer
Using microservices and applying distributed system techniques.
If read operations are much more frequent than writes or updates, a NoSQL DB can be used.
If we use a JWT token for authentication, a distributed system would not be a problem for the security/auth side, I think.
... etc.
Could you please share your ideas and comment on the ideas above? Any help would be appreciated.
There have been several POCs (proofs of concept) and proven deployment strategies for better availability.
Keeping to your points, I am summarizing and hopefully adding a bit more clarity!
Using cloud services --> This is the platform you choose, e.g. on-premise deployment or a cloud such as AWS, Azure, GCP etc. Not directly related to the scalability question at the moment.
Load balancer --> Balances the load when you have multiple instances of your microservice. For example, you can create Docker images of your microservice and deploy them as Pods on a Kubernetes platform, where you can have more than one replica (a replica is a copy of your service). The load balancer will then balance the HTTP requests among the pods.
Using microservices and applying distributed system techniques --> You can, but make sure to adhere to best practices and proven microservice deployment strategies. Read more about them here: https://www.urolime.com/blogs/microservices-deployment-strategies/
If read operations are much more frequent than writes or updates, a NoSQL DB can be used. --> Definitely; in fact you can decompose your microservices based on the number of transactions or read/write operations, and you can use a NoSQL DB like Couchbase or MongoDB (see the sketch at the end of this answer).
If we use a JWT token for authentication, a distributed system would not be a problem for the security/auth side. --> Such mechanisms are usually centralized, and a JWT token only has a limited validity period!
So there might be several other options for scaling, but the most used is the one I mentioned in point 2.
I highly suggest you get a grip on the basics. Here are a few links which should be helpful:
https://microservices.io/patterns/microservices.html
https://medium.com/design-microservices-architecture-with-patterns/decomposition-of-microservices-architecture-c8e8cec453e
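To illustrate the NoSQL point above, here is a minimal sketch assuming Spring Data MongoDB (the spring-boot-starter-data-mongodb dependency); the Article document and ArticleRepository names are hypothetical. Because such a service keeps no session state, its instances can be replicated freely behind the load balancer described in point 2.

    import java.util.List;
    import org.springframework.data.annotation.Id;
    import org.springframework.data.mongodb.core.mapping.Document;
    import org.springframework.data.mongodb.repository.MongoRepository;

    // Hypothetical read-heavy document stored in MongoDB.
    @Document(collection = "articles")
    class Article {
        @Id
        private String id;
        private String authorId;
        private String title;
        // getters and setters omitted for brevity
    }

    // Spring Data derives the query from the method name; no query code is needed.
    interface ArticleRepository extends MongoRepository<Article, String> {
        List<Article> findByAuthorId(String authorId);
    }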

I have some architecture questions

Warning: I'm new to application architecture and, officially speaking, this is the first time I'm designing something this big. This is also my own application, so I have full authority to change things.
I'm building a serverless application which consists of an on-demand application streaming platform.
Customers who want to try a specific application (usually large and expensive ones like Photoshop or SolidWorks, for example) would have the possibility to try it directly from their computers, in their browser, while the application runs on infrastructure similar to their own computer.
I'd use CI/CD pipelines and IaC technology to build the EC2 infrastructure that will host these applications, and use those same technologies to destroy that infrastructure, since it's volatile.
So to create/destroy that EC2 infrastructure I use the GitLab API.
I've thus decided to go with AWS Lambda & GitLab for now.
Now the architecture questions:
Is it better to have one serverless function that handles everything, or several functions?
I'm planning to destroy the EC2 infrastructure after a certain amount of time (10-15 minutes). How should I schedule that HTTP communication? Should I use a queue like SQS (a sketch of this option follows below)? Should I use some database and check every minute?
Again, thanks a lot for your wisdom!
Edit: Clarification on some stuff.
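Regarding the SQS option mentioned in the second question: one hedged sketch, assuming the AWS SDK for Java v2, is to enqueue a "tear down" message with a delivery delay (SQS caps the per-message delay at 15 minutes, which happens to match the 10-15 minute window) and have a Lambda consumer trigger the GitLab destroy pipeline once the message becomes visible. The queue URL, message body, and class name below are hypothetical.

    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

    public class TeardownScheduler {

        // Hypothetical queue; a Lambda subscribed to it would call the GitLab API.
        private static final String QUEUE_URL =
                "https://sqs.eu-west-1.amazonaws.com/123456789012/teardown-queue";

        public static void scheduleTeardown(String instanceId) {
            try (SqsClient sqs = SqsClient.create()) {
                // The message stays invisible for 900 seconds (the SQS maximum),
                // after which the consumer destroys the EC2 infrastructure.
                sqs.sendMessage(SendMessageRequest.builder()
                        .queueUrl(QUEUE_URL)
                        .messageBody("{\"action\":\"destroy\",\"instanceId\":\"" + instanceId + "\"}")
                        .delaySeconds(900)
                        .build());
            }
        }
    }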

Microservices: How to integrate the UI?

Today I started reading about microservice architectures - and it seems to be very interesting!
But I have one doubt I need some explanation on:
Assume I want to create a blog and would build 4 microservices for that: a User/Login service, an Article service, a Comments service and a Reporting/Analytics service (not a realistic example, I know...).
The Reporting/Analytics service is purely backend - no issue here for my understanding.
But the three others involve some UI part - and to my understanding this UI part should also be part of the microservice itself, right?
How would the UI integration work? Would I then have a 5th "front door" service that collects the user requests, forwards them to the other services, which then answer with HTML/CSS, and the front door service would compose the individual responses into what is returned to the user?
Do you have, by any chance, an example or use case for such a scenario?
Thanks and regards!
From my experience, in a microservices architecture it is often useful to have a service that acts as an API gateway in front of the more domain-specific microservices that do the work. The responsibility of the API gateway could be to aggregate results and return them to the front end, but consolidating responses returned from the microservices would couple the knowledge of the services and leak domain knowledge into the API gateway layer. The API gateway should probably be as thin as possible and should simply reach out to the services to accomplish something.
The use case you're describing would be to authenticate the user against the login service before reaching out to the article or comments service. Altogether, the front end would still stay monolithic if those parts belong to the same application.
If the application becomes big enough, it would be separated by product but would probably still rely on a core set of services. In that case the products would probably live in different UIs, which makes each one less complex (kind of like microservices on the back end). As a side note, a microservices architecture usually introduces a set of core services that can be utilized by different teams, and therefore by different applications with different UIs. An example is an e-commerce application where the customer service department edits orders on behalf of customers, while customers use an orders service to make purchases. In effect, these are two applications and they will have two different UIs. Hope this helps!
The other thing I'd like to point out is that a microservices architecture only pays off when the application becomes large and complex; it requires more resources because it carries additional overhead. Start with a monolith first :).
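To illustrate the "thin API gateway" idea, here is a minimal sketch assuming Spring Cloud Gateway (the spring-cloud-starter-gateway dependency); it only routes requests to the domain services and does no response aggregation. The route IDs and service hostnames (article-service, comments-service) are hypothetical.

    import org.springframework.cloud.gateway.route.RouteLocator;
    import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class GatewayRoutes {

        // A thin gateway: it forwards by path prefix and keeps no domain knowledge.
        @Bean
        public RouteLocator routes(RouteLocatorBuilder builder) {
            return builder.routes()
                    .route("articles", r -> r.path("/articles/**")
                            .uri("http://article-service:8080"))
                    .route("comments", r -> r.path("/comments/**")
                            .uri("http://comments-service:8080"))
                    .build();
        }
    }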
There are a couple of different approaches you can take. If it makes sense, each microservice can have its own pages that it renders. Then you only need a front end that can create the appropriate navigation for the involved services: the menu is built for the application, and each service presents its own UI. This approach works well when you need the ability to include or exclude services from the application, for instance based on licensing.
Alternatively, each microservice can provide a set of HTML fragments. Then you need a front-end service to compose the pages and navigation. The fragments must all use the same vocabulary for CSS, or whatever means you use to define the look and feel. This approach can lead to odd pages when fragments are composed while one or more of the services that should contribute are missing.
Finally, a complete application UI can be built on top of the microservices. This can result in a "tighter" UI with a better flow, but it will typically take longer and be more difficult to change as new services are added.
Which is best? As with most things in software development, it depends on what you are building. For the blog application you described, I suspect each service could have its own full-page UI; having a full UI per service is the approach I have seen most commonly. The HTML-fragment approach is more versatile but takes longer to develop initially. Once it is built, though, you will have more flexibility in how you deploy your application, which could be a real benefit for a software product company (a small sketch of the fragment approach follows below).
Hope that helps.
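A minimal, hedged sketch of the fragment-composition approach, using only the JDK's java.net.http client (Java 11+): a composing front-end service fetches HTML fragments from the article and comments services and stitches them into one page. The service URLs and paths are hypothetical.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PageComposer {

        private final HttpClient client = HttpClient.newHttpClient();

        // Fetch one HTML fragment from a microservice.
        private String fetchFragment(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }

        // Compose a blog-post page from the article and comments fragments.
        public String composePostPage(long postId) throws Exception {
            String article = fetchFragment("http://article-service:8080/fragments/articles/" + postId);
            String comments = fetchFragment("http://comments-service:8080/fragments/comments?postId=" + postId);
            return "<html><body>" + article + comments + "</body></html>";
        }
    }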

Clearing up misconceptions about Amazon (EC2) and Rackspace

I'm friends with the owner of a small creative business (with multiple departments). Until now they have been using a dedicated server (via a 3rd party) for a lot of internal projects, and they've been known to iframe a few small dev projects (like photo galleries, one-page sites, etc.) off and on for some of their clients (some with high-traffic sites).
They're looking to switch from the dedicated server to a cloud environment. The owner is enamored with Amazon's cloud services, but still wanted some alternative options. They also want the new environment to mirror the current one as much as possible (Linux/CentOS, PHP 5.3, MySQL databases) but with the ability to scale when desired.
So the misconceptions I need cleared up and the questions I have are:
1) I always assumed Amazon's cloud service was more suitable for high-end, high-traffic, complex web applications (Netflix, Pinterest, Instagram, etc.) rather than the typical server use listed above. Is this correct?
2) Is it possible to mirror their current setup on Amazon?
3) If number 1 is not true, but they instead chose Rackspace, could they run heavy web apps like Netflix, Pinterest, or Instagram on a Rackspace cloud server if they ever decided to do something that advanced (is Rackspace scalable in the same way EC2 is)?
1) Amazon AWS is also suitable for this kind of environment, or even smaller ones (they offer instances as small as "Micro", which are far less capable than what you are describing, all the way up to GPU compute clusters).
2) Yes. That is a very common setup for an AWS-based solution. In fact, I recently migrated something similar from Rackspace to AWS.
3) #1 is true. However, you can certainly mix what runs on Rackspace and in the AWS cloud. Keep in mind latency and security issues if the two component solutions need to communicate with each other. Rackspace also has a cloud offering, but it is not as mature as Amazon's.

Hosting, deploying and running web applications in the cloud

So far I've read some blog articles about cloud computing and services for hosting applications in the grid.
If I wanted to have a web application running in the cloud for as little cost as possible, what would be the best solution?
Let's assume the following configuration:
J2EE web application
Any free database (MySQL, PostgreSQL)
Any web container to deploy the web application to
What application stack would you suggest to be the best combination of services to
host
deploy
run
web applications?
As an additional requirement, the chosen services shouldn't require a lot of server management, like firewall settings etc.
This space is changing very quickly right now, so I think you will find a lot of different good answers. If I were to do something on the cheap right now, I would probably pick the following stack:
Web server: Apache
App server: Tomcat - use the clustering support if you need to grow, or split at the Apache level, or even introduce a load balancer box at the very front
DB server: MySQL - mainly because it is easy to cluster
Platform: Scalr - the cloud setup is simple and cheap. It uses Amazon's cloud on the backend, and that gets you a lot of extras, like putting servers in different data centers for redundancy.
Now you can add or remove parts of this. You may not need a web tier and can just expose Tomcat directly. You may need EJBs, and in that case you can just fire up more nodes for that and create another tier. You may want to add a load-balancing tier in front of Apache. You may want to use the Amazon CloudFront service to push static files to their edge network.
I have investigated Amazon's EC2 solution recently. It is quite good, and there are many pre-built boxes that you can use if you find one that suits your need. I think there will still be some server management involved... you cannot get away from that. But the pre-built boxes will make it easier.
The cost is reasonable, as you only pay for what you use.
[EDIT] The pre-built boxes are called Amazon Machine Images (AMIs).
I don't think you can get much closer to this than Jelastic. It has everything that @carson mentioned. In particular, I'd point out their unique web console, and the fact that they have no dependency on any API or console that has to be installed. I use their platform for many of my startup's clients. Additionally, you get nginx support for load balancing, configurable right from the console.
