How to design microservices where I need to scale up a specific service with its own DB?

I am relatively new to databases, design, and microservices. Let's say that in an e-commerce app I create three services as below, each with its own DB:
products
customers
orders
Assume I am running a single container for each of these services.
Now suppose I am getting a huge number of orders and I want to scale up the orders service, so new containers are created on different nodes in Kubernetes or Docker Swarm. How will the two containers of the same service use the same DB instance?
What is the right way to design this? I might be missing some basics. Any pointers would help.

Scaling a service to use multiple instances is effectively the same as launching the same application multiple times. There is no effective difference between running two containers in Kubernetes and starting the application twice on your local machine.
If the database connection parameters are identical, each service instance will connect to the same database. Of course, the database server needs to be completely independent from the service so that every service instance can share it. You can think of the database itself as just another service that your application consumes.
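For illustration, here is a minimal sketch of what "identical connection parameters" can look like in practice, assuming a Node.js/TypeScript orders service and the node-postgres client; the environment variable names are made up for this example. Because every replica reads the same variables, every replica opens connections to the same database server:

```typescript
// Sketch only: every replica of the orders service builds its connection pool
// from the same environment variables, so all replicas talk to the same
// PostgreSQL server. Variable names and node-postgres are assumptions.
import { Pool } from "pg";

const ordersDb = new Pool({
  host: process.env.ORDERS_DB_HOST,            // e.g. the DNS name of the shared DB
  port: Number(process.env.ORDERS_DB_PORT ?? 5432),
  user: process.env.ORDERS_DB_USER,
  password: process.env.ORDERS_DB_PASSWORD,
  database: process.env.ORDERS_DB_NAME,
  max: 10,                                     // per-instance pool size; total connections = max * replicas
});

export async function findOrder(id: string) {
  const { rows } = await ordersDb.query("SELECT * FROM orders WHERE id = $1", [id]);
  return rows[0];
}
```

In Kubernetes those variables would typically be injected from the same ConfigMap/Secret into every Pod of the Deployment, which is why scaling the Deployment up or down does not change which database is used.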

Related

Running multiple instances of the same Spring Boot application

I have a Spring Boot application which I want to deploy on the OVH public cloud.
I need to achieve the goal of deploying multiple instances of the same application, where each instance has its own resources (such as a MySQL database).
Each instance has to be accessed via a specific URL. For example:
The first instance is accessible from http://domainname/instance1/index.html
The second instance is accessible from http://domainname/instance2/index.html
I'm really new to everything which concerns cloud computing and deployments.
From what I read on the internet, my idea is to:
Use Docker, where each instance runs inside its own container (to keep the resources separate for each instance)
Use Kubernetes to achieve the goal of having each instance accessible from a specific URL.
Am I wrong? Any online courses / resources / videos which can help would be awesome.
Thanks in advance.
Basically, Docker is a platform to develop, deploy, and run applications inside containers; a container is the run-time environment for an image. Kubernetes plays the role of an orchestrator: it provides the means for building communication channels between containers in the cluster, and it uses Docker by default as the container runtime.
There are some essential concepts in Kubernetes that describe the cluster's core components and application workloads, and thus define the desired state of the cluster.
Kubernetes objects are the abstraction level for cluster management operations and for the run-time environment of containerized applications, expressed as resources in the Kubernetes API.
I would focus on the Kubernetes resources that are most crucial in the application deployment lifecycle.
Deployment is the main mechanism that defines how Pods should be run within a cluster and provides the configuration for the application's run-time workflow.
Service describes how a particular Pod (or set of Pods) communicates with other resources within the cluster, providing the endpoint IP address and port where your application will respond.
Ingress exposes a Kubernetes Service outside the cluster, with additional benefits like load balancing, SSL/TLS certificate termination, etc.
You can get more relevant information about the Kubernetes implementation in OVH in the corresponding guide chapter.
Ideally, if it's a single application it should connect to one backend database, not 3 different databases.
If your use case is very specific and you really want to connect 3 instances of an application to 3 different databases, then treat each deployed instance as an independent application with 3 separate deployments.
Talking about Docker and Kubernetes, I don't feel you need these initially; rather, deploy your application directly to cloud instances. To achieve high availability of the application, deploy the instances as part of an autoscaling group and map an ELB to each autoscaling group. Finally, map the ELB CNAME in your DNS record and start using your application.
Docker and K8s come with their own learning curve and add overhead if you are new to this area, though they have a lot of pros and are extremely beneficial if you have a lot of microservices to manage and an agile environment.
My preference is to start with VMs first and then slowly move to the container world. :)

Best EC2 instance type for a management console of other instances

I have an application which requires a strong GPU, and it runs on an EC2 instance of type p2.xlarge, which is ideal for that kind of task. Because the p2.xlarge instances are quite expensive though, I keep them offline and only start them when necessary.
Sometimes I do multiple calculations on 1 instance, and sometimes I even use multiple instances at the same time.
I've written an application in Angular that can visualize the results of these calculations, which I've only tested in an environment where the Angular application is hosted on that same instance.
But since I have multiple instances, it would be ideal to visualize them all on a single webpage. So that leads me to the diagram below, where a single instance is like a portal or management console that controls the other instances.
Now, to get things moving, I would like to set up this front-end server as soon as possible. But there are so many instance types to choose from. What would be the best instance type for this front-end server for a dashboard / portal that controls other AWS instances? The only requirements are:
Of course it should be able to run a Node.js server (and a minimalistic DB for storing logins).
It should be able to start/stop other AWS instances.
It should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that doesn't even really need to go over the internet; it can stay within the AWS network.
Well,
Of course it should be able to run a Node.js server (and a minimalistic DB for storing logins).
Sounds like you need a small machine.
I would suggest using the T2/T3 family: very cheap, and it can be configured without bursting limits, which gives you all the power you need for a very low price.
It should be able to start/stop other AWS instances.
Not a problem.
Create an IAM role which has permissions for EC2, and when you launch your instance, give it that IAM role. It will be able to do whatever you grant it to do with the API.
Pay attention to the image you use: if you take Amazon Linux 2 you get the aws-cli preinstalled, which is pretty nice.
Read more about IAM roles here.
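The answer mentions the aws-cli, but the same instance role also provides credentials to the AWS SDKs. As a hedged sketch of the start/stop requirement, assuming a Node.js/TypeScript dashboard and the AWS SDK for JavaScript v3 (the region and instance IDs below are placeholders):

```typescript
import {
  EC2Client,
  StartInstancesCommand,
  StopInstancesCommand,
} from "@aws-sdk/client-ec2";

// Credentials come from the instance profile (the IAM role attached to the
// console's EC2 instance); nothing is hard-coded here.
const ec2 = new EC2Client({ region: "eu-west-1" }); // placeholder region

export async function startWorker(instanceId: string): Promise<void> {
  // Starts a stopped p2.xlarge worker, e.g. "i-0abc123..." (placeholder ID).
  await ec2.send(new StartInstancesCommand({ InstanceIds: [instanceId] }));
}

export async function stopWorker(instanceId: string): Promise<void> {
  // Stops the worker again once its calculation is done, to save cost.
  await ec2.send(new StopInstancesCommand({ InstanceIds: [instanceId] }));
}
```

The IAM role attached to the console instance would need to allow at least ec2:StartInstances and ec2:StopInstances on the worker instances.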
It should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that doesn't even really need to go over the internet; it can stay within the AWS network.
Just make sure you launch all instances in the same VPC. When machines are in the same VPC they can communicate with each other using only their internal IPs. You can create a new VPC as shown here, or just use the default one. After you launch the instance you will get its internal IP.
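To illustrate that last point, a small sketch assuming the ws package on the Node.js side; the private IP and port are placeholders. Because both instances sit in the same VPC, the traffic never leaves the AWS network (the worker's security group still has to allow inbound traffic on that port from the console):

```typescript
import WebSocket from "ws";

// 10.0.1.23 is a placeholder private IP of a GPU worker inside the same VPC;
// connecting to the internal IP keeps the websocket traffic on the AWS network.
const socket = new WebSocket("ws://10.0.1.23:8080");

socket.on("open", () => {
  // Ask the worker for its current calculation status (illustrative message shape).
  socket.send(JSON.stringify({ type: "status-request" }));
});

socket.on("message", (data) => {
  console.log("worker update:", data.toString());
});
```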

Separate microservice for database access

I'm managing a very large enterprise application in which I've implemented a microservice architecture. Standalone microservices have been created based on business entities and operations.
For example,
User Operations Service
Product Operations Service
Finance Operations Service
Please note that each service is implemented using an n-tier architecture with WCF, i.e. it has separate tiers (independently deployable to separate servers) for business logic and data access.
There is a centralized database which is accessed by all the microservices. There are a couple of common entities, like 'user', that are accessed by all the services, so we have redundant database calls in multiple services. More effort is required because the database is accessed from many places (e.g. a column rename requires deployment of all the apps).
To reduce and optimize code, I'm planning to create a separate microservice and move all the database operations into it, i.e. the other services can call a "Database Operations Service" for any database operation like add/update/select.
I want to know if there are any hidden challenges that I'm not aware of. Should I go with this idea? What improvements could I consider in this concept?
I'm planning to create a separate microservice and move all the database operations into it
That's how you lose all the benefits of a microservice architecture. If that one service is down, the whole application is down, unless you have replication across several nodes.
If your app does not work when one service goes down (not necessarily the service that connects to the database), then it's still a bad architecture and you are not using the benefits of a microservice architecture.
The correct form of communication would be for each service to have its own database. Or, at the very least, every service that wants, for example, the User entity should not fetch it from the DB but from the appropriate service; that appropriate service can still fetch it from the common DB in the beginning.
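A minimal sketch of that idea, assuming TypeScript/Node (Node 18+ for the global fetch) purely for illustration; the user-service URL, endpoint, and response shape are hypothetical, not something from the original question. Instead of querying the shared 'user' table directly, a consuming service asks the service that owns the entity:

```typescript
// Hypothetical client used by, say, the Product Operations Service.
interface User {
  id: string;
  name: string;
}

export async function getUser(id: string): Promise<User> {
  // One well-known owner of the User entity, instead of N services each
  // issuing their own SQL against a shared 'user' table.
  const res = await fetch(`http://user-service/api/users/${encodeURIComponent(id)}`);
  if (!res.ok) {
    throw new Error(`user-service responded with ${res.status}`);
  }
  return (await res.json()) as User;
}
```

With this pattern, a column rename in the user tables only requires redeploying the service that owns them, not every consumer.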
The next step (improvement) in the process of moving to a microservice architecture would be the creation of a separate database for each service. And by "separate" I mean that a temporary fault of one service, or of one database, still leaves the rest of the app alive and functioning.
Generally, there are no hidden challenges in your approach. It just does not give you any benefits; it is an intermediate form between a monolithic application and a microservice-based one.

Akka, AMI - discover remote actors for database access

I am working on a prototype for a client where, on AWS, auto-scaling is used to create new VMs from Amazon Machine Images (AMIs), using Akka.
I want to have just one actor control access to the database; it will create new children as needed and queue up requests that go beyond a set limit.
But, I don't know the IP address of the VM, as it may change as Amazon adds/removes VMs based on activity.
How can I discover the actor that will be used to limit access to the database?
I am not certain whether clustering will work (http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html), this question and its answers are from 2011 (Akka remote actor server discovery), and possibly routing may solve this problem: http://doc.akka.io/docs/akka/2.4.16/scala/routing.html
I have a separate REST service that just goes to the database, so it may be that this service will need to do the control before requests go to the actors.

Service Fabric multi-tenant

We are planning to use Azure Service Fabric for a data-oriented multi-tenant application. Typically 100+ customers each with 5 - 100 users.
Looking at the documentation, I concluded that the best approach is to use an Application instance for each customer, rather than trying to use Profiles to achieve multi-tenancy.
Is this the best way to go?
An application instance for each customer is a good way to handle multi-tenant situations on a single cluster, yes. There are Service Fabric applications that do this today (Azure DB is a notable one).
Here are some things you get with this approach:
Each application instance gets its own process, which means you have process-level isolation per tenant.
Each application instance is composed of one or more services, which means you can use a "microservices" architectural style for the application.
Each application instance can be created with unique parameters, so you can have various setups for each tenant. For example, you can do things like offer higher availability to certain tenants by using higher replica set size settings for the services in their application instance, or you can offer higher data capacity by using a higher partition count setting for the services in their application instance.
These are generally good things for data-oriented multi-tenant situations, but whether or not it's the best way to go of course depends on your specific requirements.
