I have a Spring Boot application that I want to deploy on the OVH public cloud.
I need to deploy multiple instances of the same application, where each instance has its own resources (such as a MySQL database).
Each instance has to be reachable at its own URL. For example:
The first instance is accessible from http://domainname/instance1/index.html
The second instance is accessible from http://domainname/instance2/index.html
I'm really new to everything concerning cloud computing and deployments.
From what I've read on the internet, my understanding is that I should:
Use Docker, with each instance running inside its own container (to keep each instance's resources separate).
Use Kubernetes to make each instance accessible from its own URL.
Am I wrong? Any online courses / resources / videos that can help would be awesome.
Thanks in advance.
Basically, Docker is a platform for developing, deploying, and running applications inside containers; a container is the runtime instance of an image. Kubernetes plays the role of an orchestrator: it provides the communication channels between containers in a cluster, and it uses Docker as its container runtime by default.
Kubernetes has some essential concepts that describe the cluster's core components and application workloads, and together they define the desired state of the cluster.
Kubernetes objects are the abstractions in the Kubernetes API through which you manage the cluster and describe the runtime environment of your containerized applications.
I would focus on the Kubernetes resources that are most crucial in the application deployment lifecycle; a combined sketch of them follows below.
Deployment is the main mechanism that defines how Pods should be created within a cluster, and it carries the configuration for the application's runtime behavior.
Service describes how a particular set of Pods communicates with other resources within the cluster, providing a stable endpoint IP address and port where your application responds.
Ingress exposes a Kubernetes Service outside the cluster, with extra benefits like load balancing, SSL/TLS certificate termination, and path-based routing.
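To make this concrete, here is a minimal sketch of all three resources wired together for one instance; the names, the image, and the single-path Ingress rule are assumptions, not taken from your setup:

```yaml
# Deployment: runs the Pods for one application instance
apiVersion: apps/v1
kind: Deployment
metadata:
  name: instance1-app              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instance1-app
  template:
    metadata:
      labels:
        app: instance1-app
    spec:
      containers:
        - name: app
          image: myregistry/myapp:latest   # hypothetical Spring Boot image
          ports:
            - containerPort: 8080
---
# Service: a stable in-cluster endpoint in front of those Pods
apiVersion: v1
kind: Service
metadata:
  name: instance1-svc
spec:
  selector:
    app: instance1-app
  ports:
    - port: 80
      targetPort: 8080
---
# Ingress: routes http://domainname/instance1/... to the Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: domainname
      http:
        paths:
          - path: /instance1
            pathType: Prefix
            backend:
              service:
                name: instance1-svc
                port:
                  number: 80
```

Note that with path-based routing the app still receives requests with the /instance1 prefix; depending on your ingress controller you may also need a rewrite rule (e.g. the nginx-ingress rewrite-target annotation) to strip it.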
You can find more relevant information about running Kubernetes on OVH in the relevant chapter of the OVH guides.
Ideally, if it's a single application, it should connect to one backend database, not 3 different databases.
If your use case is very specific and you really want to connect 3 instances of an application to 3 different databases, then treat each deployed instance as an independent application with 3 different Deployments, for example:
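Extending the earlier Deployment sketch, each instance's Deployment could point at its own MySQL Service through its environment; the names and URLs here are assumptions:

```yaml
# instance1's app points at instance1's own MySQL Service;
# instance2 would be an identical but separate Deployment whose
# SPRING_DATASOURCE_URL points at instance2-mysql instead.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: instance1-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instance1-app
  template:
    metadata:
      labels:
        app: instance1-app
    spec:
      containers:
        - name: app
          image: myregistry/myapp:latest          # hypothetical image
          env:
            - name: SPRING_DATASOURCE_URL         # standard Spring Boot property as an env var
              value: jdbc:mysql://instance1-mysql:3306/appdb
```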
Talking about Docker and Kubernetes: I don't feel you need these initially; rather, deploy your application directly to cloud instances. To achieve high availability, deploy the instances as part of an autoscaling group and map an ELB to each autoscaling group (see the sketch after this answer). Finally, map the ELB CNAME in your DNS record and start using your application.
Docker and K8s come with their own learning curve and add overhead if you are new to this area, though they have a lot of pros and are extremely beneficial if you have a lot of microservices to manage in an agile environment.
My preference is to start with VMs first and then slowly move to the container world. :)
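To make the autoscaling-group-plus-ELB idea concrete, here is a minimal CloudFormation sketch of that layout (CloudFormation is just one way to express it, and every ID below is a placeholder):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppLaunchTemplate:                   # how each EC2 instance is built
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-00000000          # placeholder: image with your app baked in
        InstanceType: t3.small
  AppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 8080
      Protocol: HTTP
      VpcId: vpc-00000000              # placeholder
  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets: [subnet-aaa00000, subnet-bbb00000]   # placeholders
  AppListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref AppLoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref AppTargetGroup
  AppAutoScalingGroup:                 # keeps 2-4 instances registered behind the ELB
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '2'
      MaxSize: '4'
      VPCZoneIdentifier: [subnet-aaa00000, subnet-bbb00000]
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
      TargetGroupARNs:
        - !Ref AppTargetGroup
Outputs:
  LoadBalancerDNS:                     # point your DNS CNAME at this
    Value: !GetAtt AppLoadBalancer.DNSName
```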
I am relatively new to DB design and microservices. Let's consider an e-commerce app where I create three services as below, each with its own DB:
products
customers
orders
Consider that I am running a single container for each of these services.
Now suppose I am getting a huge number of orders and I want to scale up the orders service, so new containers are created on different nodes in Kubernetes or Docker Swarm. How will the two containers of the same service use the same DB instance?
What is the way to design this? I might be missing some basics. Any pointers would also help.
Scaling a service to use multiple instances is effectively the same as launching the same application multiple times. There is no effective difference between running two containers in Kubernetes and starting the application twice on your local machine.
If the database connection parameters are identical, each service instance will connect to the same database. Of course, the database server would need to be completely independent from the service, so every service instance can share the database. You could think of the database as another service offering.
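As a sketch of this in Kubernetes terms (all names made up): the database sits behind its own Service, and every replica of the orders service uses the same connection parameters:

```yaml
# One MySQL Service gives the database a single stable name...
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  selector:
    app: orders-db        # assumes a MySQL Pod/Deployment labeled this way
  ports:
    - port: 3306
---
# ...and every replica of the orders service connects to that same name
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2             # scale this number up; all replicas share the DB
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: myshop/orders:latest   # hypothetical image
          env:
            - name: DB_HOST             # identical for every replica
              value: orders-db
            - name: DB_PORT
              value: "3306"
```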
I use three services in my docker-compose file: a Django API, Celery, and Redis (a simplified compose file is shown below). I can run this using AWS ECS, for which I have to create three services (one for each) and at least three tasks for those services, which seems expensive compared to an AWS EC2 instance, where I can deploy all three services by running a docker-compose command, and where I can also attach an Auto Scaling group for horizontal scaling by creating an image of my instance and attaching a load balancer to it.
I don't understand why one should use ECS, which is mainly meant for auto-scaling multi-service Dockerized applications, when I can do the same with simple EC2 Auto Scaling groups. Am I missing something?
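A simplified sketch of such a compose file (image names and commands are illustrative):

```yaml
# docker-compose.yml - three cooperating services (all values illustrative)
services:
  api:
    image: myorg/django-api:latest    # hypothetical Django API image
    ports:
      - "8000:8000"
    depends_on:
      - redis
  worker:
    image: myorg/django-api:latest    # same image, run as a Celery worker
    command: celery -A myproject worker --loglevel=info
    depends_on:
      - redis
  redis:
    image: redis:7
```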
The "why should one use ECS" is a question of "why to use an abstraction" at its heart. To answer it, ECS bills itself as a "fully managed container orchestration service" and provides an abstraction on top of a cluster of EC2 instances (run and managed by you or by AWS, depending upon the choices you make). It will come down to these choices which you want to consider before choosing to use (or NOT choosing ) ECS:
Do you want to map containers to EC2 instances yourself, OR do you want your container orchestration engine to take care of it for you?
Do you even want to be aware of the underlying EC2 instances, and hence be responsible for running and patching them, OR do you want your container orchestration engine to take care of it for you?
Do you want to own the responsibility of redeploying newly built and tested containers on all your EC2 instances individually, OR do you want your container orchestration engine and its associated compute platform (e.g. Fargate) to take care of it for you?
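For instance, with ECS on Fargate you never see the EC2 instances at all. A minimal CloudFormation sketch of that (image, subnet, and counts are placeholders; a private image or log configuration would additionally need an execution role):

```yaml
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
  TaskDef:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]
      Cpu: "256"
      Memory: "512"
      NetworkMode: awsvpc
      ContainerDefinitions:
        - Name: api
          Image: myorg/django-api:latest     # placeholder public image
          PortMappings:
            - ContainerPort: 8000
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      LaunchType: FARGATE                    # no EC2 instances to manage or patch
      DesiredCount: 2                        # ECS keeps two copies running
      TaskDefinition: !Ref TaskDef
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED            # so the task can pull the image
          Subnets: [subnet-00000000]         # placeholder
```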
I am developing an application that needs to be clustered on Liberty Profile on Bluemix. I need a shared List of objects accessible to all nodes of the cluster. The app will perform update, add, and remove operations on them, as a single node cannot cope with the application's heavy load. How can I do this with Liberty Profile? Is there a best practice or recommended approach before looking for 3rd-party solutions? Thanks
I would suggest looking at the services in Bluemix, for example Data Cache or Redis. An external (to your app) service would be the cloud best practice for sharing data between multiple instances of the same application or multiple applications that need to communicate.
A traditional Liberty cluster doesn't make sense in Bluemix because the Cloud Foundry platform upon which it is based is already providing ways to achieve high availability and scale.
I've already got an RDS instance configured and running, but currently we're still running on our old web host. We'd like to decrease latency by hosting the code within AWS.
Two concerns:
1) Future scalability
2) Redundancy ... not a huge concern but AWS does occasionally go down.
Has anyone had this problem where they just need to cheaply run what is essentially a database interface via a language such as PHP/Ruby, in 2 regions? (with one as a failover)
Does Amazon offer something that automatically manages resources, that's also cost effective?
Amazon's Elastic Beanstalk service supports both PHP and Ruby apps natively, and allows you to scale your app servers automatically.
In a second region, run an RDS read replica off of your master (easy to set up in RDS) and have another Beanstalk setup there, ready as a failover.
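As an illustration, a Beanstalk environment's scaling can be configured declaratively; a minimal .ebextensions sketch (the sizes are arbitrary):

```yaml
# .ebextensions/scaling.config - shipped inside the application source bundle
option_settings:
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
  aws:autoscaling:asg:
    MinSize: 2     # always keep two app servers running
    MaxSize: 6     # scale out under load
```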
Suppose I have created an Amazon load balancer with three EC2 instances: A, B, C.
Whenever I deploy an ASP.NET app to instance A, does the load balancing deploy/update it on instances B and C?
If not, how can one achieve that?
The ELB (load balancer) has nothing to do with the deployment, at least nothing useful. It's just there to split incoming (HTTP) requests from clients across the different instances.
AWS Elastic Beanstalk (the easy way)
Create an Elastic Beanstalk application and deploy your ASP.NET application to it via the AWS Toolkit for Visual Studio. The process is very straightforward, and deploying to all instances is not your job: after the project is built and uploaded to S3, Beanstalk will distribute the new version to all instances within 20-60 seconds, depending on the size.
Advantages:
very easy
existing tools
it's Platform as a Service (PaaS), not Infrastructure as a Service (IaaS), which is what applies if you set up all the AWS resources yourself
Disadvantages:
you can only deploy ONE "project" per Beanstalk app, which means you are not able to deploy different "virtual directory apps"
maybe you need to build your own custom Beanstalk AMI, but usually you can use an AWS default image (the Beanstalk host manager needs to run on the instance, otherwise it will not work)
you may lose a little bit of control, but imho that's not an issue
More info:
https://aws.amazon.com/elasticbeanstalk/
https://forums.aws.amazon.com/forum.jspa?forumID=86
http://aws.amazon.com/visualstudio/
The harder way
If you have already set up a load balancer with 3 instances behind it, it's your task to deploy your code to all instances.
You could do that e.g. via Visual Studio and Web Deploy, deploying to all 3 instances one after another... you need to use the public DNS names of the instances themselves here, not the load balancer/app DNS name.
You could use some kind of web server sync software for the app directories, e.g. MS Web Deploy, which should work, but the setup is no fun imho and definitely more complex than Beanstalk.
More info:
http://www.iis.net/downloads/microsoft/web-deploy