Suppose I have created an Amazon load balancer with three EC2 instances: A, B, and C.
Whenever I deploy an ASP.NET app to instance A, does the load balancer deploy/update it on the B and C instances?
If not, how can one achieve that?
The ELB (load balancer) has nothing to do with the deployment, at least nothing useful. It's just there to split traffic from incoming (HTTP) client requests across the different instances.
AWS Elastic Beanstalk (the easy way)
Create an Elastic Beanstalk application and deploy your ASP.NET application to it via the AWS Toolkit for Visual Studio. The process is very straightforward, and it's not your job to do the deployment to all instances: after the project is built and uploaded to S3, Beanstalk distributes the new version to all instances within 20-60 seconds, depending on its size.
Advantage:
very easy
existing Tools
it's Platform as a Service (PaaS), not Infrastructure as a Service (IaaS), which is what you get if you set up all the AWS resources yourself.
Disadvantage:
you can only deploy ONE "project" per Beanstalk app, which means you cannot deploy separate "virtual directory" apps
maybe you need to build your own custom Beanstalk AMI, but usually you can use an AWS default image (the Beanstalk host manager needs to run on the instance, otherwise it will not work)
you may lose a little bit of control, but imho that's not an issue
More Infos:
https://aws.amazon.com/elasticbeanstalk/
https://forums.aws.amazon.com/forum.jspa?forumID=86
http://aws.amazon.com/visualstudio/
The harder way
If you have already set up a load balancer with 3 instances behind it, it's your task to deploy your code to all of the instances.
You could do that e.g. via Visual Studio and Web Deploy, deploying to all 3 instances one after another. Note that you need to use the public DNS names of the instances themselves here, not the load balancer's/app's DNS name.
You could also use some kind of web server sync software for the app directories, e.g. MS Web Deploy, which should work, but the setup is no fun imho and definitely more complex than Beanstalk.
More Infos:
http://www.iis.net/downloads/microsoft/web-deploy
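The "deploy to each instance, one after another" loop can be scripted. Here is a minimal Python sketch, assuming Web Deploy (msdeploy.exe) is installed on each instance with the management service listening on its default port 8172; the instance DNS names, site name, paths, and credentials below are all placeholders, not values from the question:

```python
import subprocess

def msdeploy_command(instance_dns, site_name, source_path, user, password):
    """Build an msdeploy sync command targeting one EC2 instance's
    public DNS name (never the load balancer's DNS name)."""
    dest = (
        'contentPath="{site}",computerName=https://{host}:8172/msdeploy.axd,'
        'userName={user},password={pw},authType=Basic'
    ).format(site=site_name, host=instance_dns, user=user, pw=password)
    return [
        "msdeploy.exe",
        "-verb:sync",
        '-source:contentPath="{0}"'.format(source_path),
        "-dest:" + dest,
        "-allowUntrusted",  # instances often use self-signed certificates
    ]

def deploy_to_all(instance_dns_names, **kwargs):
    # Deploy sequentially, one instance after another.
    for host in instance_dns_names:
        subprocess.check_call(msdeploy_command(host, **kwargs))
```

This only automates the repetition; each instance is still updated individually, which is exactly the drawback Beanstalk removes.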
Related
I use three services in my docker-compose file: a Django API, Celery, and Redis. I can run this using AWS ECS, for which I have to create 3 services (one for each) and at least three tasks per service, which seems expensive compared to an AWS EC2 instance, where I can deploy all 3 services by running docker-compose, attach an Auto Scaling group for horizontal scaling by creating an image of my instance, and attach a load balancer to it.
I don't understand why one should use ECS, which is mainly used for auto-scaling multi-service dockerized applications, when I can do the same with simple EC2 Auto Scaling groups. Am I missing something?
The "why should one use ECS" question is, at its heart, a question of "why use an abstraction". To answer it: ECS bills itself as a "fully managed container orchestration service" and provides an abstraction on top of a cluster of EC2 instances (run and managed by you or by AWS, depending upon the choices you make). It comes down to these choices, which you want to consider before choosing to use (or not to use) ECS:
Do you want to do mapping of containers to EC2 instances yourself OR do you want your container orchestration engine to take care of it for you?
Do you want to be even aware of underlying EC2 instances and hence be responsible for running and patching them OR do you want your container orchestration engine to take care of it for you?
Do you want to own responsibility of redeploying newly built and tested containers on all your EC2 instances individually OR do you want your container orchestration engine and associated compute platform (eg. Fargate) to take care of it for you?
I have a Spring Boot application which I want to deploy on the OVH public cloud.
I need to achieve the goal of deploying multiple instances of the same application, where each instance has its own resources (such as a MySQL database).
Each instance has to be accessed with a special URL. For example:
The first instance is accessible from http://domainname/instance1/index.html
The second instance is accessible from http://domainname/instance2/index.html
I'm really new to everything that concerns cloud computing and deployments.
From what I read on the internet, my idea is to:
Use Docker, where each instance runs inside its own container (to keep the resources separated for each instance)
Use Kubernetes to achieve the goal of having each instance accessible from a specific URL.
Am I wrong? Any online courses / resources / videos which can help would be awesome.
Thanks in advance.
Basically, Docker is a platform to develop, deploy, and run applications inside containers; containers are the run-time environment for images. Kubernetes plays the role of an orchestrator: it provides the means to build communication channels between containers in the cluster, and it uses Docker by default as its container runtime.
There are some essential concepts in Kubernetes that describe a cluster's core components and application workloads, and thus define the desired state of the cluster.
Kubernetes objects are the abstraction level for cluster management operations and for the run-time environment of containerized applications, together with their associated resources in the Kubernetes API.
I would focus on the Kubernetes resources that are most crucial in application deployment lifecycle.
Deployment is the main mechanism that defines how Pods should be implemented within a cluster and provides the specific configuration for the application's further run-time workflow.
Service describes the way a particular Pod communicates with other resources within the cluster, providing the endpoint IP address and port where your application will respond.
Ingress exposes a Kubernetes Service outside the cluster, with some exclusive benefits like load balancing, SSL/TLS certificate termination, etc.
You can get more relevant information about the Kubernetes implementation in OVH in the relevant guide chapter.
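For the "each instance under its own URL path" requirement, an Ingress with path-based routing is the usual approach. A minimal sketch, assuming each application instance is already running as its own Deployment and is exposed via Services named instance1-svc and instance2-svc on port 80 (all names here are hypothetical, not from the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: instances-ingress
spec:
  rules:
  - http:
      paths:
      - path: /instance1          # http://domainname/instance1/...
        pathType: Prefix
        backend:
          service:
            name: instance1-svc
            port:
              number: 80
      - path: /instance2          # http://domainname/instance2/...
        pathType: Prefix
        backend:
          service:
            name: instance2-svc
            port:
              number: 80
```

An Ingress controller (e.g. the NGINX ingress controller) must be installed in the cluster for this resource to take effect.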
Ideally, if it's a single application it should connect to one backend database, not 3 different databases.
If your use case is very specific and you really want to connect 3 instances of an application to 3 different databases then consider each deployed application as an independent application with 3 different deployments.
Talking about Docker and Kubernetes, I don't feel you need these initially; rather, deploy your application directly to the cloud instances. To achieve high availability of the application, deploy the instances as part of an Auto Scaling group and map an ELB to each Auto Scaling group. Finally, map the ELB CNAME in your DNS record and start using your application.
Docker and K8s come with their own learning curve and add overhead if you are new to this area, though they have a lot of pros and are extremely beneficial if you have a lot of microservices to manage in an agile environment.
My preference starts with VM first and then slowly move to the container world. :)
I've already got an RDS instance configured and running, but currently we're still running on our old web host. We'd like to decrease latency by hosting the code within AWS.
Two concerns:
1) Future scalability
2) Redundancy ... not a huge concern but AWS does occasionally go down.
Has anyone had this problem where they just need to cheaply run what is essentially a database interface via a language such as PHP/Ruby, in 2 regions? (with one as a failover)
Does Amazon offer something that automatically manages resources, that's also cost effective?
Amazon's Elastic Beanstalk service supports both PHP and Ruby apps natively, and allows you to scale your app servers automatically.
In a second region, run a read-replica RDS instance off of your master (easy to set up in RDS) and have another Beanstalk environment set up there, ready as a failover.
My application contains 25 C# projects, divided into 5 solutions.
Now I want to migrate these projects to run under Windows Azure. I realized that I should create one solution that contains all my web roles and worker roles.
Is this the correct way to do it, or can I still divide my projects into several solutions?
The Projects are as shown below:
One Web application.
5 Windows Services.
The others are all class libraries.
Great answers by others. Let me add a bit more about one vs. many hosted services: if the Web and Worker roles need to interact directly (e.g. via a TCP connection from a Web role instance to a specific Worker role instance), then those roles really should be placed in the same hosted service. External to the deployment, your hosted service listeners (web, WCF, etc.) are accessed by IP+port; you cannot access a specific instance unless you also enable Azure Connect (VPN).
If your Web and Worker roles interact via Azure Queues or Service Bus, then you have the option of deploying them to separate hosted services and still have the ability to communicate between them.
The most important question is: How many of these 25 projects are actual WebSites/Web Applications or Windows Services, and how many of them are just Class Libraries.
For the Class Libraries, you do not have to convert anything.
Now for the Cloud project(s). You have to decide how many hosted services you will create. You can read my blog post to get familiar with terms like "Hosted Service", "Role", "Role Instance", if you need to.
Once you have decided on your cloud structure (the number of hosted services and the roles per service), you can create a new solution for each hosted service.
You can also decide to host multiple web sites in a single WebRole, which is totally supported and possible, since WebRoles have run in a full IIS environment since SDK 1.3. You can read more about hosting multiple web sites in a single web role here and here, and even use the Windows Azure Accelerator for Web Roles.
If you have multiple Windows Services or background worker processes, you can combine them into a single Worker Role, or define a worker role for each separate worker process, should you desire separate elasticity for each process or should a worker require a lot of computing power and memory.
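Hosting multiple web sites in one WebRole is configured through the <Sites> element of the role's ServiceDefinition.csdef. A sketch of what that section can look like; all names, paths, and host headers here are hypothetical:

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Sites>
      <!-- Two sites in one role, distinguished by host header -->
      <Site name="Site1" physicalDirectory="..\WebApp1">
        <Bindings>
          <Binding name="HttpIn" endpointName="HttpIn" hostHeader="app1.example.com" />
        </Bindings>
      </Site>
      <Site name="Site2" physicalDirectory="..\WebApp2">
        <Bindings>
          <Binding name="HttpIn" endpointName="HttpIn" hostHeader="app2.example.com" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```

Both sites share the role's instances and therefore scale together; only separate roles scale independently.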
UPDATE with regards to question update:
So, the Web Application is clear: it goes into one Web Role. Now for the Windows Services. There are two main questions that you have to answer in order to decide whether to put them into a single worker role or into several:
Does any of your Windows Services require excessive resources (i.e. a lot of computing power or a lot of RAM)?
Does any of your Windows Services require independent scale?
If the answer to either question is "yes", then put that Windows Service in its own Worker Role. Put all the Windows Services for which the answer to both questions is "no" into a single shared Worker Role. That means you will scale all of them or none of them (by manipulating the number of instances).
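The two decision rules above can be sketched as a small routine (the service and role names are hypothetical):

```python
def assign_worker_roles(services):
    """Map each Windows Service to a Worker Role.

    services: dict of name -> (needs_excessive_resources, needs_independent_scale).
    A service answering "yes" to either question gets its own Worker Role;
    the rest share one role and therefore scale together.
    """
    roles = {}
    shared = []
    for name, (heavy, independent) in services.items():
        if heavy or independent:
            roles[name + "Role"] = [name]   # dedicated role, scaled on its own
        else:
            shared.append(name)
    if shared:
        roles["SharedWorkerRole"] = shared  # scaled all-or-nothing
    return roles
```

For example, five services where only one is resource-hungry yield two roles: one dedicated role for the heavy service and one shared role for the other four.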
As for the Cloud Service (or Hosted Service), it is up to you to decide whether to use a single cloud service to deploy all the roles (Web and Workers) or one hosted service for the Web Role and another for the Worker Roles. There is absolutely no difference from a billing perspective. You will still have your Web Role and Worker Role(s), and you will be charged based on instance count and data traffic. And you can independently scale any role (change the number of instances for a particular role) regardless of its deployment (within the same hosted service or in another hosted service).
At the end I suggest that you have single solution per Hosted Service (Cloud Project). So if you decide to have the Web Role and Worker Roles into a single Hosted Service, than you will have a single solution. If you have two Hosted Services (Cloud Projects), you will have two solutions.
Hope this helps.
You are correct! All projects go under one hosted service if you create only one cloud project for all your web role and worker role projects.
You can still divide your projects into several solutions, but then you have to create that many cloud projects and hosted services on the Azure platform.
You can do both.
You can keep your 5 separate solutions as they are. Then, create a new solution that contains all 25 projects.
Which solution you choose to contain your Cloud (ccproj) project(s) will depend on how you want to distribute your application.
Each CCPROJ corresponds to 1 hosted service. So you could put all of your webs and workers into a single hosted service. Or you could have each web role as a different hosted service, and all of your worker roles together on another hosted service. Or you could do a combination of these. A definitive answer would require more information about your application, but in VS, a project can belong to more than 1 solution.
Let's say I've already deployed a web app on EC2, maybe through FTP or Remote Desktop. From now on, what would be the best way to update to a new version of my web app?
My main concern is running several instances of that web app behind the load balancer: is there a way to update all instances at once, so that no two instances are ever running different versions of the web app?
Thanks.
Yeah. Remove each instance from the load balancer (using the API or the AWS Management Console) and update its software, until there is only one instance left. Upgrade that one without removing it, then re-add all the other instances.
That way, there will be no time when the load balancer sends your traffic to two different versions of the software.
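The procedure above can be sketched as an ordered plan of load-balancer and update steps. This is pure planning logic; actually executing the "deregister"/"register" steps could be done, for example, with boto3's classic ELB calls (deregister_instances_from_load_balancer / register_instances_with_load_balancer), which is left out here:

```python
def rolling_update_plan(instances):
    """Return the ordered steps for the upgrade described above:
    deregister and update each instance until one is left, upgrade
    that last one in place, then re-register the others."""
    *rotated, last = instances
    steps = []
    for inst in rotated:
        steps.append(("deregister", inst))
        steps.append(("update", inst))
    steps.append(("update", last))      # last one stays in the load balancer
    for inst in rotated:
        steps.append(("register", inst))
    return steps
```

For instances A, B, C this deregisters and updates A and B, updates C in place, then re-registers A and B, so the balancer never serves two versions at once.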