Azure DevOps: Deploy Docker image to EC2 instance

I hope somebody can point me to the best approach for solving this.
I have an Azure account with an Azure Container Registry that holds my Docker images. Purely for personal education, I want to try deploying one of those Docker images to an AWS EC2 instance.
Reading some AWS documentation, I understand that I need to create an ECR repository and, using an Azure DevOps service connection, build and push the Docker images to ECR; that part seems pretty straightforward. But after this step it's plain darkness, as I cannot find a good approach for implementing continuous delivery every time a new Docker image lands in my ECR.
One of the solutions I thought of and found is to install an Azure DevOps agent on the EC2 instance to run a docker pull, but I am not 100% sure this is the best approach.
So I am asking you experts to enlighten me, and I apologize for the basic question.
Thank you so much in advance for any help you can provide, and if my question is not 100% clear, please do not hesitate to ask for more info.

You should be able to authenticate to your Azure Container Registry instance from EC2 using the standard docker login command. You do not even need an Azure DevOps agent for that, since you can configure a regular service principal with a set of standard Docker registry credentials.
Then you can pull and use your images normally.
It is absolutely not required to replicate your images in ECR.
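For reference, here is a minimal sketch of what that could look like on the EC2 instance; the registry name, image name, and service principal credentials are all placeholders, and the service principal is assumed to have been created beforehand (e.g. with az ad sp create-for-rbac) and granted pull rights on the registry:

    # Minimal sketch -- registry, image and credential values below are placeholders.
    ACR_NAME=myregistry                      # hypothetical ACR name
    SP_APP_ID=<service-principal-app-id>
    SP_PASSWORD=<service-principal-password>

    # Log in to the Azure Container Registry with the service principal credentials.
    docker login "$ACR_NAME.azurecr.io" --username "$SP_APP_ID" --password "$SP_PASSWORD"

    # Pull and run the image straight from ACR -- no ECR copy involved.
    docker pull "$ACR_NAME.azurecr.io/myapp:latest"
    docker run -d --name myapp -p 80:8080 "$ACR_NAME.azurecr.io/myapp:latest"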

Unless you want to go full ECS, use a shell script to do what you would otherwise do manually on the EC2 command line.
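For instance, a deploy script along these lines (image and container names are hypothetical) could be run on the instance whenever a new image is available:

    #!/usr/bin/env bash
    # deploy.sh -- hypothetical sketch: pull the newest image and restart the container.
    set -euo pipefail

    IMAGE="myregistry.azurecr.io/myapp:latest"   # placeholder image reference
    NAME="myapp"

    docker pull "$IMAGE"                         # fetch the latest tag
    docker rm -f "$NAME" 2>/dev/null || true     # stop and remove the old container if it exists
    docker run -d --name "$NAME" -p 80:8080 "$IMAGE"
    docker image prune -f                        # clean up dangling layers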

Related

How to execute a script on an EC2 instance from bitbucket-pipelines.yml?

I have a Bitbucket repository with a Bitbucket pipeline, and an EC2 instance. The EC2 instance has access to the repository (it can perform a pull and docker build/run).
So it seems I only need to upload some bash scripts to the EC2 instance and call them from the Bitbucket pipeline. How can I call them? Usually an SSH connection is used to run scripts on EC2; is that applicable from a Bitbucket pipeline, and is it a good solution?
There are two ways to solve this problem; I will leave the choice up to you.
I see you are using AWS, and AWS has a nice service called CodeDeploy. You can use it, create a few deployment scripts, and then integrate it with your pipeline. The drawback is that it requires an agent to be installed, so it will consume some resources (not much), but if you are looking for an agentless design this solution won't work. You can check the example in the following answer: https://stackoverflow.com/a/68933031/8248700
Alternatively, you can use something like Python Fabric (a small gun) or Ansible (a big cannon) to achieve this. Both are agentless designs that work purely over SSH.
I'm using both approaches for different scenarios: for AWS I use CodeDeploy, and for any other cloud vendor I use Python Fabric. (CodeDeploy can be used outside AWS, but then it falls under on-premises pricing, which charges per deployment.)
I hope this brings some clarity.
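To illustrate the plain-SSH (agentless) route the question hints at, a pipeline step could boil down to something like the following; the host, user, and script path are placeholders, and an SSH key is assumed to be configured for the pipeline:

    # Hypothetical sketch of the commands a Bitbucket pipeline step could run.
    # Copy the deploy script to the instance, then execute it remotely over SSH.
    scp -o StrictHostKeyChecking=no deploy.sh ec2-user@<ec2-public-ip>:/home/ec2-user/deploy.sh
    ssh -o StrictHostKeyChecking=no ec2-user@<ec2-public-ip> 'bash /home/ec2-user/deploy.sh'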

How do developers typically use Docker with a Java Maven project and AWS EC2?

I have a single Java application. We developed the application in Eclipse; it is a Maven project. We already have a system for launching our application to AWS EC2. It works but is rudimentary, and we would like to learn about the more common and modern approaches other teams use to launch their Java Maven apps to EC2. We have heard of Docker, and I researched the tool yesterday. I understand the basics of building an image, tagging it, and pushing it to either Docker Hub or Amazon's ECR registry. I have also read through a few tutorials describing how to pull a Docker image into an EC2 instance. However, I don't know if this is what we are trying to do, given that I am a bit confused about the role Docker can play in our situation to help make our DevOps more robust and efficient.
Currently, we are building our Maven app in Eclipse. When the build completes, we run a second Java file that uses the AWS SDK for Java to
launch an EC2 instance
copy the .jar artifact from the build into this instance
add the instance to a load balancer and
test the app
My understanding of how we can use Docker is as follows. We would Dockerize our application and push it to an online repository according to the steps in this video.
Then we would create an EC2 instance and pull the Docker image into this new instance according to the steps in this tutorial.
If this is the typical flow, then what is the purpose of using Docker here? What is the added benefit, when we are currently ...
creating the instance,
deploying the app directly to the instance and also
testing the running app
all using a simple single Java file and functions from the AWS SDK for Java?
#GNG, what are your objectives for containerization?
Amazon ECS is the best method if you want to operate only in an AWS environment.
Docker is effective in hybrid environments, i.e., on physical servers and VMs.
The Docker image is a portable, complete executable of your application: it delivers your jar, but it can also include property files, static resources, etc. You package everything you need and deploy it to AWS, but you could also decide to deploy the same image on other platforms (or locally).
Another benefit is that the image contains the whole runtime (OS, JDK), so you don't rely on what AWS provides, which also ensures isolation from the underlying infrastructure.
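To make the flow concrete, here is a rough sketch of the build/push/pull cycle; the image name is made up, and a Dockerfile that copies the Maven artifact and runs it with java -jar is assumed:

    # Hypothetical sketch: package the Maven build as an image, push it, then run it on EC2.

    # On the build machine:
    mvn package
    docker build -t mydockerhubuser/myapp:1.0 .
    docker push mydockerhubuser/myapp:1.0

    # On the EC2 instance:
    docker pull mydockerhubuser/myapp:1.0
    docker run -d --name myapp -p 80:8080 mydockerhubuser/myapp:1.0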

Edit Docker Container On AWS EC2

I have a Docker container running on AWS EC2. I am able to SSH into the EC2 instance and edit the code running in the container, but those changes are not reflected on the respective website. It behaves as if some kind of caching is taking place on EC2, but I can't find any caching going on. How do I get those changes to take effect? I am at a loss; any help is greatly appreciated.

Possibility of migrating Amazon EC2 Windows images to Google Compute Engine

Does anyone know a way to migrate Windows images from Amazon EC2 to Google Compute Engine and back? I've read about Linux image migration in the GCE documentation, but there is no information about Windows images. I also saw the question Is it possible to upload a windows image to Google compute engine?, but the referenced Google group is inaccessible, so I can't read it.
Thanks.
Sorry, there was a problem with that Groups link. The answer I wrote there was that at least two companies offer migration tools for moving a Windows VM into GCE: one is Racemi, and the other is CloudEndure.
I wanted to update this to note that GCE now has an "Import VM" option in the Cloud Console, which will direct you to a service that enables free migration of Windows VMs.

Continuous deployment & AWS autoscaling using Ansible (+Docker ?)

My organization's website is a Django app running on front end webservers + a few background processing servers in AWS.
We're currently using Ansible for both:
system configuration (from a bare OS image)
frequent manually-triggered code deployments.
The same Ansible playbook is able to provision either a local Vagrant dev VM, or a production EC2 instance from scratch.
We now want to implement autoscaling in EC2, and that requires some changes towards a "treat servers as cattle, not pets" philosophy.
The first prerequisite was to move from a statically managed Ansible inventory to a dynamic, EC2 API-based one, done.
The next big question is how to deploy in this new world where throwaway instances come up & down in the middle of the night. The options I can think of are:
Bake a new, fully deployed AMI for each deploy, create a new AS launch config, and update the AS group with it. This sounds very, very cumbersome, but it is also very reliable because of the clean-slate approach, and it will ensure that any system changes the code requires will be there. Also, no additional steps are needed on instance boot-up, so instances are up & running more quickly.
Use a base AMI that doesn't change very often, automatically get the latest app code from git on boot-up, and start the webserver. Once it's up, just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (a new package, permissions, etc.)? It looks like you have to start taking care of dependencies between code versions and system/AMI versions, whereas the "just do a full Ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice?
Use Docker ? I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ + memcache as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image using Ansible that contains system packages + latest code, rather than having Ansible just do it directly on an EC2 instance ?
How do you do it? Any insights / best practices?
Thanks!
This question is very opinion-based. But just to give you my take: I would go with pre-baking the AMIs with Ansible and then using CloudFormation to deploy your stacks with Auto Scaling, monitoring, and your pre-baked AMIs. The advantage of this is that if most of the application stack is pre-baked into the AMI, scaling up will happen faster.
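As a very rough sketch of the pre-baking step itself (the instance ID, playbook name, and AMI name are placeholders), the idea is simply to configure a temporary builder instance with your existing playbook and then snapshot it into an AMI:

    # Hypothetical sketch: provision a temporary builder instance with Ansible, then bake it into an AMI.
    BUILDER_IP=<builder-instance-public-ip>
    BUILDER_ID=<builder-instance-id>

    # Run the same playbook you already use for provisioning against the builder instance
    # (the trailing comma makes Ansible treat the address as an inline inventory).
    ansible-playbook -i "$BUILDER_IP," -u ubuntu site.yml

    # Snapshot the configured instance into a new AMI for the launch config / CloudFormation stack.
    aws ec2 create-image --instance-id "$BUILDER_ID" --name "myapp-$(date +%Y%m%d%H%M)" --reboot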
Docker is another approach, but in my opinion it adds an extra layer to your application that you may not need if you are already using EC2. Docker can be really useful if, say, you want to containerize within a single server: maybe you have some spare capacity on a server, and Docker allows you to run an extra application on it without interfering with the existing ones.
Having said that, some people find Docker useful not as a way to optimize the resources of a single server, but rather as a way to pre-bake your applications into containers. When you deploy a new version or new code, all you have to do is copy/replicate these Docker containers across your servers, then stop the old container versions and start the new ones.
My two cents.
A hybrid solution may give you the desired result. Store the head Docker image in S3 and pre-bake the AMI with a simple fetch-and-run script that executes on start (or pass the script into a stock AMI via user-data). Handle version control by moving the head image to your latest stable version; you could probably also implement test stacks of new versions by making the fetch script smart enough to identify which Docker version to fetch based on instance tags, which are configurable at instance launch.
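A hedged sketch of that fetch-and-run user-data script (bucket, key, and image names are placeholders, and the instance profile is assumed to allow reads from the bucket):

    #!/usr/bin/env bash
    # Hypothetical user-data sketch: fetch the "head" image from S3, load it, and start the container.
    set -euo pipefail

    BUCKET=my-deploy-bucket          # placeholder bucket
    KEY=images/myapp-head.tar        # the "head" image, repointed to the latest stable version

    aws s3 cp "s3://$BUCKET/$KEY" /tmp/myapp.tar   # pull the image tarball from S3
    docker load -i /tmp/myapp.tar                  # restore the image into the local Docker daemon
    docker run -d --name myapp -p 80:8080 myapp:head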
You can also use AWS CodeDeploy with Auto Scaling and your build server. We use the CodeDeploy plugin for Jenkins.
This setup allows you to:
perform your build in Jenkins,
upload the artifact to an S3 bucket, and
deploy to all the EC2 instances, one by one, that are part of the assigned AWS Auto Scaling group.
All that with the push of a button!
Here is the AWS tutorial: Deploy an Application to an Auto Scaling Group Using AWS CodeDeploy
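For orientation, the underlying CodeDeploy calls that the plugin drives can also be scripted with the AWS CLI, roughly like this (application, deployment group, and bucket names are placeholders, and an appspec.yml is assumed to be in the bundle):

    # Hypothetical sketch of the CodeDeploy flow (all names are placeholders).
    APP=myapp
    GROUP=myapp-asg-group            # deployment group tied to the Auto Scaling group
    BUCKET=my-deploy-bucket

    # Bundle the build output (including appspec.yml) and upload it to S3 as a CodeDeploy revision.
    aws deploy push --application-name "$APP" \
      --s3-location "s3://$BUCKET/myapp.zip" --source ./build-output

    # Deploy that revision to every instance in the deployment group.
    aws deploy create-deployment --application-name "$APP" \
      --deployment-group-name "$GROUP" \
      --s3-location bucket="$BUCKET",key=myapp.zip,bundleType=zip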