Deploying from Azure DevOps to an AWS machine behind a VPN

The machines we want to deploy to are in an AWS private subnet, and we connect to them via VPN.
We want to use Azure DevOps to build and deploy our code. Is there a way to deploy from Azure DevOps to AWS machines through the VPN?

Make sure your machine can reach dev.azure.com; then you can try the AWS Toolkit for Azure DevOps extension to work with AWS services.
Alternatively, you can install a self-hosted agent on a machine inside your network so that the pipeline runs in your local environment; a sketch of what a deploy step on such an agent might do follows below.
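For illustration only, here is roughly what a deploy step running on such a self-hosted agent could do: connect over the VPN to the private EC2 host via SSH, copy the build artifact, and restart the service. This is a minimal Python/paramiko sketch, not an official task; the host, user, key, and paths are hypothetical placeholders.

```python
# Minimal sketch: copy a build artifact to a private EC2 host and restart the app.
# Intended to run on a self-hosted agent that can reach the host over the VPN.
# Host, user, key and paths below are hypothetical placeholders.
import paramiko

HOST = "10.0.12.34"                      # private IP reachable through the VPN
USER = "ec2-user"
KEY_FILE = "/home/agent/.ssh/deploy_key"
LOCAL_ARTIFACT = "drop/app.tar.gz"
REMOTE_ARTIFACT = "/tmp/app.tar.gz"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=KEY_FILE)

# Upload the artifact produced by the build stage.
sftp = client.open_sftp()
sftp.put(LOCAL_ARTIFACT, REMOTE_ARTIFACT)
sftp.close()

# Unpack and restart the service on the target machine.
for cmd in (
    f"tar -xzf {REMOTE_ARTIFACT} -C /opt/myapp",
    "sudo systemctl restart myapp",
):
    stdin, stdout, stderr = client.exec_command(cmd)
    if stdout.channel.recv_exit_status() != 0:
        raise RuntimeError(f"{cmd} failed: {stderr.read().decode()}")

client.close()
```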

Related

How to determine if an ASP.NET app is running on an Azure VM

We have an ASP.NET app that gets deployed both on-premises and to Azure VMs. We are trying to figure out how to configure the app so that, when deployed on an Azure VM, it uses the Azure App Configuration service, but when deployed on-premises it continues to use the settings in the config files.
How can we know, on app startup, whether or not we are deployed on an Azure VM?
If you can, I would recommend adding a special environment variable when you provision your Azure VM or deploy your application. If not, you can use the Azure Instance Metadata Service (IMDS) to detect whether the code is running on an Azure VM.
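To make the second option concrete: IMDS is exposed inside every Azure VM at the non-routable address 169.254.169.254 and requires the Metadata: true header. The app itself is ASP.NET, but a short Python sketch illustrates the probe; treat it as a sketch rather than production code.

```python
# Detect whether we are running on an Azure VM by probing the
# Azure Instance Metadata Service (IMDS). The endpoint only exists
# inside Azure, so a quick timeout/connection error means "not Azure".
import json
import urllib.error
import urllib.request

IMDS_URL = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

def running_on_azure_vm(timeout: float = 2.0) -> bool:
    request = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            metadata = json.load(response)
            # The "compute" section is present when the call genuinely hits IMDS.
            return "compute" in metadata
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Azure VM" if running_on_azure_vm() else "On-prem / elsewhere")
```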

Deploy a Laravel application to Azure Kubernetes Service using Azure DevOps

I'm struggling a lot with a Laravel application that I want to deploy to Azure AKS using Azure DevOps.
The thing is, I'm reading a lot of inaccurate tutorials and docs, and I haven't found anything on how to push or deploy a Laravel Docker image to Azure AKS using Azure DevOps, which is frustrating.
Can someone help me with that by giving me some hints or tutorials?
Thank you!
How to push or deploy a Laravel Docker image to Azure AKS using Azure DevOps
Agree with Daniel: you need to build the container and push it to a container registry; here is the detailed official tutorial. Then either run a series of kubectl commands or create Kubernetes manifests that tell Kubernetes how to run the containers.
In addition, this doc, Deploy a Docker container app to Azure Kubernetes Service, shows how to set up continuous deployment of your containerized application to Azure Kubernetes Service (AKS) using Azure Pipelines.
BTW, the videos Getting started with CI/CD & Azure Container Service (AKS) powered by VSTS and CI/CD for Azure Kubernetes Service (AKS) using Azure DevOps are also helpful.
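In a real pipeline these steps are normally done with built-in tasks (for example Docker@2 and a Kubernetes deployment task), but the underlying sequence is just: build the image, push it to a registry such as Azure Container Registry, and apply the Kubernetes manifests. A rough Python sketch of that sequence follows; the registry name, credentials, tag, and manifest path are hypothetical placeholders.

```python
# Rough sketch of the three deployment steps: build the Laravel image,
# push it to a container registry, then apply the Kubernetes manifests.
# Registry, image name, credentials and manifest path are hypothetical.
import subprocess
import docker

REGISTRY = "myregistry.azurecr.io"          # e.g. an Azure Container Registry
IMAGE = f"{REGISTRY}/laravel-app"
TAG = "1.0.0"

client = docker.from_env()

# 1. Build the image from the Dockerfile in the repository root.
client.images.build(path=".", tag=f"{IMAGE}:{TAG}")

# 2. Log in to the registry and push the image.
client.login(username="myregistry", password="<acr-password>", registry=REGISTRY)
for line in client.images.push(IMAGE, tag=TAG, stream=True, decode=True):
    if "error" in line:
        raise RuntimeError(line["error"])

# 3. Apply the Kubernetes manifests that reference the pushed image.
subprocess.run(["kubectl", "apply", "-f", "k8s/"], check=True)
```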

Azure Pipelines: how to use a bigger server for hosted agent builds?

I don't understand how to configure the hosted Azure Pipelines build server to be a bigger machine with more RAM and more CPUs.
I want to avoid installing a self-hosted agent on one of the Azure VMs; I just want to use a more powerful hosted agent. Where can I configure this?
I'm using a hosted macOS agent.
This is not possible; please check this documentation:
If Microsoft-hosted agents don't meet your needs, then you can deploy your own self-hosted agents or use scale set agents.

Continuous Integration With VSTS and AWS EC2 Instances

We are managing source code in VSTS, where we have the main Git branch. Now we want to automate the deployment/release process to an AWS EC2 instance.
We installed the AWS Toolkit for Visual Studio 2017 from the marketplace.
Can anyone guide me on how to deploy to an AWS EC2 instance from VSTS?
You are talking about IaC (Infrastructure as Code); there are several ways you can achieve this.
Tools like Terraform, Ansible, and CloudFormation are popular tools with which you can create an EC2 instance from VSTS or Azure DevOps.
For instance, you can see here how to create an EC2 instance using Terraform.
You can install the Terraform extension from the Marketplace into your Azure DevOps organization.
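The Terraform walkthrough linked above is the route this answer recommends. Purely to illustrate what the provisioning step amounts to, here is the same idea expressed with boto3 (the AWS SDK for Python) instead of Terraform; the AMI ID, key pair, security group, and region are hypothetical placeholders.

```python
# Illustration only: launching a single EC2 instance with boto3 instead of
# Terraform. AMI ID, key pair, security group and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical AMI
    InstanceType="t3.micro",
    KeyName="deploy-key",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "vsts-provisioned"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# Wait until the instance is running before deploying anything to it.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```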

Deploy application on AWS VPC

I am planning to migrate from EC2-Classic to EC2-VPC. My application reads messages from SQS, downloads assets from S3, performs the actions mentioned in the SQS messages, and then updates RDS. I have the following queries:
Is it beneficial for me to migrate to Amazon VPC from Classic?
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code using Capistrano, but in a VPC there is the concept of a private IP address, and you cannot access a machine inside a subnet from outside. So my question is:
How should I deploy code on the EC2 instances, or rather how should I connect to them?
Thank you.
This question is pretty broad, but I'll take a stab at it:
Is it beneficial for me to migrate to Amazon VPC from Classic?
It's beneficial if you care about the security of your data in transit and at rest. In a VPC, none of your traffic is exposed to the outside, and you can choose which components to expose in case you want to receive traffic/data from the outside, i.e. your ELB or ELBs.
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code using Capistrano. But in VPC there is a concept of a private IP address and you cannot access a machine inside a subnet. So my question is: how should I deploy code on the EC2 instances, or rather how should I connect to them?
You can actually assign a public IP to your EC2 machines in a VPC if you choose to, and use that IP to deploy your code from the outside.
You can read about it here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
If you want more security, you can always deploy from a machine in your VPC that has SSH access from the outside (a bastion host): SSH to that machine and then run cap deploy from there, as sketched below.
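To make that second option concrete, the machine in your VPC acts as a bastion (jump) host; with Capistrano you would typically route SSH through it, for example via a ProxyJump/ProxyCommand entry in ~/.ssh/config. The Python/paramiko sketch below shows the same hop done manually; all host names, users, and key paths are hypothetical placeholders.

```python
# Sketch of connecting to a private-subnet instance through a bastion host.
# Host names, users and key paths are hypothetical placeholders.
import os
import paramiko

BASTION = "bastion.example.com"     # public-facing machine inside the VPC
TARGET = "10.0.1.25"                # private IP of the app server
USER = "deploy"
KEY_FILE = os.path.expanduser("~/.ssh/id_rsa")

# 1. SSH to the bastion host.
bastion = paramiko.SSHClient()
bastion.set_missing_host_key_policy(paramiko.AutoAddPolicy())
bastion.connect(BASTION, username=USER, key_filename=KEY_FILE)

# 2. Open a tunnelled channel from the bastion to the private instance.
transport = bastion.get_transport()
channel = transport.open_channel("direct-tcpip", (TARGET, 22), ("127.0.0.1", 0))

# 3. SSH to the private instance over that channel and run the deploy step.
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect(TARGET, username=USER, key_filename=KEY_FILE, sock=channel)

stdin, stdout, stderr = target.exec_command("cd /var/www/app && ./deploy.sh")
print(stdout.read().decode())

target.close()
bastion.close()
```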
