Variables in an Azure Bot Framework Composer CI/CD pipeline

Long-time lurker, first-time question, so apologies if I do this wrong.
I have successfully used the following to create a continuous deployment pipeline in Azure DevOps:
Composer CICD Pipeline Sample
However, I would like to use additional pipeline variables to insert values into the appsettings.json file, such as additional API keys and the ApplicationInsights connectionString.
Does anyone have experience of doing this or can someone point me in the right direction?
Google has shone no light on this and unfortunately, I have found the botframework documentation to be lacking.

Azure deployments performed by the pipeline you reference do not use the appsettings.json file; those settings are ignored.
The pipeline instead publishes pipeline variable values to Azure as App Service Configuration Application Settings, using the task "Configure App Service Settings". You might start there.
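To add your own values (extra API keys, the Application Insights connection string, and so on), one approach is to set them the same way, as App Service application settings. Below is only a rough Azure CLI sketch of the idea: the app name, resource group, and setting keys are placeholders, and the $(...) values assume Azure DevOps pipeline variables that the agent substitutes before the script runs. Note that nested keys use ':' on Windows App Service plans and '__' on Linux.
# Rough sketch: push pipeline variables into the bot's App Service settings.
# App/resource-group names and setting keys are placeholders; the $(...) values
# are Azure DevOps pipeline variables, replaced by the agent before this runs.
az webapp config appsettings set \
  --name my-bot-app-service \
  --resource-group my-bot-rg \
  --settings "MyApiKey=$(MY_API_KEY)" \
             "ApplicationInsights:ConnectionString=$(APPINSIGHTS_CONNECTION_STRING)"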

Related

Oracle objects deployment using Azure Devops Pipeline

I have a requirement to automate deployment to different environments (dev, stage, and prod) using Azure DevOps, but I am not able to find a task for this. Azure DevOps has tasks for SQL Server database deploy and MySQL database deploy, but not for Oracle database deploy.
I am very new to Azure DevOps. Please guide me on how I can achieve this.
Red Gate has a set of deployment tools for Oracle, but the extension integrated into Azure DevOps is SQL Change Automation, which is only applicable to SQL Server databases.
So, AFAIK, there is currently no built-in task for Oracle database deploy. You could add your request for this feature on our UserVoice site, which is our main forum for product suggestions. You could also vote for the suggestion ticket and share your comments there, so the product team can provide updates when they review it.
As a workaround, you could try to use the PowerShell on Remote machine task to deploy your Oracle changes and place it in an Azure DevOps CI/CD pipeline without having to install an extension from the Marketplace. For details, please refer to this blog.
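If the agent (or the remote machine) has an Oracle client installed, the deployment step itself can be fairly small. The following is only a rough sketch of the idea, written as a shell step rather than the PowerShell task the blog uses; the user, host, service, and script names are placeholders, and the password is assumed to come from a secret pipeline variable.
# Rough sketch: apply an Oracle change script from a pipeline step.
# Requires the Oracle client (sqlplus) on the machine running the step.
# Connection details and the script path are placeholders; $(ORACLE_PASSWORD)
# is assumed to be a secret Azure DevOps pipeline variable.
sqlplus -S "deploy_user/$(ORACLE_PASSWORD)@//oracle-host:1521/ORCLPDB1" @deploy/changes.sql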

Block deployment to Azure Function App from Visual Studio

We have enabled a CI/CD pipeline using Azure Pipelines. Whenever someone checks in to master, the build should happen and deployment should follow. I wanted to understand how I can prevent someone from deploying to the Azure Function App from their local Visual Studio.
You could use RBAC rules, which may require a fair amount of configuration work.
Once you have the CI/CD pipeline enabled, setting up RBAC (Role-Based Access Control) helps prevent users from getting the publishing profile, setting deployment credentials, etc.
There will definitely be some configuration work involved, because you would have to grant the necessary permission to only one user, so that user can set up the service principal connection between Azure and DevOps, while also preventing other users from creating a deployment user.
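As a rough illustration of the RBAC side, you could grant most developers only the Reader role on the Function App, so they cannot download the publish profile and push from Visual Studio. The assignee, subscription, resource group, and app names below are placeholders.
# Rough sketch: give a developer read-only access to the Function App so they
# cannot retrieve publish credentials and deploy from Visual Studio.
# All names and IDs below are placeholders.
az role assignment create \
  --assignee dev.user@contoso.com \
  --role Reader \
  --scope /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Web/sites/<function-app-name>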

What’s the best way to deploy multiple Lambda functions from a single GitHub repo onto AWS?

I have a single GitHub repository that hosts my Lambda functions. I would like to be able to deploy new versions whenever new logic is pushed to master.
I did a lot of research and found a few different approaches, but nothing really clear. I would like to know what others feel would be the best way to go about this, and maybe some detail (if possible) on how that pipeline is set up.
Thanks
Welcome to StackOverflow. You can improve your question by reading this page.
You can set up a CI/CD pipeline using CircleCI with its GitHub integration (it is an online service, so you don't need to maintain anything yourself, like a Jenkins server, for example).
Upon every commit to your repository, a CircleCI build will be triggered. Once the build process is over, you can run sls deploy or sam deploy, use Terraform, or even create a script that uploads the .zip file from your GitHub repo to an S3 bucket and then invokes the create-function command. There's an example of how to deploy serverless applications using CircleCI along with the Serverless Framework here.
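For the script route mentioned above, the AWS CLI calls involved look roughly like the following. This is only a sketch; the bucket, function, role, runtime, and handler names are all placeholders you would replace with your own.
# Rough sketch of a script-based Lambda deployment (all names are placeholders).
zip -r my-function.zip .                                   # package the code
aws s3 cp my-function.zip s3://my-deploy-bucket/my-function.zip

# First deployment: create the function from the uploaded package
aws lambda create-function \
  --function-name my-function \
  --runtime nodejs18.x \
  --role arn:aws:iam::123456789012:role/my-lambda-exec-role \
  --handler index.handler \
  --code S3Bucket=my-deploy-bucket,S3Key=my-function.zip

# Later deployments: point the existing function at the new package
aws lambda update-function-code \
  --function-name my-function \
  --s3-bucket my-deploy-bucket \
  --s3-key my-function.zip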
Other options include Travis CI, AWS CodeDeploy, or even maintaining your own CI/CD server. The same logic applies to all of these tools, though: commit -> build -> deploy (using whichever tool you've chosen).
EDIT: After #Matt's answer, it clicked that the OP never mentioned the Serverless Framework (I somehow thought he was already using it, so I had pointed him to tutorials using the Serverless Framework). I have therefore updated my answer with a few other options for serverless deployment.
I know that this isn't exactly what you asked for, but I use the Serverless Framework (https://serverless.com) for deployment and I love it. I don't do my deployments when I push to my repo. Instead, I push to my repo after I've deployed. I like this flow because a deployment can fail for so many reasons, while pushing to GitHub is much less likely to fail. In this way, I avoid pushing code that failed to deploy to my master branch.
I don't know if you're familiar with the framework, but it is super simple. The website describes the simple steps to create and deploy a function like this:
# Step 1. Install serverless globally
$ npm install serverless -g

# Step 2. Create a serverless function
$ serverless create --template hello-world

# Step 3. Deploy to cloud provider
$ serverless deploy

# Your function is deployed!
$ http://xyz.amazonaws.com/hello-world
There are also a number of plugins you can use to integrate easily with custom domains on API Gateway, prune older versions of Lambda functions that might be filling up your limits, etc.
Overall, I've found it to be the easiest way to manage and deploy my lambdas. Hope it helps!
Given that you're using AWS Lambda, you may want to consider CodePipeline to automate your release process. SAM (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) may also be interesting.
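If you go the SAM route, the deployment step of such a pipeline typically boils down to a couple of commands. A minimal sketch, assuming a template.yaml in the repo; the stack and bucket names are placeholders.
# Minimal SAM deployment sketch (assumes a template.yaml; names are placeholders)
sam build
sam deploy \
  --stack-name my-lambda-stack \
  --s3-bucket my-deploy-bucket \
  --capabilities CAPABILITY_IAM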
I had the same problem: I wanted to manage 12 Lambdas with one Git repository. I solved it by introducing Travis CI. Travis CI saved time and is really useful in many ways. You can check the logs whenever you want, and you can share them with anyone by sharing the URL. Sample documentation of all the steps can be found here. You can go through it. 👍
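Whichever CI service runs it, the deploy step for a many-Lambdas-one-repo layout often reduces to a small loop. A hedged sketch, assuming one sub-directory per function under functions/ whose name matches the Lambda function name:
# Hedged sketch: deploy every function in the repo in one pass.
# Assumes functions/<name>/ holds the code for an existing Lambda named <name>.
for dir in functions/*/; do
  name=$(basename "$dir")
  (cd "$dir" && zip -r "/tmp/$name.zip" .)
  aws lambda update-function-code \
    --function-name "$name" \
    --zip-file "fileb:///tmp/$name.zip"
done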

Best practice/way to deploy a Laravel + Vue SPA application to AWS

I have two repositories residing in Bitbucket: Backend (a Laravel app as the API and entry point) and Frontend (the main application front end, a Vue.js app). My goal is to set up continuous deployment so that whenever something is pushed to the master branch (or another branch I select) in either repo, it triggers a build and the whole app reaches the AWS EC2 server.
I have considered/tried the following:
1. AWS CodePipeline and/or CodeDeploy. This looked like a great option since the servers are in AWS as well. However, there is no out-of-the-box support for Bitbucket, so it would have to go Bitbucket Pipelines -> AWS Lambda -> AWS S3 -> AWS CodePipeline/CodeDeploy -> AWS EC2. This seems like a very lengthy journey and I am not sure whether that's good practice at all.
2. Using Laravel Forge to deploy the Laravel app, with additional steps to build the Vue.js app. This seemed like a very basic solution; however, the build process fails there: it just takes a long time and crashes with no errors (whereas I can run the exact same process on my local machine or on a different server hosted elsewhere). I am not sure whether the issue is with the way the server is provisioned, the way Forge runs the deployment script, or whether the server is simply too weak to handle it.
My main question is: what are the best practices for deploying an app made of such components? I have read many tutorials/articles about deploying a Node.js app or a Laravel app, but haven't found good information about a scenario like this.
Would it be better to build the front-end app locally and version-control the built JS files? Or should I create a pipeline in Bitbucket that builds the app and then deploys it? Or is it best to version-control and deploy only the source files and leave the whole build process as the last step of the deployment, performed by the server that hosts the app itself? There are also some articles suggesting hosting the whole front-end app in an S3 bucket; would that be bad practice as well?
Appreciate any help and resources that would help!
From the sound of things, you have two types of deployments you might want to run.
Laravel API: if you're using Laravel Forge already, then this is a great way to deploy your Laravel app; it takes care of most of the process and makes server management easy.
Vue.js app: there are a few things you can do here. I personally prefer using a provider like Vercel or Netlify, which let you deploy your static sites/frontends for free or at low cost. You can write custom build steps, but they have great presets that should work out of the box.
If you really want to keep everything on AWS, then look into how to host static sites on AWS.
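For the static-site-on-AWS route, the core of the frontend deployment can be as small as a build plus a sync to an S3 bucket configured for static website hosting. A rough sketch; the bucket name is a placeholder, and it assumes the Vue build writes to dist/.
# Rough sketch: build the Vue SPA and publish it to an S3 static site bucket.
# Bucket name is a placeholder; assumes the build outputs to dist/.
npm ci
npm run build
aws s3 sync dist/ s3://my-frontend-bucket --delete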

How to keep deployment history of Azure Cloud Services?

I'm using Jenkins to produce .cspkg files using MSBuild. It stores the build results in Azure Blob Storage. Then I use the management portal to deploy them.
The biggest drawbacks I see are:
1. Deployments can be accidentally deleted easily.
2. There is no straightforward way to check which version the cloud service has.
Is there a better way to manage deployments?
It's definitely not the best experience, is it?
The approach I tend to use is as follows:
Build the deployment package and add the version number to the package filename (taken from AssemblyInfo.cs) e.g. MyCloudService-1.2.0.0.cspkg - this should be trivial using msbuild.
Push the package to Cloud Storage.
Perform the deployment of the package from Storage, with the Deployment Label '[CLOUD SERVICE NAME]-[VERSION] # [DATE & TIME]' e.g. 'MyCloudService-1.2.0.0 # 10-09-2015 16:30'
Check the deployment package into a 'Packages' directory in source control.
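The version-stamping and upload steps could look roughly like this as a shell sketch; the paths, storage account, and container names are placeholders, and in practice you would more likely do this inside the MSBuild/Jenkins job itself.
# Rough sketch: stamp the assembly version into the package name and push it
# to blob storage. Paths and storage names are placeholders.
VERSION=$(grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' Properties/AssemblyInfo.cs | head -n 1)
cp bin/Release/app.publish/MyCloudService.cspkg "MyCloudService-$VERSION.cspkg"
az storage blob upload \
  --account-name mystorageaccount \
  --container-name packages \
  --file "MyCloudService-$VERSION.cspkg" \
  --name "MyCloudService-$VERSION.cspkg"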
If you need to identify the version of the package deployed to the cloud service, you can see the Deployment Label in the Azure Management Portal, both in the 'old' portal (manage.windowsazure.com) and in the 'new' portal (portal.azure.com).
