Condition to check whether a deployment is present or not in a Jenkinsfile? - jenkins-pipeline

I need to write the Jenkinsfile so that it first checks whether the deployment already exists; if it does, it deletes that deployment and then deploys the new one.
If the old deployment does not exist, it deploys the new one directly on the cluster.
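A minimal sketch of how that could look in a declarative Jenkinsfile, using kubectl's exit code to test for the deployment (the deployment name my-app, namespace my-namespace, and manifest path are placeholders):

    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    script {
                        // returnStatus gives kubectl's exit code: 0 means the deployment exists
                        def exists = sh(
                            script: 'kubectl get deployment my-app -n my-namespace',
                            returnStatus: true
                        ) == 0
                        if (exists) {
                            // Old deployment found: delete it before redeploying
                            sh 'kubectl delete deployment my-app -n my-namespace'
                        }
                        // Deploy the new version either way
                        sh 'kubectl apply -f deployment.yaml -n my-namespace'
                    }
                }
            }
        }
    }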

Related

Delete previous deployments in Kubernetes

I want to delete all my previous deployments [pods] after a new deployment is successful in Kubernetes. I can do it manually through the kubectl command, but I want to write a script which automatically finds all previous deployments and deletes them. The deployment happens through a Jenkins pipeline. I wanted to know how I can do that: whether I need to write it in the deployment definition file [deployment.yaml] or make it a stage in the Jenkinsfile, i.e. write Groovy scripts to handle the deletion.
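One possibility, if the cleanup belongs in the Jenkinsfile rather than in deployment.yaml, is a post-deploy stage that lists deployments by a shared label and deletes all but the one just applied; a rough sketch (the app label and the new deployment's name are assumptions):

    stage('Clean up old deployments') {
        steps {
            // List deployments carrying the shared label, keep the one
            // just deployed, and delete the rest
            sh '''
                for d in $(kubectl get deployments -l app=my-app -o name); do
                    if [ "$d" != "deployment.apps/my-app-v2" ]; then
                        kubectl delete "$d"
                    fi
                done
            '''
        }
    }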

Checking the deployment status in Azure

I am using the API below to get the latest deployment status (whether any release is going on) from three pipelines before starting a deployment in each pipeline.
"https://vsrm.dev.azure.com/ABC/DEF/_apis/release/deployments?definitionId=100&deploymentStatus=inProgress"
"https://vsrm.dev.azure.com/ABC/DEF/_apis/release/deployments?definitionId=101&deploymentStatus=inProgress"
"https://vsrm.dev.azure.com/ABC/DEF/_apis/release/deployments?definitionId=102&deploymentStatus=inProgress"
Based on the count I decide whether any run is going on in a pipeline (count > 0). I am working on logic where, if any deployment is going on in another pipeline, the deployment should wait for the other to finish, since all of them deploy to the same environment.
The status-checking task is itself "in progress", so the check goes into an infinite loop waiting for the running task. Is there any way to achieve this?
Based on your requirement, you need to check the deployment status of the release pipeline.
To work around the issue, I suggest adding an additional stage to each release pipeline and putting the deployment status check in that new stage.
Refer to my steps:
Add a new stage before the deployment stage and add the status check steps there.
In the REST API, you can add the definitionEnvironmentId parameter to the URL to check a specific stage.
For example:
"https://vsrm.dev.azure.com/ABC/DEF/_apis/release/deployments?definitionId=102&deploymentStatus=inProgress&definitionEnvironmentId=xx"
In this case, you can separate the status check step from the deployment process.
For more detailed info, you can refer to this doc: Deployments - List
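Put together, the check stage can run a small polling script against the Deployments - List endpoint and only let the deployment stage start once nothing else is in progress; a sketch in Bash, assuming a PAT in $PAT and jq available (definitionEnvironmentId keeps the check stage itself out of the count):

    url="https://vsrm.dev.azure.com/ABC/DEF/_apis/release/deployments"
    query="definitionId=100&deploymentStatus=inProgress&definitionEnvironmentId=xx&api-version=7.0"
    # Wait while a deployment stage of the other pipeline is still running
    while [ "$(curl -s -u ":$PAT" "$url?$query" | jq '.count')" -gt 0 ]; do
        echo "Another deployment is in progress, waiting..."
        sleep 30
    done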

Automatically trigger a Cloud Build once it is created

I am deploying a series of Cloud Build triggers through Terraform, but I also want Terraform to run each deployed Cloud Build trigger once so that it can do the initial deployment.
The Cloud Build triggers are used to deploy Cloud Functions (and also Cloud Run and maybe Workflows). We could deploy the functions in Terraform, but we want to keep the command easy to modify, so we don't want to duplicate it in both Terraform and the Cloud Build config.
It's important for the clarity and maintainability of your pipeline to clearly separate the concerns of each step.
You have a set of steps that deploy the infrastructure of your project (here, your Terraform).
You have a set of steps that run processes on your project (an Ansible script on a VM, triggering Cloud Functions, Cloud Run, or a Cloud Build trigger).
I'm pretty sure you can add this trigger in Terraform, but I strongly recommend against doing so.
Edit 1
I wasn't clear. You have to run your trigger via the API after the Terraform deployment, in your main pipeline. Subsequent triggers will then fire on pushes to the Git repository.
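For example, once terraform apply has finished, each trigger could be run once with the gcloud CLI (trigger name and branch are assumptions):

    # Kick off the initial build for a freshly created trigger
    gcloud builds triggers run my-function-trigger --branch=main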

How do I retry a failed AWS Greengrass deployment

I have revised an existing deployment with a new version of the component.
However, the deployment failed on a core device because there was a local deployment with a different version that was used for testing.
I have removed that local deployment and now I want to rerun the deployment on that device.
Is that possible? Or do I need to revise the existing deployment (keeping everything the same, so not really a revision)?
For Greengrass v2 there is no such thing as a "restart" deployment (not sure about Greengrass v1). If a deployment fails, read the error messages in the log files, fix the issue, and then either revise the deployment with the correct parameters and deploy again, or create a new deployment.
Here you can find common deployment issues:
https://docs.aws.amazon.com/greengrass/v2/developerguide/troubleshooting.html#greengrass-core-deployment-issues
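In practice that means creating a new deployment (or deployment revision) for the same target; a sketch with the AWS CLI (thing names, ARNs and the components file are placeholders):

    # Inspect what the core device received and why it failed
    aws greengrassv2 list-effective-deployments \
        --core-device-thing-name MyCoreDevice

    # Create a new deployment revision for the same target thing group;
    # the device applies the latest deployment for its target, which
    # effectively reruns it
    aws greengrassv2 create-deployment \
        --target-arn arn:aws:iot:us-east-1:123456789012:thinggroup/MyGroup \
        --deployment-name my-deployment \
        --components file://components.json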

How to serialize VSTS CI branch builds

VSTS allows you to select which branches automatically trigger a CI build by specifying a branch pattern.
However, my unit tests use a real database, which causes a problem when more than one build triggers, e.g. master and feature-123, as they will clash on the database tests.
Is there a way of specifying that only one such build should run at a time? I don't want to move away from executing tests against a real database, as there are significant differences between an in-memory database and SQL Azure.
VSTS already serializes builds that are triggered by the same CI build definition.
Even though a CI build can be triggered by multiple branches, only one build runs at a time by default (unless you use pipelines to run builds concurrently).
For example, if both the master branch and the feature-123 branch are pushed to the remote repo at the same time, the CI build definition will trigger two builds serially (not concurrently).
If you are using pipelines and need the triggered builds to run serially, you should make sure only one agent is used for the CI builds. You can do it this way:
In your CI build definition -> Options tab -> add demands to specify which agent you want to use for the CI build.
Assume that in the default agent pool there are three agents with the agent names default1, default2 and default3.
If you need to specify the default2 agent to run the CI build, then you can add the demands as below.
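In demand syntax, that amounts to something like:

    Agent.Name -equals default2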
Now even if multiple branches have been pushed at the same time, the builds will be triggered one by one, since only one agent is available for the CI build.
If you use a YAML pipeline, you can use a deployment job instead of a regular job.
With a deployment job you select a named Environment you want to deploy to.
You can configure the environment in Azure DevOps under Pipelines -> Environments, and you can choose to add an Exclusive Lock.
Then only one run can use the environment at a time and this serializes your runs.
Unfortunately, if you have multiple runs waiting for the environment (because one run currently has it locked), when the environment becomes unlocked only the latest run will continue. All the others waiting for the lock will be cancelled.
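A minimal sketch of such a deployment job (the environment name is an assumption; the Exclusive Lock itself is configured on the environment, not in the YAML):

    jobs:
    - deployment: DeployAndTest
      environment: database-tests   # environment configured with an Exclusive Lock
      strategy:
        runOnce:
          deploy:
            steps:
            - script: echo "Run database tests here, serialized by the lock"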
If you want to do it via a .yml or .yaml file, you can do the following:
- phase: Build
  queue:
    name: <Agent pool name>
    demands:
    - agent.name -equals <agent name from agent pool>
  steps:
  - task: <taskname>
    displayName: 'some display name'
    inputs:
      value: '<input value based on type of task>'
      variableName: '<input variable name>'
