Delete previous deployments in Kubernetes - spring-boot

I want to delete all my previous deployments [pods] after a new deployment succeeds in Kubernetes. I can do this manually through the 'kubectl' command, but I want to write a script that automatically finds all previous deployments and deletes them. The deployment happens through a Jenkins pipeline. I wanted to know how I can do that: whether I need to write it in the deployment definition file [deployment.yaml] or make it a stage in the Jenkinsfile, i.e. write Groovy scripts to handle the deletion.
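For reference, a minimal sketch of the kind of cleanup step that could be called from a Jenkins stage, assuming kubectl is already configured on the agent and that "previous deployments" means the old, scaled-down ReplicaSets (and their pods) left behind by earlier rollouts; the namespace and deployment name below are placeholders:

```python
# cleanup_old_replicasets.py -- hedged sketch, not a drop-in solution.
# Assumes kubectl is on PATH and already authenticated against the cluster.
import json
import subprocess

NAMESPACE = "my-namespace"         # placeholder
DEPLOYMENT = "my-spring-boot-app"  # placeholder

def kubectl(*args):
    """Run a kubectl command and return its stdout."""
    return subprocess.run(
        ["kubectl", *args], check=True, capture_output=True, text=True
    ).stdout

# Wait until the new rollout has finished before touching anything old.
kubectl("rollout", "status", f"deployment/{DEPLOYMENT}", "-n", NAMESPACE)

# Old rollouts leave behind ReplicaSets scaled to 0 replicas; delete those.
replicasets = json.loads(kubectl("get", "rs", "-n", NAMESPACE, "-o", "json"))
for rs in replicasets["items"]:
    owners = rs["metadata"].get("ownerReferences", [])
    owned_by_deployment = any(o.get("name") == DEPLOYMENT for o in owners)
    if owned_by_deployment and rs["spec"].get("replicas", 0) == 0:
        kubectl("delete", "rs", rs["metadata"]["name"], "-n", NAMESPACE)
```

Note that Kubernetes can already prune old ReplicaSets for you if you set revisionHistoryLimit in deployment.yaml, which may be simpler than any scripting.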

Related

Condition to check whether the deployment is present or not in a Jenkinsfile?

I need to write the Jenkinsfile in such a way that it first checks whether the deployment already exists; if it does, it deletes that deployment and then deploys the new one.
If no old deployment exists, it deploys the new one directly to the cluster (see the sketch below).
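A minimal sketch of that check-delete-deploy flow, assuming kubectl is configured on the Jenkins agent (it could be invoked from a shell step in the Jenkinsfile); the namespace, deployment name and manifest path are placeholders:

```python
# deploy_if_needed.py -- hedged sketch of the "delete then redeploy" flow.
import subprocess

NAMESPACE = "default"         # placeholder
DEPLOYMENT = "my-app"         # placeholder
MANIFEST = "deployment.yaml"  # placeholder path

def exists(deployment: str) -> bool:
    """Return True if the deployment is already present in the cluster."""
    result = subprocess.run(
        ["kubectl", "get", "deployment", deployment, "-n", NAMESPACE],
        capture_output=True,
    )
    return result.returncode == 0

if exists(DEPLOYMENT):
    # Old deployment found: delete it before rolling out the new one.
    subprocess.run(
        ["kubectl", "delete", "deployment", DEPLOYMENT, "-n", NAMESPACE],
        check=True,
    )

# Deploy (or redeploy) from the manifest.
subprocess.run(["kubectl", "apply", "-f", MANIFEST, "-n", NAMESPACE], check=True)
```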

Automatically trigger a Cloud Build once it is created

I am deploying a series of Cloud Build Triggers through Terraform, but I also want Terraform to run each deployed trigger once so that it can do the initial deployment.
The Cloud Build Triggers are used to deploy Cloud Functions (and also Cloud Run, and maybe Workflows). We could deploy the functions in Terraform, but we want to keep the command easy to modify, so we don't want to duplicate it in both Terraform and the Cloud Build config.
It's important for the clarity and maintainability of your pipeline to clearly separate the concerns of each step:
You have a (set of) step(s) that deploys the infrastructure of your project (here, your Terraform).
You have a (set of) step(s) that runs processes on your project (this can be an Ansible script on a VM, triggering Cloud Functions or Cloud Run, or running a Cloud Build trigger).
I'm pretty sure that you can add this trigger run in Terraform, but I strongly recommend against doing so.
Edit 1
I wasn't clear. You have to run your trigger via the API after the Terraform deployment, in your main pipeline; subsequent runs will then be triggered by pushes to the Git repository. A sketch of such an API call follows.
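A minimal sketch of running a trigger once through the Cloud Build REST API (projects.triggers.run) right after terraform apply, assuming Application Default Credentials are available in the pipeline; the project ID, trigger ID and branch are placeholders, and the exact RepoSource fields required can depend on how the trigger is configured:

```python
# run_trigger_once.py -- hedged sketch of kicking off a Cloud Build trigger
# via the REST API right after `terraform apply`.
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT_ID = "my-project"     # placeholder
TRIGGER_ID = "my-trigger-id"  # placeholder (trigger ID or name)
BRANCH = "main"               # placeholder

# Use Application Default Credentials (e.g. the CI job's service account).
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# projects.triggers.run starts a build exactly as a Git push would.
url = (
    "https://cloudbuild.googleapis.com/v1/"
    f"projects/{PROJECT_ID}/triggers/{TRIGGER_ID}:run"
)
response = session.post(url, json={"branchName": BRANCH})
response.raise_for_status()
print("Started build:", response.json())
```

The gcloud CLI also has a command to run a trigger manually (gcloud builds triggers run; depending on your gcloud version it may sit behind the beta track), which could replace the REST call above.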

How to prevent GitLab CI/CD from deleting the whole build

I'm currently having a frustrating issue.
I have GitLab CI set up on a VPS server, and it is working completely fine; my pipelines run without a problem.
The issue comes when I have to re-run a pipeline. Each time, GitLab deletes the whole folder where the build is and builds it again before deploying. My problem is that I have an "uploads" folder that stores all user-uploaded content, and each time I re-run a pipeline everything in this folder gets deleted, and I obviously need this content because it's the purpose of the app.
I have tried the GitLab CI cache, with no luck. I have also tried making a new folder that isn't in the repository; it deletes that too.
Running my first job looks like this (job log screenshot omitted).
As you can see, there are a lot of lines that say "Removing ...".
To persist a folder of local files across CI pipeline runs, the best approach is to use Docker data persistence: you can delete everything from the last build while keeping the application's local files between builds, and you retain the ability to start from scratch every time a new pipeline runs. Two options are:
Bind-mount volumes
Volumes managed by Docker
GitLab's CI/CD Documentation provides a short briefing on how to persist storage between jobs when using Docker to build your applications.
I'd also like to point out that if you're using GitLab Runner through SSH, the documentation explicitly states that caching between builds is not supported with that executor. Even with the standard Shell executor, saving data to the builds folder is highly discouraged, so it can be argued that the best-practice approach is to bind-mount a volume from your host and keep the user-uploaded data isolated from the application, as sketched below.
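As an illustration of the bind-mount option, here is a minimal sketch using the Docker SDK for Python; the image name and the host/container paths are placeholders (the equivalent CLI form would be docker run -v /srv/my-app/uploads:/app/uploads ...):

```python
# persist_uploads.py -- hedged sketch of the bind-mount idea using the
# Docker SDK for Python; image name and paths are placeholders.
import docker

client = docker.from_env()

# The uploads live on the host, outside the CI build directory, so re-running
# a pipeline (which wipes the build folder) never touches them.
client.containers.run(
    "my-app-image:latest",            # placeholder image
    detach=True,
    volumes={
        "/srv/my-app/uploads": {      # host path, survives rebuilds
            "bind": "/app/uploads",   # path the app writes to
            "mode": "rw",
        }
    },
)
```

Because the uploads live on a host path rather than inside the CI builds directory, wiping and rebuilding the checkout never touches them.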

Go CD Trigger User always gets value of 'changes' in multi-staged pipelines

We are using Go CD for our CI pipelines. We have created multiple Go CD users and all of them have access to trigger any pipeline in Go CD.
We use multiple pipelines to complete the dev-to-prod cycle; however, we noticed that we can achieve the same thing with a single pipeline with multiple stages (saving the disk space consumed by multiple pipelines), each representing a deployment to an environment (i.e. staging, prod).
Our requirement is to get GO_TRIGGER_USER, which is set by the Go CD system, and based on this information do some decision making in a custom script. It works perfectly fine for single-stage builds: if changes pushed to the repository trigger the pipeline, the GO_TRIGGER_USER environment variable gets the value 'changes'; otherwise it's set to the user name of the Go CD user who manually triggered the pipeline.
The problem occurs on multi-stage builds: starting from the 2nd stage in the pipeline, GO_TRIGGER_USER is always set to 'changes', even if the pipeline was triggered manually by a Go CD user.
Any idea or workaround to avoid this behavior?
Two options:
Use the Stages API to fetch the data about the first stage in the pipeline. I use this from Python and it's not too onerous (see the sketch after this list).
Write the metadata you want to a flat file and export it as an artefact between the stages.
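A minimal sketch of the first option, run from a later stage: it asks the Go CD Stages API who approved the first stage of the current pipeline run. The server URL, credentials, first-stage name and the Accept header version are assumptions; GO_PIPELINE_NAME and GO_PIPELINE_COUNTER are standard Go CD environment variables:

```python
# who_triggered.py -- hedged sketch: from a later stage, ask the Go CD Stages
# API who triggered the *first* stage of this pipeline run.
import os
import requests

GOCD_URL = "https://gocd.example.com"        # placeholder
FIRST_STAGE = "build"                        # placeholder: name of stage 1
AUTH = ("ci-bot", os.environ["GOCD_TOKEN"])  # placeholder credentials

pipeline = os.environ["GO_PIPELINE_NAME"]
counter = os.environ["GO_PIPELINE_COUNTER"]

resp = requests.get(
    f"{GOCD_URL}/go/api/stages/{pipeline}/{counter}/{FIRST_STAGE}/1",
    auth=AUTH,
    headers={"Accept": "application/vnd.go.cd.v3+json"},  # version may differ
)
resp.raise_for_status()
stage = resp.json()

# 'approved_by' is 'changes' for SCM-triggered runs, otherwise the user name.
trigger_user = stage.get("approved_by", "unknown")
print("Pipeline run was triggered by:", trigger_user)
```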

How can I configure the Bluemix Pipeline to either tag builds or create a work item (defect) according to the state of the build?

I have a Build & Deploy pipeline in Bluemix. I would like to create a condition where, if the build fails, it automatically assigns a defect (i.e., a work item on the "Track & Plan" page) to whoever delivered the latest change (or just assigns it to the main owner of the App/Project); and if the build completes successfully, I would like to tag it.
Tagging is fine, that's general Git knowledge; I just want to solve two problems with that plan:
How do we trigger a specific subsequent stage in the pipeline if the current build fails/passes?
How do I create a work item from the pipeline? Do I need to create a separate Git repo and build some sort of API package that allows me to invoke a mechanism that creates the ticket?
I guess I'm going a bit maverick with this pipeline; please share your thoughts.
As of right now you cannot create a work item from the pipeline. That is a great feature improvement and I can take it back to the team.
For your question about triggering a stage when something passes or fails: the way it works now, only the next stage is triggered, and only if the previous one is successful. The pipeline is based on Jenkins, and Jenkins doesn't allow you to trigger a specific job depending on whether another job passes or fails. You would want to detect the pass or fail within your stage and run your logic based on that, as in the sketch below.
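A minimal sketch of that "detect pass/fail inside the stage" idea, runnable as the stage's script; the build command, tag format and failure handling are placeholders rather than anything the pipeline itself provides:

```python
# stage_script.py -- hedged sketch: run the build, then tag on success or
# record the failure inside the same stage.
import datetime
import subprocess

def run(cmd):
    """Run a shell command and return its exit code."""
    return subprocess.run(cmd, shell=True).returncode

build_status = run("./gradlew build")  # placeholder build command

if build_status == 0:
    # Success: tag the commit that was just built and push the tag.
    tag = datetime.datetime.utcnow().strftime("build-%Y%m%d-%H%M%S")
    run(f"git tag {tag}")
    run(f"git push origin {tag}")
else:
    # Failure: there is no pipeline API for creating a Track & Plan work item,
    # so do whatever logic fits here -- e.g. report the last committer.
    author = subprocess.run(
        "git log -1 --pretty=%ae", shell=True, capture_output=True, text=True
    ).stdout.strip()
    print(f"Build failed; last change was delivered by {author}")
    raise SystemExit(build_status)  # make the stage itself fail
```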
