Feedback loop implementation in a CI/CD pipeline using Jenkins and Kubernetes - Maven

Currently I am trying to implement a CI/CD pipeline using DevOps automation tools like Jenkins and Kubernetes, and I am using these to deploy my microservices, which are built as Spring Boot and Maven projects.
I have now successfully deployed my Spring Boot microservices using Jenkins and Kubernetes, deploying to different namespaces in Kubernetes. When I commit, a post-commit hook fires from my SVN repository, and that post-commit hook triggers the Jenkins job.
My Confusion
While implementing the CI/CD pipeline, I read about adding feedback loops to the pipeline. This is where I am confused: if I need to implement feedback loops, what are the different approaches I can follow here?
Can anyone suggest useful documentation/tutorials for implementing feedback loops in a CI/CD pipeline?

The method of getting deployment feedback depends on your service and your choice.
For example, you can check whether the container is up, or call one of the service's REST URLs.
I use a stage like the following as the final stage to check the service:
stage('feedback') {
    // give the service a few seconds to come up after the deployment
    sleep(time: 10, unit: "SECONDS")

    // the URL needs an explicit protocol, otherwise openConnection() throws MalformedURLException
    def get = new URL("http://192.168.1.1:8080/version").openConnection()
    def getRC = get.getResponseCode()
    println(getRC)

    if (getRC == 200) {
        // service answered: print the version it reports
        println(get.getInputStream().getText())
    } else {
        error("Service is not started yet.")
    }
}
Jenkins can notify users about failed tests (jobs) by sending an email or a JSON notification. Read more:
https://wiki.jenkins.io/display/JENKINS/Email-ext+plugin
https://wiki.jenkins.io/display/JENKINS/Notification+Plugin
https://wiki.jenkins.io/display/JENKINS/Slack+Plugin
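Independent of those plugins, a minimal sketch of a failure notification is a plain webhook call from a shell step. The Slack incoming-webhook URL below is a placeholder; JOB_NAME and BUILD_NUMBER are standard Jenkins environment variables:

# post a short failure message to a Slack incoming webhook (placeholder URL)
curl -X POST -H 'Content-Type: application/json' \
  -d "{\"text\": \"$JOB_NAME build #$BUILD_NUMBER failed\"}" \
  "https://hooks.slack.com/services/XXX/YYY/ZZZ"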
If you want continuous monitoring of the deployed product, you need monitoring tools, which are different from Jenkins.

Related

Using both Apache Airflow and GitHub Actions for dbt

I'm trying to get my head around whether replacing our current CI/CD tool with GitHub Actions is a pro or a con.
My team is currently using GitHub and TeamCity for CI/CD. I find this limiting, as a separate team configures the different stages. We use Airflow to schedule our dbt production runs, so we don't use dbt Cloud.
I'm wondering if there's any benefit to replacing TeamCity with GitHub Actions, as we could write the configuration of the stages (compile, run, test, lint) ourselves. I'm interested in implementing Slim CI, which I don't believe is possible with TeamCity.
Would anyone be able to help with the pros and cons of using GitHub Actions alongside Airflow? Is it possible? I'm struggling to find any documentation on using GitHub Actions whilst running dbt production on Airflow.
Right now, my source of information comes from this: https://towardsdatascience.com/how-to-deploy-dbt-to-production-using-github-action-778bf6a1dff6
It mentions "Well, it depends. If you don’t have Airflow running in productions already, you will probably not need it now. There are more simple/elegant solutions than this (dbt Cloud, GitHub Actions, GitLab CI). Also, this approach shares many disadvantages with using a compute instance, such as waste of resources and no easy way for CI/CD."
My main goal is to get rid of TeamCity and implement Slim CI with GitHub Actions.
Thank you in advance!
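For reference, since Slim CI comes up above: in dbt, Slim CI boils down to state comparison against production artifacts. A rough sketch of the commands a CI job would run, assuming the production manifest.json has been downloaded to ./prod-artifacts (a placeholder path), is:

# build and test only the models that changed relative to production
dbt run --select state:modified+ --defer --state ./prod-artifacts
dbt test --select state:modified+ --defer --state ./prod-artifacts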

Automatically trigger a Cloud Build once it is created

I am deploying a series of Cloud Build triggers through Terraform, but I also want Terraform to run each deployed Cloud Build trigger once, so that it performs the initial deployment.
The Cloud Build triggers are used to deploy Cloud Functions (and also Cloud Run, and maybe Workflows). We could deploy the functions in the Terraform, but we want to keep the command easy to modify, so we don't want to duplicate it in both Terraform and the Cloud Build config.
It's important for the clarity and the maintainability/evolvability of your pipeline to clearly separate the concerns of each step.
You have a (set of) step(s) to deploy the infrastructure of your project (here, your Terraform).
You have a (set of) step(s) that run processes on your project (this can be an Ansible script on a VM, triggering Cloud Functions or Cloud Run, or running a Cloud Build trigger).
I'm pretty sure that you can add this trigger run in Terraform, but I strongly recommend that you don't do this.
Edit 1
I wasn't clear. You have to run your trigger via the API after the Terraform deployment, in your main pipeline, as sketched below. Subsequent runs of the trigger will then be started by pushes to the Git repository.
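As a minimal sketch, the Cloud Build API exposes a run method on triggers; PROJECT_ID and TRIGGER_ID below are placeholders:

# run a deployed Cloud Build trigger once via the REST API
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"branchName": "master"}' \
  "https://cloudbuild.googleapis.com/v1/projects/PROJECT_ID/triggers/TRIGGER_ID:run"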

What’s the best way to deploy multiple lambda functions from a single GitHub repo onto AWS?

I have a single repository on GitHub that hosts my Lambda functions. I would like to be able to deploy new versions whenever new logic is pushed to master.
I did a lot of research and found a few different approaches, but nothing really clear. I would like to know what others feel would be the best way to go about this, and, if possible, some detail on how that pipeline is set up.
Thanks
You can set up a CI/CD pipeline using CircleCI with its GitHub integration (it is an online service, so you don't need to maintain anything yourself, like a Jenkins server, for example).
Upon every commit to your repository, a CircleCI build will be triggered. Once the build process is over, you can run sls deploy or sam deploy, use Terraform, or even create a script to upload the .zip file from your GitHub repo to an S3 bucket and then, within your script, invoke the create-function command. There's an example of how to deploy serverless applications using CircleCI along with the Serverless Framework here.
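A rough sketch of that script approach, where the bucket and function names are placeholders (create-function is only needed for the very first deployment; after that, update-function-code is enough):

# package the code, upload it to S3, and point the Lambda at the new artifact
zip -r function.zip src/
aws s3 cp function.zip s3://my-deploy-bucket/function.zip
aws lambda update-function-code \
  --function-name my-function \
  --s3-bucket my-deploy-bucket \
  --s3-key function.zip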
Other options include Travis CI, AWS CodeDeploy, or even maintaining your own CI/CD server. The same logic applies to all of these tools, though: commit -> build -> deploy (using one of the tools you've chosen).
EDIT: After #Matt's answer, it clicked that the OP never mentioned the Serverless Framework (I somehow thought they were already using it, so I pointed the OP to tutorials using the Serverless Framework). I then decided to update my answer with a few other options for serverless deployment.
I know that this isn't exactly what you asked for, but I use the Serverless Framework (https://serverless.com) for deployment and I love it. I don't do my deployments when I push to my repo. Instead, I push to my repo after I've deployed. I like this flow because a deployment can fail due to so many things, while pushing to GitHub is much less likely to fail. This way, I prevent code that failed to deploy from reaching my master branch.
I don't know if you're familiar with the framework, but it is super simple. The website describes the simple steps to create and deploy a function like this:
# Step 1. Install serverless globally
$ npm install serverless -g

# Step 2. Create a serverless function
$ serverless create --template hello-world

# Step 3. Deploy to cloud provider
$ serverless deploy

# Your function is deployed!
$ http://xyz.amazonaws.com/hello-world
There are also a number of plugins you can use to integrate easily with custom domains on API Gateway, prune older versions of Lambda functions that might be filling up your limits, etc.
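For example, the two plugins alluded to above can be added like this (serverless-domain-manager and serverless-prune-plugin are real community plugins, but check their docs before relying on them):

# add community plugins for custom domains and for pruning old versions
serverless plugin install -n serverless-domain-manager
serverless plugin install -n serverless-prune-plugin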
Overall, I've found it to be the easiest way to manage and deploy my lambdas. Hope it helps!
Given that you're using AWS Lambda, you may want to consider CodePipeline to automate your release process. SAM (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) may also be interesting.
I too had the same problem. I wanted to manage 12 Lambdas with one Git repository. I solved it by introducing Travis CI, which saved time and is really useful in many ways. We can check the logs whenever we want, and you can share the logs with anyone by sharing the URL. The sample documentation of all the steps can be found here. You can go through it. 👍
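Whatever CI service you pick, deploying many functions from one repo often reduces to a loop like the sketch below, assuming each function lives in its own directory under functions/ (an assumed layout) with its own Serverless config:

# deploy every function directory in turn; stop on the first failure
for fn in functions/*/; do
  (cd "$fn" && serverless deploy) || exit 1
done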

Continuous Integration with VSTS

I am trying to do a PoC on how to achieve continuous integration and deployment using VSTS.
I have been successful with the build process, i.e. VSTS pulls the code (an ASP.NET based application) and builds it. The build process is also completing successfully.
Now, after the build is done, I want to deploy the application and run my Maven-based Selenium test cases, written in Java, against the application. This is the part where I am stuck, as in the deployment step it is not able to put the artifacts onto the remote path that I am specifying.
Can anyone please provide some pointers on how to achieve the deployment on a remote machine and then run the Java-based test cases against this application?
Any pointers would be greatly appreciated.
OK, here is the complete scenario:
1. I have the ASP.NET code in the cloud in my VSTS.
2. I have been able to add a build step and create the artifacts successfully.
3. Now I have an IIS server where I want to deploy these artifacts, and the server is not accessible from the public network and is behind a firewall.
Hence I am looking for any task that would help me achieve this. I am not sure of the complications that might arise due to the firewall, and hence I am trying out different methods to understand the complete big picture.
I received a reply here to use the WinRM tasks. I used those, but they give error 53 and cannot connect to the server that I am trying to deploy the code to.
To deploy an ASP.NET based application, you can use the IIS Web App Deployment step/task to deploy to your server, or deploy to an Azure web site by using the Azure App Service Deploy step/task.
To run the Java tests, there is a Maven step/task.
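For the test part, the Maven step ultimately runs something equivalent to the command below; the base-URL property name is hypothetical, so use whatever your Selenium tests actually read:

# run the Selenium suite against the deployed application
# (-Dapp.baseUrl is a hypothetical property name)
mvn -B test -Dapp.baseUrl=http://your-iis-server/app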

Can Laravel-Forge work with a CI cloud service?

My team and I were setting everything up so that Forge was in charge of deployment exclusively, while a CI cloud service would run unit/integration tests on each push to develop or master (staging or production, respectively).
Given that Forge will trigger a deployment on each push to master (or any other branch), where does the CI server fit into this model? Can I get a quick explanation of the workflow (and, if possible, an example CI cloud that would work with it)?
Next to the auto-deploy trigger, Forge provides a deploy-hook URL that can be called to run the deployment script. Usually the CI cloud service provides a way to customize the test/deployment process with some sort of bash script (curl), or gives you an option to call a URL after a successful run.
For example, I used to use Codeship for CI, and it has an option in the settings called deployment where I could insert a custom script that calls the trigger URL, like curl -X GET https://forge.laravel.com/servers/xxx/sites/xxx/deploy/http?token=xxx
So:
deactivate the auto-deploy trigger
customize the CI settings and call the Forge hook after a successful run
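Put together, a minimal sketch of such a CI deploy script (the Forge URL and token are placeholders) could be:

# run the test suite; abort before deploying if anything fails
vendor/bin/phpunit || exit 1

# tests passed: call the Forge deploy hook to start the deployment
curl -X GET "https://forge.laravel.com/servers/xxx/sites/xxx/deploy/http?token=xxx"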
