Webhooks for Oracle Cloud Infrastructure - container registry - jenkins-pipeline

Looking for a solution to this use case:
A Docker image is pushed to Oracle Cloud Infrastructure Container Registry (OCIR).
Jenkins has a webhook on OCIR, and a Jenkins pipeline gets triggered as soon as a new image is available in OCIR.
How is it possible to have a webhook, or some other mechanism, for letting Jenkins know there is a new push to OCIR?

This blog post walks you through how to set up a continuous integration pipeline that may be usable, in full or in part, to accomplish this:
https://blogs.oracle.com/cloud-infrastructure/build-a-continuous-integration-pipeline-using-github,-docker-and-jenkins-on-oracle-cloud-infrastucture

You can listen to OCI Container Registry events via Service Connector. You can configure Service Connector to invoke your custom functions on the specific event 'Container Image - Upload' under the service name 'Registry'.
You can find a sample illustration below of performing custom tasks during an image upload to OCI Container Registry.
Ref: https://github.com/RahulMR42/oci-devops-deploy-on-imageupload
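As a minimal sketch of the glue code, the function that Service Connector invokes can simply forward the event to a Jenkins remote build trigger. This assumes a Jenkins job named ocir-deploy exposing a build trigger token (both placeholders), and the event fields shown are illustrative; inspect the actual payload your function receives:

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"

	fdk "github.com/fnproject/fdk-go"
)

// Minimal subset of the OCI event envelope we care about.
type ociEvent struct {
	EventType string `json:"eventType"`
	Data      struct {
		ResourceName string `json:"resourceName"` // e.g. the repository path
	} `json:"data"`
}

func main() {
	fdk.Handle(fdk.HandlerFunc(handler))
}

func handler(ctx context.Context, in io.Reader, out io.Writer) {
	var evt ociEvent
	if err := json.NewDecoder(in).Decode(&evt); err != nil {
		fmt.Fprintf(out, "bad event: %v", err)
		return
	}

	// Placeholder Jenkins endpoint: a job with a remote build trigger token.
	url := fmt.Sprintf(
		"https://jenkins.example.com/job/ocir-deploy/buildWithParameters?token=TRIGGER_TOKEN&image=%s",
		evt.Data.ResourceName)

	resp, err := http.Post(url, "application/json", nil)
	if err != nil {
		fmt.Fprintf(out, "jenkins trigger failed: %v", err)
		return
	}
	defer resp.Body.Close()
	fmt.Fprintf(out, "jenkins responded: %s", resp.Status)
}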


How can we configure the Camunda Process Engine in Golang?

I found links on process engine configuration:
https://docs.camunda.org/manual/7.16/reference/deployment-descriptors/tags/process-engine/
But how do I use this in Golang?
You would not use the embedded process engine approach here. Instead, run a remote process engine and implement service tasks in Go using the external task pattern. To get such a remote engine, you can either sign up for a free developer account on Camunda Cloud or, if you prefer on-premises, use a self-managed deployment (e.g. via Docker Compose). For both options, check here: https://camunda.com/get-started
Once you have an engine, configure your credentials (follow https://docs.camunda.io/docs/guides/getting-started/).
The section https://docs.camunda.io/docs/guides/getting-started/implement-service-task/ shows how to configure the service task on the process engine side. On the Go side, you can then implement an external worker (instead of zbctl in the example) as described here: https://github.com/camunda/zeebe/tree/main/clients/go
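As a minimal sketch of such a worker, assuming a local Zeebe gateway on port 26500 and a service task of type 'payment-service' (both placeholders; the client import paths also vary by Zeebe version):

package main

import (
	"context"
	"log"

	"github.com/camunda/zeebe/clients/go/v8/pkg/entities"
	"github.com/camunda/zeebe/clients/go/v8/pkg/worker"
	"github.com/camunda/zeebe/clients/go/v8/pkg/zbc"
)

func main() {
	// Connect to the remote engine's gateway (plaintext suits a local
	// docker-compose setup; Camunda Cloud needs OAuth credentials instead).
	client, err := zbc.NewClient(&zbc.ClientConfig{
		GatewayAddress:         "127.0.0.1:26500",
		UsePlaintextConnection: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Subscribe to jobs of the service task type defined in the BPMN model.
	jobWorker := client.NewJobWorker().
		JobType("payment-service").
		Handler(handleJob).
		Open()
	// Keep polling for jobs until the process is killed; a real worker would
	// close the worker on a shutdown signal instead.
	jobWorker.AwaitClose()
}

func handleJob(client worker.JobClient, job entities.Job) {
	log.Printf("handling job %d", job.GetKey())
	// ... do the actual work here, then report completion to the engine.
	_, err := client.NewCompleteJobCommand().
		JobKey(job.GetKey()).
		Send(context.Background())
	if err != nil {
		log.Printf("failed to complete job: %v", err)
	}
}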

Feedback loop implementation in a CI/CD pipeline using Jenkins and Kubernetes

Currently I am trying to implement a CI/CD pipeline using DevOps automation tools like Jenkins and Kubernetes, which I use to deploy the microservices I created with Spring Boot and Maven.
I have now successfully deployed my Spring Boot microservices using Jenkins and Kubernetes, deploying to different namespaces. When I commit, a post-commit hook fires from my SVN repository and triggers the Jenkins job.
My confusion
While implementing the CI/CD pipeline, I read about the implementation of feedback loops in a pipeline. If I need to implement feedback loops, what are the different ways I can do that here?
Can anyone point me to useful documentation/tutorials for implementing feedback loops in a CI/CD pipeline?
The method of getting deployment feedback depends on your service and on your choice.
For example, you can check whether the container is up, or call one of the service's REST URLs.
I use a stage like this as the final stage to check the service:
stage('feedback') {
    // Give the freshly deployed service a moment to start up.
    sleep(time: 10, unit: "SECONDS")
    // Call a lightweight endpoint to verify the service is responding.
    def get = new URL("http://192.168.1.1:8080/version").openConnection()
    def getRC = get.getResponseCode()
    println(getRC)
    if (getRC == 200) {
        println(get.getInputStream().getText())
    } else {
        // Fail the build so the pipeline reports the broken deployment.
        error("Service is not started yet.")
    }
}
Jenkins can notify users about failed tests (jobs) by sending email or JSON notifications. Read more:
https://wiki.jenkins.io/display/JENKINS/Email-ext+plugin
https://wiki.jenkins.io/display/JENKINS/Notification+Plugin
https://wiki.jenkins.io/display/JENKINS/Slack+Plugin
If you want continuous monitoring of the deployed product, you need monitoring tools, which are different from Jenkins.
(The original answer included a sample picture of popular tools for each part of DevOps.)

NiFi - Update Remote Process Group through REST API

We are using templates to package up some data transfer jobs between two NiFi clusters, one acting as the sender, the other as the receiver. One of our jobs contains a remote process group, and all worked fine at the point the template was created.
However, when we deploy the template through our environments (dev, test, pre, prod), it is tedious and annoying to have to manually delete and recreate the remote process group in the user interface. I'd like to automate this to simplify deploying templates and reduce the manual intervention.
1. Is it possible to update a remote process group and its port configuration through the REST API?
2. Do I just use the REST API to create a new RPG with the correct configuration?
Does anyone have any experience with this?
There is a JIRA to address this issue [1], which will be worked on in conjunction with some of the ongoing Flow Registry (SDLC for flows) efforts. Until then, the best option would be (2) above: use the REST API to create a new RPG with the correct configuration, as sketched below.
[1] https://issues.apache.org/jira/browse/NIFI-4526
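As a rough sketch of option (2), assuming a NiFi instance at nifi.example.com and a known parent process group ID (placeholders throughout; authentication is omitted, and the exact entity fields can differ between NiFi versions, so check the REST API docs for your release):

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Parent process group that should contain the new remote process group;
	// fetch the real ID from /nifi-api/flow/process-groups/root first.
	parentID := "root" // placeholder

	// A new component starts at revision version 0; targetUris points at the
	// receiving cluster for this environment.
	body := []byte(`{
	  "revision": { "version": 0 },
	  "component": {
	    "targetUris": "http://receiver.example.com:8080/nifi",
	    "position": { "x": 0, "y": 0 }
	  }
	}`)

	url := fmt.Sprintf(
		"http://nifi.example.com:8080/nifi-api/process-groups/%s/remote-process-groups",
		parentID)
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("NiFi responded:", resp.Status)
}

The same JSON shape with an incremented revision version can be sent via PUT to /nifi-api/remote-process-groups/{id} to update an existing RPG, subject to the limitations tracked in [1].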

Not terminating but stopping cloud build agent

TeamCity supports creating VMs in the cloud for running builds. It can also terminate an instance after it has been idle for a defined period of time.
Is it possible not to terminate the instance, but to shut it down and start it again when needed?
If your build agent is a VM on Azure, then I was able to achieve this by using an Azure Automation account (there is a free version).
Basically, you trigger webhooks for two runbooks you import from the gallery: StartAzureV2VM and StopAzureV2VM. The gallery can be found under "Automation Account -> Runbooks -> Browse Gallery".
Then on your TeamCity server you just periodically run some PowerShell which monitors the build queue on TeamCity and triggers the appropriate webhook (with some timeouts etc.).
Here is the script I'm using; feel free to amend it to your needs:
https://gist.github.com/milanio/b300f23883afa9c6288f9365dfb98252
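The gist is PowerShell; a rough Go equivalent of the same polling loop is sketched below. The TeamCity URL and the two webhook URLs are placeholders, the poll interval is arbitrary, and a real script would debounce so it does not fire the stop webhook repeatedly while idle:

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// Minimal slice of TeamCity's build queue response.
type buildQueue struct {
	Count int `json:"count"`
}

func queuedBuilds() (int, error) {
	// guestAuth works on servers with guest login enabled; otherwise use
	// httpAuth with real credentials.
	req, err := http.NewRequest("GET",
		"http://teamcity.example.com/guestAuth/app/rest/buildQueue", nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Accept", "application/json") // TeamCity returns XML unless asked for JSON
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var q buildQueue
	if err := json.NewDecoder(resp.Body).Decode(&q); err != nil {
		return 0, err
	}
	return q.Count, nil
}

func main() {
	startHook := "https://example.azure-automation.net/webhooks?token=START" // placeholder
	stopHook := "https://example.azure-automation.net/webhooks?token=STOP"   // placeholder
	for {
		count, err := queuedBuilds()
		if err != nil {
			log.Println("queue check failed:", err)
		} else {
			// Builds waiting -> start the agent VM; empty queue -> stop it.
			hook := stopHook
			if count > 0 {
				hook = startHook
			}
			resp, err := http.Post(hook, "application/json", nil)
			if err != nil {
				log.Println("webhook failed:", err)
			} else {
				resp.Body.Close()
			}
		}
		time.Sleep(5 * time.Minute)
	}
}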

Execute dashDB jobs from Workload Scheduler

I have created some stored procedures on my dashDB instance on Bluemix to manipulate data in tables in the same instance.
I can run these from Data Studio and they work as intended.
Next, I created a process in Workload Scheduler, which I provisioned as a service in the same app where dashDB is also a service.
While creating the job step in the process, I noticed a message in the dialog window. I have attached a screenshot here:
http://i.stack.imgur.com/EI2b7.jpg
When I tried to run the process step from Workload Scheduler, the process failed with a "JDBC not found" error.
I do realize that the Workload Scheduler agent I'm using is hosted on Bluemix, so I am puzzled how I can install the JDBC client there.
Should I be setting up an agent on a local machine outside of Bluemix, in a hybrid mode?
Currently there are two possibilities:
1. Open a ticket to ask for a dedicated cloud agent, and then download the JDBC driver on the agent.
2. Download and install an agent on a VM or on-premises.
It looks like you have an incorrect value for "JDBC jar class path"; the correct value is /home/wauser/utils/.
I'm not sure why we are required to enter this, but I was able to get the connection to dashDB working with this change:
JDBC jar class path: /home/wauser/utils/
