Jenkins with jFrog Artifactory push Docker images - jenkins-pipeline

I'm trying to configure a new pipeline in Jenkins. I have purchased and installed JFrog Artifactory Pro on a Windows Server, and it's up and running at https://artifactory.mycompany.com.
I found this sample:
https://github.com/jfrog/project-examples/blob/master/jenkins-examples/pipeline-examples/declarative-examples/docker-push-example/Jenkinsfile
More specifically, this section:
stage ('Push image to Artifactory') {
    steps {
        rtDockerPush(
            serverId: "ARTIFACTORY_SERVER",
            image: ARTIFACTORY_DOCKER_REGISTRY + '/hello-world:latest',
            // Host:
            // On OSX: "tcp://127.0.0.1:1234"
            // On Linux can be omitted or null
            host: HOST_NAME,
            targetRepo: 'docker-local',
            // Attach custom properties to the published artifacts:
            properties: 'project-name=docker1;status=stable'
        )
    }
}
It builds and creates the Docker image, but when it gets to pushing the image it fails and errors out. I'm not sure what should go in the following:
ARTIFACTORY_DOCKER_REGISTRY
host: HOST_NAME
I've created a new local repo in Artifactory named "docker-local". If I omit host I get
"Unsupported OS".
Putting host back in with host: 'tcp://IP_ADDRESS' or 'artifactory.mycompany.com:80/artifactory' produces
"Unsupported protocol scheme".
How would one configure a Jenkins pipeline to work with JFrog Artifactory?

Found the solution:
ARTIFACTORY_DOCKER_REGISTRY should be IP/Artifactory-Repo-Key/Image:Tag.
HOST should be the Docker daemon (for Docker for Windows that is localhost:2375).
stage('Build image') { // build and tag docker image
    steps {
        echo 'Starting to build docker image'
        script {
            def dockerfile = 'Dockerfile'
            def customImage = docker.build('10.20.111.23:8081/docker-virtual/hello-world:latest', "-f ${dockerfile} .")
        }
    }
}
stage ('Push image to Artifactory') { // take that image and push to artifactory
    steps {
        rtDockerPush(
            serverId: "jFrog-ar1",
            image: "10.20.111.23:8081/docker-virtual/hello-world:latest",
            host: 'tcp://localhost:2375',
            targetRepo: 'local-repo', // where to copy to (from docker-virtual)
            // Attach custom properties to the published artifacts:
            properties: 'project-name=docker1;status=stable'
        )
    }
}
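The serverId referenced above ("jFrog-ar1") also needs to be defined somewhere, either globally under Manage Jenkins or in the pipeline itself. A minimal sketch of an in-pipeline declaration using the Artifactory plugin's rtServer step (the 'artifactory-credentials' ID is a hypothetical Jenkins credentials entry):
stage('Configure Artifactory server') {
    steps {
        rtServer(
            id: 'jFrog-ar1', // must match the serverId passed to rtDockerPush
            url: 'https://artifactory.mycompany.com/artifactory',
            credentialsId: 'artifactory-credentials' // hypothetical credentials entry for the Artifactory user
        )
    }
}
Alternatively, the same server can be configured once under Manage Jenkins and only referenced by its id in the pipeline.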

Related

Failure loading tarball from GitHub to Heroku using Terraform heroku_build resource

I am working on creating a CI pipeline using GitHub Actions, Terraform and Heroku. My example application is a Jmix application from Mario David (rent-your-stuff) that I am building according to his YouTube videos. Unfortunately, the regular GitHub integration he suggests has been turned off due to a security issue. If you attempt to use Heroku's "Connect to GitHub" button, you get an Internal Service Error.
So, as an alternative, I have changed my private repo to public and I'm trying to download it directly via the Terraform heroku_build source URL (see the "heroku_build" section):
terraform {
  required_providers {
    heroku = {
      source  = "heroku/heroku"
      version = "~> 5.0"
    }
    herokux = {
      source  = "davidji99/herokux"
      version = "0.33.0"
    }
  }
  backend "remote" {
    organization = "eraskin-rent-your-stuff"
    workspaces {
      name = "rent-your-stuff"
    }
  }
  required_version = ">=1.1.3"
}
provider "heroku" {
  email   = var.HEROKU_EMAIL
  api_key = var.HEROKU_API_KEY
}
provider "herokux" {
  api_key = var.HEROKU_API_KEY
}
resource "heroku_app" "eraskin-rys-staging" {
  name   = "eraskin-rys-staging"
  region = "us"
}
resource "heroku_addon" "eraskin-rys-staging-db" {
  app_id = heroku_app.eraskin-rys-staging.id
  plan   = "heroku-postgresql:hobby-dev"
}
resource "heroku_build" "eraskin-rsys-staging" {
  app_id     = heroku_app.eraskin-rys-staging.id
  buildpacks = ["heroku/gradle"]
  source {
    url = "https://github.com/ericraskin/rent-your-stuff/archive/refs/heads/master.zip"
  }
}
resource "heroku_formation" "eraskin-rsys-staging" {
  app_id     = heroku_app.eraskin-rys-staging.id
  type       = "web"
  quantity   = 1
  size       = "Standard-1x"
  depends_on = [heroku_build.eraskin-rsys-staging]
}
Whenever I try to execute this, I get the following build error:
-----> Building on the Heroku-20 stack
! Push rejected, Failed decompressing source code.
Source archive detected as: Zip archive data, at least v1.0 to extract
More information: https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive
My assumption is that Heroku cannot download the tarball, even though I can successfully download it without any authentication using wget.
How do I debug this? Is there a way to ask Heroku to show the commands that the build stack is executing?
For that matter, is there a better approach given that the normal GitHub integration pipeline is broken?
I have found a workaround for this issue, based on the notes from Heroku. They suggest using a third-party GitHub Action, Deploy to Heroku, instead of Terraform. To use it, I removed heroku_build and heroku_formation from my main.tf file, so it just contains this:
resource "heroku_app" "eraskin-rys-staging" {
  name   = "eraskin-rys-staging"
  region = "us"
}
resource "heroku_addon" "eraskin-rys-staging-db" {
  app_id = heroku_app.eraskin-rys-staging.id
  plan   = "heroku-postgresql:hobby-dev"
}
My GitHub workflow now contains:
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      - name: Terraform Format
        id: fmt
        working-directory: ./infrastructure
        run: terraform fmt
      - name: Terraform Init
        id: init
        working-directory: ./infrastructure
        run: terraform init
      - name: Terraform Validate
        id: validate
        working-directory: ./infrastructure
        run: terraform validate -no-color
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        working-directory: ./infrastructure
        run: terraform plan -no-color -input=false
        continue-on-error: true
      - name: Update Pull Request
        uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        env:
          PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
        with:
          script: |
            const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ️⚙️\`${{ steps.init.outcome }}\`
            #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
            #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
            <details><summary>Show Plan</summary>
            \`\`\`\n
            ${process.env.PLAN}
            \`\`\`
            </details>
            *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })
      - name: Terraform Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1
      - name: Terraform Apply
        if: github.ref == 'refs/heads/master' && github.event_name == 'push'
        working-directory: ./infrastructure
        run: terraform apply -auto-approve -input=false
  heroku-deploy:
    name: 'Heroku-Deploy'
    if: github.ref == 'refs/heads/master' && github.event_name == 'push'
    runs-on: ubuntu-latest
    needs: terraform
    steps:
      - name: Checkout App
        uses: actions/checkout@v3
      - name: Deploy to Heroku
        uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: ${{secrets.HEROKU_APP_NAME}}
          heroku_email: ${{secrets.HEROKU_EMAIL}}
          buildpack: https://github.com/heroku/heroku-buildpack-gradle.git
          branch: master
          dontautocreate: true
The workflow has two "phases". On a pull request, it runs the tests in my application, followed by terraform fmt, terraform init and terraform plan. On a merge to my master branch, it runs terraform apply. When that completes, it runs the second job, which runs the akhileshns/heroku-deploy@v3.12.12 GitHub Action.
As far as I can tell, it works. YMMV, of course. ;-)

GIT push step failing in Jenkins

The code below was working fine and all of a sudden it broke. I am using a Windows box.
stage('Push') {
    withCredentials([usernamePassword(credentialsId: 'gitlogin', passwordVariable: 'GIT_PASSWORD', usernameVariable: 'GIT_USERNAME')]) {
        sh('git push --tags origin $BRANCH_NAME')
    }
    if ("${BRANCH_NAME}" == "develop" || ("${BRANCH_NAME}".startsWith("release"))) {
        sshagent (credentials: ['GitSSHLOGIN']) {
            sh("git tag -a PBCS_${BRANCH_NAME}_${ReleaseNumber}_${BUILD_NUMBER} -m 'Tag the build ${BRANCH_NAME}_${ReleaseNumber}_${BUILD_NUMBER}'")
            sh('git push --tags origin $BRANCH_NAME')
        }
    }
}
Below is the error we are getting.
+ git push --tags origin release/21.04
Could not create directory '/c/Jenkins/jobs/branches/release-21-04.3rkqb4/workspace/nullnull/.ssh'.
ssh_askpass: exec(/usr/lib/ssh/ssh-askpass): No such file or directory
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

helm test via Jenkins pipeline

I'm running a basic Groovy Jenkins pipeline-as-code job to establish a connection to a Kubernetes cluster. Below is the code snippet that tries to connect to the cluster and list all the releases.
stage('Helm list') {
    steps {
        withCredentials([file(credentialsId: "kubeconfig-gke", variable: "kubeconfig")]) {
            helm list -a
        }
    }
}
I get the following error in the Jenkins console output:
groovy.lang.MissingPropertyException: No such property: list for class: groovy.lang.Binding
Possible solutions: class
at groovy.lang.Binding.getVariable(Binding.java:63)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:270)
at org.kohsuke.groovy.sandbox.impl.Checker$6.call(Checker.java:289)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:293)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:269)
Run it inside a shell command:
steps {
    withCredentials([file(credentialsId: "kubeconfig-gke", variable: "kubeconfig")]) {
        sh """
        helm list -a
        """
    }
}
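Note that binding the file credential only exposes the kubeconfig path as an environment variable inside the block; helm still needs to be told to use it. A minimal sketch under that assumption, binding the credential as KUBECONFIG (which helm reads automatically) and passing it explicitly for clarity:
steps {
    withCredentials([file(credentialsId: "kubeconfig-gke", variable: "KUBECONFIG")]) {
        // helm picks up the KUBECONFIG environment variable on its own;
        // the --kubeconfig flag just makes the choice explicit
        sh 'helm list -a --kubeconfig "$KUBECONFIG"'
    }
}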

How can I use vSphere Cloud Plugin inside Jenkins pipeline code?

I have a setup at work where the vSphere hosts are manually restarted before execution of a specific Jenkins job. As a noob in the office, I automated this process by adding an extra build step that restarts the VMs with the help of the vSphere Cloud Plugin (https://wiki.jenkins-ci.org/display/JENKINS/vSphere+Cloud+Plugin).
I would now like to integrate this as pipeline code; please advise.
I have already checked that this plugin is Pipeline compatible.
I currently trigger the vSphere host restart in the pipeline by having it remotely trigger a job configured with the vSphere cloud plugin.
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                script {
                    sh "curl -v 'http://someserver.com/job/Vivin/job/executor_configurator/buildWithParameters?Host=build-114&token=bonkers'"
                }
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
I would like to integrate the vSphere cloud plugin directly into the pipeline code itself; please help me integrate it.
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                // vSphere cloud plugin code that is requested
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
Well, I found the solution myself with the help of the 'Pipeline Syntax' feature found in the menu of a Jenkins pipeline job.
The 'Pipeline Syntax' page exposes the syntax of all the parameters made available via the API of the plugins installed on the Jenkins server, which you can use to generate or develop the snippet you need:
http://<jenkins server url>/job/<pipeline job name>/pipeline-syntax/
My Jenkinsfile (pipeline) now looks like this:
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                vSphere buildStep: [$class: 'PowerOff', evenIfSuspended: false, ignoreIfNotExists: false, shutdownGracefully: true, vm: 'brewery-133'], serverName: 'vspherecentral'
                vSphere buildStep: [$class: 'PowerOn', timeoutInSeconds: 180, vm: 'brewery-133'], serverName: 'vspherecentral'
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(1)
                        }
                    }
                }
            }
        }
    }
}

Terraform stuck on `Refreshing state...` when running against `localstack`

I am using Terraform to publish a lambda to AWS. It works fine when I deploy to AWS, but it gets stuck on "Refreshing state..." when running against localstack.
Below is my .tf config file; as you can see, I configured the lambda endpoint to be http://localhost:4567.
provider "aws" {
  profile = "default"
  region  = "ap-southeast-2"
  endpoints {
    lambda = "http://localhost:4567"
  }
}
variable "runtime" {
  default = "python3.6"
}
data "archive_file" "zipit" {
  type        = "zip"
  source_dir  = "crawler/dist"
  output_path = "crawler/dist/deploy.zip"
}
resource "aws_lambda_function" "test_lambda" {
  filename         = "crawler/dist/deploy.zip"
  function_name    = "quote-crawler"
  role             = "arn:aws:iam::773592622512:role/LambdaRole"
  handler          = "handler.handler"
  source_code_hash = "${data.archive_file.zipit.output_base64sha256}"
  runtime          = "${var.runtime}"
}
Below is the docker-compose file for localstack:
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4583:4567-4583"
      - '8055:8080'
    environment:
      - SERVICES=${SERVICES-lambda }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker-reuse }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
Does anyone know how to fix the issue?
This is how I fixed a similar issue:
Set export TF_LOG=TRACE, which is the most verbose logging level.
Run terraform plan ....
In the log, I found the root cause of the issue:
dag/walk: vertex "module.kubernetes_apps.provider.helmfile (close)" is waiting for "module.kubernetes_apps.helmfile_release_set.metrics_server"
From the logs, I identified the state entry causing the issue: module.kubernetes_apps.helmfile_release_set.metrics_server.
I deleted its state:
terraform state rm module.kubernetes_apps.helmfile_release_set.metrics_server
Running terraform plan again should now fix the issue.
This is not the best solution, which is why I contacted the owner of this provider to fix the issue without this workaround.
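Condensed into a runnable sequence (the resource address shown is the one from this log; substitute whatever your own trace output points at):
export TF_LOG=TRACE   # most verbose Terraform logging
terraform plan        # the trace output names the vertex the graph walk is stuck on
terraform state rm module.kubernetes_apps.helmfile_release_set.metrics_server  # drop the offending state entry
terraform plan        # should now complete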
The reason it failed is that Terraform tries to validate credentials against AWS. Adding the two lines below to the aws provider block in your .tf configuration solves the issue:
skip_credentials_validation = true
skip_metadata_api_check = true
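Applied to the provider block from the question, that looks roughly like this:
provider "aws" {
  profile = "default"
  region  = "ap-southeast-2"

  # Skip the AWS credential and metadata checks, since localstack is standing in for AWS
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    lambda = "http://localhost:4567"
  }
}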
I ran into the same issue and fixed it by logging into the AWS dev profile from the console.
So don't forget to log in:
provider "aws" {
  region  = "ap-southeast-2"
  profile = "dev"
}
