I am trying to create and build an image using bmuschko/gradle-docker-plugin. The Dockerfile is created, but I can't build an image from it. I am on CentOS 7.
Create the Dockerfile:
import com.bmuschko.gradle.docker.tasks.image.Dockerfile

task createBaseImage(type: Dockerfile) {
    destFile = project.file('docker/base/Dockerfile')
    from 'java:8'
    runCommand 'apt-get update'
    runCommand 'apt-get -qq -y install python3 python3-dev python3-pip'
}
Build the image:
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage

task buildBaseImage(type: DockerBuildImage) {
    dependsOn createBaseImage
    inputDir = createBaseImage.destFile.parentFile
    tag = 'the/tag'
}
When running the buildBaseImage task with ./gradlew buildBaseImage --info, the execution hangs and finally fails with:
org.apache.http.conn.ConnectTimeoutException: Connect to 192.168.59.103:2376 [/192.168.59.103] failed: Connection timed out
I suspect there's a problem with my docker closure which is copied from the examples:
docker {
    url = 'http://192.168.59.103:2376'
    registryCredentials {
        url = 'https://index.docker.io/v1'
        username = "${docker_user}"
        password = "${docker_password}"
        email = 'email@example.com'
    }
}
I've tried different URLs, ports, etc., but the problem persists. Any ideas what is causing this?
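For reference, 192.168.59.103:2376 is the default boot2docker/docker-machine VM address used in the plugin's examples; on CentOS 7 with a locally installed daemon there is no such VM to connect to. A minimal sketch of what the closure might look like against the local daemon instead (assuming your plugin version's client supports Unix socket URLs):

docker {
    // assumption: the daemon runs on the CentOS 7 host itself,
    // so talk to its Unix socket rather than a boot2docker VM
    url = 'unix:///var/run/docker.sock'
}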
I worked around this by using a different Gradle plugin (Transmode's gradle-docker, configured below). Images can be built in a similarly simple fashion:
task buildBaseImage(type: Docker) {
    baseImage 'java:8'
    applicationName = "app-name"
    tagVersion = 'latest'
    runCommand "apt-get update"
    runCommand "apt-get -qq -y install python3 python3-dev python3-pip"
}
And then pushed to Docker Hub via:
task pushBaseImage(type: Exec, dependsOn: 'buildBaseImage') {
    executable 'docker'
    args = ['push', '-f', getGroup() + '/my-base']
}
I used the force flag to skip the push confirmation:
docker push --help
-f, --force=false Push to public registry without confirmation
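(Note that this flag only exists in old Docker clients; later releases dropped the public-registry push confirmation, and the --force flag went with it, so a plain docker push suffices there.)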
This was my way of working around the connection issues.
Here's the required configuration:
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'se.transmode.gradle:gradle-docker:1.2'
    }
}

apply plugin: 'docker'

group = "myGroup"

docker {
    maintainer 'Maintainer "maintainer@example.com"'
}
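With this configuration in place, building and pushing is just a matter of running ./gradlew pushBaseImage, which pulls in buildBaseImage via the dependsOn declared above.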
I need to migrate from the Transmode gradle-docker plugin to the Palantir docker Gradle plugin.
The old task is as below:
task buildDocker(type: Docker, dependsOn: build) {
    group "docker"
    push = false
    applicationName = awsContainerRepositoryName()
    dockerfile = pickDockerfile()
    tagVersion = "local"
    doFirst {
        copy {
            from jar
            into stageDir
            rename { String fileName ->
                fileName.replace("${jar.baseName}-${project.version}", "app")
            }
        }
        new File(stageDir, "service_version.txt").text = "${project.version}"
        copy {
            from "${buildDir}/asciidoc/html5/index.html"
            into "${stageDir}/api-documentation/"
        }
        if (project.hasProperty('customDockerRunStep')) {
            runCommand project.customDockerRunStep
        }
        exec {
            commandLine 'bash', 'shared-build/aws/retrieve_aws_ecr_login.sh'
        }
    }
}
I am trying to add the doFirst block as below:
docker {
    dependsOn assemble
    name awsContainerRepositoryName()
    dockerfile file(pickDockerfile())
    files bootJar.archivePath
    doFirst {
        // some code
    }
}
But it gives the error below:
* What went wrong:
A problem occurred evaluating root project 'service-abc'.
> No signature of method: build_dgdrt7rh9kp8lojc44jud6em6.docker() is applicable for argument types: (build_dgdrt7rh9kp8lojc44jud6em6$_run_closure7) values: [build_dgdrt7rh9kp8lojc44jud6em6$_run_closure7@8431bf9]
Please help.
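For what it's worth, the error suggests that in the Palantir plugin docker { } is a project extension, not a task, so it has no doFirst method. A minimal sketch of one workaround, assuming the plugin registers a task named docker (as recent versions do), is to attach the hook to that task instead of the extension block:

docker {
    dependsOn assemble
    name awsContainerRepositoryName()
    dockerfile file(pickDockerfile())
    files bootJar.archivePath
}

// doFirst is a Task method; attach it to the task the plugin creates,
// not to the docker { } extension block above
tasks.docker.doFirst {
    // some code
}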
I have a Jenkins pipeline which runs fine, but it depends on installed JDK and Maven tools. There were a few instances in the past where these tool names changed (e.g. Maven 3.6.2 -> Maven 3.6.3), resulting in pipeline failures.
stage ("build") {
withMaven(jdk: 'Java SE 8u221', maven: 'Maven 3.6.3', tempBinDir: '') {
sh 'mvn clean package jib:dockerBuild verify'
}
}
I want my pipeline to be independent of which tools are installed, so I rewrote it as below to use a Maven Docker image (since a JDK is bundled with it):
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Checkout') {
            steps {
                git branch: "master", url: "repo url", credentialsId: 'id'
            }
        }
        stage ("build") {
            steps {
                sh 'mvn clean package jib:dockerBuild verify'
            }
        }
    }
}
But now I am getting an error: Failed to execute goal com.google.cloud.tools:jib-maven-plugin:2.3.0:dockerBuild (default-cli): Build to Docker daemon failed, perhaps you should make sure Docker is installed and you have correct privileges to run it.
It seems that the Docker daemon is not visible after switching to the Maven Docker image.
I solved this by adding the Docker client to my Maven Docker image and using that image as the agent:
pipeline {
    agent any
    stages {
        stage('build Dockerfile') {
            steps {
                sh '''echo "FROM maven:3-alpine
RUN apk add --update docker openrc
RUN rc-update add docker boot" >/var/lib/jenkins/workspace/Dockerfile'''
            }
        }
        stage('run Dockerfile') {
            agent {
                dockerfile {
                    filename '/var/lib/jenkins/workspace/Dockerfile'
                    args '--user root -v $HOME/.m2:/root/.m2 -v /var/run/docker.sock:/var/run/docker.sock'
                }
            }
            steps {
                sh 'docker version'
                sh 'mvn -version'
                sh 'java -version'
            }
        }
    }
}
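The part doing the real work here is the -v /var/run/docker.sock:/var/run/docker.sock argument: the docker client baked into the image talks to the host's daemon through the mounted socket (often called Docker-outside-of-Docker), which is exactly the daemon jib's dockerBuild goal needs to find.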
I want to create a Jenkins declarative pipeline for deploying to XL Deploy using a Maven command. I am not using the XL Deploy plugin; I am just using a Maven command for this.
pipeline {
    agent {
        label 'java8'
    }
    tools {
        maven 'M3'
    }
    options {
        skipDefaultCheckout()
        timestamps()
        disableConcurrentBuilds()
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr: "${env.BRANCH_NAME}" == 'master' ? '10' : ''))
    }
    environment {
        ARTIFACTORY = credentials('artifactory-credentials')
        CF = credentials('cf-credentials')
        SONAR = credentials('Sonar_Credentials')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
                sh "git rev-parse HEAD > .git/commit-id"
                script {
                    commit_id = readFile('.git/commit-id').trim()
                    pom = readMavenPom file: 'pom.xml'
                    currentBuild.displayName = commit_id.take(7) + "-" + pom.version
                }
            }
        }
        stage('Build') {
            steps {
                sh "mvn -U -s settings.xml -gs settings.xml clean install -DskipTests=true"
            }
        }
        stage('Publish Artifacts') {
            when {
                branch 'master'
            }
            steps {
                sh "echo 'Publish JAR to Artifactory!'"
                sh "mvn -s settings.xml -gs settings.xml versions:set -DnewVersion=$commit_id"
                sh "mvn -s settings.xml -gs settings.xml deploy -DskipTests=true"
            }
        }
        stage('Deploy') {
            steps {
                sh "wget --user ${ARTIFACTORY_USR} --password ${ARTIFACTORY_PSW} -O ${pom.artifactId}.war -nv <repo url>/${pom.artifactId}/${commit_id}/${pom.artifactId}-${commit_id}.war --server-response --"
                sh "mvn org.apache.maven.plugins:maven-dependency-plugin:2.8:copy -Dartifact=<app package>-$commit_id:war -DoutputDirectory=target -Dmdep.useBaseVersion=true"
            }
        }
    }
    post {
        always {
            deleteDir()
        }
    }
}
I am getting the following exception:
Failed to execute goal com.xebialabs.xldeploy:xldeploy-maven-plugin:5.0.2:generate-deployment-package.
Up to the Publish stage everything works fine, but this exception is thrown while executing the Deploy stage.
I would suggest upgrading to version 6.0.1 of the plugin, as that version fixes some connectivity issues.
The problem might also be related to an incorrect pom.xml file; however, to exclude that as the root cause, you should at least share your pom.xml, your XL Deploy version, and the plugins loaded in XL Deploy.
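If you want to try the newer plugin before touching the pom, one rough sketch is to invoke the goal fully qualified from the Deploy stage, since Maven honours a version pinned on the command line (the goal name is taken from the error above; the settings flags mirror the other stages):

stage('Deploy') {
    steps {
        // assumption: pinning groupId:artifactId:version:goal on the CLI
        // overrides the plugin version resolved from the pom
        sh "mvn -s settings.xml -gs settings.xml com.xebialabs.xldeploy:xldeploy-maven-plugin:6.0.1:generate-deployment-package"
    }
}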
We just migrated all our Maven jobs to multibranch Pipeline jobs.
The original Maven command was clean deploy -P integration-tests deploy -DupdateReleaseInfo=true.
We used the updateReleaseInfo flag to make sure the maven-metadata.xml is updated.
Once we migrated to Pipeline we run the same command, but it looks like the updateReleaseInfo flag has no effect and the metadata.xml is not updated:
<latest>1.156</latest>
<release>2.1</release>
Once I run the original Maven job again, the metadata in Artifactory is updated to:
<latest>2.1</latest>
<release>2.1</release>
We use Jenkins 2.46.3, Maven Integration plugin 3.0, and Maven 3.3.9.
Any idea about this issue?
Here is my code:
stage('Deploy') {
    when {
        anyOf { branch 'master' }
    }
    steps {
        mavenTask tasks: 'clean deploy -DupdateReleaseInfo=true', localRepo: 'true'
    }
}
And here is my mavenTask step from the shared library:
def call(Map map = [:]) {
    def params = [
            tasks    : '',
            localRepo: ''
    ] << map
    def tasks = params.tasks
    def localRepo = params.localRepo
    if (localRepo == 'true') {
        mvnLocalRepo = '-Dmaven.repo.local=$WORKSPACE/.repository'
    } else {
        mvnLocalRepo = ''
    }
    configFileProvider(
            [configFile(fileId: '11111-22222-33333', targetLocation: './settings.xml')]) {
        sh "mvn -s $WORKSPACE/settings.xml $mvnLocalRepo $tasks"
    }
}
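One difference worth checking: the original command activated a profile with -P integration-tests, while the pipeline invocation does not. Whether that profile carries deploy-related configuration is specific to your pom, but a sketch reinstating it through the shared library step would be:

stage('Deploy') {
    when {
        anyOf { branch 'master' }
    }
    steps {
        // reinstate the profile used by the original Maven job
        mavenTask tasks: 'clean deploy -P integration-tests -DupdateReleaseInfo=true', localRepo: 'true'
    }
}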
I have a Jenkins container which triggers a Maven container to build Java code.
Now I want to test the same code with Selenium.
I have another container for the Selenium server.
But the command to run the test using Selenium is an mvn command, which needs Maven.
Since I am executing this inside the Selenium container, mvn is not recognized.
How can I use the Maven container inside the Selenium container? Is nesting possible?
I don't want to install everything in the same container.
This is the Jenkinsfile:
node {
    stage('checkout') {
        git credentialsId: 'Gitlab_cred', url: 'urlhere'
    }
    stage('Build') {
        docker.image('mvn_custom_image').inside('-u root') {
            // mvn commands to build the project
        }
    }
    stage('Archive and Packaging') {
        archiveArtifacts '**/*.war'
    }
    stage('Integration Test') {
        docker.image('selenium/node-firefox:3.4.0-dysprosium').inside {
            // I want to run the mvn command using Selenium here.
            // This image doesn't have mvn.
        }
    }
}
Can I use docker.image('mvn_custom_image').inside inside another container (in my case Selenium)? Or is there any other way to achieve a similar result?
There is a good example at https://www.jenkins.io/doc/book/pipeline/docker/:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
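Adapting that pattern to the question, the Selenium container can run as a sidecar while mvn runs inside the Maven image. This is only a sketch: the selenium link alias and the way your tests locate the server (the -Dselenium.host property here is hypothetical) are assumptions, not something from the original setup:

stage('Integration Test') {
    docker.image('selenium/node-firefox:3.4.0-dysprosium').withRun { c ->
        docker.image('mvn_custom_image').inside("--link ${c.id}:selenium") {
            // the tests reach the Selenium server via the host name 'selenium';
            // -Dselenium.host is a placeholder for however your tests are configured
            sh 'mvn verify -Dselenium.host=selenium'
        }
    }
}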