Jenkins Declarative Pipeline Script - jenkins-pipeline

I want to write a script that does the following:
Pull a repository from git. If the repository already exists (i.e. this is a second run), remove the old folder and pull the repository again.
My repository contains a docker-compose file. If docker-compose is already running, stop it, then run docker-compose up -d.
Sample code below:
pipeline {
    agent any
    stages {
        stage('Pull the repo') {
            steps {
                sh "sudo rm -r devops1"
                sh "git clone https://github.com/xyz/devops1.git"
            }
        }
        stage('run it :D') {
            steps {
                dir('devops1') {
                    sh "sudo docker-compose down"
                    sh "sudo docker-compose up -d"
                }
            }
        }
    }
}
It fails when the repo has not been fetched yet, and I can't work out how to place an if/else condition. Looking for any help or suggestions, thank you :)

rm -rf ignores nonexistent directories, so the delete step won't fail on the first run.
docker-compose up restarts the services as needed, so there is no need to call docker-compose down first.
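In other words, no if/else is needed; a sketch of the shell commands the two stages could run (repository URL and folder name are taken from the question; sudo is omitted here, keep it if your agent needs it):
# Stage 'Pull the repo': -rf deletes the folder if it exists and is a no-op otherwise
rm -rf devops1
git clone https://github.com/xyz/devops1.git
# Stage 'run it :D': up -d (re)creates the services whether or not they are already running
cd devops1
docker-compose up -d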

Related

How to prevent docker container from stopping after executing sh script file

I'm running a Docker container as shown below. After the sh files are executed, the container terminates by default. How can I keep the container working in the background?
Sample from the Dockerfile:
#=========================
# Copying Scripts to root
#=========================
COPY entry_point.sh \
/root/
RUN chmod +x /root/entry_point.sh
#=======================
# framework entry point
#=======================
CMD /root/entry_point.sh
The entry_point.sh file:
function clone_repo() {
    mkdir /root/repo
    git clone git@github.com:test/tests.git /root/repo && \
    rm -rf /root/.ssh
}
clone_repo
And here is the command I'm using to initialize the container:
docker run -p 5900:5900 --name mycontainer --privileged amrka/image
You could also run CMD /root/entry_point.sh; sleep infinity
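A sketch of an entry point that keeps the container alive after the setup finishes (the clone commands are taken from the question; sleep infinity assumes a GNU coreutils sleep in the image):
#!/bin/bash
# entry_point.sh (sketch): do the one-off setup work first
mkdir /root/repo
git clone git@github.com:test/tests.git /root/repo && rm -rf /root/.ssh
# then hand off to a long-lived process so PID 1 never exits
exec sleep infinity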
That's how a Docker container works: it goes down when its application dies.
Why do you need to keep it alive? What will it do?
If you just want to keep it around, you can run sleep 10000000, for example, at the end of the file; that will keep the container up, but why?
If you want to execute a task on a schedule, you can use any cron image.
Solved by removing entry_point

Shell script returning "not found" on Jenkins master using pipeline as code

I am new to Jenkins and trying to write a pipeline. Everything works when run with regular jobs, but I'm facing an issue with the pipeline. My script, which should run after checking out from GitHub, returns "file not found". Could anyone help, please? Attached is an image of the log.
https://i.stack.imgur.com/LuxGn.png
Below is the code sample I am trying to execute.
stage('puppet master config checkout') {
    steps {
        echo "cloning github"
        git "https://github.com/rk280392/pipeline_scripts.git"
    }
}
stage('puppet master config build') {
    steps {
        echo "running puppet master script"
        sh "puppet_master.sh"
    }
}
Check that the script file is there with the command sh 'ls' just after the git step.
Generally I would recommend not using the git step but checkout instead; it is more powerful and more reliable:
checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    extensions: scm.extensions,
    userRemoteConfigs: [[
        url: 'https://github.com/rk280392/pipeline_scripts.git'
    ]]
])
Is your script executable? You could use chmod +x puppet_master.sh before running it with a dot-slash prefix: ./puppet_master.sh
sh 'sh puppet_master.sh'
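Putting these suggestions together, the shell run inside the sh step might look like this (a sketch; the script name comes from the question and is assumed to sit in the workspace root):
ls -l                      # confirm puppet_master.sh was actually checked out into the workspace
chmod +x puppet_master.sh  # make it executable
./puppet_master.sh         # call it with an explicit ./ path; a bare name is only looked up on $PATH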

How to deploy the contents of containers built by docker-compose to a remote server after all containers are ready?

I have a docker-compose setup that works fine for my local environment to work and play around with. The compose file brings up my website, MySQL, and some other things in order to make everything work.
All the necessary files end up in one of my Docker containers, and I can run
docker cp container_name:/var/www/html/. dist/
to get my files onto my local machine in the dist directory, from where I am able to transfer them to my server.
The next step is to automate the whole process, for which I want to use Jenkins.
It's not the first time I've used Jenkins, but for some reason I cannot get this to work.
I check out my project from SCM into Jenkins and am able to run docker-compose up --build, but if I don't use the -d parameter the job gets stuck on this command, since docker-compose up --build only terminates with CTRL+C; therefore I use the -d parameter.
Afterwards I use
docker cp container_name:/var/www/html/. dist/
to move the files into my Jenkins directory, but here lies the next problem: since I use the -d parameter, the docker cp command does not wait for docker-compose up --build to be completely finished.
So I tried to use something like
docker-compose logs -f -t | sed '/^Almost ! Starting web server now$/ q'
after the build command, to detect a point in the build process where I can be confident that all the files and installations made by custom docker_run.sh etc. have already been executed.
But it doesn't work. Either the job never ends and gets stuck, or my command doesn't actually wait for the "Almost ! Starting web server now" line in the docker-compose logs and copies the files straight away, before docker_run.sh has made its modifications inside the container.
This is my current Jenkins shell script inside the deployment job for my project:
# Create a dist directory
mkdir dist
# Build Docker
docker-compose up -d --build
# Wait for the logs to output "Almost ! Starting web server now"
docker-compose logs -f -t | sed '/^Almost ! Starting web server now$/ q'
# Copy files from container to jenkins directory "dist"
docker cp container_name:/var/www/html/. dist/
# Stop the containers, as I dont need them anymore
docker-compose down
# Go into the dist folder
cd dist
# Send files to remote
rsync -aHAXx --numeric-ids -e "some_parameters_here" . ssh_user@ssh_ip:httpdocs
I expect docker-compose to be started by Jenkins, Jenkins to wait until it is ready, then cp the files to the Jenkins directory and send them to the remote server.
The actual result, as stated above, is either a never-ending Jenkins job, or Jenkins sending the files from Docker before the build is 100% finished.
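One likely reason the sed approach hangs is that docker-compose logs prefixes every line with the service name (and, with -t, a timestamp), so the anchored pattern ^Almost ! Starting web server now$ never matches. A sketch of an alternative that polls the logs without anchors and with an upper time limit (the marker string is taken from the question; the 120-second limit is an arbitrary choice):
# Wait up to 120 seconds for the marker line to show up anywhere in the compose logs
timeout 120 sh -c \
    'until docker-compose logs | grep -q "Almost ! Starting web server now"; do sleep 2; done' \
    || { echo "web server did not come up in time"; exit 1; }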
Here is an excerpt of one of our Groovy-based pipeline build steps with the sensitive bits replaced. This pipeline actually has 14 containers in it.
Under Pipeline --> Configure, select either "Pipeline script" or "Pipeline script from SCM".
pipeline.groovy
pipeline {
    environment {
        registry = "fqdn.to.our.private.docker.repo"
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git(
                    branch: 'develop',
                    credentialsId: 'jenkins-user-ssh-key-credentials-id',
                    url: 'jenkinsuser@git_repo_server:/path/to/project/repo'
                )
            }
        }
        stage('Building Container1 image') {
            steps {
                script {
                    container1Image = docker.build(registry + "/container1_name:tag", "-f path/to/dockerfile/for/this/container/Dockerfile ./docker/build/context/in/the/checked/out/repo/for/this/container")
                }
            }
        }
        stage('Deploy Container1 Image to Docker Repository') {
            steps {
                script {
                    docker.withRegistry('https://fqdn.to.our.private.docker.repo', 'jenkins-credentials-to-access-the-docker-repo') {
                        container1Image.push()
                    }
                }
            }
        }
        stage('Building Container2 image') {
            steps {
                script {
                    container2Image = docker.build(registry + "/container2_name:tag", "-f path/to/dockerfile/for/this/container/Dockerfile ./docker/build/context/in/the/checked/out/repo/for/this/container")
                }
            }
        }
        stage('Deploy Container2 Image to Docker Repository') {
            steps {
                script {
                    docker.withRegistry('https://fqdn.to.our.private.docker.repo', 'jenkins-credentials-to-access-the-docker-repo') {
                        container2Image.push()
                    }
                }
            }
        }
    }
}

Docker Build Failed "chmod: cannot access '/main.sh': No such file or directory"

This is the error I'm getting after the build command:
Step 7/9 : RUN chmod +x /main.sh
---> Running in 6e880a009c7d
chmod: cannot access '/main.sh': No such file or directory
The command '/bin/sh -c chmod +x /main.sh' returned a non-zero code: 1
And here is my Dockerfile:
FROM centos:latest
MAINTAINER Aditya Gupta
#install git
RUN yum -y update
RUN yum -y install git
#make git repo folder, change GIT_LOCATION
RUN mkdir -p /home/centos/doimages/dockimg;cd /home/centos/doimages/dockimg;
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername);cd (foldername)/
RUN chmod +x ./main.sh
RUN echo " ./main.sh\n "
EXPOSE Portnumber
When you perform a RUN step in a Dockerfile, a temporary container is launched, often with a shell parsing your command. When that command finishes, the container exits, and docker packages the filesystem changes as an image layer. That process is repeated from the beginning for each RUN line.
The key piece there is the shell exits, losing environment variables you've set, background processes you've run, and in this case, the current working directory you tried to set here:
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername);cd (foldername)/
Instead of a cd in a RUN command, you can update the value of WORKDIR:
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername)
WORKDIR foldername
You are trying to execute a shell file which does not exist in your Docker image. Use the ADD command to add your script to the image!
-- somewhere inside your Dockerfile, before the execution --
ADD ./PATH/ON/HOST/main.sh /PATH/YOU/LIKE/ON/DOCKER/MACHINE
Then try to build your Docker image again.
The issue was resolved by using WORKDIR, cloning manually outside the Dockerfile, and then giving the path to main.sh in the Dockerfile.

How can I run a docker container and commit the changes once a script completes?

I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part--committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
There are the following ways to persist container data:
Docker volumes
Docker commit
a) Create a container from the ubuntu image and run a bash terminal:
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal, install curl:
# apt-get update
# apt-get install curl
c) Exit the container terminal:
# exit
d) Take note of your container ID by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image with curl installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes control again and commits/pushes it.
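A sketch of that idea (the image name, script path, and tag below are placeholders, not taken from the question):
# Create the container without starting it, so its ID is known up front
CONTAINER_ID=$(docker create my-image /path/to/setup.sh)
# Start it attached: this blocks until the script inside finishes
docker start -a "$CONTAINER_ID"
# The container has now exited, so its filesystem state is final; commit it
docker commit "$CONTAINER_ID" my-image:configured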
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (and I run this in Travis-CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, now I can run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these are in my script section). e.g.:
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all running containers. Put it in a cron job or something, if that helps:
#!/bin/bash
for i in `docker ps|tail -n +2|awk '{print $1}'`; do docker commit -m "commit new change" $i; done
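A slightly tidier equivalent (same behavior; docker ps -q just prints the container IDs directly):
#!/bin/bash
# commit every running container; the commit message mirrors the original snippet
for i in $(docker ps -q); do docker commit -m "commit new change" "$i"; done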
