I am working on writing a Vagrantfile to automate my local setup.
Through Vagrant, I am creating a Docker image for my app and running it inside a VM. Everything is under one command, i.e. vagrant up.
But one thing I still have to do manually is create the JAR file for my app using mvn clean package.
I am wondering whether there is any way to run the mvn command from the Vagrantfile, so that when I issue vagrant up, it builds the JAR and does the rest of the work.
As #Patrick mentions, shell provisioning is a good fit. I personally use it for Gradle, but the same can be done for Maven. Here is how I call my script:
config.vm.provision "shell", path: "script/run-test.sh", privileged: false, run: 'always'
path: the path to my shell script, relative to the project directory
privileged: if not set, root will run the script; if Maven is installed for your vagrant user, make sure to set this to false, or you will run into issues
run: 'always': this is my use case (up to you to decide whether it makes sense for you); the script will always run when I run vagrant up
The shell script will be something like:
#!/bin/bash
if [ -d "/home/vagrant/test" ]; then
    # Project already cloned: update it and run the Maven build
    cd /home/vagrant/test && git pull
    cd /home/vagrant/test && mvn compile
    cd /home/vagrant/test && mvn deploy
    # .....
else
    # First run: clone the project
    git clone <your project> /home/vagrant/test
fi
This is just an example: the first time the instance is created, it will clone the Git repo; on subsequent runs it will pull the latest files from Git and run your Maven commands. Again, this is a simple example; adapt it to your own needs.
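If you prefer not to maintain a separate script, a minimal inline provisioner can also run Maven straight from the Vagrantfile. This is only a sketch: it assumes the project is available in the guest under the default /vagrant synced folder and that Maven is already installed in the guest.

Vagrant.configure("2") do |config|
  # Build the JAR inside the guest on every `vagrant up` / `vagrant provision`
  config.vm.provision "shell", privileged: false, run: "always", inline: <<-SHELL
    cd /vagrant && mvn clean package
  SHELL
end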
I'm running CI jobs on a self-hosted GitLab instance plus 10 GitLab Runners.
For this minimal example, two Runners are needed:
Admin-01
A shell runner with Docker installed.
It can execute e.g. docker build ... to create new images, which are then pushed to the private Docker registry (also self-hosted / part of the GitLab installation)
Docker-01
A Docker runner, which executes the previously built image.
On a normal bare-metal, virtual machine, or shell runner, I would modify e.g. ~/.profile to execute commands before the before_script or script sections are executed. In my use case I need to set new environment variables and source some configuration files provided by the tools I want to run in an image. Yes, the environment variables could be set differently, but there seems to be no way to source Bash scripts automatically before the before_script or script sections are executed.
When sourcing the Bash source file manually, it works. I also noticed that I have to source it again in the script block, so I assume the Bash session is ended between the before_script block and the script block. Of course, it's not a nice solution to manually source the tool's Bash configuration script in every .gitlab-ci.yml file written by the image's users.
myjobn:
  # ...
  before_script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
  script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
The modifications mentioned for e.g. shell runners do not work in images executed by GitLab Runner. It feels like the Bash in the container is not started as a login shell.
The minimal example image is built as follows:
fetch debian:bullseye-slim from Docker Hub
use RUN commands in the Dockerfile to append some echo outputs to
/etc/profile
/root/.bashrc
/root/.profile
# ...
RUN echo "echo GREETINGS FROM /ROOT/PROFILE" >> /root/.profile \
&& echo "echo GREETINGS FROM /ETC/PROFILE" >> /etc/profile \
&& echo "echo GREETINGS FROM /ROOT/BASH_RC" >> /root/.bashrc
When the job starts, none of the echos prints a message, while a cat shows that the echo commands were placed in the right files while building the image.
Next I tried adding
SHELL ["/bin/bash", "-l", "-c"]
But I assume this only affects RUN commands in the Dockerfile, not the executed container. With
CMD ["/bin/bash", "-l"]
I see no behavior changes.
Questions:
How do I start Bash in the Docker image managed by GitLab Runner as a login shell, so that it reads the configuration scripts?
How do I modify the environment in a container before before_script or script runs? Modifying means setting environment variables and executing/sourcing a configuration script or a patched default script like ~/.profile.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
What I know so far is that it jumps between Docker images specified by GitLab and user-defined images, and shares some directories/volumes.
Note:
Yes, the behavior can be achieved with some Docker arguments to docker run, but as I wrote, GitLab Runner is managing the container. Alternatively, how can I configure how GitLab Runner launches the images? To my knowledge, there is no configuration option available/documented for this situation.
A shell runner with Docker installed. It can execute e.g. docker build ...
Use docker-in-docker or use kaniko. https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
The shell executor is more of a last resort, for cases where you specifically want to make changes to the server, or you are deploying your application onto that server.
How do I start Bash in the Docker image managed by GitLab Runner as a login shell, so that it reads the configuration scripts?
Add ENTRYPOINT bash -l to your image, or set the entrypoint from gitlab-ci.yml. See the Docker documentation on ENTRYPOINT and the gitlab-ci.yml documentation on image: entrypoint:.
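A sketch of both variants follows; the image name is a placeholder, and whether the -l flag ends up taking effect depends on how the runner invokes its shell inside the container.

# In the Dockerfile
ENTRYPOINT ["/bin/bash", "-l", "-c"]

# Or in .gitlab-ci.yml
myjob:
  image:
    name: registry.example.com/my/tool-image:latest
    entrypoint: ["/bin/bash", "-l", "-c"]
  script:
    - echo "PATH=${PATH}"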
How do I modify the environment in a container before before_script or script runs?
Build the image with a modified environment; consult the Dockerfile documentation on ENV statements.
Or set the environment from the gitlab-ci.yml file; read the documentation on variables: in gitlab-ci.
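Minimal sketches of both options (the variable name TOOL_HOME and the paths are illustrative):

# Dockerfile: bake the environment into the image
ENV TOOL_HOME=/opt/tool
ENV PATH=/opt/tool/bin:$PATH

# .gitlab-ci.yml: set variables for the job
myjob:
  variables:
    TOOL_HOME: /opt/tool
  script:
    - echo "${TOOL_HOME}"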
How to prepare the shell environment in an image executed by GitLab Runner?
Don't. The idea is that the environment is reproducible, ergo, there should be no changes beforehand. Add variables: in gitlab-ci file and use base images if possible.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
Gitlab is open-source.
What I know so far is that it jumps between Docker images specified by GitLab and user-defined images, and shares some directories/volumes.
Yes. First a gitlab-runner-helper container is executed; it has git and git-lfs and basically clones the repository and downloads/uploads the artifacts. Then the container specified with image: is run, the cloned repository is copied into it, and a specially prepared shell script is executed in it.
Context
I want to run a bash script during the build stage of my CI.
So far, the macOS build works fine and the Unix one is in progress, but I cannot execute the scripts in my Windows build stage.
Runner
We run a local GitLab Runner on Windows 10 Home, where WSL is configured and Bash for Windows is installed and working:
(screenshot: Bash executing in Windows PowerShell)
Gitlab CI
Here is a small example that highlights the issue.
gitlab-ci.yml
stages:
  - test
  - build

build-test-win:
  stage: build
  tags:
    - runner-qt-windows
  script:
    - ./test.sh
test.sh
#!/bin/bash
echo "test OK"
Job
Running with gitlab-runner 13.4.1 (e95f89a0)
on runner qt on windows 8KwtBu6r
Resolving secrets 00:00
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:01
Running on DESKTOP-5LUC498...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in C:/Gitlab-Ci/builds/8KwtBu6r/0/<company>/projects/player-desktop/.git/
Checking out f8de4545 as 70-pld-demo-player-ecran-player...
Removing .qmake.stash
Removing Makefile
Removing app/
Removing business/
Removing <company>player/
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:02
$ ./test.sh
Cleaning up file based variables 00:01
Job succeeded
Issue
As you can see, the echo message "test OK" is not visible in the job output.
Nothing seems to be executed, but no error is shown, and running the script directly on the Windows device works fine.
In case you are wondering, this is a Qt application built via qmake and make, and deployed using windeployqt in a bash script (which is where the issue is).
Any tips or help would be appreciated.
Edit: the deploy script contains ~30 lines, which would make the gitlab-ci YAML file hard to read if the commands were put directly in the YAML instead of in an external shell script executed during CI.
(screenshot: executing the script from the Windows environment)
It may be that GitLab opened a new window to execute Bash, so stdout is not captured.
You can try file-system-based methods to check the execution results, such as echoing to files. Artifacts can be specified with wildcards, for example **/*.zip.
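For example, the script itself could write its result to a file (echo "test OK" > result.log) and the job could collect it as an artifact; the wildcard below follows the pattern mentioned above, with .log instead of .zip:

build-test-win:
  stage: build
  script:
    - ./test.sh
  artifacts:
    paths:
      - "**/*.log"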
I also tested on my Windows machine. First, if I run ./test.sh in PowerShell, it prompts a dialog to let me select which program to execute; the default is Git Bash. That means on your machine you may have configured a particular executable (you should find out which one).
I also tried this in PowerShell:
bash -c "mnt/c/test.sh"
and it gives me test OK as expected, without a new window.
So I suggest you try bash -c "some/path/test.sh" in your GitLab job.
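Applied to the job above, that would look something like this (the path is relative to the checked-out project, which is an assumption about your layout):

build-test-win:
  stage: build
  tags:
    - runner-qt-windows
  script:
    - bash -c "./test.sh"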
I have a deploy script that I am trying to use on my server for CD, but I am running into issues writing the bash script to complete some of my required steps, such as running npm and the migration commands.
How would I go about getting into the container's bash from this script, running the commands below, and then exiting to finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json .
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
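For reference, the setup being described usually looks like this in docker-compose.yml (the service name web is illustrative); deleting the volumes: block makes the container use the code and node_modules built into the image:

services:
  web:
    build: .
    volumes:
      - .:/app             # bind mount that overwrites the image's code
      - /app/node_modules  # anonymous volume that shadows the image's node_modules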
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
# Run database migrations, then hand off to the main container command
python manage.py migrate_all
exec "$@"
Make this script executable and make it the image's ENTRYPOINT, leaving CMD as the command that actually starts the application. On every container startup it will run the migrations and then run the main container command, whatever it may be.
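A sketch of the corresponding Dockerfile lines, assuming the script above is saved as entrypoint.sh (the CMD shown is only an example start command):

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]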
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate
Maven is properly installed on my gitlab-runner server. Executing mvn clean directly in my repo works, but when running my pipeline through the GitLab UI I get this error:
bash: line 60: mvn: command not found
ERROR: Job failed: exit status 1
I tried to fix the problem by adding a before_script section in the .gitlab-ci.yml file:
before_script:
- export MAVEN_HOME=/usr/local/apache-maven
I also added the line:
environment = ["MAVEN_HOME=/usr/local/apache-maven"]
in the config.toml file.
The problem still persists. My executor is shell.
Any advice?
I managed to fix the problem using this workaround:
script:
- $MAVEN_HOME/bin/mvn clean
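In context, assuming MAVEN_HOME is exported in before_script (or via environment = [...] in config.toml) as described above, the job might look like:

build:
  stage: build
  before_script:
    - export MAVEN_HOME=/usr/local/apache-maven
  script:
    - $MAVEN_HOME/bin/mvn clean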
Just use a Maven Docker image; add one of the lines below as the first line of your .gitlab-ci.yml:
image: maven:latest or image: maven:3-jdk-10 or image: maven:3-jdk-9
refer: https://docs.gitlab.com/ee/ci/examples/artifactory_and_gitlab/
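A minimal sketch of a .gitlab-ci.yml using the Maven image (the goal shown is illustrative):

image: maven:latest

build:
  stage: build
  script:
    - mvn clean package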
For anyone experiencing similar issues, it might be a good idea to restart the GitLab Runner with .\gitlab-runner.exe restart, especially after fiddling with environment variables.
There is an easier way:
Make the changes in ~/.bash_profile, not ~/.bashrc.
According to this document:
.bashrc: it is more commonly used by non-login shells
And this document says:
For certain executors, the runner passes the --login flag as shown above, which also loads the shell profile.
So it should not be ~/.bashrc. You can also try ~/.profile, which can hold the same configuration, which is then also accessible by other shells.
In my scenario I do the following:
1. Set the gitlab-runner user's password.
passwd gitlab-runner
2. Log in as gitlab-runner.
su - gitlab-runner
3. Make the changes in .bash_profile.
Add maven to PATH:
$ export M2_HOME=/usr/local/apache-maven/apache-maven-3.3.9
$ export M2=$M2_HOME/bin
$ export PATH=$M2:$PATH
You can include these commands in $HOME/.bashrc
I hope you have already figured out your question. I ran into the same issue when I built my CI on my server.
I use the shell executor for my Runner.
Here are the steps to figure it out.
1. Check the user on the runner server.
If you installed Maven on the runner server successfully, maybe it only succeeded for root. You can check the real user of the CI process:
job1:
  stage: test
  script: whoami
In my case, it prints gitlab-runner, not root.
2. su to the real user and check mvn again.
This time, it prints the same error as the GitLab CI UI.
3. Install Maven for the real user and run the pipeline again.
You can also use the following in the .gitlab-ci.yml:
before_script:
- export PATH=$PATH:/opt/apache-maven-3.8.1/bin
I have a particularly involved Java app that needs root access to system resources during a build, in order to run file mounts. Is there a way to directly invoke Maven using "sudo" from Jenkins via the Maven 2/3 plugin? Or does the plugin always run as the jenkins user?
Here is how to run Jenkins as root - this will cause the maven plugin processes to also run as root.
Method 1) Modify the JENKINS_USER line in /etc/sysconfig/jenkins:
#JENKINS_USER=jenkins
JENKINS_USER=root
In Debian-based systems, the file is located at /etc/default/jenkins
Method 2) Directly modify /etc/init.d/jenkins
#daemon --user "$JENKINS_USER" --pidfile "$JENKINS_PID_FILE" $JAVA_CMD $PARAMS > /dev/null
echo "WARNING: RUNNING AS ROOT"
daemon --user root --pidfile "$JENKINS_PID_FILE" $JAVA_CMD $PARAMS > /dev/null
Then, of course, you must run:
service jenkins restart
Try running the Jenkins process as root (although that is not ideal security-wise); it should spawn the Maven process as the same user.
When you run Maven through the Jenkins Maven plugin, it is executed in Jenkins's process. Running the server as root is a bad idea. You could instead try running Maven as a shell command:
sudo mvn org.apache.maven.plugins:maven-dependency-plugin:2.4:get -DartifactId=...
see also this:
https://superuser.com/questions/67765/sudo-with-password-in-one-command-line
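For a non-interactive build, the jenkins user typically also needs passwordless sudo for that exact command; a sketch of a sudoers entry (the mvn path is an assumption, check it with which mvn):

# /etc/sudoers.d/jenkins-maven (edit with visudo)
jenkins ALL=(root) NOPASSWD: /usr/local/bin/mvn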
My suggestion for running root things with Jenkins is to compile a binary on the Jenkins host and give it the suid bit, so that it can be launched by the jenkins user but executed as root. For instance, I write a C file:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Prints the effective user; with the suid bit set, this runs as root */
    system("whoami");
    return 0;
}
compile it (as root)
# gcc -c iamroot.c
# gcc -o iamroot iamroot.o
and give it the suid bit
# chmod u+s iamroot
Then you obtain (as any other user)
$ ./iamroot
root
Now this can be run by the jenkins user, and it reports that it is root. In terms of security, this is much better than giving the jenkins user root or sudo rights.