Testing gitlab-runner run command in a separate shell? - bash

Context
While testing the functions of an installation script that automatically installs and runs GitLab and GitLab runners, I ran into difficulties running the GitLab runner in the background and proceeding with the tests. (My intention is simply to run the runner in the background and then run a few tests that verify the service behaves as expected.) However, when my test calls the function that executes the sudo gitlab-runner run command, the test hangs, waiting for the service to stop (but the service is not supposed to stop).
Function
# Run GitLab runner service
run_gitlab_runner_service() {
    run_command=$(sudo gitlab-runner run &)
    echo "service is running"
}
The output of the test is:
Configuration loaded builds=0 7/7
Then the cursor just keeps blinking while the last runner successfully awaits instructions.
Test Code
#test "Test if the GitLab Runner CI service is running correctly." {
run_service_output=$(run_gitlab_runner_service)
EXPECTED_OUTPUT="service is running"
assert_equal "$run_service_output" "$EXPECTED_OUTPUT"
}
However, when inspecting the regular output of the sudo gitlab-runner run command, I observe that it starts a new line/shell/something without asking. For example:
(base) name@name:~$ sudo gitlab-runner run &
[1] 84799
(base) name@name:~$ Runtime platform arch=amd64 os=linux pid=84800 revision=8925d9a0 version=14.1.0
Starting multi-runner from /etc/gitlab-runner/config.toml... builds=0
Running in system-mode.
Configuration loaded builds=0
listen_address not defined, metrics & debug endpoints disabled builds=0
[session_server].listen_address not defined, session endpoints disabled builds=0
Basically, the output following (base) name@name:~$ Runtime platform arch=amd64 should never stop (until manually terminated), so I think the test is waiting for that shell/output to complete.
Note
For completeness: I do not type the (base) name@name:~$ Runtime platform arch=amd64 .. output; it is somehow pushed to the terminal on the new line. I do not know exactly why it doesn't simply appear under the previous/original (base) name@name:~$ prompt that ran the sudo gitlab-runner run & command.
Question
How can I ensure the function and test proceed after starting the gitlab-runner run service?

I think you should use a slightly different syntax in order to run gitlab-runner in the background:
# Run GitLab runner service
run_gitlab_runner_service() {
    ( run_command="$(sudo gitlab-runner run)" ) &
    echo "service is running"
}
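The difference is where the & applies: here the whole subshell is sent to the background, so the function itself returns immediately instead of waiting inside the command substitution. One caveat (my own reading of bash semantics, not from the answer): if the function is itself captured with $(...), the backgrounded subshell still inherits that pipe as its stdout and can keep the substitution waiting for EOF, which is what the /dev/null redirection in the answer below avoids.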

A solution was found by redirecting the output of the command to /dev/null.
Code
# Run GitLab runner service
run_gitlab_runner_service() {
    output=$(nohup sudo gitlab-runner run &>/dev/null &)
    echo "service is running"
}
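Why this works (my reading of bash semantics, not spelled out in the thread): the test captures the function with $(...), and a command substitution only returns on EOF, i.e. once every process holding the write end of its pipe has closed it. A plain backgrounded gitlab-runner inherits that pipe as its stdout and keeps it open indefinitely; redirecting the runner's output to /dev/null means nothing holds the pipe open, so the substitution returns as soon as the echo is done. A minimal sketch reproducing both behaviours, with sleep standing in for gitlab-runner run:
# sleep stands in for the long-running `gitlab-runner run`
hangs() {
    sleep 300 &                  # child inherits the $() pipe as stdout
    echo "service is running"
}

returns_immediately() {
    sleep 300 >/dev/null 2>&1 &  # child no longer holds the pipe open
    echo "service is running"
}

# out=$(hangs)                # would block until the sleep exits
out=$(returns_immediately)    # returns at once
echo "$out"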

Related

Docker Command Issue Running Outside of Bash

I have a Docker container that handles an application. I am attempting to write tests for my system using npx and nightwatchJS.
I use CI, and to run the tests for my entire suite I docker-compose build, then run commands from outside of the container like so:
Example of backend python test being called (this works and is run as expected):
docker-compose run --rm web sh -c "pytest apps/login/tests -s"
Now I am trying to run an npx command to do some front-end testing, but I am getting an error that I cannot seem to diagnose:
Error while running .navigateTo() protocol action: An unknown server-side error occurred while processing the command. – unknown error: net::ERR_CONNECTION_REFUSED
Here is that command:
docker-compose run --rm web sh -c "npx nightwatch apps/login/tests/nightwatch/login_test.js"
The odd part of this is that if I go into bash:
docker-compose exec web bash
And then run:
npx nightwatch apps/login/tests/nightwatch/login_test.js
I don't get that error, since I'm in bash.
This leads me to believe that something is wrong with the command. Can somebody please help with this?
Think of containers as separate computers.
When you run pytest apps/login/tests -s on your computer and I run npx nightwatch apps/login/tests/nightwatch/login_test.js on my computer, my computer will obviously not connect to yours; I will get a "connection refused" kind of error.
With docker run you start a separate new "computer" that runs that command: it has its own pid space, its own network address, etc. Then, inside "that computer", you can execute another command with docker exec. For your commands to connect over localhost, you have to run them on the same "computer".
So when you run docker run with the client, it does not connect to a separate docker run. Either specify the correct IP address or run both commands inside the same container.
I suggest researching how docker works. The above is a very crude oversimplification.
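Applied to the commands in the question, that presumably means running the front-end tests inside the already-running web container (where the application listens on localhost) instead of spawning a fresh one with docker-compose run; the service name and test path below are taken from the question:
# nightwatch now shares the application's network namespace, so
# localhost resolves to the container that actually runs the app:
docker-compose exec web sh -c "npx nightwatch apps/login/tests/nightwatch/login_test.js"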

Composer script to start a background process, whose lifetime is bound to that of the Composer script

I've started experimenting with Composer Scripts.
I have a project where there are "Functional tests" of the API endpoints. Running the whole test suite requires running the following commands in order:
composer install to install all required dependencies of the backend APIs
php yii server --test to start a lite server that is connected to the "test" MySQL database. The test server starts running on localhost:9000.
sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml to run the actual tests. This last command triggers the execution of all test cases, most of which execute HTTP calls to the lite server launched at step 2.
I would like to automate and "atomize" this 3-step process into a single Composer script that can be started, killed and restarted effortlessly.
Here's my current progress:
"scripts": {
"test-functional": [
"#composer install",
"php yii server --test",
"sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml"
]
}
The problem is that the second command (php yii server --test) "steals" the terminal, because PHP's built-in lite server requires the terminal to stay open while the command is running. Killing the command kills the lite server as well. I tried suffixing the second step of the script with &, which generally makes processes go to the background and not steal the terminal, but it seems this trick doesn't work for Composer Scripts. Any other workaround or possibility that I'm missing?
My final goal is to make the 3 steps execute in an atomic way, output the results of the tests and end the command, cleaning up everything (including killing the lite server launched in step 2).
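The thread shows no accepted answer, but a common pattern (an assumption on my part, not from the thread) is to move the three steps into a small wrapper script that backgrounds the lite server and kills it on exit, and have the Composer script call the wrapper:
#!/bin/sh
# run-functional-tests.sh -- hypothetical wrapper; the commands and
# paths are the three steps from the question
composer install
php yii server --test >/dev/null 2>&1 &     # background the lite server
SERVER_PID=$!
trap 'kill "$SERVER_PID" 2>/dev/null' EXIT  # always clean up the server
sleep 2                                     # crude wait for localhost:9000
sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml
The Composer script then shrinks to "test-functional": "sh run-functional-tests.sh", and killing the script also kills the lite server via the trap.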

Runner is paused and will not receive any new jobs

I recently installed gitlab-runner on a machine and registered a specific runner manually, with a registration token obtained from the CI/CD Settings/Runners page of my repository, by:
sudo gitlab-runner register
I started it with:
sudo -s gitlab-runner start
and the output was:
Runtime platform arch=amd64 os=linux pid=14558 revision=f100a208 version=11.6.0
The CI pipeline is stuck in pending mode and requires an active runner assigned to it. How can I activate the runner?
I solved my problem. First, I had forgotten to run sudo gitlab-runner run. After that I changed the config.toml: specifically, I turned privileged mode to true. And finally, on the runner edit page, I turned on the Run untagged jobs option.
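For reference, the privileged switch mentioned above lives in the runner's section of /etc/gitlab-runner/config.toml; this sketch assumes the docker executor, and the name and url values are illustrative:
[[runners]]
  name = "my-runner"
  url = "https://gitlab.com/"
  executor = "docker"
  [runners.docker]
    privileged = true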
If your CI/CD job is pending, saying "This job is stuck, because you don't have any active runners that can run this job. Go to Runners page.", try restarting your gitlab runner:
$ sudo gitlab-runner stop
$ sudo gitlab-runner start
If it is still not working, then try checking the Run untagged jobs option for your CI/CD runner, which is unchecked by default.

How to automate docker to run script within container and notify completion to host?

I would like to automate the build process of my application through the following steps:
Launch my build-system container that has all dependencies and scripts for making the final exe of my application.
Within the build-system container fire a bash script that starts the build process.
When the build completes, transfer the exe to the host machine either by using docker cp or by attaching a volume while launching the container.
Transfer the exe to a new dist-system container, which is essentially the final image that will be stored in Docker hub.
Install the application within the dist-system, make custom configurations by firing another bash script in that container.
When step 5 completes, get back to the host machine and run docker commit of the dist-system container and then docker push the image to Docker hub from the host machine.
My question relates specifically to points 3 and 6, where I need to know that the bash scripts have completed execution within the containers. Only then would I like to fire docker commands on the host machine. Is there a way in which docker can be notified of bash script execution within containers?
docker run is synchronous, and by default will block until the docker container exits.
For example, if the build-system Dockerfile looks like this:
FROM alpine:latest
COPY ./build.sh /build.sh
VOLUME /data
CMD ["/build.sh", "/data/test.out"]
With build.sh as follows:
#!/bin/sh
sleep 5
echo "Hello world!" >> "$1"
exit 0 # docker run will exit with this code, you can use this to check if your build was successful
Running docker run --rm -v /your/work/directory/build-output:/data build-system will wait 5 seconds and then exit, and /your/work/directory/build-output/test.out will have been created.
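That means steps 3 and 6 need no extra notification mechanism: the host can simply gate on docker run's exit status. A sketch, with the image tag and volume path borrowed from the example above:
# docker run blocks, and its exit code is the container's exit code,
# so a plain shell test is enough to sequence the host-side steps:
if docker run --rm -v /your/work/directory/build-output:/data build-system; then
    echo "build script finished; exe is in build-output"
    # ...continue with the dist-system steps here...
else
    echo "build failed" >&2
    exit 1
fi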

How to deploy a SpringBoot Maven application with Jenkins?

I have a Spring Boot application which runs on an embedded Tomcat servlet container via mvn spring-boot:run, and I don't want to deploy the project as a separate war to a standalone Tomcat.
Whenever I push code to BitBucket/GitHub, a hook runs and triggers a Jenkins job (running on Amazon EC2) to deploy the application.
The Jenkins job has a post build action: mvn spring-boot:run; the problem is that the job hangs on this post build action and never finishes.
There should be another way to do this. Any help would be appreciated.
The problem is that Jenkins doesn't handle spawning child processes from builds very well. The workaround suggested by @Steve in the comments (nohup-ing) didn't change the behaviour in my case, but a simple workaround was to schedule the app's start using the at unix command:
> echo "mvn spring-boot:run" | at now + 1 minutes
This way Jenkins successfully completes the job without timing out.
If you end up running your application from a .jar file via java -jar app.jar, be aware that Boot breaks if the .jar file is overwritten; you'll need to make sure the application is stopped before copying the artifact. If you're using ApplicationPidListener you can verify that the application is running (and stop it if it is) by adding execution of this command:
> test -f application.pid && xargs kill < application.pid || echo 'App was not running, nothing to stop'
I find it very useful to first copy the artifacts to a specified area on the server, to keep track of the deployed artifacts and avoid starting the app from the Jenkins job folder. Then create a server log file there and start listening to it in the Jenkins window until the server has started.
To do that I developed a small shell script that you can find here.
You will also find a small article explaining how to configure the project on Jenkins.
Please let me know if it worked for you. Thanks
The nohup and the at now + 1 minutes approaches didn't work for me.
Since Jenkins was killing the process spawned in the background, I made sure the process would not be killed by setting a fake BUILD_ID for that Jenkins task. This is what the Jenkins Execute shell task looks like:
BUILD_ID=do_not_kill_me
java -jar -Dserver.port=8053 /root/Deployments/my_application.war &
exit
As discussed here.
I assume you have a Jenkins user on the server and this user is the owner of the Jenkins service:
log in on the server as root.
run sudo visudo
add "jenkins ALL=(ALL) NOPASSWD:ALL" at the end (jenkins = your Jenkins user)
Sign in to Jenkins, choose your job and click Configure
Choose "Execute Shell" in the "Post build step"
Copy and paste this:
service=myapp
if ps ax | grep -v grep | grep -v $0 | grep $service > /dev/null
then
    sudo service myapp stop
    sudo unlink /etc/init.d/myapp
    sudo chmod +x /path/to/your/myapp.jar
    sudo ln -s /path/to/your/myapp.jar /etc/init.d/myapp
    sudo service myapp start
else
    sudo chmod +x /path/to/your/myapp.jar
    sudo ln -s /path/to/your/myapp.jar /etc/init.d/myapp
    sudo service myapp start
fi
Save and run your job; the service should start automatically.
This worked for me with Jenkins on a Linux machine:
kill -9 $(lsof -t -i:8080) || echo "Process was not running."
mvn clean compile
echo "mvn spring-boot:run" | at now + 1 minutes
If no process is listening on port 8080, it will print the message and continue.
Make sure that at is installed on your Linux machine. You can install it with:
sudo apt-get install at
