How can I run gradle apps in background in Gitlab job (nohup is not working in gitlab) - bash

I want to run a few Gradle apps as background processes for around 5 minutes (more commands will run after starting the Gradle apps, and then the job will finish). They execute just fine on my Ubuntu machine using nohup:
nohup gradle app1 > nohup1.out 2>&1 &
nohup gradle app2 > nohup2.out 2>&1 &
...
Running these commands does not require pressing an interrupt key or Enter, so I can start multiple Gradle applications in the background in a row and then interact with them.
Today, however, I learned that the GitLab runner cancels all child processes, which makes nohup useless in a GitLab CI job.
Is there a workaround so that I can run multiple Gradle jobs in the background inside a GitLab job?
I tried using the tool at but it did not provide the functionality that nohup did.

To background a job you do not need nohup; you can simply append & to the end of a command to run it in the background.
As a simple example:
test_job:
image: python:3.9-slim
script:
- python -m http.server & # Start a server on port 8000 in the background
- apt update && apt install -y curl
- echo "contacting background server..."
- curl http://localhost:8000
And this works. The job will exit (closing the background jobs as well) after the last script: step has run.
Be sure to give your apps enough time to start up before attempting to reach them.
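That start-up wait can be a poll loop rather than a fixed sleep. A minimal sketch of the idea, with a background job that touches a file standing in for the real server (the file path and timing are placeholders, not taken from the question):

```shell
# Start the "server" in the background; a job that creates a ready-file
# stands in for the real gradle/python server.
rm -f /tmp/server.ready
( sleep 1; touch /tmp/server.ready ) &   # stands in for: gradle app1 &

# Poll until the background job signals readiness, instead of sleeping a
# fixed amount and hoping it was long enough.
for i in 1 2 3 4 5 6 7 8 9 10; do
  if [ -f /tmp/server.ready ]; then
    echo "background job is ready"
    break
  fi
  sleep 1
done
```

With a real HTTP server, the file check would be replaced by a probe such as `curl -fs http://localhost:8000`.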

Related

Start jobs in background from sh file in gitbash

I need to start a couple of processes locally in multiple command-prompt windows. To keep it simple, I have written a shell script, say abc.sh, to run in Git Bash with the commands below:
cd "<target_dir1>"
<my_command1> &>> output.log &
cd "<target_dir2>"
<my_command2> &>> output.log &
When I run these commands directly in Git Bash, the jobs run in the background and can be seen with the jobs command (and killed with kill). However, when I run them through abc.sh, the processes run in the background but the Git Bash instance disowns them, and I can no longer see them with jobs.
How can I run them through the abc.sh file and still see them in the jobs list?

Running unit tests after starting elasticsearch

I've downloaded and set up elasticsearch on an EC2 instance that I use to run Jenkins. I'd like to use Jenkins to run some unit tests that use the local elasticsearch.
My problem is that I haven't found a way to start elasticsearch locally and then run the tests, because the script never proceeds past starting ES: that process never exits on its own.
I can do this by starting ES manually through SSH and then building a project with only the unit tests. However, I'd like to automate the ES launching.
Any suggestions on how I could achieve this? I've tried using a single "Execute shell" block and two "Execute shell" blocks.
This happens because you are starting the elasticsearch command in a blocking way: the command waits until the elasticsearch server shuts down, so Jenkins just keeps waiting.
You can use the following command:
./elasticsearch >/dev/null 2>&1 &
or
nohup ./elasticsearch >/dev/null 2>&1 &
This runs the command in a non-blocking way.
You can also add a small delay to give the elasticsearch server time to start:
nohup ./elasticsearch >/dev/null 2>&1 & sleep 5
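A fixed sleep is a guess; a retry loop is more robust. A sketch of a small helper that retries a probe command until it succeeds or the attempt budget runs out (the helper name and the elasticsearch port 9200 are assumptions, not from the question):

```shell
# wait_for N CMD...: run CMD up to N times, sleeping between attempts,
# and return success as soon as CMD succeeds.
wait_for() {
  attempts=$1; shift
  while [ "$attempts" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 1
  done
  return 1
}

# Demo with a probe that succeeds immediately; for elasticsearch the probe
# would be something like: wait_for 30 curl -fs http://localhost:9200
wait_for 5 true && echo "service is up"
```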

How to keep jobs running with nohup in LXD

I made a Python script that needs to run in the background.
The script is in an LXD image which is running (checked with lxc list).
I got into the image and tried to keep the script running in the background:
local> lxc exec image-name -- bash
image-root> nohup python test.py &
And it worked at this point.
image-root> jobs
-- printed the test.py job
BUT when I got out of the image and re-entered it, all the jobs were gone.
image-root> exit (or ctrl+d)
root> lxc exec image-name -- bash
image-root> jobs
-- printed nothing, and the script is not running in the background. WHY?
Are you sure the script is not running? Since you backgrounded it and nohupped it and then returned in a new bash instance, there is no reason you should see it with the jobs command: jobs only lists jobs started by the current shell. In a quick test I had no problem leaving the job running and returning to find the result still being produced.
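The point above can be checked without jobs at all, by looking for the process itself. A sketch, with sleep standing in for "python test.py" (the pid-based check works from any shell, not just the one that started the job):

```shell
# Start a long-running job with nohup, as in the question; sleep stands in
# for the real python script.
nohup sleep 30 >/dev/null 2>&1 &
PID=$!

# "jobs" would show this only in the shell that launched it. From any other
# shell (e.g. after re-entering the container), look the process up by pid
# with ps, or by name with pgrep.
if ps -p "$PID" >/dev/null; then
  echo "process still running"
fi

# Clean up the demo job.
kill "$PID"
```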

Running background process (webserver) for just as long as other process (test suite) executes

I'm looking for a unix shell command that would allow me to run a process in the background (which happens to be a webserver) for just as long as another foreground process (which happens to be a test suite) executes. That is, after the foreground process exits, the background process should exit as well. What I have so far is
.. preliminary work .. && (webserver & test)
which comes close, but fails in that the webserver process never exits. Is there a good way to express this in a single command or would it be more reasonable to write a more verbose script for that?
To give some further detail (and welcoming any relevant suggestions), I'm in the process of writing a javascript lib which I'd like to test using Selenium - hence the need for the webserver to execute 'alongside' the test suite. And in my attempt to do away with Grunt & co. and follow a leaner, npm-based approach to task management I've hit this shell-related road bump.
You can try this:
# .. preliminary work ..
webserver &
PID=$!
test_suite &
wait $!; kill $PID
This will work as long as you don't mind both commands being run in the background. What happens here is:
The webserver is started in the background.
The pid of it is assigned to $PID.
The test suite is started in the background.
The last line waits for the test suite to exit, and then kills the webserver.
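The pattern above can be demonstrated with sleep commands standing in for the real programs (a sketch; the durations are placeholders, not the actual webserver or test suite):

```shell
# "Webserver": a long-running background process we want to stop later.
sleep 30 &            # stands in for: webserver &
PID=$!

# "Test suite": a shorter background process we wait on.
sleep 1 &             # stands in for: test_suite &

wait $!               # block until the "test suite" finishes...
kill "$PID"           # ...then stop the "webserver"
echo "webserver stopped"
```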
If you need either command to be in the foreground, then run this version
# .. preliminary work ..
screen -dm -S webserver webserver
test_suite
screen -S webserver -X quit
Note that this requires the screen command (install it with apt-get install screen on Debian/Ubuntu). What this does is:
Starts the webserver in a detached screen session.
Runs the test suite in the foreground.
When the test suite exits, kills the webserver's screen session.

How to automate startup of a web app with several independent processes?

I run the wesabe web app locally.
Each time I start it by opening separate shells to start the mysql server, java backend and rails frontend.
My question is, how could you automate this with a shell script or rake task?
I tried just listing the commands sequentially in a shell script (see below), but the later commands never run because each app server starts a process that never 'returns' (until you quit the server).
I've looked into sub-shells and parallel rake tasks, but that's where I got stuck.
echo 'starting mysql'
mysqld_safe
echo 'starting pfc'
cd ~/wesabe/pfc
rails server -p 3001
echo 'starting brcm'
cd ~/wesabe/brcm-accounts-api
script/server
echo 'ok, go!'
open http://localhost:3001
If you don't mind the output from the different processes being interleaved, put a & at the end of each line that starts a server to make it run in the background.
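A sketch of that script with & added, using sleep as a stand-in for each server command (the real script would use mysqld_safe, "rails server -p 3001", and script/server from the question in place of the sleeps):

```shell
echo 'starting mysql'
sleep 5 &                 # stands in for: mysqld_safe
MYSQL_PID=$!

echo 'starting pfc'
sleep 5 &                 # stands in for: cd ~/wesabe/pfc && rails server -p 3001
PFC_PID=$!

echo 'starting brcm'
sleep 5 &                 # stands in for: cd ~/wesabe/brcm-accounts-api && script/server
BRCM_PID=$!

echo 'ok, go!'

# Recording each $! lets a companion stop-script shut everything down later:
kill "$MYSQL_PID" "$PFC_PID" "$BRCM_PID"
```

Note the subshell trick `( cd dir && cmd ) &` is useful for the Rails commands so each server's cd does not affect the rest of the script.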
