Running unit tests after starting elasticsearch - bash

I've downloaded and set up elasticsearch on an EC2 instance that I use to run Jenkins. I'd like to use Jenkins to run some unit tests that use the local elasticsearch.
My problem is that I haven't found a way to start elasticsearch locally and then run the tests afterwards: the script never gets past starting ES, because the ES process keeps running in the foreground and is never backgrounded or killed.
I can do this by starting ES manually through SSH and then building a project with only the unit tests. However, I'd like to automate launching ES.
Any suggestions on how I could achieve this? I've tried both a single "Execute shell" build step and two separate "Execute shell" steps.

This is happening because you are starting the elasticsearch command in a blocking way: the command waits until the elasticsearch server shuts down, so Jenkins just keeps waiting.
You can use the following command instead:
./elasticsearch >/dev/null 2>&1 &
or
nohup ./elasticsearch >/dev/null 2>&1 &
Either way the command runs in a non-blocking way.
You can also add a small delay to give the elasticsearch server time to start:
nohup ./elasticsearch >/dev/null 2>&1 & sleep 5
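A fixed delay can be flaky if the server takes longer to start. As an alternative sketch (assuming elasticsearch is listening on its default HTTP port 9200 on localhost), you can poll the port until it answers before running the tests:
nohup ./elasticsearch >/dev/null 2>&1 &
# wait up to ~60 seconds for elasticsearch to answer on its HTTP port
for i in $(seq 1 60); do
    if curl -s http://localhost:9200 >/dev/null; then
        echo "elasticsearch is up"
        break
    fi
    sleep 1
done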

Related

How can I run gradle apps in background in Gitlab job (nohup is not working in gitlab)

I want to run a few gradle apps as background processes for around 5 minutes (more commands will be run after starting the gradle apps, and then the job will finish). They execute just fine on my Ubuntu machine using nohup:
nohup gradle app1 > nohup1.out 2>&1 &
nohup gradle app2 > nohup2.out 2>&1 &
...
Running these commands does not require pressing interrupt or Enter, so I can run multiple gradle applications in the background in a row and start interacting with them.
However, today I learned that the GitLab runner cancels all child processes, making nohup useless in a GitLab CI job.
Is there a workaround so that I can run multiple gradle jobs in the background inside a GitLab job?
I tried using the tool at but it did not provide the same functionality as nohup did.
To background a job you do not need nohup; you can simply put & at the end of a command to 'background' it.
As a simple example:
test_job:
  image: python:3.9-slim
  script:
    - python -m http.server &  # Start a server on port 8000 in the background
    - apt update && apt install -y curl
    - echo "contacting background server..."
    - curl http://localhost:8000
And this works. The job will exit (closing the background jobs as well) after the last script: step is run.
Be sure to give enough time for your apps to start up before attempting to reach them.
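If a fixed pause is not reliable, one option (a sketch, assuming the server stays on the default port 8000) is to add a small retry loop as an extra script: line before the real request:
# wait until the background server answers before contacting it
until curl -s http://localhost:8000 >/dev/null; do sleep 1; done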

Running two scripts that use the foreground

I'm trying to fire up an instance of elasticsearch and then an instance of kibana (which needs to wait until ES is up) using a script. I can't just do ./bin/elasticsearch && ./bin/kibana or anything similar, because the first command runs in the foreground, which means the second command won't run. What's the best way to do this while ensuring kibana only starts once ES is up and running?
If you have no way to tell when ES is up, I can only suggest:
./bin/elasticsearch & sleep 10 && ./bin/kibana
where you have to guesstimate how much time it will take to be ready.
Assuming ./bin/elasticsearch blocks the command line until it is 'up', you can just use a ';' between the commands to run them one after the other.
./bin/elasticsearch; ./bin/kibana
But since elasticsearch just blocks the command line until it is stopped, you could do something else. You can run it like a daemon, so it doesn't block the command line. (Here is the documentation about starting and stopping ES.)
./bin/elasticsearch -d -p PID; ./bin/kibana; kill `cat PID`
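Putting the daemon option together with a readiness check, here is a rough sketch of a wrapper script (assuming ES serves HTTP on its default port 9200 and that kibana runs in the foreground until you stop it):
#!/usr/bin/env bash
set -e

./bin/elasticsearch -d -p es.pid      # start ES as a daemon, writing its PID to es.pid
trap 'kill "$(cat es.pid)"' EXIT      # stop ES when this script exits

# wait until ES answers on its HTTP port before starting kibana
until curl -s http://localhost:9200 >/dev/null; do
    sleep 1
done

./bin/kibana                          # foreground; ES is cleaned up when kibana exits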

How can I run a shell script when booting up?

I am configuring an app at work which is hosted on an Amazon Web Services server.
To get the app running you have to run a shell script called "Start.sh".
I want this to be done automatically after booting up the server.
I have already tried the following bash script in the User Data section (which runs on boot):
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help
Scripts provided through User Data are only executed the first time the instance is started. (Officially, they are executed once per instance ID.) This is done because the normal use-case is to install software, which should only be done once.
If you wish something to run on every boot, you could probably use cloud-init's per-boot scripts feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
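For example, a minimal sketch under the assumption that cloud-init's per-boot directory is available at /var/lib/cloud/scripts/per-boot/ on the instance: drop an executable wrapper script there (the file name below is just an example) and it will be run on every boot:
#!/bin/bash
# saved as /var/lib/cloud/scripts/per-boot/start-app.sh and marked executable (chmod +x)
cd /home/ec2-user/app_name/
sh Start.sh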

Jenkins job not stopping after running a remote script

I am building an executable jar file using Jenkins and copying it to a remote server. After the copying, I need to run the jar file on the remote server. I am using the SSH Plugin to execute the remote script.
The remote script looks like this:
startServer.sh
pkill -f MyExecutable
nohup java -jar /home/administrator/app/MyExecutable.jar &
Jenkins is able to execute the script file, but it is not stopping the job after the execution. It keeps the process running and continues showing its log in the Jenkins console. This is creating problems, since these never-ending jobs block other jobs from executing.
How can I stop the job once the script has executed?
Finally, I was able to fix the problem. I am posting it for others' sake.
I used ssh -f user@server ...
This solved my problem.
ssh -f root@${server} sh /home/administrator/bin/startServer.sh
I ran into a similar issue using the Publish Over SSH Plugin. For some reason Jenkins wasn't stopping after executing the remote script. Ticking the below configuration fixed the problem.
SSH Publishers > Transfers > Advanced > Exec in pty
Hope it helps someone else.
I got the solution for you, my friend.
Make sure to add usePty: true in the pipeline you are using; it enables the execution of sudo commands that require a tty (and can possibly help in other scenarios too).
sshTransfer(
    sourceFiles: "target/*.zip",
    removePrefix: "target",
    remoteDirectory: "'/root/'yyyy-MM-dd",
    execTimeout: 300000,
    usePty: true,
    verbose: true,
    execCommand: '''
        pkill -f MyExecutable
        nohup java -jar /home/administrator/app/MyExecutable.jar &
        echo $! >> /tmp/jenkins/jenkins.pid
        sleep 1
    '''
)

start-stop-daemon weird behaviour

I'm creating a Pallet crate for elasticsearch. I was stuck on the service not starting; however, after looking at the logs, it seems it isn't really anything to do with Pallet. I am using the elasticsearch apt package for 1.0, which includes an init script. If I run sudo service elasticsearch start, then ES starts with no problems. If Pallet does this for me, it records standard out as having started it successfully:
start elasticsearch
* Starting Elasticsearch Server
...done.
However, it is not actually running:
sudo service elasticsearch status
* elasticsearch is not running
I messed around with the init script and found that if I add sleep 1 after starting the daemon, it works correctly with Pallet:
start-stop-daemon --start -b --user "$ES_USER" -c "$ES_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS
#this sleep will allow it to work
#sleep 1
log_end_msg $?
I don't understand what is going on here. Can anyone explain it?
I've seen issues like this before, too. It generally comes down to expecting something to have finished before the script finishes, which may not always happen with services, since they fork off background tasks that may still get killed when the SSH connection is terminated.
For these kinds of things you should use Pallet's built-in code for running things under supervision. This also has the advantage of making it very easy to switch from plain init.d to runit or daemontools later, which is especially useful for Elasticsearch, because it's a JVM process and nearly any JVM will eventually crash if you let it run long enough.
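If you do stay with the init script, the race can be made less fragile than a blind sleep 1 by waiting for the daemon to actually come up. A rough sketch (assuming, as in the script above, that the started process ends up writing its PID to $PID_FILE):
# wait up to ~10 seconds for the daemon to come up instead of sleeping blindly
for i in $(seq 1 10); do
    if [ -s "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
        break
    fi
    sleep 1
done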
