After the Jenkins job completes, the service also goes down - shell

I have run into a problem.
I used Jenkins to install haproxy and start the service, but after the job completes, the executor is freed and the haproxy daemon also disappears.
If I add a sleep 30 after the service starts, haproxy stays alive for those 30 seconds, but after that the daemon goes down.

This behaviour is by design, as explained in the ProcessTreeKiller documentation. To avoid daemons spawned by the Jenkins build being terminated, add
export BUILD_ID=dontKillMe
to the beginning of your shell step.
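For example, a minimal sketch of such an "Execute shell" step; the install and start commands below are placeholders for whatever your job actually runs:
#!/bin/bash
# keep the ProcessTreeKiller from killing processes spawned by this build
export BUILD_ID=dontKillMe
# placeholder steps: install haproxy and start it as a service
sudo yum install -y haproxy
sudo service haproxy start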

Related

CentOS 7: created service to run shell script in an infinite loop

I have the following script:
#!/bin/bash
while true
do
#code
sleep 60
done
I then wanted to create a service so that the script is launched as a service when the machine starts.
I created my.service at /etc/systemd/system/my.service:
[Unit]
Description=my Script
[Service]
Type=forking
ExecStart=/bin/script.sh
[Install]
WantedBy=multi-user.target
The problem occurs when I run systemctl start my.service:
it goes into the while true loop and hangs there. How can I run this service and make it run in the background?
According to the systemd specification (systemd.service), Type=forking is not quite the correct kind of start-up in your case:
If set to forking, it is expected that the process configured with
ExecStart= will call fork() as part of its start-up. The parent
process is expected to exit when start-up is complete and all
communication channels are set up. The child continues to run as the
main service process, and the service manager will consider the unit
started when the parent process exits. This is the behavior of
traditional UNIX services. If this setting is used, it is recommended
to also use the PIDFile= option, so that systemd can reliably identify
the main process of the service. systemd will proceed with starting
follow-up units as soon as the parent process exits.
Type=simple is likely the correct one; you can try it:
If set to simple (the default if ExecStart= is specified but neither
Type= nor BusName= are), the service manager will consider the unit
started immediately after the main service process has been forked
off. It is expected that the process configured with ExecStart= is the
main process of the service. In this mode, if the process offers
functionality to other processes on the system, its communication
channels should be installed before the service is started up (e.g.
sockets set up by systemd, via socket activation), as the service
manager will immediately proceed starting follow-up units, right after
creating the main service process, and before executing the service's
binary. Note that this means systemctl start command lines for simple
services will report success even if the service's binary cannot be
invoked successfully (for example because the selected User= doesn't
exist, or the service binary is missing).
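A minimal sketch of the corrected unit, assuming /bin/script.sh is the looping script above (make sure it is executable and starts with a shebang); with Type=simple, systemd itself keeps the script running in the background, so nothing needs to fork:
[Unit]
Description=my Script

[Service]
Type=simple
ExecStart=/bin/script.sh

[Install]
WantedBy=multi-user.target
After editing the unit, run systemctl daemon-reload and then systemctl start my.service again.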

Laravel Horizon supervisor does not restart after horizon:terminate with Forge daemon running

Horizon has been running fine, but recently, after a deploy, supervisor and the queue workers do not start back up again, and the Horizon GUI shows "Inactive".
To get them running again I can:
restart the daemon worker from within forge
restart the supervisor /etc/init.d/supervisor restart
My deploy script has php artisan horizon:terminate within it. I have also tried reset/purge and a combination thereof.
When I run terminate from the command line with Horizon inactive, it seems to do nothing. When I run the same command with Horizon active, it shuts it down, but the daemon does not reboot supervisor.
The daemon runs without any errors throughout all of this.
Should terminate take the service down and bring it back up, or is that done by the daemon itself?
Running horizon:terminate will kill the daemon; when the daemon is killed, supervisor will notice and boot up a new one. You can see this clearly if you monitor your server with htop while running the terminate command.
If a long-running job is in progress, the current job will run until it finishes. Terminate is generally there to reboot the process so you can be certain the new code is loaded into Horizon; it should be run after the last step in Envoyer or a similar deployment tool.
It sounds like something is wrong in your setup. Does the Horizon process run before you call terminate (again, check htop)? And what happens when the command is called manually?
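For reference, a sketch of a deploy script along those lines; the site path and build commands are placeholders for whatever your Forge/Envoyer deployment actually does, with terminate kept as the last step:
#!/bin/bash
cd /home/forge/example.com
git pull origin master
composer install --no-interaction --prefer-dist --optimize-autoloader
php artisan migrate --force
# last step: ask Horizon to shut down so supervisor boots a fresh process with the new code
php artisan horizon:terminate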

Ubuntu run service in foreground

I've made a (docker) container for ddclient.
The problem is that I'm having trouble running that service in the foreground so that the docker container keeps running.
I've managed to keep the container running by adding a bash at the end of the script, but this is hackish, since the actual process it should be watching is ddclient.
Another way I found was to tail -f the log file, but if the service stops, the container will keep running instead of stopping.
Q: So is there any (easy) way to keep a service running in the foreground?
The problem with the process (any process) running in a container is signal management: you need to make sure the SIGKILL and other signals are properly communicated to the right process(es) in order to successfully stop/remove a container (and not leave zombie processes: see "PID 1 zombie reaping issue")
One option is to at least make your service write to a log file:
ENTRYPOINT ["/bin/sh", "-c"]
CMD yourProcess > log
That should keep it in foreground, as suggested in "How do I bring a daemon process to foreground?".
For a service, try using phusion/baseimage-docker as a base image, since it manages additional services properly.
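A hedged sketch of that base-image approach: phusion/baseimage-docker runs services with runit, so ddclient would get an executable run script at /etc/service/ddclient/run while the image's CMD stays /sbin/my_init (baseimage's init). The ddclient path and foreground flag below are assumptions; check which no-daemon option your ddclient version supports:
#!/bin/sh
# /etc/service/ddclient/run -- runit keeps this in the foreground and restarts it if it exits
# NOTE: the -foreground flag and binary path are assumptions; verify them for your ddclient version
exec /usr/sbin/ddclient -foreground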

Running Go as a daemon webserver on CentOS 7

I am trying to migrate from PHP to Go and am planning to drop nginx altogether. But I don't know how to run the Go HTTP webserver as a daemon in the background, and I also don't know how to automatically start the webserver after a reboot, or how to kill the process.
With nginx all I do is
$ systemctl start nginx.service
$ systemctl restart nginx.service
$ systemctl stop nginx.service
$ systemctl enable nginx.service
$ systemctl disable nginx.service
This is very convenient, but it seems like I can't do this with a Go HTTP server. I have to compile it and run it like any other Go program. What solutions exist for these concerns?
This is less of a Go question and more of a Systems Administration question. There are ways to add a command to systemd (like in this blog post).
Personally, I prefer to keep my applications separate from my services, so I tend to use supervisord for programs that need to be started, stopped, or restarted frequently. The documentation for supervisord is pretty straightforward, but essentially you create a config file that describes the services you want to run: the command used to run each one (such as /path/to/go/binary -flag), how you want to handle starting, stopping, failure recovery, logging, etc.
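For illustration, a minimal supervisord program entry, assuming the compiled binary lives at /path/to/go/binary; the names are placeholders, and where the file goes varies by distro (on CentOS 7 it is typically an .ini file under the directory named in supervisord's [include] section):
; mygoserver.ini -- placeholder name
[program:mygoserver]
; the binary and flag from the example above
command=/path/to/go/binary -flag
; start with supervisord and restart the process if it crashes
autostart=true
autorestart=true
stdout_logfile=/var/log/mygoserver.log
stderr_logfile=/var/log/mygoserver.err
After adding it, supervisorctl reread followed by supervisorctl update should pick it up.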

Start daemon on remote server via Jenkins SSH shell script: daemon exits mysteriously

I have a build job on Jenkins that builds my project and, when it is done, opens an SSH session to a remote server, transfers files, and then stops and starts a daemon.
When I stop and start the daemon from the command line on the RHEL server, it executes just fine. When the job executes in Jenkins, there are no errors.
The daemon stops fine and it starts fine. But shortly after starting, the daemon dies suddenly.
sudo service daemonName stop
# transfer files.
sudo service daemonName start
I'm sure that the problem isn't pathing
Does anyone know what could be special about the way Jenkins is executing the ssh shell script that would cause the daemon start to not fully complete?
The problem:
When executing a build through Jenkins, the command to start the daemon process was clearly executing successfully, yet after the build job finished, the daemon would suddenly quit.
The solution:
I thought this whole time that it was Jenkins killing the daemon, so I tried many different incarnations and permutations of disabling the ProcessTreeKiller that goes through and cleans up zombie child processes. I tried fooling it by resetting the BUILD_ID environment variable. Nothing worked.
Thanks to this thread I found out that that solution only works for child processes spawned on the build machine, i.e. it was not applicable to my problem.
More searching led me here: Run a persistent process via ssh
The solution? Nohup.
So now the build successfully restarts the daemon by executing the following:
sudo nohup service daemonName start
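Put together, and assuming the build step runs plain ssh/scp commands, the relevant part of the shell script looks roughly like this (host, file names and service name are placeholders; the output redirection keeps the ssh session from staying attached to the detached daemon):
ssh deploy@remote-host 'sudo service daemonName stop'
scp build/app.tar.gz deploy@remote-host:/opt/app/
ssh deploy@remote-host 'sudo nohup service daemonName start >/dev/null 2>&1'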
Jenkins watches for processes spawned by the job and kills them to avoid zombie processes.
See https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
The workaround is to override the BUILD_ID environment variable:
BUILD_ID=dontKillMe

Resources