runit call script with start/stop - bash

I have some SysVinit scripts with start/stop actions that are used for remote server deployment. I am now using runit for another deployment, and I don't want to duplicate the scripts (for maintenance reasons). Is it possible for runit to invoke these scripts, or is there another approach? Thank you in advance.

It is unlikely that you can use these scripts unmodified with runit. Runit expects the service started by its run script to remain in the foreground and not exit.
A SysVinit script expects the opposite behavior. Because SysVinit does not perform any process monitoring (and that lack of monitoring is presumably why you are switching to runit), its scripts expect services to put themselves in the background, and the scripts themselves exit after starting the service.
These are two fundamentally incompatible models.
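For comparison, a typical runit run script looks something like this (a minimal sketch; myservice and its --no-daemon flag are placeholders for your own binary and whatever option keeps it in the foreground):

#!/bin/sh
# runit executes this script and supervises the resulting process.
# exec replaces the shell so runit supervises the daemon directly;
# the daemon must stay in the foreground and never daemonize itself.
exec /usr/local/bin/myservice --no-daemon 2>&1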
You could consider using systemd instead of runit, which provides good process monitoring while also being able to follow processes that fork.

Related

Is forking in a daemon still necessary when using systemd?

I created a script which is supposed to run as a daemon, controlled by systemd. I came across ancient questions like What is the reason for performing a double fork when creating a daemon? and ancient documentation, which suggests that daemons should fork to detach from a terminal.
But in 2020, using systemd, all of this seems obsolete to me. As far as I understand (with support from https://jdebp.eu/FGA/unix-daemon-design-mistakes-to-avoid.html), there is no need to detach from any terminal, no need to avoid zombie processes etc. The whole forking-and-exiting only makes sense to me if I want to start the daemon manually from a terminal and not with systemd.
Am I right or is there still any benefit from forking inside a daemon and exiting the parent?
You are correct. Daemonization is now handled entirely by the systemd environment, so there is really no need to do anything in that arena. systemd even tracks the main PID, which you can reference in ExecStop= as $MAINPID:
ExecStop=/bin/kill $MAINPID
If your daemon already has a forking capability, you can use it with the forking service type:
[Service]
Type=forking
But if your daemon has no forking mechanism, don't implement one. It's useless.
Note that from the command line you can always append & to start a process in the background. That's explicit, and people can clearly see how it works.
Another point: many daemons would write a PID file to save that identifier and use it to kill the process on stop. The PID file was also useful to prevent the administrator from starting a second instance of the same service. Again, systemd takes care of that: there can be at most one instance of any given service.
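Putting this together, a minimal unit file for a non-forking daemon might look like this (a sketch; the description, path, and daemon name are placeholders, and ExecStop= can usually be omitted since systemd sends SIGTERM by default):

[Unit]
Description=Example daemon

[Service]
# Type=simple is the default: the process stays in the foreground
# and systemd supervises it directly.
Type=simple
ExecStart=/usr/local/bin/mydaemon --no-daemon
# Optional; shown only to illustrate $MAINPID.
ExecStop=/bin/kill $MAINPID

[Install]
WantedBy=multi-user.target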

Spring Boot: Starting multiple services using shell script (.sh)

I'd like to write a shell script for a backend server using Spring Boot (v2.1.1) that starts multiple microservices in a certain order, since some services depend on others being up.
What is the 'best practice'?
Of course I could run the .jars like this (original post):
#!/bin/bash
java -jar myjar1.jar &
java -jar myjar2.jar &
java -jar myjar3.jar &
But this would start the .jars simultaneously, as far as I know.
How can I ensure that a certain service myjar1.jar has started properly before another service myjar2.jar is started? Since every service is a SpringBootApplication, I assume there are ways to do this.
I read this SO solution, but I don't want to create any symlinks, because I only need this for development purposes.
When a service counts as "started" is very specific to that service.
At the process level, the service is "running" as soon as you execute the command, so you will need the service itself to expose its state once it is actually up.
One way I can think of: in your script, start the service, have it expose a health API, and poll that API until it reports up; only then move on to the next one. You can do this with the curl and sleep commands in your script, as sketched below.
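For example (a sketch that assumes each service exposes the Spring Boot Actuator health endpoint on the port shown; the jar names come from the question, the ports are placeholders):

#!/bin/bash
# Start a jar in the background, then poll its health endpoint
# until it responds before starting the next one.
start_and_wait() {
    local jar="$1" port="$2"
    java -jar "$jar" &
    until curl -sf "http://localhost:${port}/actuator/health" > /dev/null; do
        sleep 2
    done
}

start_and_wait myjar1.jar 8081
start_and_wait myjar2.jar 8082
start_and_wait myjar3.jar 8083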
But I would like to know why you want to do that. Microservices in particular should not depend on one another starting in a fixed order. They may need data from each other, but they should be resilient to services coming and going. You should have a very strong reason to do as you are doing, because in a real-world environment it is very difficult to ensure that order is maintained.

Running batch applications on Cloudfoundry: using tasks instead of long-running processes

I would like to run a batch application (that is a short lived process that should not be restarted) on Pivotal CloudFoundry.
I am not sure how to do that. My current batch app is restarted repeatedly by Pivotal CF.
It seems there's a new CF primitive called a task - as opposed to a long-running process. Tasks are supposed to be available on CF 1.7 (see https://stackoverflow.com/a/35512113/536299).
I was neither able to find relevant information in the CF documentation nor to figure out which version of Pivotal CF is currently running...
Can someone please help?
I just got relevant information regarding short-lived/one-off processes on CF. It currently seems to be very difficult to run short-lived/one-off processes on CF.
This will change when CF v3's tasks become generally available.
Here is the information I was given:
Batch jobs are a little tricky on PWS and PCF because at the moment
the platform expects your application to continue running forever.
Even if the app exits successfully, the platform considers it to have
crashed and will restart it. There is support in v3 of the platform
for one-off tasks like batch jobs, so this will get easier in the
future. For now, what you need to do is to make the app run forever.
One option is to add a loop to the main method in the app, the loop
would essentially run the batch job, pause for some set amount of time
and repeat indefinitely.
So the bottom line is: wait for CF v3's tasks.
See here for documentation about tasks: http://v3-apidocs.cloudfoundry.org/version/release-candidate/index.html#tasks
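In the meantime, the run-forever workaround quoted above can also be applied without touching the Java code, by using a shell wrapper as the application's start command (a sketch; batch-app.jar and the one-hour pause are placeholders, and this assumes your start command can point at a script):

#!/bin/bash
# Re-run the batch job on a fixed interval so the process never
# exits and the platform does not treat it as crashed.
while true; do
    java -jar batch-app.jar
    sleep 3600
done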

Start and monitor multiple instances of one process in Windows

I have a Windows application of which I need multiple instances running, with different command line parameters. The application is quite unstable and tends to crash every 48 hours or so.
Since manually checking for failures and restarting crashed instances isn't something I want to do, I want to write a "manager program" for this. It would launch the program (all of its instances) and then watch them. If a process crashes, it would be restarted.
In Linux I could achieve this with fork()s and pids, but this obviously is not available in Windows. So, should I try to implement a CreateProcess version or is there a better way?
When you call CreateProcess, you are returned a handle to the new process in the hProcess member of the process information struct that you pass to CreateProcess. You can use this handle to detect when the process terminates.
For instance, you can create another thread and call WaitForSingleObject(hProcess, INFINITE) to block until the process terminates. Then you can decide whether or not to restart it.
Or you could call GetExitCodeProcess(hProcess, &exitcode) and test exitcode. If it has the value STILL_ACTIVE then your process has not terminated. Note that this approach based on GetExitCodeProcess necessitates polling.
If the application can be run as a daemon, the simplest way to ensure it keeps running is the Non-Sucking Service Manager (NSSM).
It allows applications that were not designed as services to run as Win32 services. It will monitor them and restart them if necessary. And the source code is included, in case any customization is needed.
All you need to do is define each of your instances as a service, with the required parameters, and it will do the rest.
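For example, registering and starting two instances from an elevated prompt might look like this (a sketch; the service names, path, and --instance flag are placeholders):

nssm install MyApp1 "C:\path\to\app.exe" --instance 1
nssm install MyApp2 "C:\path\to\app.exe" --instance 2
nssm start MyApp1
nssm start MyApp2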
If you have some kind of security policy limitation and can't use third-party tools, then coding will be necessary. The answer from David Heffernan gives you the appropriate direction.
It can also be done in batch, VBS, or JS without needing anything beyond what ships with the system; the WMI Win32_Process class should allow you to handle it.

Windows Service exits when calling an child process using _execv()

I have a C++ Windows application that was designed to run as a Windows service. It periodically executes an updater to check whether there is a new version; the updater is launched with _execv(). The updater looks for new versions, downloads them, stops the Windows service (all of these actions are logged), replaces the files, and starts the service again. Run in CLI mode (not as a service), all of this works fine. According to my log files, when running as a service the child process is launched, but the parent process (the Windows service) exits.
Is it even "allowed" to launch child processes from a Windows service, and why does the service exit unexpectedly? My log files show no error (I am even monitoring for segfaults etc., which would be written to the log).
Why are you using _execv() rather than doing it the Windows way and using CreateProcess()?
I assume you've put some debugging into your service and you aren't getting past the point where you call _execv()?
_execv replaces the existing process with a new one running the file you pass as the parameter. Under Unix (and similar systems) that's handled directly and natively. Windows, however, doesn't support that directly -- so it's emulated by having the parent process exit after arranging for a child process to be started as soon as it does.
In other words, it sounds like _execv is doing exactly what it's designed to do -- but in this case, it's probably not what you really want. You can spawn a process from a service, but you generally want to use CreateProcessAsUser to create it under a specified account instead of the service account (which has a rather unusual set of rights assigned to it). The service process should then exit and restart only when asked to by the service manager, i.e. when your updater calls ControlService, CreateService, etc.
