Is forking in a daemon still necessary when using systemd?

I created a script which is supposed to run as a daemon, controlled by systemd. I came across ancient questions like What is the reason for performing a double fork when creating a daemon? and ancient documentation, which suggests that daemons should fork to detach from a terminal.
But in 2020, using systemd, all of this seems obsolete to me. As far as I understand (with support from https://jdebp.eu/FGA/unix-daemon-design-mistakes-to-avoid.html), there is no need to detach from any terminal, no need to avoid zombie processes etc. The whole forking-and-exiting only makes sense to me if I want to start the daemon manually from a terminal and not with systemd.
Am I right or is there still any benefit from forking inside a daemon and exiting the parent?

You are correct. With systemd there is no need to fork at all; the service manager takes care of running your process in the background and supervising it. It even saves the main PID, which you can access in ExecStop=... as $MAINPID:
ExecStop=/bin/kill $MAINPID
If your daemon has the ability to fork itself, you can use it with the forking service type:
[Service]
Type=forking
But if your daemon doesn't already have a forking mechanism, don't implement one; it is useless under systemd.
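For reference, a minimal non-forking unit could look like the following sketch (the description, binary path, and the --no-daemonize flag are placeholders, not taken from the question):
[Unit]
Description=Example daemon kept in the foreground

[Service]
Type=simple
# --no-daemonize is a hypothetical flag; use whatever keeps your daemon in the foreground
ExecStart=/usr/local/bin/mydaemon --no-daemonize
ExecStop=/bin/kill $MAINPID

[Install]
WantedBy=multi-user.target
With Type=simple, systemd tracks the main PID itself and already sends SIGTERM on stop, so the ExecStop= line is only needed when the default stop behavior isn't what you want.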
Note that from the command line you can always use & to start it in the background. That's explicit, and people can clearly understand how it works.
Another point: many people would use a PID file to save that identifier and use it to kill the process on a stop. A PID file was also useful to prevent the administrator from starting a second instance of the same service. Again, systemd takes care of that: you can have at most one instance of any given service.

Related

Launchd flow of spawning new processes

I'm doing some research about the way launchd loads its services from plist files under /Library/LaunchDaemons/ or via the command launchctl load.
So far I've managed to gather various sources and compose the following vague picture as I understand it:
Upon service loading (launchctl load), the launchctl process sends launchd an appropriate XPC message, and launchd then forks a new process that initially runs in the context of xpcproxy.
This generic process waits for another XPC call from launchd before taking on its real process context according to the LaunchDaemon plist.
Does this explanation sound right? Perhaps somebody can help me make it more accurate?
Thanks
This is actually a bit more complicated. The kernel is composed of two parts, BSD and the Mach kernel; the latter is responsible for memory management and process scheduling.
Each Mach process has one or more Mach tasks (really task port rights!). When an application is first launched, it has just one right, the bootstrap port, allowing communication with launchd. Note that a task port right is uni-directional, so a launching process that has the right to communicate with launchd must give launchd a right to communicate back to it.
When an XPC message is received by launchd, the action it takes depends upon the Launch Daemon in question. The message may be for a service, reachable through a network port, that may or may not currently be running. If it is running, launchd forwards any arguments from the calling process to the running service. If not, it can provide the service on demand by launching the process first.
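As an illustrative aside (not part of the original answer), this on-demand behavior is what a client triggers through XPC. A rough C sketch of such a client, where the service name com.example.helper is made up, might look like this (compile with clang on macOS):

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>
#include <xpc/xpc.h>

int main(void) {
    // Connect to a launchd-managed Mach service; launchd may start it on demand.
    xpc_connection_t conn =
        xpc_connection_create_mach_service("com.example.helper", NULL, 0);

    xpc_connection_set_event_handler(conn, ^(xpc_object_t event) {
        // Replies and errors (e.g. XPC_ERROR_CONNECTION_INVALID) arrive here.
        char *desc = xpc_copy_description(event);
        printf("event: %s\n", desc);
        free(desc);
    });
    xpc_connection_resume(conn);

    // Send a simple dictionary message; launchd routes it to the service.
    xpc_object_t msg = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_string(msg, "request", "ping");
    xpc_connection_send_message(conn, msg);

    dispatch_main();  // keep the process alive to receive events
}

If com.example.helper is declared in a LaunchDaemon plist with a MachServices entry, launchd can spawn the corresponding daemon the first time such a message arrives.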
More specifically you asked about launchctl load. Since the source code for launchd is no longer open source, the next best resource is the reverse engineering work by Jonathan Levin; Author of Mac OS X and iOS Internals and more recently, his newer self-published books on *OS Internals.
You'll find his slides about launchd here, but probably more useful to you is his version of launchctl, jlaunchctl which is open source.
Finally, if you want to view content of XPC messages between processes, disable SIP and use Jonathan's invaluable XPoCe tool.

runit call script with start/stop

I have some SysVinit scripts with start/stop actions that are used for remote server deployment. Since I am now using runit for another deployment purpose and don't want to duplicate the scripts (for maintenance reasons): is it possible for runit to invoke these scripts, or is there another approach? Thank you in advance.
It is unlikely that you can use these scripts unmodified with runit. Runit expects the service started by the run script to remain in the foreground and not exit.
A SysVinit script expects the opposite behavior. Because SysVinit does not perform any process monitoring (and the lack of process monitoring is presumably why you are switching to runit), its scripts expect services to put themselves in the background, and the scripts themselves exit after starting the service.
These are two fundamentally incompatible models.
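If you do adapt them, the usual pattern is a small run script that execs the daemon in the foreground instead of calling the SysVinit script; a minimal sketch, assuming your daemon has some flag to stay in the foreground (the --foreground flag and paths here are hypothetical), would be:
#!/bin/sh
# /etc/sv/myservice/run -- runit starts and supervises whatever this execs
exec /usr/local/bin/mydaemon --foreground
In practice that often means extracting the actual daemon invocation from the start) branch of the existing script's case statement and dropping the backgrounding.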
You could consider using systemd instead of runit, which provides good process monitoring while also being able to follow processes that fork.

In Windows 7, how to send a Ctrl-C or Ctrl-Break to a separate process

Our group has long running processes which run daily. The processes are typically started at 9pm on any given day and run until 7pm the next day. Thus they typically run 22hrs/day. They are started by scheduled tasks on servers under a particular generic user ID, and they start and run regardless of whether or not that user ID is logged on. Thus, they are windowless console executables.
The tasks orchestrate computations running on a large server farm. Generally these controlling tasks run uninterrupted for the full 22hrs/day. However, we often have a need to stop and restart these processes. Because they control a multitude of tasks running on our server farm, it is important that they be shut down cleanly, so that they can stop and shut down all the server farm processes. Which brings me to our problem.
The controlling process has been programmed to respond to Ctrl-C and Ctrl-Break signals. This works fine when the process is started manually in a console, where we have access to the console window and can "type" Ctrl-C or Ctrl-Break. However, as mentioned, the processes typically run as windowless scheduled tasks, so we cannot "type" anything into a non-existent console window. Because they are console processes that execute without a logon session, they also must be able to execute in a completely windowless environment. So, how do we set up the process to listen for a shut-down signal?
While the process does indeed listen for a ctrl-C and ctrl-break signal, I can see no way to send that signal to a process. This seems to be a fundamental problem in Windows, or am I wrong? I am aware of SendSignal.exe, but so far have been unable to get it to work. It fails as follows:
>SendSignal 26320
Sending signal to process 26320...
CreateRemoteThread failed with 0x00000005.
StartRemoteThread failed with 0x00000005.
0x00000005 == Access is denied.
Trying "taskkill" without -F results in:
>taskkill /PID 24840
ERROR: The process with PID 24840 could not be terminated.
Reason: This process can only be terminated forcefully (with /F option).
All other "kill" functions kill the process immediately rather than sending a signal.
One possible workaround would be file-watch based: the process could watch for a modification to a specific file and shut down when it sees one. But this is a hack, and we would prefer to do it with proper signaling. Has anyone solved this issue? It seems to be such basic functionality, and it is certainly trivial to do in a Unix environment. Surely Microsoft has provided SOME mechanism to allow clean shutdown of a windowless executable?
I am aware of the thread below, whose question is virtually identical (save for the explanation of why the answer is necessary, i.e. why one needs to be able to do this for a windowless, console-less process), but there is no answer there except for "use SendSignal", which, as I said, does not work for us:
Can I send a ctrl-C (SIGINT) to an application on Windows?
There are other similar questions, but no answers as yet.
Any help appreciated.
[Upgrading #Anon's comment to an answer for visibility]
windows-kill worked perfectly and resolved the access-denied failures we hit with SendSignal. Of course, it still has to be run by a suitably privileged user.
windows-kill also supports both Ctrl-C and Ctrl-Break signals.
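As an aside (not from the original answer), one standard Win32 technique for delivering Ctrl-C to another console process, and not necessarily what windows-kill does internally, is to attach to the target's console and call GenerateConsoleCtrlEvent. A rough C sketch of a small helper, which only works if the target actually has a (possibly hidden) console, is:

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

// Usage: ctrlc <pid>  -- sends Ctrl-C to the console the target process is attached to.
int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: ctrlc <pid>\n"); return 1; }
    DWORD pid = (DWORD)atoi(argv[1]);

    FreeConsole();                      // detach from our own console, if any
    if (!AttachConsole(pid)) {          // attach to the target's console
        fprintf(stderr, "AttachConsole failed: %lu\n", GetLastError());
        return 1;
    }
    SetConsoleCtrlHandler(NULL, TRUE);  // ignore the Ctrl-C in this helper itself
    // Process group 0 = every process attached to this console receives the event.
    GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0);
    return 0;
}

Like SendSignal and windows-kill, this has to run with sufficient rights over the target process.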

Start and monitor multiple instances of one process in Windows

I have a Windows application of which I need multiple instances running, with different command line parameters. The application is quite unstable and tends to crash every 48 hours or so.
Since manually checking for failures and restarting after one isn't something I love doing, I want to write a "manager program" for this. It would launch the program (all its instances) and then watch them; if a process crashes, it would be restarted.
In Linux I could achieve this with fork()s and pids, but this obviously is not available in Windows. So, should I try to implement a CreateProcess version or is there a better way?
When you call CreateProcess, you are returned a handle to the new process in the hProcess member of the process information struct that you pass to CreateProcess. You can use this handle to detect when the process terminates.
For instance, you can create another thread and call WaitForSingleObject(hProcess) and block until the process terminates. Then you can decide whether or not to restart it.
Or you could call GetExitCodeProcess(hProcess, &exitcode) and test exitcode. If it has the value STILL_ACTIVE, then your process has not terminated. This approach based on GetExitCodeProcess requires polling.
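A rough sketch of that approach (the child.exe command line is just a placeholder) could look like this:

#include <stdio.h>
#include <windows.h>

// Launch a child process and restart it whenever it terminates.
int main(void) {
    wchar_t cmdline[] = L"child.exe --instance 1";  // placeholder; CreateProcessW needs a writable buffer

    for (;;) {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;

        if (!CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                            0, NULL, NULL, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }

        // Block until the child exits; no polling required.
        WaitForSingleObject(pi.hProcess, INFINITE);

        DWORD exitcode = 0;
        GetExitCodeProcess(pi.hProcess, &exitcode);
        fprintf(stderr, "child exited with %lu, restarting...\n", exitcode);

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        Sleep(1000);  // brief delay so a crash loop doesn't spin
    }
}

For several instances you can run one such loop per instance on its own thread, or keep all the process handles and use WaitForMultipleObjects to learn which one exited.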
If it can be run as a daemon, the simplest way to ensure it keeps running is the Non-Sucking Service Manager (NSSM).
It allows applications that were not designed as services to run as Win32 services. It will monitor them and restart them if necessary. And the source code is included, in case any customization is needed.
All you need to do is define each of your instances as a service, with the required parameters, and it will do the rest.
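For example, defining one instance would look roughly like this (service name, path, and arguments are placeholders, and the exact nssm syntax may vary slightly between versions):
nssm install MyAppInstance1 C:\apps\myapp.exe --config instance1.cfg
nssm start MyAppInstance1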
If you have some kind of security policy limitation and can't use third-party tools, then coding will be necessary. The answer from David Heffernan points you in the appropriate direction.
Or it can be done in batch, VBScript, or JScript without needing anything beyond what ships with the system; the WMI Win32_Process class should allow you to handle it.

Ruby script restart itself after being killed with signal 137

I'm doing a test on an Amazon micro instance where I'm running a web server and a crawler written in Ruby (using the Mechanize gem) on the same machine. The purpose of the test is to find the maximum load a micro instance can handle (school project). However, the system (Ubuntu) keeps killing my Ruby crawler with exit status 137 when memory reaches its maximum. The memory fills up because of the web server (MySQL, more precisely), not because of the crawler, yet the crawler is what gets killed. Therefore I would like either to prevent the system from killing my Ruby script, or to restart the script automatically when it is killed. Is that possible? I don't want to use a different instance since I'm on the free tier (and I don't want to pay for it).
I found a solution here on stackoverflow:
How do I ensure a process is running, even if it kills itself? (it needs to be restarted then)
but from what I understand there, it will keep starting the script over and over, without it even needing to be killed first. Am I right? (I can't comment on the original question because of lack of reputation.) I understand it's a bad workaround, but I have no idea how to solve it differently.
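For what it's worth, the minimal alternative to a cron-based check is a wrapper loop that only restarts the crawler after it has actually exited; a sketch (crawler.rb is a placeholder name) would be:
#!/bin/sh
# Restart the crawler whenever it exits, e.g. after being killed with SIGKILL (status 137).
while true; do
    ruby crawler.rb
    echo "crawler exited with status $?, restarting in 5 seconds" >&2
    sleep 5
done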
Thank you very much for your answers.
