Is daemon-reload called when the Unit is enabled with systemctl enable? - systemd

Question is:
When I call systemctl enable [Unit File Name], the system manager configuration is reloaded. I wonder whether systemd is doing this in the background with daemon-reload or not. The reference link I put below says the "system manager configuration is reloaded (in a way equivalent to daemon-reload)".
Reference:
https://www.freedesktop.org/software/systemd/man/systemctl.html
enable UNIT…, enable PATH…
Enable one or more units or unit instances. This will create a set of symlinks, as encoded in the "[Install]" sections of the indicated unit files. After the symlinks have been created, the system manager configuration is reloaded (in a way equivalent to daemon-reload), in order to ensure the changes are taken into account immediately.
/Br
Cadoe

That's right. Unless you specify --no-reload when calling systemctl enable/disable/reenable/mask/unmask/revert/link, systemd is reloaded the same way as if you had called systemctl daemon-reload.
There are certain exceptions where the daemon is NOT reloaded, such as when executing systemctl in a chroot, using the --root option, or working on the global scope (--global).
Relevant documentation snippets:
enable UNIT…, enable PATH…
Depending on whether --system, --user, --runtime, or --global is specified, this enables the unit for the system, for the calling user only, for only this boot of the system, or for all future logins of all users. Note that in the last case, no systemd daemon configuration is reloaded.
--root=
When used with enable/disable/is-enabled (and related commands), use the specified root path when looking for unit files. If this option is present, systemctl will operate on the file system directly, instead of communicating with the systemd daemon to carry out changes.
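To make the difference concrete, here is a sketch of the two workflows (the unit name is a placeholder; these commands require a running systemd and appropriate privileges):

```shell
# Default: symlinks are created and the manager configuration is reloaded
# automatically, equivalent to running daemon-reload afterwards.
systemctl enable example.service

# Suppress the implicit reload; changes are not picked up
# until you reload the manager configuration yourself.
systemctl enable --no-reload example.service
systemctl daemon-reload
```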

Related

Centos 7 Created service to run shell script on infinite loop

I have the following script:
while true
do
#code
sleep 60
done
I then wanted to create a service that starts at boot and launches this script:
created my.service at /etc/systemd/system/my.service
[Unit]
Description=my Script
[Service]
Type=forking
ExecStart=/bin/script.sh
[Install]
WantedBy=multi-user.target
The problem occurs when I run systemctl start my.service:
it enters the while true loop and hangs there. How can I run this service and make it run in the background?
According to the systemd documentation, Type=forking is not the correct kind of start-up in your case:
If set to forking, it is expected that the process configured with
ExecStart= will call fork() as part of its start-up. The parent
process is expected to exit when start-up is complete and all
communication channels are set up. The child continues to run as the
main service process, and the service manager will consider the unit
started when the parent process exits. This is the behavior of
traditional UNIX services. If this setting is used, it is recommended
to also use the PIDFile= option, so that systemd can reliably identify
the main process of the service. systemd will proceed with starting
follow-up units as soon as the parent process exits.
Type=simple may be the correct one. You can try it:
If set to simple (the default if ExecStart= is specified but neither
Type= nor BusName= are), the service manager will consider the unit
started immediately after the main service process has been forked
off. It is expected that the process configured with ExecStart= is the
main process of the service. In this mode, if the process offers
functionality to other processes on the system, its communication
channels should be installed before the service is started up (e.g.
sockets set up by systemd, via socket activation), as the service
manager will immediately proceed starting follow-up units, right after
creating the main service process, and before executing the service's
binary. Note that this means systemctl start command lines for simple
services will report success even if the service's binary cannot be
invoked successfully (for example because the selected User= doesn't
exist, or the service binary is missing).
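Putting that together, a corrected unit might look like this (a sketch; the description and paths are kept from the question):

```ini
[Unit]
Description=my Script

[Service]
Type=simple
ExecStart=/bin/script.sh

[Install]
WantedBy=multi-user.target
```

With Type=simple, systemd considers the unit started as soon as the process is forked off, so the infinite loop keeps running in the background instead of blocking systemctl start.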

Spring Boot application as a systemd service: log file

I know it is possible to create a jar from a Spring Boot application which can be used as a systemd service. I used this manual to create a systemd service from my application on Debian Jessie. Everything works fine, but I can't find a way to write logs to a separate file instead of /var/syslog. As the documentation says:
Note that unlike when running as an init.d service, user that runs the
application, PID file and console log file behave differently under
systemd and must be configured using appropriate fields in ‘service’
script. Consult the service unit configuration man page for more
details.
it should be configured in the *.service file, but I can't find any appropriate options. Does anyone have experience with this?
Run the service with a sh process
[Service]
ExecStart=/bin/sh -c "/var/myapp/myapp.jar >> /var/logs/myapp.log"
KillMode=control-group
See this discussion in influxdb github repo https://github.com/influxdata/influxdb/issues/4490
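On newer systemd versions the shell wrapper isn't needed: the unit itself can redirect output. This is a sketch assuming systemd 240 or later (which introduced the append: prefix) and the same example paths as above:

```ini
[Service]
ExecStart=/var/myapp/myapp.jar
# append: requires systemd >= 240; older versions only support file:,
# which truncates on restart.
StandardOutput=append:/var/logs/myapp.log
StandardError=inherit
```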

Invoke a shell script execution using nagios

Hi all. I have a script which restarts all the components (.jar files) present on the server (/scripts/startAll.sh). Whenever my server goes down, I want to invoke this script using Nagios, which is running on a different Linux server. Is it possible to do so? Kindly help on how to invoke execution of this script using Nagios.
Event Handlers
Nagios and Naemon allow executing custom scripts, both for hosts and for services entering a 'problem state.' Since your implementation is for restarting specific applications, yours will most likely need to be service event handlers.
From Nagios Documentation:
Event handlers can be enabled or disabled on a program-wide basis by
using the enable_event_handlers in your main configuration file.
Host- and service-specific event handlers can be enabled or disabled
by using the event_handler_enabled directive in your host and service
definitions. Host- and service-specific event handlers will not be
executed if the global enable_event_handlers option is disabled.
Enabling and Creating Event Handler Commands for a Service or Host
First, enable event handlers by modifying or adding the following line to your Nagios config file.
[IE: /usr/local/nagios/etc/nagios.cfg]:
enable_event_handlers=1
Define and enable an event handler on the service failure(s) that will trigger the script. Do so by adding two event_handler directives inside of the service you've already defined.
[IE: /usr/local/nagios/etc/services.cfg]:
define service{
    host_name               my-server
    service_description     my-check
    check_command           my-check-command!arg1!arg2!etc
    ....
    event_handler           my-eventhandler
    event_handler_enabled   1
}
The last step is to create the event_handler command named in step 2, and point it to a script you've already created. There are a few approaches to this (SSH, NRPE, Locally-Hosted, Remotely Hosted). I'll use the simplest method, hosting a BASH script on the monitor system that will connect via SSH and execute:
[IE: /usr/local/nagios/etc/objects/commands.cfg]:
define command{
    command_name    my-eventhandler
    command_line    /usr/local/nagios/libexec/eventhandlers/my-eventhandler.sh
}
In this example, the script "my-eventhandler.sh" should use SSH to connect to the remote system, and execute the commands you've decided on.
NOTE: This is only intended as a quick, working solution for one box in your environment. In practice, it is better to create an event handler script remotely, and to use an agent such as NRPE to execute the command while passing a $HOSTNAME$ variable (thus allowing the solution to scale across more than one system). The simplest tutorial I've found for using NRPE to execute an event handler can be found here.
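As a concrete sketch of such a script: the hostname, SSH user, and the HARD-state guard below are assumptions, and it presumes the command definition passes the Nagios state macros as arguments (e.g. command_line .../my-eventhandler.sh $SERVICESTATE$ $SERVICESTATETYPE$).

```shell
#!/bin/sh
# Hypothetical my-eventhandler.sh: restart remote components on failure.
# $1 = service state (OK, WARNING, CRITICAL, UNKNOWN)
# $2 = state type (SOFT or HARD)

# Only act on a HARD CRITICAL state, so soft failures during
# check retries don't trigger repeated restarts.
should_restart() {
    [ "$1" = "CRITICAL" ] && [ "$2" = "HARD" ]
}

if should_restart "$1" "$2"; then
    # Connect to the failing box and run the existing restart script.
    ssh nagios@my-server /scripts/startAll.sh
fi
```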
You can run shell scripts on remote hosts by snmpd using check_by_snmp.pl
Take a view to https://exchange.nagios.org/directory/Plugins/*-Remote-Check-Tunneling/check_by_snmp--2F-check_snmp_extend--2F-check_snmp_exec/details
This is a very useful plugin for nagios. I work with this a lot.
Good luck!!

Running Go as a daemon webserver on CentOS 7

I am trying to migrate from PHP to Go and am planning to drop nginx altogether. But I don't know how to run the Go HTTP web server as a daemon in the background, how to automatically start the web server after a reboot, or how to kill the process.
With nginx all I do is
$ systemctl start nginx.service
$ systemctl restart nginx.service
$ systemctl stop nginx.service
$ systemctl enable nginx.service
$ systemctl disable nginx.service
This is very convenient, but it seems I can't do this with a Go HTTP server. I have to compile and run it like any other Go program. What solutions exist for these concerns?
This is less of a Go question and more of a Systems Administration question. There are ways to add a command to systemd (like in this blog post).
Personally, I prefer to keep my applications separate from my services, so I tend to use supervisord for programs that are started, stopped, or restarted frequently. The documentation for supervisord is pretty straightforward: essentially you create a config file describing the services you want to run, the command used to run each (such as /path/to/go/binary -flag), and how you want to handle starting, stopping, failure recovery, logging, and so on.
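If you would rather keep the exact systemctl workflow you already use with nginx, a minimal unit file for a Go binary could look like this (a sketch; the binary path and unit name are placeholders):

```ini
[Unit]
Description=Go web server
After=network.target

[Service]
ExecStart=/usr/local/bin/mywebserver
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Placed at /etc/systemd/system/mywebserver.service, the same start/stop/restart/enable/disable commands from the question then apply to it.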

Apache 2 - reload config on Windows

I have a PHP script that modifies my httpd.conf file, so I need to automatically reload it in Apache.
On Linux there is graceful restart, but on Windows the restart command I use terminates all current connections. Is there a command like graceful restart on Windows? Is there a workaround for this?
Yes, you should use the -k switch.
httpd.exe -k restart or apache.exe -k restart
More info here as well: http://www.zrinity.com/developers/apache/usage.cfm
Edit:
It shouldn't; that is the point of graceful. Notice I used the -k switch; that is not the same as a normal restart. It lets the current sessions complete their tasks while the config is being reread, so the server starts taking new requests immediately.
From the documentation:
The USR1 or graceful signal causes the parent process to advise the children to exit after their current request (or to exit immediately if they're not serving anything). The parent re-reads its configuration files and re-opens its log files. As each child dies off the parent replaces it with a child from the new generation of the configuration, which begins serving new requests immediately.
http://httpd.apache.org/docs/2.2/stopping.html#graceful
It's doing what you are asking for.
Edit 2:
Adding this link, and I gave both possible versions, because some people think there is only one specific way to do something instead of searching themselves.
http://httpd.apache.org/docs/2.4/platform/windows.html#wincons
I think I'm just going to delete this answer, because either people can't read, or the answer gets a DV whenever it doesn't work for someone. There are different Windows builds made by different developers; if it doesn't work, look for the answer from them. Even Linux has different commands depending on the distro. Geez.
In the newest Apache 2.4.20 VC10 the "httpd -k restart" command actually DOES do a graceful restart. It won't drop any connections, for example if somebody is downloading something from your server, it WILL NOT interrupt this process. One more proof is that "-k restart" will not reset your server statistics that mod_status provides, won't even alter the "Restart Time" value.
Although the "httpd -k graceful" and "httpd -k graceful-stop" commands are available on Windows, they will not work, giving the error "couldn't make a socket".
