What is the systemd equivalent of upstart "on stopping" and "on started"?

I am translating my upstart scripts to systemd units and I am wondering about the best practice for translating the following:
start on started postgresql
stop on stopping postgresql
Is the Requires= directive the right one for me, or is there a better option?

start on and start on started, according to the upstart documentation:
6.33 start on
This stanza defines the set of Events that will cause the Job to be automatically started.
...
6.33.2 Start depends on another service
start on started other-service
So in your case, start on started postgresql means it needs to start after postgresql has successfully started because it depends on it.
In systemd that would be:
[Unit]
Requires=postgresql.service
After=postgresql.service
Because according to the systemd.unit man page:
After=,Before= ... If a unit foo.service contains a setting Before=bar.service and both units are being started, bar.service's start-up is delayed until foo.service is started up. [...] After= is the inverse of Before=, i.e. while After= ensures that the configured unit is started after the listed unit finished starting up, Before= ensures the opposite, that the configured unit is fully started up before the listed unit is started.
...
Requires= Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units fails to activate, and an ordering dependency After= on the failing unit is set, this unit will not be started. [...] If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated.
As for stop on and stop on stopped, according to the upstart documentation:
6.34 stop on
This stanza defines the set of Events that will cause the Job to be automatically stopped if it is already running.
...
6.34.3 Stop after dependent service
stop on stopped other-service
The After=postgresql.service mentioned above has you covered because, again, according to the systemd.unit man page:
After=,Before= [...] Note that when two units with an ordering dependency between them are shut down, the inverse of the start-up order is applied. i.e. if a unit is configured with After= on another unit, the former is stopped before the latter if both are shut down.
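Putting it all together, a minimal sketch of such a unit (the unit name myapp.service, the description and the ExecStart path are illustrative, not taken from the question):
[Unit]
Description=My app, started after PostgreSQL and stopped with it
Requires=postgresql.service
After=postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
With Requires= plus After=, explicitly stopping postgresql.service also stops this unit, and the inverse shutdown ordering makes it stop before postgresql.service goes down, which together mirror stop on stopping postgresql.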

Related

Gradle: Clean up resources after build failure

I execute a test suite through Gradle for the build and it spins up a lot of processes on different ports. Also, failFast is set to true for my test task. So, the following happens when I execute my suite:
Suite starts up and spins up processes/servers listening to different ports
Tests in the suite are executed
When one or more tests fails, the suite execution is halted and the build is marked as failed
Now, when failing tests are fixed and the build is eventually run, step 1 (described above) fails with the message that the port is already in use. Also, I am using forkEvery parameter, meaning the previous tests might have more than one JVM running.
Is there any way to clean everything up (in terms of processes, not physical files) when a build fails in Gradle?
You can add a custom TestListener that stops the processes/servers from (1).
For reference, see Spring Boot's FailureRecordingTestListener: https://github.com/spring-projects/spring-boot/blob/master/buildSrc/src/main/java/org/springframework/boot/build/testing/TestFailuresPlugin.java#L57..L95
The basic idea is that in the afterSuite method you stop whatever processes were started/created in (1). Note that within the TestListener you don't have access to the test instances that started those processes, so you'll need to figure out how to stop them without a reference to the original class.
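A minimal sketch of such a listener (Java, e.g. in buildSrc; the package name and the stopTestServers() teardown are assumptions about how the processes from (1) can be stopped from the outside). It would be registered on the test task with test.addTestListener(new ServerCleanupListener()):
package buildlogic;

import org.gradle.api.tasks.testing.TestDescriptor;
import org.gradle.api.tasks.testing.TestListener;
import org.gradle.api.tasks.testing.TestResult;

public class ServerCleanupListener implements TestListener {

    @Override
    public void beforeSuite(TestDescriptor suite) { }

    @Override
    public void beforeTest(TestDescriptor test) { }

    @Override
    public void afterTest(TestDescriptor test, TestResult result) { }

    @Override
    public void afterSuite(TestDescriptor suite, TestResult result) {
        // The root suite is the one with a null parent; at that point all tests
        // (including every forked JVM) are done, whether the run passed or was
        // halted by failFast.
        if (suite.getParent() == null) {
            stopTestServers();
        }
    }

    private void stopTestServers() {
        // Assumption: stop whatever step (1) started, e.g. via pid files, a shutdown
        // endpoint, or by killing whatever still listens on the known ports. The
        // listener has no reference to the test instances, so the teardown has to
        // work without one.
    }
}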

Does Windows sc order queries in any fashion?

Services are being managed using an NSIS installer. On uninstallation those services are stopped using net stop, because it is synchronous, then flagged for deletion using sc delete, as they do not have to be deleted immediately/synchronously.
Now I am wondering about the installation process. The order of calls is as follows:
net stop service1
net stop service2
sc config service1 depend=dependency1
sc config service2 depend=dependency2
sc start service1
sc start service2
Is there an intrinsic order to queries passed to sc? Are they worked through in the order sc is called (I assume they are not)? Are they being delegated to the respective service and queued there (this is what I hope for), e.g. whether service1 or service2 is stopped and configured first is ambiguous, but the sc config, sc start order is not? Is the order entirely ambiguous?
In addition I am curious to know what happens when mixing net and sc calls. Assume the following order:
net stop service
sc config service
net start service
Is it reasonable to assume that the service would likely be stopped, then started before any configuration occurred?
I suppose the general question is: how do I ensure proper service setup via concatenated sc/net calls? The required order:
Stop service (it may or may not exist on system)
Create service / configure service
Start service
Uninstallation is less pressing, because stopping services and flagging them for deletion is sufficient.
Mixing sc and net calls should not be a problem because it is the SCM (Service Control Manager) process that controls the services; other applications simply ask the SCM to perform a specific operation.
The documentation for the ChangeServiceConfig API function states that:
If the configuration is changed for a service that is running, with the exception of lpDisplayName, the changes do not take effect until the service is stopped.
This leads me to believe that an installer can use the following sequence:
Install/Configure --> Stop (synchronous) --> Start.
Performing "Stop --> Configure --> Start" is always going to have a race condition issue because another process might trigger a service start at the wrong time.

When will new value for CPUAffinity= in systemd come into effect?

If I change CPUAffinity= or CPUQuota= in a systemd unit configuration file such as postgresql#.service, when will the new settings come into effect? In particular, will I have to restart the service in order to see the service's processes executing on the intended CPUs and will they run there under guarantee?
According to the testing that I have just done (why, oh why, does the documentation not clarify this!), changing the CPUAffinity requires a reboot.
I tried changing the value and then
restarting processes - no effect
systemctl daemon-reload - no effect
systemctl daemon-reexec - no effect
reboot
Only the reboot effected the change to CPUAffinity.
Tested on CentOS 7.
Also, for those finding the documentation lacking: CPU numbering starts at zero, you can specify ranges (such as 1-3), and multiple values may be given either space- or comma-delimited.
You just need to reload the configuration (systemctl daemon-reload) and then restart the service.
There's no need to reboot the system as starfry suggests.
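For example, a sketch using an override drop-in (the unit name, file path and values are illustrative; systemctl edit postgresql.service would create such a drop-in for you):
/etc/systemd/system/postgresql.service.d/override.conf
[Service]
CPUAffinity=1-3
CPUQuota=50%
Then apply it:
systemctl daemon-reload
systemctl restart postgresql.service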

Integration test execution should wait until server is ready

I have written Selenium tests which should be executed during the build process of a web application. I am using the maven-failsafe-plugin to execute the integration tests and the tomcat7-maven-plugin to start up a Tomcat server in the pre-integration-test phase; after the execution of the tests it gets stopped in the post-integration-test phase. This works fine.
The problem is that the tomcat server is caching some data when started up to improve the search speed. Some of my tests rely on that data, so the integration tests should wait for the server to finish caching the data.
How can I make that happen?
I added a progress bar to show the loading progress. Once the loading is complete, the progress bar is no longer rendered and the data table is rendered instead. This way I can add this line of code to the tests which depend on the data table being loaded:
longWait.until(ExpectedConditions.presenceOfElementLocated(By.id("dataTablePanel")));
Additionally, I am using org.junit.runners.Suite as a runner so that I can specify the order in which my test classes are executed. That way I can run the tests which do not rely on the data first and then the ones which need it. To ensure that the data is present without checking it in every test case, I have created a test class which only checks the presence of the data and is executed before all test cases which depend on it.
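A sketch of that "data is present" test class (JUnit 4, Selenium 3 style WebDriverWait constructor; the class name, the SharedDriver helper and the 300-second timeout are assumptions):
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class DataLoadedIT {

    // Driver shared with the rest of the suite, created elsewhere (assumption).
    private final WebDriver driver = SharedDriver.get();

    @Test
    public void dataTableIsEventuallyRendered() {
        // Once the server has finished caching, the progress bar disappears and
        // dataTablePanel is rendered, so waiting for that element is enough.
        WebDriverWait longWait = new WebDriverWait(driver, 300);
        longWait.until(ExpectedConditions.presenceOfElementLocated(By.id("dataTablePanel")));
    }
}
Listing this class first in the org.junit.runners.Suite lets the later test classes take the cached data for granted.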

Managing resource allocation using custom cgroups in system running systemd

I have an application which runs as user "foo". I would like to set limits on the CPU and memory that this application can use at run time. I cannot figure out how this can be achieved using systemd tools like systemctl and set-property, as this application is not a service but rather an application which a user can select and start.
Can some one please provide any guidance?
Resource management is done at the cgroup level; a service, for example, gets its own cgroup. systemd is just the intermediary between you and the kernel: you tell systemd how many resources your service should get, and systemd propagates that to the kernel by adjusting the cgroup hierarchy.
A few things to check:
1) You can tune the resource management of your application's service. To find the service your application belongs to, I would use systemd-cgls.
2) Make sure you have the necessary control groups enabled on your system, like the cpu, cpuacct, and memory cgroups.
If you have any other specific question, just shoot.
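As a concrete sketch for a service (foo.service and the limits are only examples), once systemd-cgls has shown you which unit the application runs in:
systemd-cgls
systemctl set-property foo.service CPUQuota=50% MemoryMax=512M
MemoryMax= is the cgroup-v2 property; on older cgroup-v1 systems the corresponding property is MemoryLimit=.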
If you wish to start a process from the command line, systemd-run --scope can be used. The process will be run as a scope unit:
:/home/foo# systemd-run --scope dummy_bin &
[1] 201
:/home/foo# Running as unit run-201.scope.
Then systemctl can be used on this scope unit, e.g.:
:/home/foo# systemctl status run-201.scope
Loaded: loaded (/run/systemd/system/run-201.scope; static)
Drop-In: /run/systemd/system/run-201.scope.d
Active: active (running) since Thu 1970-01-01 01:01:01 UTC; 10s ago
CGroup: /system.slice/run-201.scope
`-201 /usr/bin/dummy_bin
Resource management for the scope unit can then be realized with systemctl set-property.
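For example (the values are illustrative; add --runtime if the change should not persist across reboots):
systemctl set-property run-201.scope CPUQuota=25% MemoryMax=256M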
