Systemd for embedded devices - boot

I want to port some upstart scripts to systemd, but I want full control of the services that get started (and their order). I have stripped down the units folder and written my own from the beginning. Things get started in the desired order, but I can see attempts to load some default units, like:
[0.791694] systemd[1]: systemd-journald.socket: Failed to load configuration: No such file or directory
[0.792422] systemd[1]: syslog.target: Failed to load configuration: No such file or directory
[0.793083] systemd[1]: var.mount: Failed to load configuration: No such file or directory
[0.793677] systemd[1]: var-run.mount: Failed to load configuration: No such file or directory
How can I get rid of these? Can anyone point me to a guide for using systemd on embedded systems? I searched but couldn't find the right one.
I also need control over cgroups and over when filesystems get mounted.

These errors appear because the unit names are still referenced by some target unit under Wants=, even though you removed their unit files. Cleaning up these dependencies will get rid of the messages.
There is no guide specific to embedded systems. systemd initially targeted desktop systems, but it can be used across the board.
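To track down where a stale unit is still referenced, you can grep the remaining unit files and the .wants/.requires symlink directories. A minimal sketch, assuming the usual /etc and /lib unit locations (adjust for your image):

grep -r "syslog.target" /etc/systemd/system /lib/systemd/system
ls /etc/systemd/system/*.wants /lib/systemd/system/*.wants

If a name is pulled in by a target you'd rather not edit, masking it points the name at /dev/null so systemd stops trying to load a real file:

systemctl mask var.mount var-run.mount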

Related

What is systemd equivalent of upstart "on stopping" and "on started"

I am translating my upstart scripts to systemd units and I am wondering what the best practice is for translating the following:
start on started postgresql
stop on stopping postgresql
Is the Requires= directive the right one for me, or is there a better one?
As for start on and start on started, according to the upstart documentation:
6.33 start on
This stanza defines the set of Events that will cause the Job to be automatically started.
...
6.33.2 Start depends on another service
start on started other-service
So in your case, start on started postgresql means it needs to start after postgresql has successfully started because it depends on it.
In systemd that would be:
[Unit]
Requires=postgresql.service
After=postgresql.service
Because according to the systemd.unit man page:
After=,Before= ... If a unit foo.service contains a setting Before=bar.service and both units are being started, bar.service's start-up is delayed until foo.service is started up. [...] After= is the inverse of Before=, i.e. while After= ensures that the configured unit is started after the listed unit finished starting up, Before= ensures the opposite, that the configured unit is fully started up before the listed unit is started.
...
Requires= Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units fails to activate, and an ordering dependency After= on the failing unit is set, this unit will not be started. [...] If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated.
As for stop on and stop on stopped, according to upstart:
6.34 stop on
This stanza defines the set of Events that will cause the Job to be automatically stopped if it is already running.
...
6.34.3 Stop after dependent service
stop on stopped other-service
The After=postgresql.service mentioned above has you covered because, again, according to the systemd.unit man page:
After=,Before= [...] Note that when two units with an ordering dependency between them are shut down, the inverse of the start-up order is applied. i.e. if a unit is configured with After= on another unit, the former is stopped before the latter if both are shut down.
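Putting the two directives together, a minimal sketch of the translated unit (myapp and its ExecStart path are placeholders for your own service):

[Unit]
Description=My app, started after PostgreSQL
Requires=postgresql.service
After=postgresql.service

[Service]
ExecStart=/usr/bin/myapp

[Install]
WantedBy=multi-user.target

With this, starting the unit pulls in postgresql.service first, and on shutdown the ordering is reversed, mirroring both the start on started and stop on stopping behaviour.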

Tomcat as a Service

I need to write a shell script in which I need to bounce the Tomcat server (it could be on anyone's system). Hence, I wanted to know: how should I check whether Tomcat was started as a service with "service tomcat6 start" or with the script "./bin/startup.sh"?
If this is for a production server: assume that it's always started as a service. If you find out that it isn't: find the person who started it from the shell and fire them.
Hard words, but on production systems: hands off, keep them operating according to a standard. If you automate the bouncing (restart), this is what you do.
Dangers when starting through startup.sh: the process will be started as whatever user executes the script, potentially lacking write permissions to the log and temp files, or ruining things for the next start through service tomcat start, when the service can't access those files any more.
Thinking of it: it might be a good idea to check (at least) the identity of the current user in startup.sh (or setenv.sh) and terminate if it's not the expected one, effectively forbidding startup.sh from ever being run as the wrong user, root included.
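A minimal sketch of such a guard, assuming the dedicated service account is named tomcat (adjust for your installation); since setenv.sh is sourced by catalina.sh, the exit aborts the whole startup:

# Refuse to start unless we are the dedicated service user.
EXPECTED_USER=tomcat
if [ "$(id -un)" != "$EXPECTED_USER" ]; then
  echo "Refusing to start Tomcat as '$(id -un)'; expected '$EXPECTED_USER'." >&2
  exit 1
fi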

Managing resource allocation using custom cgroups in system running systemd

I have an application which runs as user "foo". I would like to set limits on the CPU and memory this application can use at run time. I can't work out how to achieve this with systemd tools like "systemctl" and "set-property", because this application is not a service but rather an application which a user can select and start.
Can someone please provide any guidance?
Resource management happens at the cgroup level; a service, for example, gets its own cgroup. systemd is just the intermediary between you and the kernel: you tell systemd how many resources your service should get, and systemd propagates that to the kernel by arranging the cgroup hierarchy.
A few things to check:
1) You can play with the resource management of your application's service. To find the service of your application I would use systemd-cgls (see the sketch after this list).
2) Make sure the necessary control groups are enabled on your system, such as the cpu, cpuacct, and memory cgroups.
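For example, once systemd-cgls has shown which unit owns the application's processes, limits can be set on that unit. The unit name and values here are only illustrative; on newer systemd with cgroup v2 the properties are CPUWeight= and MemoryMax= instead:

systemd-cgls
systemctl set-property myapp.service CPUShares=512 MemoryLimit=256M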
If you have any other specific question, just shoot.
If you wish to start a process from the command line, systemd-run --scope can be used. The process will be run as a scope unit:
:/home/foo# systemd-run --scope dummy_bin &
[1] 201
:/home/foo# Running as unit run-201.scope.
Then systemctl can be used on this scope unit, like:
:/home/foo# systemctl status run-201.scope
   Loaded: loaded (/run/systemd/system/run-201.scope; static)
  Drop-In: /run/systemd/system/run-201.scope.d
   Active: active (running) since Thu 1970-01-01 01:01:01 UTC; 10s ago
   CGroup: /system.slice/run-201.scope
           `-201 /usr/bin/dummy_bin
The resource management of the scope unit can then be adjusted with systemctl set-property.
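For instance, to cap the scope started above (values illustrative; MemoryLimit= is the cgroup-v1 property, MemoryMax= its cgroup-v2 successor):

systemctl set-property run-201.scope CPUQuota=25% MemoryLimit=256M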

Apache SparkUI on local mode

I'm new to Apache Spark. I'm currently using it in IntelliJ IDEA 14, via the Maven project method. I'm unable to access the Spark UI after my program stops, because of
15/10/22 23:28:27 INFO SparkUI: Stopped Spark web UI at http://192.168.1.88:4041
The Spark UI stops too fast for me to review what my program does. Is there any way to review the Spark UI for a particular program after it has stopped? Or is there some kind of log, created while the program runs, that I could look at?
After setting "spark.eventLog.enabled" to true and "spark.eventLog.dir" to a local folder, I was able to produce files (local-1445955378567) that show up as Unix executables. However, the details of the log are not what I expected. How do I add a SparkListener?
(Screenshot: output of the "local-1445955378567" event log file.)
You need to either keep the UI alive (by keeping your context alive) or start a history server, since the UI is tied to the context. See http://spark.apache.org/docs/latest/monitoring.html
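A minimal sketch of the history-server route, using the standard Spark properties (the event-log directory is illustrative and must exist):

# conf/spark-defaults.conf
spark.eventLog.enabled           true
spark.eventLog.dir               file:/tmp/spark-events
spark.history.fs.logDirectory    file:/tmp/spark-events

# from the Spark installation directory:
./sbin/start-history-server.sh

The finished application's UI is then served at http://localhost:18080, reconstructed from the event log.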

Host bundle fails to start on Karaf restart when two fragments are present

I am using Karaf 3.0.1 and have two fragment bundles A and B that attach to a host bundle C. I am able to install A, then B, then C, and start C and everything works fine.
When I stop and start Karaf though, the host usually has a failure and does not start successfully. Both fragments are listed as "Resolved" and show as being attached to the host and the host shows it is attached to the fragments, but the host has a state of "Failure". The exception in the log file is:
20140507 07:35:39.011 [ERROR] FelixStartLevel | 19:org.apache.aries.blueprint.core |
org.apache.aries.blueprint.container.BlueprintContainerImpl | Unable to start
blueprint container for bundle <host bundle name>
org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to load
class <class from 2nd fragment> from recipe BeanRecipe[name='<bean ID from second
fragment blueprint XML>']
If I deploy only a single fragment, then I'm able to restart Karaf and the host starts fine. But with two fragments, Karaf will restart OK perhaps once or twice, but then fails and never successfully starts again. I played with start levels, and giving the host a higher/later start level doesn't help at all.
I read "When is an OSGi fragment attached to host?", which seems to make it clear that start levels don't affect resolution order, and saw the suggestion to use Provide-Capability/Require-Capability headers. I tried that and see the same behavior, although again with a single fragment it works fine.
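For illustration, the capability wiring I tried looks roughly like this (the namespace and attributes are made-up placeholders): each fragment provides a capability and the host requires both, so the host should not resolve until the fragments can attach.

In fragment A's MANIFEST.MF:
Provide-Capability: com.example.fragment; part=A
In fragment B's MANIFEST.MF:
Provide-Capability: com.example.fragment; part=B
In host C's MANIFEST.MF:
Require-Capability: com.example.fragment; filter:="(part=A)",
 com.example.fragment; filter:="(part=B)"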
What else might I do to get this to work? Might there be a bug in Karaf/Felix regarding multiple fragments for the same host on restart?
Yes, I'd rather not use fragments, but I am porting a fairly complex Java EE app to OSGi and this is the approach that works given the code base I have. If I can't depend on things starting correctly when Karaf starts, though, this won't be workable.
Thanks,
Kevin
