How to properly override a generated systemd unit file to start after a ZFS mount point is mounted - systemd

I'm using Ubuntu 18.04.4 LTS, which uses systemd, but the squid package shipped with this version of Ubuntu is configured to start via an init.d script. It starts and runs fine via systemctl start squid.service if I start it manually after the system has booted.
However, I'm using a ZFS mount point ("/media") to store the cache data, and during the boot process squid is starting before this mount point is active. Consequently, I'm getting the error "Failed to verify one of the swap directories". Full output of systemctl status squid is here.
I'd like to tell systemd to wait until after media.mount has completed, in the least invasive way possible (e.g. without modifying the /etc/init.d/squid file that is maintained by the package). To that end I created the /etc/systemd/system/squid.service.d/override.conf file like so:
% cat /etc/systemd/system/squid.service.d/override.conf
[Unit]
Wants=network.target network-online.target nss-lookup.target media.mount
After=network.target network-online.target nss-lookup.target media.mount
[Install]
WantedBy=multi-user.target
But squid is still starting too early.
Is what I want to do possible? Or do I have to bite the bullet and define a native /etc/systemd/system/squid.service file and remove the /etc/init.d/squid init script?
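One thing worth checking, offered as a hedged suggestion rather than a confirmed fix: if the dataset is mounted by ZFS itself (via its mountpoint property and zfs-mount.service) rather than through /etc/fstab, there may be no media.mount unit for systemd to order against at all, which would make the Wants=/After=media.mount lines effectively no-ops. systemctl list-units --all --type=mount shows whether such a unit exists. In that case a drop-in that orders squid after the ZFS mount services might look like this sketch:
% cat /etc/systemd/system/squid.service.d/override.conf
[Unit]
# Order after ZFS has mounted its datasets (adjust if /media comes from fstab instead)
Requires=zfs-mount.service
After=zfs-mount.service zfs.target
followed by systemctl daemon-reload. Whether the sysv-generated squid unit actually honors the drop-in can be verified with systemctl cat squid.service.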

Related

Starting an opensplice publisher via systemd does not publish data

I have an opensplice publisher on Ubuntu 20.04 that is started via systemd.
If the publisher starts via systemd then the data is not published, but also no errors are reported or present in the OpenSplice log files.
The publisher works if I run it from a command line or if I stop and restart the service.
The QoS are the same for the publisher and subscriber.
The publisher and subscriber applications are running on different machines.
There are no other participants on the network. All the machines are rebooted and the order of reboot does not change the observed behaviour.
The systemd service is:
[Unit]
Description=Publisher Process
Documentation=
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
WorkingDirectory=/opt/publisher/bin
ExecStart=/opt/publisher/bin/publisher.sh
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target
The publisher.sh is:
#!/bin/bash
cd /opt/publisher/bin
source bashrc_local
# We just keep running the application (in case of a crash)
while true; do
    ./publisher
    sleep 15
done
I have a workaround that feels a little bit naff.
#!/bin/bash
cd /opt/publisher/bin
source bashrc_local
timeout 30 ./remote_processor
killall remote_processor
# We just keep running the application (in case of a crash)
while true; do
    ./publisher
    sleep 15
done
Any ideas on how I can remove my workaround?
Edit 16 Sept 22
The issue appears to be systemd start order and dependencies as I have run into the same issue with a program publishing data via UDP which is not using DDS.
Changing the dependencies so the services are started just before the user login does not help.
Check your environment variables, as systemd will not run with the same environment as your bash console.
In particular, have you set the OSPL_URI variable to point at the config?
If using the commercial version, OSPL_HOME and ADLINK_LICENSE will also need to be set.
Does the PATH variable include your OSPL shared libraries?
These are all set up by running the $OSPL_HOME/release.com script in your bash session.
I tend to manually add the required ones to the service file,
e.g.
Environment=OSPL_URI=file:///opt/ospl.xml
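As an illustration only (not a confirmed fix), the relevant part of the [Service] section might look like the sketch below; /opt/ospl and /opt/ospl.xml are placeholder paths and have to match the actual OpenSplice installation:
[Service]
Type=simple
WorkingDirectory=/opt/publisher/bin
# Environment that release.com would normally export in an interactive shell
Environment=OSPL_URI=file:///opt/ospl.xml
Environment=OSPL_HOME=/opt/ospl
Environment=LD_LIBRARY_PATH=/opt/ospl/lib
ExecStart=/opt/publisher/bin/publisher.sh
Restart=always
RestartSec=2
Alternatively, EnvironmentFile= can point at a file containing the same assignments, so the shell and the service share one definition.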

User systemd service restarting only when SSH-ing into the machine

I have a strange situation with a web service hosted on a Debian instance: it sometimes stops and does not restart automatically. However, when SSH-ing into the machine, the service seems to restart automatically.
I originally wanted the service to always be up and to restart on failure. Could you help me figure out what's wrong? I may have misunderstood how systemctl --user services are meant to run.
The service in question is a Rails application running with Passenger standalone, but I believe the problem might just be a misconfiguration in the systemd unit file.
My systemd file
# .config/systemd/user/my_service.service
[Unit]
Description=passenger with rails server for my_service (production)
After=syslog.target network.target
[Service]
Type=forking
PrivateTmp=yes
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
MemoryAccounting=true
MemoryLimit=3584M
Restart=always
RestartSec=1
TimeoutStopSec=30
KillMode=mixed
StandardInput=null
SyslogIdentifier=%p
# Environment
Environment="RAILS_ENV=production"
Environment="NODE_ENV=production"
[Install]
WantedBy=default.target
I copied this file into place and installed the service using
systemctl --user daemon-reload
systemctl --user enable my_service
Was I meant to use something else, like systemctl --global enable unit? I want my service to run as the "outscale" user that installs the service (otherwise my version manager, asdf, does not work as expected).
I found the solution to my problem there. I had misunderstood the behavior of the --user flag (vs. using the User= property in the service file).
I was running under Debian 11 and, as stated in the mentioned answer, my service would not necessarily shut down right after logging out of SSH, but only at some later point (it's not clear whether that happened when my service crashed for the first time or through some sort of cleanup of the user session).
The service would then magically boot up again when SSH-ing into the instance, as a reaction to the user login starting all of that user's services.
So the fix was to reimplement the service as a system-level unit using User=, without the --user flag, to make it a globally available service.
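As an illustration only, a sketch of what the reworked system-level unit might look like (placed under /etc/systemd/system/ instead of ~/.config/systemd/user/; the paths and the user name are simply the ones from the question):
# /etc/systemd/system/my_service.service
[Unit]
Description=passenger with rails server for my_service (production)
After=network.target
[Service]
Type=forking
User=outscale
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
Restart=always
Environment="RAILS_ENV=production"
[Install]
WantedBy=multi-user.target
It is then enabled with a plain systemctl daemon-reload and systemctl enable my_service (no --user flag). If keeping the --user setup were preferred instead, loginctl enable-linger <user> is the usual way to keep a user's services running without an active login session.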

How to Set the Correct Permissions to Launch Neo4J on AWS EC2 via Its Bash Script?

I'm trying to launch the Neo4j graph database on AWS using their AMI image (Enterprise 3.3.9).
However, Neo4j fails to launch automatically on instance startup, the way it's supposed to.
When I try to relaunch it using
systemctl restart neo4j
It also fails.
When I do
systemctl cat neo4j
I see that the unit runs the /etc/neo4j/pre-neo4j.sh file, which is apparently launched on the instance's startup and which, in turn, launches Neo4j (when everything works as intended):
[Unit]
Description=Neo4j Graph Database
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/etc/neo4j/pre-neo4j.sh
Restart=on-failure
User=neo4j
Group=neo4j
Environment="NEO4J_CONF=/etc/neo4j" "NEO4J_HOME=/var/lib/neo4j"
LimitNOFILE=60000
TimeoutSec=120
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
So I launch it manually via the bash script with sudo, and then it starts up fine.
sudo /etc/neo4j/pre-neo4j.sh
The documentation on deploying Neo4J on an AWS server doesn't mention anything about permissions if you use their image. So what can be the problem?
I don't want to have to manually launch the DB using sudo. Is it possible to resolve this problem by modifying the bash script itself?
The file /etc/neo4j/pre-neo4j.sh sets some environmental parameters and then launches neo4j via:
/usr/share/neo4j/bin/neo4j console
Based on the comments: the solution was to use
journalctl -u neo4j
to inspect the logs associated with the failed start of Neo4j. This made it possible to identify the root cause and, subsequently, to fix the issue.
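For completeness, a couple of variations that are often useful when digging through this kind of failure (the unit name neo4j is the one from the question):
# Only messages from the current boot, without the pager
journalctl -u neo4j -b --no-pager
# Follow the log live while re-running systemctl restart neo4j in another terminal
journalctl -u neo4j -f
systemctl status neo4j also shows the last few log lines together with the exit status of the failed start.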

Running systemd unit directly after local-fs.target and before basic.target

I am creating an embedded system. The embedded system mounts a partition. Directly after mounting the partition, I need to prep an encrypted folder (encfs). I need this to run before anything in multi-user.target or graphical.target.
Here is my unit file, which works by itself.
[Unit]
Description=Mx Encrypted Folder
[Service]
Type=oneshot
ExecStart=/usr/bin/mxmountencrypted
RemainAfterExit=true
ExecStop=/usr/bin/mxunmountencrypted
This unit file has no dependencies defined, currently.
Again, I need:
To run this directly after mounting file systems (local-fs.target)
Before any multi-user.target or graphical.target services, where most of the services that depend on it will be run.
It must stop fully before stopping local-fs.target, since there will be a nested mount that needs to be unmounted before systemd can unmount the partition.
I looked into using a systemd.mount unit, but it doesn't support encfs.
Based on what you have listed in the requirements:
[Unit]
Description=Mx Encrypted Folder
Requires=local-fs.target
After=local-fs.target
[Service]
Type=oneshot
ExecStart=/usr/bin/mxmountencrypted
RemainAfterExit=true
ExecStop=/usr/bin/mxunmountencrypted
[Install]
WantedBy=multi-user.target
More on systemd Unit files here: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
and systemd Service files: https://www.freedesktop.org/software/systemd/man/systemd.service.html
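If ordering strictly before basic.target is also needed (as the question title asks), one possible variation, offered as an untested sketch: services get an implicit After=basic.target through their default dependencies, so those have to be switched off and the relevant ones added back by hand:
[Unit]
Description=Mx Encrypted Folder
DefaultDependencies=no
Requires=local-fs.target
After=local-fs.target
Before=basic.target
Conflicts=shutdown.target
Before=shutdown.target
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/bin/mxmountencrypted
ExecStop=/usr/bin/mxunmountencrypted
[Install]
WantedBy=multi-user.target
The Conflicts=shutdown.target / Before=shutdown.target pair restores the usual shutdown behavior, so ExecStop= runs (and the nested mount is released) before the file systems are taken down.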

When running from systemd unit file, unable to open directory

I have a strange problem with Ubuntu 16 and a systemd unit file. I have a service which reads a directory from the local filesystem; the directory path is taken from an environment variable. Now when I start the service manually (as in: in an SSH session), everything works fine. But when I start the service with the unit file below, the service is unable to open the storage directory. The error I get is: could not read contents of storage" message="open /srv/services/poddy/storage: no such file or directory.
Now my question is: does systemd kind of "sandbox" the services?
[Unit]
Description=Poddy service
After=network.target
[Service]
Type=simple
User=myusername
Group=myusername
WorkingDirectory=/srv/services/poddy
ExecStart=/srv/services/poddy/poddy
Restart=always
RestartSec=5
StartLimitInterval=60s
StartLimitBurst=3
Environment=PODDY_STORAGE="/srv/services/poddy/storage"
Environment=PODDY_PORT=8085
[Install]
WantedBy=multi-user.target
Well, I solved it myself. It turns out that quoting the value of an environment variable in the systemd unit file effectively double-quoted the value by the time the service saw it.
So, changing this:
Environment=PODDY_STORAGE="/srv/services/poddy/storage"
into:
Environment=PODDY_STORAGE=/srv/services/poddy/storage
solved my problem :).
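A quick way to check what systemd will actually pass to the process (the unit name poddy.service is assumed here, matching the file above):
systemctl show poddy.service -p Environment
If the quotes are being carried into the value, they show up verbatim in that output.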
