How to start systemd user service when device is removed, and stop it when device is inserted - systemd

Systemd allows starting/stopping a service from a udev rule using the SYSTEMD_USER_WANTS environment variable and the StopWhenUnneeded option. But then the service is started when the device is inserted and stopped when the device is removed. What I need is the reverse:
start service when device removed
stop service when device inserted
Since it is a user service, running 'systemctl start/stop ...' from a udev rule fails.

The udev rules for this question are:
..., ACTION=="add", RUN+="/usr/bin/su USER -c 'systemctl --user stop my-service'"
..., ACTION=="remove", RUN+="/usr/bin/su USER -c 'systemctl --user stop my-service'"
The important points are:
Instead of SYSTEMD_WANTS/SYSTEMD_USER_WANTS, the service should be started/stopped with systemctl, because the desired start/stop does not match device add/remove.
To start/stop the service as another user, su + systemctl --user is used.
The program passed to the RUN udev key must either live in /usr/lib/udev or be given as an absolute path (see man udev). A fuller sketch of the rules file follows below.
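For reference, a complete pair of rules might look like the sketch below, e.g. in /etc/udev/rules.d/99-toggle-my-service.rules. The vendor/product IDs and USER are placeholders and must be adapted to the actual device and account; ENV properties are matched rather than ATTRS because sysfs attributes may no longer be readable on a remove event.
SUBSYSTEM=="usb", ENV{ID_VENDOR_ID}=="1234", ENV{ID_MODEL_ID}=="5678", ACTION=="add", RUN+="/usr/bin/su USER -c 'systemctl --user stop my-service'"
SUBSYSTEM=="usb", ENV{ID_VENDOR_ID}=="1234", ENV{ID_MODEL_ID}=="5678", ACTION=="remove", RUN+="/usr/bin/su USER -c 'systemctl --user start my-service'"
Depending on the distribution, the su invocation may additionally need XDG_RUNTIME_DIR=/run/user/<uid> exported so that systemctl --user can find the user manager.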

Related

User systemd service restarting only when SSH-ing into the machine

I have a strange situation with a web service hosted on a Debian instance: it sometimes stops and does not restart automatically. However, when SSH-ing into the machine, the service seems to restart automatically.
I originally wanted the service to always be up and to restart on failure. Could you help me figure out what's wrong? I may have misunderstood how systemctl --user services are meant to run.
The service in question is a Rails application running with passenger standalone, but I believe the problem might just be a misconfiguration in the systemd file.
My systemd file
# .config/systemd/user/my_service.service
[Unit]
Description=passenger with rails server for my_service (production)
After=syslog.target network.target
[Service]
Type=forking
PrivateTmp=yes
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
MemoryAccounting=true
MemoryLimit=3584M
Restart=always
RestartSec=1
TimeoutStopSec=30
KillMode=mixed
StandardInput=null
SyslogIdentifier=%p
# Environment
Environment="RAILS_ENV=production"
Environment="NODE_ENV=production"
[Install]
WantedBy=default.target
I have copied this file and installed the service using
systemctl --user daemon-reload
systemctl --user enable my_service
Was I meant to use something else, like systemctl --global enable unit? I want my service to run as the "outscale" user that installs the service (otherwise my version manager asdf does not work as expected).
I found the solution to my problem there. I had misunderstood the behavior of the --user flag (vs. using the User= property in the service file).
I was running Debian 11 and, as stated in the mentioned answer, my service would not necessarily shut down after logging out of SSH, but only at some later point (it is not clear whether that happened when my service crashed for the first time or through some sort of garbage collection).
And the service would boot up again magically when SSH-ing into the instance, as a reaction to the user login starting all of that user's services.
So the fix was to reimplement the service using User= and without the --user flag, making it a system-wide service.
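For illustration, a minimal sketch of the reworked system-wide unit; the paths, commands and user name are taken from the question, and the remaining options from the original file can be carried over unchanged:
# /etc/systemd/system/my_service.service
[Unit]
Description=passenger with rails server for my_service (production)
After=network.target
[Service]
Type=forking
User=outscale
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
Restart=always
Environment="RAILS_ENV=production"
[Install]
WantedBy=multi-user.target
It is then enabled system-wide with sudo systemctl daemon-reload followed by sudo systemctl enable --now my_service.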

Intermittent failure executing on USB insert with auto-mount on Raspberry Pi

I've set up a raspberry pi to execute a command if a USB stick is inserted, and the command calls an executable on the stick.
This works about 80% of the time, but intermittently fails, seemingly at random. Because of the unpredictability I assume this is a race condition; however, I'm not too familiar with where the risks are, as I've pieced together the approach from information online. Most of the information comes from here.
The USB stick is auto-mounted with the following entry in /etc/fstab. I'm aware of the risk of /dev/sda1 changing but that does not appear to be the problem here:
/dev/sda1 /media/usb vfat defaults,rw,nofail,user,umask=000 0 0
A service waits for the USB to mount with the following configuration
[Unit]
Description=USB Mount Trigger
Requires=media-usb.mount
After=media-usb.mount
[Service]
ExecStart=/script.sh
[Install]
WantedBy=media-usb.mount
media-usb.mount comes from systemctl list-units -t mount, and /script.sh calls the USB stick's executable.
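The contents of /script.sh are not shown in the question; a minimal placeholder of the shape described (run.sh is a hypothetical name for the executable on the stick) would be:
#!/bin/sh
# /script.sh - run the executable shipped on the mounted stick
exec /media/usb/run.sh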
In failure cases, where the USB's executable is not called, I see the following from systemctl status service_name:
Nov 15 22:49:14 raspberrypi systemd[1]: Dependency failed for USB Mount Trigger.
Nov 15 22:49:14 raspberrypi systemd[1]: service_name.service: Job service_name.service/start failed with result 'dependency'.
In these cases if I execute systemctl list-units -t mount I do not see media-usb.mount and my USB stick is not mounted to /media/usb.
I think an error/race condition in service_name.service is causing the USB mount to die, because (I believe) a successful mount is required to trigger the service. If the USB is never inserted, systemctl status service_name simply reports Active: inactive (dead), so something is triggering the service to try to execute.

How to properly override generated systemd unit file to start after a ZFS mount has mounted

I'm using Ubuntu 18.04.4 LTS which uses systemd, but the squid package packaged with this version of Ubuntu is configured to start via init.d. It starts and runs via systemctl start squid.service if I start it manually after the system has booted.
However, I'm using a ZFS mount point ("/media") to store the cache data, and during the boot process squid is starting before this mount point is active. Consequently I'm getting the error "Failed to verify one of the swap directories". Full output of systemctl status squid is here
I'd like to tell systemd to wait until after media.mount has completed in the most minimally invasive way possible (e.g. without modifying the /etc/init.d/squid file that is maintained by the package). To that end I created the /etc/systemd/system/squid.service.d/override.conf file like so:
% cat /etc/systemd/system/squid.service.d/override.conf
[Unit]
Wants=network.target network-online.target nss-lookup.target media.mount
After=network.target network-online.target nss-lookup.target media.mount
[Install]
WantedBy=multi-user.target
But squid is still starting too early.
Is what I want to do possible? Or do I have to bite the bullet and define a native /etc/systemd/system/squid.service file and remove the /etc/init.d/squid init script?
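One way to check whether the drop-in is actually being merged into the generated unit (standard systemctl commands; the exact output will vary):
systemctl daemon-reload
systemctl cat squid.service
systemctl show squid.service -p Wants -p After
If media.mount does not appear in the After= list reported there, the drop-in is not being applied to the generated unit.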

Trigger event on AWS EC2 instance stop/terminate

Is there some way to trigger an event (e.g. running a script to push some logs to S3) when an EC2 instance is stopped/terminated?
I have looked into triggering the script using a service in /usr/lib/systemd/system but I haven't had any luck with that yet. I have heard that networking capabilities on the instance can be shut down before a service is triggered and, if true, that could be why the script is not executing correctly.
So the answer is not really AWS specific, but it is working for me now (tested on EC2 instance stopping and terminating).
I've created a systemd service file:
/usr/lib/systemd/system/my_shutdown.service
[Unit]
Description=my_shutdown Service
Before=shutdown.target reboot.target halt.target
Requires=network-online.target network.target
[Service]
KillMode=none
ExecStart=/bin/true
ExecStop=/path/to/my_script.sh
RemainAfterExit=yes
Type=oneshot
[Install]
WantedBy=multi-user.target
Added this service to multi-user.target:
systemctl enable my_shutdown.service
Alternatively you can manually create the symlink:
ln -s /usr/lib/systemd/system/my_shutdown.service /etc/systemd/system/multi-user.target.wants/my_shutdown.service
Started the service and tested by stopping/terminating the instance.
systemctl start my_shutdown.service
My understanding:
Description: a description of our service.
Before: we want our service to stop before these targets are started.
Requires: our service requires that network capabilities are available. These targets must not be stopped before our service starts/stops.
KillMode: none; do not kill our process.
ExecStart: /bin/true; a command that does nothing but return success. Run when our service is started.
ExecStop: the script to run. Run when our service is stopped.
RemainAfterExit: consider our service active even when all its processes exited.
Type: oneshot; it is expected that the process has to exit before systemd starts follow-up units.
WantedBy: the target we want to add our service to.
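For illustration, the script referenced by ExecStop= could be as simple as the sketch below; the bucket name and log path are placeholders, and it assumes the AWS CLI is installed and the instance profile allows s3:PutObject:
#!/bin/bash
# /path/to/my_script.sh - copy application logs to S3 before the instance goes down
aws s3 cp /var/log/my_app s3://my-log-bucket/$(hostname)/ --recursive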
References:
https://www.freedesktop.org/software/systemd/man/systemd.service.html
https://www.freedesktop.org/software/systemd/man/systemd.kill.html#
https://www.freedesktop.org/software/systemd/man/systemd.special.html
https://www.freedesktop.org/software/systemd/man/systemd.target.html
You can also trigger events, such as pushing logs to S3, on specific instance state changes with CloudWatch. Learn more here: https://aws.amazon.com/cloudwatch/

Start OpenSSH sshd automatically on the BeagleBone Black

Does anybody know how to start sshd automatically on the BeagleBone Black? I've replaced dropbear with OpenSSH. The standard systemctl enable sshd doesn't work, but strangely systemctl start sshd does. I'm quite new to systems with systemd replacing init, so hopefully I'm not just missing something trivial or simple. The BeagleBone Black in question is running Angstrom Linux and is using the opkg package manager. OpenSSH was installed with opkg install openssh. When I run systemctl enable sshd@.service, I get the following message:
The unit files have no [Install] section. They are not meant to be enabled
using systemctl.
Possible reasons for having this kind of units are:
1) A unit may be statically enabled by being symlinked from another unit's
.wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
3) A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
The version I have installed is OpenSSH_6.0p1, OpenSSL 1.0.1e 11 Feb 2013
Apparently systemd has both an sshd.service unit and an sshd.socket unit, which are intended to be mutually exclusive. With the socket unit, the sshd daemon is started and connected automatically by systemd every time an outside process connects to port 22.
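If socket activation is indeed the mechanism in use here, the usual step would be to enable and start the socket unit rather than the service (a sketch; the exact unit names can be checked with systemctl list-unit-files):
systemctl enable sshd.socket
systemctl start sshd.socket
After that, per-connection sshd@.service instances are spawned on demand and do not need an [Install] section of their own, which matches the message quoted above.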
