Performance issue when executing a bash script via systemd - bash

I have the following issue.
I wrote a bash script which has to perform some operations on thousands of files. The main process forks about ten times to speed the whole execution up. The script works fine, as it is supposed to.
The script has to be managed by systemd.
I created a service file which contains only:
[Unit]
Description=description
[Service]
ExecStart=script_path arguments
Type=simple
Unfortunately, the script started by systemd is much slower than when I run it directly.
What could be the root cause of this performance issue?
Is Type=simple fine for such a case?
The script is started from time to time, and the main process should wait for the child processes to end.
Apart from that, the script does not need any special treatment.
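One way to narrow this down is to compare what systemd applies to the unit with a plain transient run of the same command. This is only a rough sketch; myscript.service and "script_path arguments" are placeholders for your actual unit name and command:
# Inspect scheduling and resource-control properties systemd applies to the unit
systemctl show myscript.service -p Nice -p CPUQuotaPerSecUSec -p IOSchedulingClass
# Watch per-cgroup CPU usage while the service is running
systemd-cgtop
# Run the same command as a transient unit and let systemd report its runtime
systemd-run --unit=myscript-test --wait script_path arguments
As for Type=: Type=simple considers the service started as soon as the main process is forked, while Type=oneshot waits for it to exit. For a script that runs occasionally and waits for its own children, Type=oneshot is usually the more natural fit, but neither choice should affect execution speed.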

Related

Getting the exit code of an application launched via startx

I'm setting up Debian so that it works in kiosk mode. To do this, I created a service that performs the watchdog function:
[Unit]
Description=Watchdog for Myapp
After=getty@tty6.service
After=polkit.service
After=udisks2.service
[Service]
ExecStart=su user -c "startx /opt/myapp -- :0 vt5"
ExecStop=systemctl start xdm
Type=simple
Restart=on-failure
StartLimitInterval=60s
StartLimitBurst=5
RuntimeMaxSec=infinity
Environment="DISPLAY=:0"
Environment="XAUTHORITY=/home/user/.Xauthority"
Environment="XDG_VTNR=5"
[Install]
WantedBy=graphical.target
The problem is that ExecStart gets the exit code not from myapp, but from startx. I have tried many options, but I have not been able to come up with a way that would work as it should...
I tried passing the exit code through a pipe, using exit $?, and writing the exit code to a file. But, apparently, my bash skills are not enough to put together the right command.
Googling also didn't help, because all I could find were cases in which people call startx directly as root rather than as a user, which makes passing the exit code much easier than in my case.
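One approach that matches what you already tried (a hedged sketch; /opt/myapp-session.sh, /opt/myapp-watchdog.sh and /tmp/myapp.exit are made-up names) is to let startx run a small session wrapper that records myapp's exit status in a file, and have the launcher that ExecStart runs re-raise that status after startx returns:
#!/bin/bash
# /opt/myapp-session.sh -- hypothetical X session client handed to startx
/opt/myapp
echo $? > /tmp/myapp.exit          # record myapp's exit status before the X session ends

#!/bin/bash
# /opt/myapp-watchdog.sh -- hypothetical launcher used as ExecStart
su user -c "startx /opt/myapp-session.sh -- :0 vt5"
exit "$(cat /tmp/myapp.exit 2>/dev/null || echo 1)"   # propagate myapp's status, not startx's
With ExecStart=/opt/myapp-watchdog.sh, Restart=on-failure then reacts to myapp's exit status rather than startx's.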

How can I run xautolock as a service?

What I'm trying to do
Okay, so I've been trying to create a custom lockscreen on Ubuntu using i3lock.
I achieved something I'm comfortable with as a lockscreen, but now I want to use this lockscreen instead of the default GNOME one.
So I'm using xautolock for that, which works well when typed in a terminal, but that means I have to enter this manually each time I turn my PC on.
So right now I'm trying to make it a systemd service, and that's where I struggle.
What I block on
First thing is: executing the xautolock command in a shell script doesn't even work.
#!/bin/bash
xautolock -time 1 -locker mylock.sh # this line works in the CLI but suspends the console
(I played a chess game on my phone while waiting; it lasted way longer than a minute and nothing happened, whereas executing the command in a terminal works well.)
So there is this problem, but wait there is more!
I've done the .service file, and running mylock.sh from the service returns an exit status=1/FAILURE.
Hence my incomprehension, because the line works in the console, and executing the shell script manually blocks the console as well, so I assume it doesn't fail (it doesn't work as expected, though).
I've tried two different configs for the service:
First one with ExecStart=/usr/bin/xautolock -time 1 -locker <path_to_my_lock>/mylock.sh
Which outputs (code=exited, status=1/FAILURE)
In the other one I tried to execute a "lockservice.sh" script containing the xautolock line (which had no chance of working given the previous tests):
ExecStart=/bin/bash /PATH/lockservice.sh
Same thing: (code=exited, status=1/FAILURE)
I did a systemctl daemon-reload every time, and I chmod'ed all the shell scripts to 755. I also tried to execute the service with User=root.
What I would like
I basically want something that replaces the default GNOME lockscreen with my custom lock script; using xautolock and services isn't mandatory for me, but I'd rather not have to run it manually on startup.
Thanks in advance!
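A common reason xautolock works in a terminal but fails under systemd is that the service has no DISPLAY or XAUTHORITY. One option, as a minimal sketch assuming you run it as a systemd user unit inside your graphical session (the paths are examples, not your real ones):
# ~/.config/systemd/user/xautolock.service
[Unit]
Description=Lock the screen with i3lock after inactivity
PartOf=graphical-session.target
[Service]
ExecStart=/usr/bin/xautolock -time 1 -locker /home/user/mylock.sh
Restart=on-failure
[Install]
WantedBy=graphical-session.target
Enable it with systemctl --user daemon-reload followed by systemctl --user enable --now xautolock.service. If your session does not import DISPLAY into the user manager, adding Environment=DISPLAY=:0 (and XAUTHORITY) under [Service] is a possible workaround.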

Python Script Not Starting Bash Script when running as service

I have a Python script that gets started automatically as a service (activated with systemd). In this Python script, I call a bash script using subprocess.call(script_file, shell=True).
When I call the Python script manually ($ python my_python_script.py), everything works perfectly. However, the automatically started program does not execute the bash script (the Python script itself does run, though; I checked this by making it edit a text file, which it indeed does).
I (think) I gave everyone read-write permissions to the bash scripts. Does anyone have ideas as to what I'm doing wrong?
Addendum: I want to write a small script that sends me my public IP address via telegram. The service file looks like this:
[Unit]
Description=IPsender
After=networking.service
[Service]
Type=simple
User=root
WorkingDirectory=/home/pi/projects/tg_bot
ExecStart=/home/pi/miniconda3/bin/python /home/pi/projects/tg_bot/ip_sender_tg.py
Restart=always
[Install]
WantedBy=multi-user.target
Protawn, welcome to Unix & Linux Stack Exchange.
Why scripts work differently under systemd is a common question. Check out this answer to the general question elsewhere on the site.
Without the source code for your Python and Bash scripts it's hard to guess which difference you have encountered.
My personal guess is that your bash script is calling some other binaries without full paths, and those paths are found in your shell $PATH but not the default systemd path.
Add set -x to the top of your bash script so that all actions are logged to standard out, which will be captured in the systemd journal. Then after it fails, use journalctl -u your-service-name to view the logs for your service to see if you can find the last command that bash executed successfully. Also consider using set -e in the bash script to have it stop at the first error.
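A hedged illustration of that advice, applied to the top of the bash script the Python service calls (this is a placeholder sketch, not your actual script):
#!/bin/bash
set -x                                    # trace every command into the systemd journal
set -e                                    # abort on the first failing command
export PATH=/usr/local/bin:/usr/bin:/bin  # or call every external binary by its absolute path
After the next failed run, journalctl -u your-service-name shows the last command the script reached before it stopped.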
Despite the two "off-topic" "close" votes on this topic, why things work differently under systemd is on topic for this Stack Exchange site.

How to run multiple commands with systemd

I want to run multiple commands in the myapp.service file in systemd:
[Unit]
Description=to serve myapp
[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
ExecStart=/home/ubuntu/.local/bin/pserve production.ini http_port=5000
ExecStart=/home/ubuntu/.local/bin/pserve production.ini http_port=5001
Restart=always
[Install]
WantedBy=multi-user.target
It throws an error saying "invalid argument".
I want to run two commands:
pserve production.ini http_port=5000
pserve production.ini http_port=5001
How do I do that?
You can start multiple background processes from one systemd unit, but systemd will not be able to track them for you and do all the nice things that it does to support a daemon, such as send signals to it on various system events or auto-restart it when needed.
If you must have it as a single unit, then you can do one of the following (in my order of preference):
make the two servers separate units (note you may be able to use the same config file for both, so they are two 'instances' of the same service - which makes sense, since they run the same server; see the template-unit sketch after this list). You will have two entries in the list of running services when you run 'systemctl'.
make that unit a one-shot (runs a program that exits and is not monitored and restarted). Make the one-shot command start both servers in the background, e.g.,
sh -c " { pserve production.ini http_port=5000 & pserve production.ini http_port=5001 & } </dev/null >/dev/null 2>&1"
make a script that launches both daemons, watches them, restarting them if needed, and kills them when it is killed itself. Then you make that script the 'daemon' that systemd runs. Not really worth it, IMO - because you're doing much of the work that systemd itself is best suited to do. Of course you can spin up a new copy of systemd that is configured to run just those two servers (and make that systemd your 'one-service-for-two-commands' unit), but that seems like overkill.
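For the first option, a hedged sketch of the 'two instances of the same service' idea using a systemd template unit; the @ in the file name and the %i specifier are standard systemd, and the paths are copied from the question:
# /etc/systemd/system/pserve@.service -- the instance name (%i) carries the port
[Unit]
Description=to serve myapp on port %i
[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
ExecStart=/home/ubuntu/.local/bin/pserve production.ini http_port=%i
Restart=always
[Install]
WantedBy=multi-user.target
Then systemctl enable --now pserve@5000.service pserve@5001.service starts both, and each instance shows up separately in systemctl and can be restarted on its own.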

Fixing a systemd service 203/EXEC failure (no such file or directory)

I'm trying to set up a simple systemd timer to run a bash script every day at midnight.
systemctl --user status backup.service fails and logs the following:
backup.service: Failed at step EXEC spawning /home/user/.scripts/backup.sh: No such file or directory.
backup.service: Main process exited, code=exited, status=203/EXEC
Failed to start backup.
backup.service: Unit entered failed state.
backup.service: Failed with result 'exit-code'.
I'm lost, since the files and directories exist. The script is executable and, just to check, I've even set permissions to 777.
Some background:
The backup.timer and backup.service unit files are located in /home/user/.config/systemd/user.
backup.timer is loaded and active, and currently waiting for midnight.
Here's what it looks like:
[Unit]
Description=Runs backup at 0000
[Timer]
OnCalendar=daily
Unit=backup.service
[Install]
WantedBy=multi-user.target
Here's backup.service:
[Unit]
Description=backup
[Service]
Type=oneshot
ExecStart=/home/user/.scripts/backup.sh
[Install]
WantedBy=multi-user.target
And lastly, this is a paraphrase of backup.sh:
#!/usr/env/bin bash
rsync -a --delete --quiet /home/user/directory/ /mnt/drive/directory-backup/
The script runs fine if I execute it myself.
Not sure if it matters, but I use fish as my shell (started from .bashrc).
I'm happy to post the full script if that's helpful.
I think I found the answer:
In the .service file, I needed to add /bin/bash before the path to the script.
For example, for backup.service:
ExecStart=/bin/bash /home/user/.scripts/backup.sh
As opposed to:
ExecStart=/home/user/.scripts/backup.sh
I'm not sure why. Perhaps fish. On the other hand, I have another script running for my email, and the service file seems to run fine without /bin/bash. It does use default.target instead of multi-user.target, though.
Most of the tutorials I came across don't prepend /bin/bash, but I then saw this SO answer which had it, and figured it was worth a try.
The service file executes the script, and the timer is listed in systemctl --user list-timers, so hopefully this will work.
Update: I can confirm that everything is working now.
To simplify, make sure to add a hash bang to the top of your ExecStart script, i.e.
#!/bin/bash
python -u alwayson.py
When this happened to me it was because my script had DOS line endings, which always messes up the shebang line at the top of the script. I changed it to Unix line endings and it worked.
I ran across a Main process exited, code=exited, status=203/EXEC today as well and my bug was that I forgot to add the executable bit to the file.
If that is a copy/paste from your script, you've permuted this line:
#!/usr/env/bin bash
There's no #!/usr/env/bin, you meant #!/usr/bin/env.
Try running:
systemctl daemon-reload
and then run again:
service <yourservice> status
I faced a similar issue; I changed the permissions and added the executable bit:
chmod +x /etc/systemd/system/<service-filename>
This worked for me.
I actually used the answer from How do I run a node.js app as a background service? combined with what dwrz said above. In my case, I was creating a Discord bot that needed to be able to run when I was not around.
With this service in place, I initially got the same error that the initial poster did, which brought me here. I was missing the #!/usr/bin/env node at the top of my executed node.js script.
Since then, no problems, although I intend to see what else can be extended to the service itself.
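Putting the causes from these answers together, a quick checklist for status=203/EXEC (the path is the one from the question; adjust as needed):
ls -l /home/user/.scripts/backup.sh              # is the executable bit set? chmod +x if not
head -1 /home/user/.scripts/backup.sh | cat -A   # first line should read #!/usr/bin/env bash$
                                                 # a trailing ^M$ means DOS line endings
sed -i 's/\r$//' /home/user/.scripts/backup.sh   # strip DOS line endings (or use dos2unix)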

Resources