I have written a Go REST API based on labstack/echo with a Vue.js frontend. I have a working version compiled, and everything runs nicely when I start it manually. So far so good.
However, when trying to integrate it with systemd to start the process at boot, I am stuck. This is my service file:
[Unit]
Description=Server Software Manager
After=network.target
[Service]
Type=simple
ExecStart=/var/gameserver/steam/sman
KillMode=process
User=steam
Group=steam
Restart=on-failure
SuccessExitStatus=2
[Install]
WantedBy=multi-user.target
Alias=sman.service
But every time I try to start the service I get the following error:
Feb 25 14:17:49 <SERVERNAME> systemd[1]: Stopped Server Software Manager.
Feb 25 14:17:49 <SERVERNAME> systemd[1]: Started Server Software Manager.
Feb 25 14:17:49 <SERVERNAME> systemd[1]: sman.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Feb 25 14:17:49 <SERVERNAME> systemd[1]: sman.service: Unit entered failed state.
Feb 25 14:17:49 <SERVERNAME> systemd[1]: sman.service: Failed with result 'exit-code'.
Feb 25 14:17:50 <SERVERNAME> systemd[1]: sman.service: Service hold-off time over, scheduling restart.
Feb 25 14:17:50 <SERVERNAME> systemd[1]: Stopped Server Software Manager.
Feb 25 14:17:50 <SERVERNAME> systemd[1]: sman.service: Start request repeated too quickly.
Feb 25 14:17:50 <SERVERNAME> systemd[1]: Failed to start Server Software Manager.
Feb 25 14:19:59 <SERVERNAME> systemd[1]: Started Server Software Manager.
According to Google, that error appears when the service exits with an error code, but when I run the binary manually as the steam user it does not fail.
My assumption is that something is wrong with the unit file, but I don't know what, and systemd-analyze has not complained either.
I am completely lost and thankful for any leads you might have to help debug this.
The output of journalctl -xfe -u sman:
Feb 26 14:18:23 <SERVERNAME> systemd[1]: Started Server Software Manager.
-- Subject: Unit sman.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sman.service has finished starting up.
--
-- The start-up result is done.
Notes:
OS: Ubuntu 16.04 LTS
I finally managed to find out what was wrong: systemd needs all environment variables mentioned explicitly in the unit. On top of that, I had a lot of trouble getting systemd to actually reload the service. This is what I ended up doing:
1. Stop the service.
2. Move the .service file to a location systemd does not read.
3. Reload systemd with systemctl daemon-reload and wait until systemctl status $SERVICE claims there is no such service. (If status claims anything else, systemd may still be reading the cached unit.)
4. Move the .service file back to the appropriate location, reload again, and voila, it finally worked.
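For reference, here is roughly what the fixed [Service] section looks like. The variable names below are placeholders for illustration (not the ones my app actually uses), and I'm assuming the unit lives under /etc/systemd/system:
[Service]
Type=simple
ExecStart=/var/gameserver/steam/sman
# Every variable the binary expects must be declared explicitly;
# LISTEN_ADDR and STEAM_DIR are placeholder names.
Environment="LISTEN_ADDR=:8080"
Environment="STEAM_DIR=/var/gameserver/steam"
# Alternatively, collect all of them in one file:
# EnvironmentFile=/etc/default/sman
And the reload dance from above, as commands:
$ sudo systemctl stop sman
$ sudo mv /etc/systemd/system/sman.service /root/
$ sudo systemctl daemon-reload
$ systemctl status sman              # repeat until systemd reports the unit cannot be found
$ sudo mv /root/sman.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl start sman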
Related
I configured a MinIO server instance on Ubuntu 18.04 following the guide at https://www.digitalocean.com/community/tutorials/how-to-set-up-an-object-storage-server-using-minio-on-ubuntu-18-04.
After the installation, the server failed to start with sudo systemctl start minio. The error says:
root@iZbp1icuzly3aac0dmjz9aZ:~# sudo systemctl status minio
● minio.service - MinIO
Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2021-12-23 17:11:56 CST; 4s ago
Docs: https://docs.min.io
Process: 9085 ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES (code=exited, status=1/FAILURE)
Process: 9084 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCESS)
Main PID: 9085 (code=exited, status=1/FAILURE)
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Main process exited, code=exited, status=1/FAILURE
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Service hold-off time over, scheduling restart.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Scheduled restart job, restart counter is at 5.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: Stopped MinIO.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Start request repeated too quickly.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: Failed to start MinIO.
It looks like the reason is that the variable MINIO_VOLUMES is not set in /etc/default/minio.
However, I double-checked the file /etc/default/minio:
MINIO_ACCESS_KEY="minioadmin"
MINIO_VOLUMES="/usr/local/share/minio/"
MINIO_OPTS="-C /etc/minio --address localhost:9001"
MINIO_SECRET_KEY="minioadmin"
So I have set the value of MINIO_VOLUMES.
When I start the server manually with minio server --address :9001 /usr/local/share/minio/, it works.
Now I don't know what goes wrong when starting the MinIO server via systemctl start minio.
I'd recommend sticking to the official documentation wherever possible. It's intended for distributed deployments but the only real change is that your MINIO_VOLUMES will be for a single node/drive.
I would recommend trying a combination of things here:
Review minio.service and ensure the user/group it runs as exists.
Review file path permissions on the MINIO_VOLUMES value (see the sketch right after this list).
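For the permissions check, a rough sketch (minio-user is the account the DigitalOcean guide creates; substitute whatever User= your minio.service actually specifies):
$ ls -al /usr/local/share/minio                                # who owns the volume path?
$ sudo chown -R minio-user:minio-user /usr/local/share/minio   # hand it to the service account
$ sudo -u minio-user touch /usr/local/share/minio/.writetest   # confirm the account can write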
Now for the why:
My guess, without seeing further logs (journalctl -u minio would have been helpful here), is that this is a combination of two things:
the minio.service user/group doesn't have rwx permissions on the /usr/local/share/minio path,
you are missing an environment variable we recently introduced to prevent users from pointing MinIO at their root drive (this was intended as a safety measure, but it somewhat complicates these kinds of smaller setups).
Take a look at these lines in the minio.service file - I'm assuming that is what you are using based on the instructions in the DO guide.
If you run ls -al /usr/local/share/minio, I would venture the directory is owned by root (user and group), with limited write access if any.
Hope this helps - for further troubleshooting, having at least 10-20 lines from journalctl is invaluable, as it shows the actual error and not just the final quit message.
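For example, to pull the last 20 lines for this unit:
$ journalctl -u minio -n 20 --no-pager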
I need to deploy my Go app to AWS (an EC2 instance running Ubuntu 18.04), but I can't manage to make it run using systemd. Here is the service I created (/lib/systemd/system/go.service):
[Unit]
Description=go api
[Service]
Type=simple
Restart=always
RestartSec=5s
ExecStart=/home/ubuntu/go/amutan
[Install]
WantedBy=multi-user.target
Here is the result when I run sudo service go start followed by sudo service go status:
go.service - go api
Loaded: loaded (/lib/systemd/system/go.service; disabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2020-02-25 05
Process: 7326 ExecStart=/home/ubuntu/go/amutan (code=exited, status=203/EXEC)
Main PID: 7326 (code=exited, status=203/EXEC)
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: Stopped go api.
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: Started go api.
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: go.service: Main process exited, code=exited, status=203/EXEC
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: go.service: Failed with result 'exit-code'.
My Go binary, named amutan, resides in /home/ubuntu/go.
Any ideas?
That error message is described in the official documentation:
203 EXIT_EXEC The actual process execution failed (specifically, the execve(2) system call). Most likely this is caused by a missing or non-accessible executable file.
So check permissions, the exact path, and things like SELinux settings.
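As a quick sketch of those checks, using the path from the question:
$ ls -l /home/ubuntu/go/amutan           # does the file exist, and is the execute bit set?
$ file /home/ubuntu/go/amutan            # is it an executable built for this architecture?
$ sudo chmod +x /home/ubuntu/go/amutan   # add the execute bit if it is missing
$ /home/ubuntu/go/amutan                 # does it start when run directly?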
I use Linux Mint 19. I have created a simple script, After_suspension, which runs three commands. I am trying to make it run when Mint wakes up from suspend.
matthew@matthew-pc:~$ cat /usr/local/bin/After_suspension
#!/bin/bash
pon dsl-provider
sudo service fancontrol start
/usr/bin/mailnag
matthew@matthew-pc:~$ file /usr/local/bin/After_suspension
/usr/local/bin/After_suspension: Bourne-Again shell script, ASCII text executable
"mailnag" is (text/x-python). I have created the following service file, which has been enabled but failed. How can I make it run? Should I use three separate service files to run the three commands?
matthew#matthew-pc:~$ cat /etc/systemd/system/After_suspension.service
[Unit]
After=suspend.target
[Service]
ExecStart=/usr/local/bin/After_suspension
[Install]
WantedBy=suspend.target
matthew@matthew-pc:~$ systemctl is-enabled After_suspension.service
enabled
matthew@matthew-pc:~$ systemctl is-active After_suspension.service
failed
matthew@matthew-pc:~$ systemctl status After_suspension.service
● After_suspension.service
Loaded: loaded (/etc/systemd/system/After_suspension.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-04-11 01:19:42 HKT; 3min 46s ago
Process: 11655 ExecStart=/usr/local/bin/After_suspension (code=exited, status=1/FAILURE)
Main PID: 11655 (code=exited, status=1/FAILURE)
Apr 11 01:19:11 matthew-pc After_suspension[11655]: File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 122, in __new__
Apr 11 01:19:11 matthew-pc After_suspension[11655]: bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
Apr 11 01:19:11 matthew-pc After_suspension[11655]: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbu
Apr 11 01:19:07 matthew-pc systemd[1]: Started After_suspension.service.
Apr 11 01:19:07 matthew-pc systemd[1]: After_suspension.service: Main process exited, code=exited, status=1/FAILURE
Apr 11 01:19:42 matthew-pc pppd[11657]: Timeout waiting for PADO packets
Apr 11 01:19:42 matthew-pc pppd[11657]: Unable to complete PPPoE Discovery
Apr 11 01:19:42 matthew-pc pppd[11657]: Terminating on signal 15
Apr 11 01:19:42 matthew-pc pppd[11657]: Exit.
Apr 11 01:19:42 matthew-pc systemd[1]: After_suspension.service: Failed with result 'exit-code'.
For reference, here is the current fancontrol.service:
matthew@matthew-pc:~$ cat /lib/systemd/system/fancontrol.service
[Unit]
Description=fan speed regulator
# Run pwmconfig to create this file.
ConditionPathExists=/etc/fancontrol
After=lm-sensors.service
Documentation=man:fancontrol(8) man:pwmconfig(8)
[Service]
ExecStartPre=/usr/sbin/fancontrol --check
ExecStart=/usr/sbin/fancontrol
PIDFile=/var/run/fancontrol.pid
[Install]
WantedBy=multi-user.target
systemd runs system units as root, so you're trying to start mailnag as root, in a session with no X server or session D-Bus active; that's why you get the D-Bus autolaunch error.
You probably want to configure that service as a user service (you'll likely have to pass the DISPLAY variable too), or first just try setting a DISPLAY variable in either your script or your systemd service.
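As a rough sketch of the second option, assuming the display is :0 (check echo $DISPLAY in a graphical session):
[Service]
ExecStart=/usr/local/bin/After_suspension
Environment="DISPLAY=:0"
# Depending on the setup you may also need the X authority file, e.g.:
# Environment="XAUTHORITY=/home/matthew/.Xauthority"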
In Raspbian Stretch Lite I created the following systemd service:
[Unit]
Description=Autostart
After=multi-user.target
[Service]
Type=forking
ExecStart=/home/pi/autostart.sh
User=pi
Group=pi
[Install]
WantedBy=multi-user.target
Here is the content of autostart.sh:
#!/bin/sh -ex
export TERM=linux
clear
mkdir -p /home/pi/logs
/home/pi/bin/./TestApp&
The script is actually executed (I added a debug echo to a file) but the application is not launched. It's a Qt5 console application, not a GUI one.
Trying to manually launch the script (i.e. ./autostart.sh) works as expected.
Instead, manually starting the service leads to this output:
$ sudo systemctl start autostart.service
$ systemctl status autostart.service
● autostart.service - Autostart
Loaded: loaded (/lib/systemd/system/autostart.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-09-28 19:56:33 CEST; 9s ago
Process: 1351 ExecStart=/home/pi/autostart.sh (code=exited, status=0/SUCCESS)
Main PID: 1354 (code=exited, status=127)
Sep 28 19:56:33 localhost systemd[1]: Starting Autostart...
Sep 28 19:56:33 localhost autostart.sh[1351]: + export TERM=linux
Sep 28 19:56:33 localhost autostart.sh[1351]: + clear
Sep 28 19:56:33 localhost autostart.sh[1351]: [34B blob data]
Sep 28 19:56:33 localhost systemd[1]: Started Autostart.
Sep 28 19:56:33 localhost systemd[1]: autostart.service: Main process exited, code=exited, status=127/n/a
Sep 28 19:56:33 localhost systemd[1]: autostart.service: Unit entered failed state.
Sep 28 19:56:33 localhost systemd[1]: autostart.service: Failed with result 'exit-code'.
It's fine that the mkdir command doesn't create anything (the directory is already there), but I don't understand why the application is not executed.
What could I do to get more information about what's happening?
First, running in the background is not the same as forking.
You can get rid of your autostart script and make a simpler systemd config file.
Remove Type=forking. The default is Type=simple, which will handle running your app in the background for you.
Set Environment="TERM=linux" directly in the unit instead of exporting it in the script.
Create the log directory with a pre-start command: ExecStartPre=-/bin/mkdir -p /home/pi/logs. The extra "dash" lets the unit proceed even if the command fails (for example, if the directory has already been made).
Finally, run your app with ExecStart=/home/pi/bin/TestApp. A sketch of the resulting unit follows this list.
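Putting that together, a sketch of the simplified unit, keeping the paths and user from the question:
[Unit]
Description=Autostart
After=multi-user.target
[Service]
# Type=simple is the default, so no Type= line is needed.
User=pi
Group=pi
Environment="TERM=linux"
# The leading dash tolerates failure of the pre-start command.
ExecStartPre=-/bin/mkdir -p /home/pi/logs
ExecStart=/home/pi/bin/TestApp
[Install]
WantedBy=multi-user.target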
Related man pages are man systemd.service and man systemd.exec; you can look up any directive with man systemd.directives.
I am behind a proxy server and following the "Kubernetes Installation with Vagrant & CoreOS" steps listed here: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
After finalizing the install, when I run
$ kubectl get nodes
I get this error:
Unable to connect to the server: Service Unavailable
e1, c1, and w1 are up and I can vagrant ssh into each of them.
When I check w1, I see that the docker service is not running, with the error listed below.
----------------------------------------------------------------------------
-- Unit docker.service has failed.
--
-- The result is dependency.
Aug 19 04:09:25 w1 systemd[1]: docker.service: Job docker.service/start failed with result 'dependency'.
Aug 19 04:09:25 w1 systemd[1]: flanneld.service: Unit entered failed state.
Aug 19 04:09:25 w1 systemd[1]: flanneld.service: Failed with result 'exit-code'.
Aug 19 04:09:30 w1 systemd[1]: flanneld.service: Service hold-off time over, scheduling restart.
Aug 19 04:09:30 w1 systemd[1]: Stopped Network fabric for containers.
-- Subject: Unit flanneld.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has finished shutting down.
Aug 19 04:09:30 w1 systemd[1]: Starting Network fabric for containers...
-- Subject: Unit flanneld.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has begun starting up.
Aug 19 04:09:30 w1 rkt[6888]: image: using image from file /usr/lib/rkt/stage1-images/stage1-fly.aci
Aug 19 04:09:31 w1 rkt[6888]: image: searching for app image quay.io/coreos/flannel
Aug 19 04:09:31 w1 rkt[6888]: run: discovery failed
Aug 19 04:09:31 w1 systemd[1]: flanneld.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 04:09:31 w1 systemd[1]: Failed to start Network fabric for containers.
-- Subject: Unit flanneld.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has failed.
--
-- The result is failed.
Aug 19 04:09:31 w1 systemd[1]: flanneld.service: Unit entered failed state.
Aug 19 04:09:31 w1 systemd[1]: flanneld.service: Failed with result 'exit-code'.
----------------------------------------------------------------------------
I am guessing the problem is that I am behind the proxy. Before running the install steps I issued these commands:
$export "HTTP_PROXY=http://http-proxy.xxxxxx.com:8080"
$export "HTTPS_PROXY=http://http-proxy.xxxxxx.com:8080"
$export "http_proxy=http://http-proxy.xxxxxx.com:8080"
$export "https_proxy=http://http-proxy.xxxxxx.com:8080"
Do you know if this is enough for an installation behind a proxy, or do I need to add proxy settings somewhere else?
Thank you in advance,
turgos
The variables you're exporting are valid only in your current shell session; they are not available to your flanneld systemd unit.
Create the following drop-in inside the unit's drop-in directory, then reload the daemon with systemctl daemon-reload; it should fix your issue with flannel:
/etc/systemd/system/flanneld.service.d/proxy.conf:
[Service]
Environment="HTTP_PROXY=http://http-proxy.xxx:8080"
Environment="...
A similar example is available in the CoreOS documentation: Customizing Docker
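Applying that here would look roughly like this (note the drop-in directory must match the unit name, flanneld.service):
$ sudo mkdir -p /etc/systemd/system/flanneld.service.d
$ sudoedit /etc/systemd/system/flanneld.service.d/proxy.conf   # add the [Service] block above
$ sudo systemctl daemon-reload
$ sudo systemctl restart flanneld
$ systemctl show flanneld -p Environment                       # verify the variables were picked up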