Systemd fluentd service fails with exit code 127

I have this systemd unit file for fluentd:
[Unit]
Description=Fluentd
Wants=network-online.target
After=network-online.target
[Service]
User=xxx
Group=users
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/rvm/gems/ruby-2.7.2/bin/fluentd --config /etc/fluent/fluent.conf
[Install]
WantedBy=multi-user.target
systemctl status outputs this:
xxx@test:/home/xxx # sudo systemctl status fluentd.service
● fluentd.service - Fluentd
Loaded: loaded (/etc/systemd/system/fluentd.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2023-02-08 20:51:05 UTC; 141ms ago
Process: 5286 ExecStart=/usr/local/rvm/gems/ruby-2.7.2/bin/fluentd --config /etc/fluent/fluent.conf (code=exited, status=127)
Main PID: 5286 (code=exited, status=127)
Feb 08 20:51:05 xenoss.io systemd[1]: Unit fluentd.service entered failed state.
Feb 08 20:51:05 xenoss.io systemd[1]: fluentd.service failed.
Warning: fluentd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
But when I just run it directly like this:
fluentd --config /etc/fluent/fluent.conf
it starts up successfully; it only fails under systemd.
Also, which fluentd outputs:
/usr/local/rvm/gems/ruby-2.7.2/bin/fluentd
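Exit status 127 conventionally means "command not found" for the program being launched or for its interpreter. Since this fluentd is an RVM gem binstub, its shebang most likely resolves ruby through the RVM environment that an interactive shell loads but systemd does not. A minimal sketch of what could be added to the [Service] section, assuming ruby-2.7.2 lives under /usr/local/rvm (verify the exact paths first, e.g. with ls /usr/local/rvm/rubies and ls /usr/local/rvm/gems):
# sketch only - these paths are assumptions based on the `which fluentd` output above
Environment="GEM_HOME=/usr/local/rvm/gems/ruby-2.7.2"
Environment="GEM_PATH=/usr/local/rvm/gems/ruby-2.7.2:/usr/local/rvm/gems/ruby-2.7.2@global"
Environment="PATH=/usr/local/rvm/gems/ruby-2.7.2/bin:/usr/local/rvm/gems/ruby-2.7.2@global/bin:/usr/local/rvm/rubies/ruby-2.7.2/bin:/usr/sbin:/usr/bin:/sbin:/bin"
After editing, run sudo systemctl daemon-reload (the warning in the status output asks for this anyway) and restart the unit. Another option sometimes used with RVM is pointing ExecStart at a wrapper script under /usr/local/rvm/wrappers/, which bakes this environment in.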

Related

Cannot run go binary as systemctl daemon

I have a go web app which is on the path /home/me/go/src/myapp.
When I run the executable using ./myapp on bash terminal, it works fine.
However this requires an open terminal to keep running, which is not practical, so I tried to make a systemd daemon in my Debian server's /etc/systemd/system/myapp.service like this:
[Unit]
Description=MyApp Daemon
StartLimitIntervalSec=0
[Service]
Type=simple
User= me
Group=www-data
ExecStart=/home/me/go/src/myapp/myapp
TimeoutStopSec=300
[Install]
WantedBy=multi-user.target
I have enabled the daemon:
systemctl enable myapp
and started it:
systemctl start myapp
However it fails to run the daemon, and I get this error:
# systemctl status myapp
● myapp.service - MyApp Daemon
Loaded: loaded (/etc/systemd/system/myapp.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-07-17 05:42:18 CDT; 4s ago
Process: 19058 ExecStart=/home/me/go/src/myapp/myapp (code=exited, status=127)
Main PID: 19058 (code=exited, status=127)
CPU: 2ms
Jul 17 05:42:18 front systemd[1]: Started Myapp Daemon.
Jul 17 05:42:18 front systemd[1]: myapp.service: Main process exited, code=exited, status=127/n/a
Jul 17 05:42:18 front systemd[1]: myapp.service: Unit entered failed state.
Jul 17 05:42:18 front systemd[1]: myapp.service: Failed with result 'exit-code'.
I'm wondering what could be wrong and how should I fix it?
After lots of trial and error, this config worked for me:
[Unit]
Description=Sai Go webapp Daemon
#After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
User= me
Group=www-data
WorkingDirectory=/home/me/go/src/myapp/
ExecStart=/home/me/go/src/myapp/myapp
TimeoutStopSec=300
[Install]
WantedBy=multi-user.target
Apparently WorkingDirectory was necessary.
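A likely reason: if the app opens files by relative path (templates, a .env or config file, a SQLite database, and so on), those paths resolve against the service's working directory, which for system units defaults to the root directory rather than the project directory; WorkingDirectory= restores the behaviour you get when launching it by hand from /home/me/go/src/myapp. A quick way to confirm after each unit change (generic commands, nothing app-specific):
sudo systemctl daemon-reload
sudo systemctl restart myapp
journalctl -u myapp -e    # the app's own error output usually names the missing file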

System Unit File always failed

I need to deploy my Go app to AWS (an EC2 instance) running Ubuntu 18.04, but I can't manage to make it run under systemd. Here is my created service (/lib/systemd/system/go.service):
[Unit]
Description=go api
[Service]
Type=simple
Restart=always
RestartSec=5s
ExecStart=/home/ubuntu/go/amutan
[Install]
WantedBy=multi-user.target
Here is the result when I run sudo service go start followed by sudo service go status:
go.service - go api
Loaded: loaded (/lib/systemd/system/go.service; disabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2020-02-25 05
Process: 7326 ExecStart=/home/ubuntu/go/amutan (code=exited, status=203/EXEC)
Main PID: 7326 (code=exited, status=203/EXEC)
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: Stopped go api.
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: Started go api.
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: go.service: Main process exited, code=exited, status=203/EXEC
Feb 25 05:22:46 ip-172-31-27-28 systemd[1]: go.service: Failed with result 'exit-code'.
My go binary resides in /home/ubuntu/go which is named amutan.
Any ideas?
That exit status is described in the official systemd documentation:
203 EXIT_EXEC The actual process execution failed (specifically, the execve(2) system call). Most likely this is caused by a missing or non-accessible executable file.
So check permissions, the exact path, and things like SELinux settings.
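A few concrete checks for each of those, using the path from the unit above as a sketch:
ls -l /home/ubuntu/go/amutan    # file exists and has the execute bit set
file /home/ubuntu/go/amutan     # a native binary for this machine's architecture, not a script missing its shebang
/home/ubuntu/go/amutan          # does it start at all when run directly, outside systemd?
# on SELinux-enforcing systems with auditd installed, look for recent denials:
sudo ausearch -m avc -ts recent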

The service has been enabled but failed. How can I make it run?

I use Linux Mint 19. I have created a simple script, "After_suspension", which will run three commands. I am trying to make it run when Mint wakes up from suspension.
matthew@matthew-pc:~$ cat /usr/local/bin/After_suspension
#!/bin/bash
pon dsl-provider
sudo service fancontrol start
/usr/bin/mailnag
matthew@matthew-pc:~$ file /usr/local/bin/After_suspension
/usr/local/bin/After_suspension: Bourne-Again shell script, ASCII text executable
"mailnag" is a Python script (text/x-python). I have created the following service file, which has been enabled but fails. How can I make it run? Should I use three separate service files to run the three commands?
matthew@matthew-pc:~$ cat /etc/systemd/system/After_suspension.service
[Unit]
After=suspend.target
[Service]
ExecStart=/usr/local/bin/After_suspension
[Install]
WantedBy=suspend.target
matthew@matthew-pc:~$ systemctl is-enabled After_suspension.service
enabled
matthew@matthew-pc:~$ systemctl is-active After_suspension.service
failed
matthew@matthew-pc:~$ systemctl status After_suspension.service
● After_suspension.service
Loaded: loaded (/etc/systemd/system/After_suspension.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-04-11 01:19:42 HKT; 3min 46s ago
Process: 11655 ExecStart=/usr/local/bin/After_suspension (code=exited, status=1/FAILURE)
Main PID: 11655 (code=exited, status=1/FAILURE)
Apr 11 01:19:11 matthew-pc After_suspension[11655]: File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 122, in __new__
Apr 11 01:19:11 matthew-pc After_suspension[11655]: bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
Apr 11 01:19:11 matthew-pc After_suspension[11655]: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbu
Apr 11 01:19:07 matthew-pc systemd[1]: Started After_suspension.service.
Apr 11 01:19:07 matthew-pc systemd[1]: After_suspension.service: Main process exited, code=exited, status=1/FAILURE
Apr 11 01:19:42 matthew-pc pppd[11657]: Timeout waiting for PADO packets
Apr 11 01:19:42 matthew-pc pppd[11657]: Unable to complete PPPoE Discovery
Apr 11 01:19:42 matthew-pc pppd[11657]: Terminating on signal 15
Apr 11 01:19:42 matthew-pc pppd[11657]: Exit.
Apr 11 01:19:42 matthew-pc systemd[1]: After_suspension.service: Failed with result 'exit-code'.
The following is the present "fancontrol.service".
matthew@matthew-pc:~$ cat /lib/systemd/system/fancontrol.service
[Unit]
Description=fan speed regulator
# Run pwmconfig to create this file.
ConditionPathExists=/etc/fancontrol
After=lm-sensors.service
Documentation=man:fancontrol(8) man:pwmconfig(8)
[Service]
ExecStartPre=/usr/sbin/fancontrol --check
ExecStart=/usr/sbin/fancontrol
PIDFile=/var/run/fancontrol.pid
[Install]
WantedBy=multi-user.target
Systemd runs system units as root, so you're trying to start desktop applications as root, in a session with no X server active; that's probably why you get the connection-refused/DBus error.
You probably want to configure that service as a user service (you'll probably have to pass the DISPLAY variable, too), or first just try setting a DISPLAY variable in either your script or the systemd service.
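As a rough sketch only, passing the session variables into the existing unit could look like the following; DISPLAY=:0 and the uid-1000 bus path are assumptions about this particular machine (check with echo $DISPLAY and id -u), and the pon/sudo lines in the script still need root, so splitting the graphical part into a separate user unit may end up cleaner:
[Unit]
Description=Commands to run after resuming from suspend
After=suspend.target
[Service]
Type=oneshot
# assumptions: adjust to your actual display and user id
Environment=DISPLAY=:0
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
ExecStart=/usr/local/bin/After_suspension
[Install]
WantedBy=suspend.target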

Issue with custom systemd service when starting Apache Gobblin

Running /opt/gobblin/bin/gobblin-standalone.sh start directly, everything works and the log output looks fine. Running it through a systemd service does not work, and nothing is written to the logs.
[vagrant@localhost ~]$ sudo systemctl start gobblin
[vagrant@localhost ~]$ sudo systemctl status gobblin
● gobblin.service - Gobblin Data Ingestion Framework
Loaded: loaded (/usr/lib/systemd/system/gobblin.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sun 2019-01-20 16:44:23 UTC; 693ms ago
Docs: https://gobblin.readthedocs.io
Process: 9673 ExecStop=/opt/gobblin/bin/gobblin-standalone.sh stop (code=exited, status=1/FAILURE)
Process: 9671 ExecStart=/opt/gobblin/bin/gobblin-standalone.sh start (code=exited, status=1/FAILURE)
Main PID: 9671 (code=exited, status=1/FAILURE)
Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service: control process exited, code=exited status=1
Jan 20 16:44:23 localhost.localdomain systemd[1]: Unit gobblin.service entered failed state.
Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service failed.
Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service holdoff time over, scheduling restart.
Jan 20 16:44:23 localhost.localdomain systemd[1]: Stopped Gobblin Data Ingestion Framework.
Jan 20 16:44:23 localhost.localdomain systemd[1]: start request repeated too quickly for gobblin.service
Jan 20 16:44:23 localhost.localdomain systemd[1]: Failed to start Gobblin Data Ingestion Framework.
Jan 20 16:44:23 localhost.localdomain systemd[1]: Unit gobblin.service entered failed state.
Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service failed.
The code of /usr/lib/systemd/system/gobblin.service below:
[Unit]
Description=Gobblin Data Ingestion Framework
Documentation=https://gobblin.readthedocs.io
After=network.target
[Service]
Type=simple
User=gobblin
Group=gobblin
WorkingDirectory=/opt/gobblin
ExecStart=/opt/gobblin/bin/gobblin-standalone.sh start
ExecStop=/opt/gobblin/bin/gobblin-standalone.sh stop
Restart=on-failure
[Install]
WantedBy=multi-user.target
The trick is to use Type=oneshot with RemainAfterExit=true and to set the environment variables:
[Unit]
Description=Gobblin Data Ingestion Framework
Documentation=https://gobblin.readthedocs.io
After=network.target
[Service]
Type=oneshot
User=gobblin
Group=gobblin
WorkingDirectory=/opt/gobblin
Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
Environment=GOBBLIN_FWDIR=/opt/gobblin
Environment=GOBBLIN_JOB_CONFIG_DIR=/etc/gobblin
Environment=GOBBLIN_WORK_DIR=/var/lib/gobblin
Environment=GOBBLIN_LOG_DIR=/var/log/gobblin
ExecStart=/opt/gobblin/bin/gobblin-standalone.sh start
ExecStop=/opt/gobblin/bin/gobblin-standalone.sh stop
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
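Two things are most likely going on here: the script needs JAVA_HOME and the GOBBLIN_* directories that an interactive shell provides but systemd does not, hence the Environment= lines; and gobblin-standalone.sh start apparently launches the actual JVM in the background and returns, so with Type=simple systemd treats the quick exit as the service dying and Restart=on-failure trips the start limit seen in the status output. Type=oneshot with RemainAfterExit=true makes systemd consider the unit active once the start script has returned successfully, and ExecStop still runs on stop. After changing the unit (generic commands, nothing Gobblin-specific):
sudo systemctl daemon-reload
sudo systemctl start gobblin
systemctl status gobblin
journalctl -u gobblin -e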

Failed to start system service on centos with ansible-playbook

I created a system service called cooltoo_storage on CentOS. I am able to start/stop/restart the service by running "service cooltoo_storage start/stop/restart". Now I want to configure it in an Ansible playbook. Below is my task for starting this service:
- name: start cooltoo_storage service
  sudo: yes
  service:
    name: cooltoo_storage
    state: started
After running the ansible-playbook, I got the error below:
msg: Job for cooltoo_storage.service failed because the control process exited with error code. See "systemctl status cooltoo_storage.service" and "journalctl -xe" for details.
FATAL: all hosts have already failed -- aborting
Below is the command output of "systemctl status cooltoo_storage.service":
● cooltoo_storage.service - LSB: cooltoo storage provider
Loaded: loaded (/etc/rc.d/init.d/cooltoo_storage)
Active: failed (Result: exit-code) since Mon 2016-05-02 11:39:07 CST; 1min 5s ago
Docs: man:systemd-sysv-generator(8)
Process: 26661 ExecStart=/etc/rc.d/init.d/cooltoo_storage start (code=exited, status=203/EXEC)
May 02 11:39:07 Cool-Too systemd[1]: Starting LSB: cooltoo storage provider...
May 02 11:39:07 Cool-Too systemd[26661]: Failed at step EXEC spawning /etc/rc.d/init.d/cooltoo_storage: Exec format error
May 02 11:39:07 Cool-Too systemd[1]: cooltoo_storage.service: control process exited, code=exited status=203
May 02 11:39:07 Cool-Too systemd[1]: Failed to start LSB: cooltoo storage provider.
May 02 11:39:07 Cool-Too systemd[1]: Unit cooltoo_storage.service entered failed state.
May 02 11:39:07 Cool-Too systemd[1]: cooltoo_storage.service failed.
How should I fix this issue?
The problem is unrelated to Ansible.
Your cooltoo_storage service fails to start. First make sure it works on its own:
sudo systemctl restart cooltoo_storage.service
sudo systemctl status cooltoo_storage.service
And if it doesn't, fix it. cooltoo_storage is probably a custom-written service, so start investigating by checking the startup config for this specific service:
systemctl cat cooltoo_storage.service
and the contents of /etc/rc.d/init.d/cooltoo_storage.
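In the status output above, the telling line is "Failed at step EXEC spawning /etc/rc.d/init.d/cooltoo_storage: Exec format error". For a text init script that usually means the very first line is not a valid shebang such as #!/bin/sh, so the kernel does not know how to execute it. A quick check, as a sketch:
head -n 1 /etc/rc.d/init.d/cooltoo_storage   # should print a shebang like #!/bin/sh
file /etc/rc.d/init.d/cooltoo_storage        # should say "shell script", not "data" or a foreign-architecture binary
# if the shebang was missing, add it as line 1, then:
sudo systemctl daemon-reload
sudo systemctl restart cooltoo_storage.service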
