Executing Coral Board Demonstration Model on Boot with Custom Systemd Service - systemd

I am trying to configure my Coral board to run the COCO object detection model on boot using a custom systemd service. I've created the executable file and the unit file, and then enabled the service. The intent is for the camera feed to display on a monitor, but when I power on the board, the monitor only shows a bluish background (I assume the "home screen" of the board).
The executable file:
edgetpu_detect \
--model mobilenet_ssd...
--labels coco...
The unit file:
[Unit]
Description=systemd service.
After=weston.target
[Service]
PAMName=login
Type=simple
User=mendel
WorkingDirectory=/home/mendel
ExecStart=/bin/bash /usr/bin/test_service.sh
Restart=always
[Install]
WantedBy=multi-user.targer
Status of service after enabling and after powering on:
mendel@jumbo-tang:/etc/system$ sudo systemctl status myservice.service
myservice.service - systemd service.
Loaded: loaded (/etc/systemd/system/myservice.service; enabled; vendor preset
Active: active (running) since Mon 2020-01-06 03:32:03 UTC; 1s ago
Main PID: 4847 (bash)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/myservice.service
4847 /bin/bash /usr/bin/test_service.sh
Jan 06 03:32:03 jumbo-tang systemd[1]: myservice.service: Service hold-off time
Jan 06 03:32:03 jumbo-tang systemd[1]: Stopped Example systemd service..
Jan 06 03:32:03 jumbo-tang systemd[1]: Started Example systemd service..
Jan 06 03:32:03 jumbo-tang systemd[4847]: pam_unix(login:session): session opene
The executable is saved to /usr/bin, and was made executable with sudo chmod +x /usr/bin/test_service.sh
The unit file was saved to /etc/systemd/system, and was given permissions with sudo chmod 644 /etc/systemd/system/myservice.service
I'm curious to know whether my executable can simply contain the code I'd normally use to launch a model, as I've done, whether my unit file is properly configured, or what else could be wrong that I'm not thinking of.
Any help is appreciated!

I believe I've answered you via coral-support; we discussed that you're most likely just missing a couple of things:
1) When starting a systemd service, especially at boot, sometimes not all environment variables are loaded. In this case, you may need to add the line:
Environment=DISPLAY=:0
before ExecStart. However, I don't suspect that this is the issue, because the process waits on weston.target, which should already take care of those environment variables.
2) This one is a lot more complicated than the previous one, but you misspelled "target" in "WantedBy=multi-user.targer" (joking, of course).
I'm showing the steps here again as an example for future reference.
1) create a file called detects.service with the following contents:
[Unit]
Description=systemd auto face detection service
After=weston.target
[Service]
PAMName=login
Type=simple
User=mendel
WorkingDirectory=/home/mendel
Environment=DISPLAY=:0
ExecStart=/bin/bash /usr/bin/detect_service.sh
Restart=always
[Install]
WantedBy=multi-user.target
2) move the file to /lib/systemd/system/detects.service:
$ sudo mv detects.service /lib/systemd/system/detects.service
3) create a file called detect_service.sh with the following content:
edgetpu_detect --model fullpath/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --labels fullpath/coco_labels.txt
4) make it executable and move it to /usr/bin:
$ sudo chmod u+x detect_service.sh
$ sudo mv detect_service.sh /usr/bin
5) enable the service with systemctl
$ sudo systemctl enable detects.service
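For future reference, here is a slightly more defensive sketch of the step-3 wrapper. The model and label paths are placeholders (assumptions), and on the board the echo would be replaced by the exec shown in the comment:

```shell
#!/bin/bash
# Sketch of detect_service.sh; the paths below are placeholders and must
# point at the real model/label files on the board.
MODEL=/home/mendel/models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
LABELS=/home/mendel/models/coco_labels.txt

# Build the command as an array so paths with spaces stay intact.
CMD=(edgetpu_detect --model "$MODEL" --labels "$LABELS")
echo "launching: ${CMD[*]}"

# On the board, replace the echo above with:
#   exec "${CMD[@]}"
# so the detector replaces the shell and systemd tracks the right process.
```

Using exec also means Restart=always reacts to the detector itself dying, not to the wrapper shell.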


redmine, puma, nginx and crontab

I wanted to make puma start automatically when the system reboots.
This is my crontab line (standard user):
# m h dom mon dow command
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/local/bin:/usr/bin
HOME=/home/seven/
@reboot /home/seven/mars/start.sh
the file start.sh
#!/bin/bash
/usr/bin/screen -dmS mars /home/seven/mars/puma -e production
If I run the file myself:
./start.sh
All works fine, screen starts, puma starts. Brill
But if I reboot the machine, nothing happens, screen and puma are not loaded.
What am I doing wrong?
Many thanks
One possible way to run puma is with systemd, as proposed by Puma's team at
https://github.com/puma/puma/blob/master/docs/systemd.md
This way would also let you
restart the service by simply typing
systemctl restart redmine
and you can get status by
systemctl status redmine
so the output looks like:
● redmine.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/redmine.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-01-03 06:25:16 CET; 2 days ago
Main PID: 29598 (ruby)
Tasks: 6 (limit: 4915)
CGroup: /system.slice/redmine.service
└─29598 puma 4.2.1 (tcp://0.0.0.0:3000) [redmine]
Jan 03 06:25:17 srv-redmine puma[29598]: Puma starting in single mode...
Jan 03 06:25:17 srv-redmine puma[29598]: * Version 4.2.1 (ruby 2.6.3-p62), codename: Distant Airhorns
Jan 03 06:25:17 srv-redmine puma[29598]: * Min threads: 0, max threads: 16
Jan 03 06:25:17 srv-redmine puma[29598]: * Environment: development
Jan 03 06:25:19 srv-redmine puma[29598]: * Listening on tcp://0.0.0.0:3000
Jan 03 06:25:19 srv-redmine puma[29598]: Use Ctrl-C to stop
Place the following code into: /etc/systemd/system/redmine.service
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
# Preferably configure a non-privileged user
User=testuser
# Specify the path to your puma application root
WorkingDirectory=/home/testuser/redmine
# Helpful for debugging socket activation, etc.
# Environment=PUMA_DEBUG=1
# Setting secret_key_base for rails production environment. We can set other Environment variables the same way, for example PRODUCTION_DATABASE_PASSWORD
#Environment=SECRET_KEY_BASE=b7fbccc14d4018631dd739e8777a3bef95ee8b3c9d8d51f14f1e63e613b17b92d2f4e726ccbd0d388555991c9e90d3924b8aa0f89e43eff800774ba29
# The command to start Puma, use 'which puma' to get puma's bin path, specify your config/puma.rb file
ExecStart=/home/testuser/.rvm/wrappers/my_app/puma -C /home/testuser/redmine/config/puma.rb
Restart=always
KillMode=process
#RemainAfterExit=yes
#KillMode=none
[Install]
WantedBy=multi-user.target
Make sure to adjust user and path in above code to fit your system, by replacing testuser with your real Redmine user.
puma.rb file can look something like this:
port ENV['PORT'] || 3000
stdout_redirect '/home/testuser/redmine/log/puma.stdout.log', '/home/testuser/redmine/log/puma.stderr.log'
#daemonize true
#workers 3
#threads 5,5
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
For more info about puma config, take a look at:
https://github.com/puma/puma
Thanks to Aleksandar Pavić for pointing out that I can use systemd!!!
I am the type that, if I want something to work one way, ignores everything else, because I want things to work the way I intended in the first place. So, in short, me being stupid.
Systemd was not that simple in the end, but reading the puma documentation helped a lot.
My redmine.service file:
[Unit]
Description=Puma HTTP Server
After=network.target
# Uncomment for socket activation (see below)
#Requires=puma.socket
[Service]
Type=simple
#WatchdogSec=10
# Preferably configure a non-privileged user
User=seven
# Specify the path to your puma application root
WorkingDirectory=/home/seven/mars/
# The command to start Puma, use 'which puma' to get puma's bin path, specify your config/puma.rb file
ExecStart=/bin/bash -lc '/home/seven/.rvm/gems/ruby-2.7.2/bin/puma -C /home/seven/mars/config/puma.rb'
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
and my puma.rb
# config/puma.rb
workers Integer(ENV['PUMA_WORKERS'] || 2)
threads Integer(ENV['MIN_THREADS'] || 0), Integer(ENV['MAX_THREADS'] || 16)
rackup DefaultRackup
port ENV['PORT'] || 9292
environment ENV['RACK_ENV'] || 'production'
lowlevel_error_handler do |e|
  Rollbar.critical(e)
  [500, {}, ["An error has occurred, and engineers have been informed. Please reload the page. If you continue to have problems, contact hq@starfleet-command.co\n"]]
end
After this, systemd started to work, with one exception: I was still getting an error when I wanted to enable the process:
sudo systemctl enable redmine.service
Synchronizing state of redmine.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable redmine
update-rc.d: error: cannot find a LSB script for redmine
I needed to remove the redmine file in /etc/init.d/ and enable redmine.service again to re-create the start script.
Now all works as it should :) Many thanks to all!!!
Your details are not sufficient for a definite answer, but in your case I think you created the crontab file manually. If so, why don't you just use the following command to add a cron job:
crontab -e
If you want to add it manually, make sure that your file has execute mode, add it to /etc/cron.d and reload your cron service with sudo service cron reload.
But if that doesn't help, try looking at the logs with the following command:
journalctl -xe
This may help you to debug your job when your system reboots.
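For reference, a minimal @reboot entry added with crontab -e could look like this (the log redirection and its path are just an example, but they make boot-time failures visible):

```
SHELL=/bin/bash
# m h dom mon dow command
@reboot /home/seven/mars/start.sh >> /home/seven/reboot.log 2>&1
```

Note that a file dropped into /etc/cron.d additionally needs a user field between the schedule and the command, e.g. `@reboot seven /home/seven/mars/start.sh`.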
so, this is the start.sh script:
#!/bin/bash
DISPLAY=:0; export DISPLAY
SHELL=/bin/bash
PWD=/home/seven/mars
LOGNAME=seven
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/home/seven
LANG=C.UTF-8
USER=seven
# main script to start screen and after puma.sh
cd /home/seven/mars/ && /usr/bin/screen -dmS mars /home/seven/puma.sh
What I did not mention is that I am trying to run a second script from the one cron is starting: puma.sh. Its content:
#!/bin/bash
DISPLAY=:0; export DISPLAY
SHELL=/bin/bash
PWD=/home/seven/mars
LOGNAME=seven
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/home/seven
LANG=C.UTF-8
USER=seven
# main script to start puma
cd /home/seven/mars/ && puma -e production -w 2 -t 0:16
When I run the start script manually (./start.sh) in the CLI, all works fine.
When I reboot the machine (Ubuntu 20.04 LTS), the script is triggered and screen starts, but the second script does not start.
Now, I did try to put the puma -e production ... after the screen -dmS mars ... (without the second script line), but that ended up in screen not starting after reboot.
Again, start.sh works perfectly when triggered manually in the CLI, just not (fully) when triggered by cron.

Error when creating a systemd unit for Apache Drill

I have installed Apache Drill and want to start it when the machine boots up. To do this, I have created the following systemd unit in /etc/systemd/system/drill.service
[Unit]
Description=Start/Stop Apache Drill
After=syslog.target network.target
[Service]
Type=forking
User=me
Group=me
ExecStartPre=-/usr/bin/zk_up
ExecStart=/opt/apache/drill/current/bin/drillbit.sh --config /opt/apache/drill/current/conf start
ExecStop=/opt/apache/drill/current/bin/drillbit.sh stop
[Install]
WantedBy=multi-user.target
The issue is that when I run systemctl start drill, the service does not start completely; it seems to hang and then times out. While the process is hung, systemctl status -l drill.service shows the status as activating, with this output:
drill.service - Start/Stop Apache Drill
Loaded: loaded (/etc/systemd/system/drill.service; disabled; vendor preset: disabled)
Active: activating (start) since Fri 2019-10-25 07:32:24 UTC; 55s ago
Process: 10257 ExecStartPre=/usr/bin/zk_up (code=exited, status=0/SUCCESS)
Control: 10262 (drillbit.sh)
CGroup: /system.slice/drill.service
├─10262 /bin/bash /opt/apache/drill/current/bin/drillbit.sh --config /opt/apache/drill/current/conf start
├─10273 /bin/bash /opt/apache/drill/current/bin/drillbit.sh --config /opt/apache/drill/current/conf start
├─10274 find -L / -name java -type f
└─10275 head -n 1
After the process fails, I see the following message displayed
Job for drill.service failed because a timeout was exceeded. See "systemctl status drill.service" and "journalctl -xe" for details.
The command systemctl status -l drill.service after the timeout returns the following
drill.service - Start/Stop Apache Drill
Loaded: loaded (/etc/systemd/system/drill.service; disabled; vendor preset: disabled)
Active: failed (Result: timeout) since Fri 2019-10-25 07:33:54 UTC; 3min 3s ago
Process: 10262 ExecStart=/opt/apache/drill/current/bin/drillbit.sh --config /opt/apache/drill/current/conf start (code=killed, signal=TERM)
Process: 10257 ExecStartPre=/usr/bin/zk_up (code=exited, status=0/SUCCESS)
And when I run journalctl -xe, I see the following messages
Oct 25 07:39:17 drill-1 drillbit.sh[10774]: find: File system loop detected; ‘/sys/bus/cpu/devices/cpu0/node0/cpu1/driver/cpu2/firmware_node/subsystem/devices/PNP0303:00/physical_node/subsystem/devices/00:03/tty/ttyS0/subsystem/ttyS2/device/subsystem/devices/VMBUS:01/firmware_node/2dd1ce17-079e-403c-b352-a1921ee207ee/driver/b6650ff7-33bc-4840-8048-e0676786f393/subsystem/devices/00000000-0001-8899-0000-000000000000/host3/scsi_host/host3/subsystem/host2/device/target2:0:0/subsystem/devices/3:0:1:0/scsi_device/3:0:1:0/subsystem/2:0:0:0/device/block/sda/sda2/subsystem/sda’ is part of the same file system loop as ‘/sys/bus/cpu/devices/cpu0/node0/cpu1/driver/cpu2/firmware_node/subsystem/devices/PNP0303:00/physical_node/subsystem/devices/00:03/tty/ttyS0/subsystem/ttyS2/device/subsystem/devices/VMBUS:01/firmware_node/2dd1ce17-079e-403c-b352-a1921ee207ee/driver/b6650ff7-33bc-4840-8048-e0676786f393/subsystem/devices/00000000-0001-8899-0000-000000000000/host3/scsi_host/host3/subsystem/host2/device/target2:0:0/subsystem/devices/3:0:1:0/scsi_device/3:0:1:0/subsystem/2:0:0:0/device/block/sda’.
Can anyone tell me why I am seeing these messages and what I need to do to get this working?
If I run the command defined for the ExecStart parameter in my terminal itself, Apache Drill starts without throwing any errors. The command /usr/bin/zk_up just checks that the zookeepers are running before I start Drill and, as shown above, it exits successfully.
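The real /usr/bin/zk_up is not shown here, but a zk_up-style check might look like the sketch below (purely illustrative: the host, port, and the use of nc are assumptions). A healthy ZooKeeper answers the four-letter command ruok with imok:

```shell
#!/bin/bash
# Illustrative zk_up-style health check (not the actual /usr/bin/zk_up).
ZK_HOST=${ZK_HOST:-localhost}
ZK_PORT=${ZK_PORT:-2181}

# Ask ZooKeeper "are you ok?"; a healthy server replies "imok".
resp=$(echo ruok | timeout 2 nc "$ZK_HOST" "$ZK_PORT" 2>/dev/null)
if [ "$resp" = "imok" ]; then
    zk_status=up
else
    zk_status=down
fi
echo "zookeeper at $ZK_HOST:$ZK_PORT is $zk_status"
# A real ExecStartPre script would end with:
#   [ "$zk_status" = up ] || exit 1
```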
When I looked closely at the output of the systemctl status -l drill.service command, I saw that the culprit printing the file system loop messages was the find command, and that command was looking for java. I added JAVA_HOME to the environment in the systemd unit and it worked. My new systemd unit is
[Unit]
Description=Start/Stop Apache Drill
After=syslog.target network.target
[Service]
Type=forking
User=me
Group=me
Environment="JAVA_HOME=/opt/java/current"
ExecStartPre=-/usr/bin/zk_up
ExecStart=/opt/apache/drill/current/bin/drillbit.sh --config /opt/apache/drill/current/conf start
ExecStop=/opt/apache/drill/current/bin/drillbit.sh stop
[Install]
WantedBy=multi-user.target
where /opt/java/current is my $JAVA_HOME.
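To see why the missing JAVA_HOME produced those find messages, here is an illustrative sketch (not Drill's actual startup code) of the kind of fallback a launcher script uses:

```shell
#!/bin/bash
# Illustrative sketch (not Drill's actual script) of a JAVA_HOME fallback.
JAVA_HOME=/opt/java/current   # what Environment="JAVA_HOME=..." provides

if [ -n "$JAVA_HOME" ]; then
    JAVA="$JAVA_HOME/bin/java"
else
    # Slow fallback seen in the journal above: scanning all of / with
    # 'find -L' follows /sys symlink loops and can run past systemd's
    # start timeout for a Type=forking unit.
    #   JAVA=$(find -L / -name java -type f | head -n 1)
    JAVA=""
fi
echo "using java: $JAVA"
```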

How do I remove the bash from systemd service file? and replace it with something that solves my problem explained below?

Boring details alert:
I have a systemd service file which uses a bash file to start the service. Below is the service file in question:
[Unit]
Description=A program service
[Service]
User=root
#change this to your workspace
WorkingDirectory=/data/acloud/repository/lib
#path to executable.
#executable is a bash script file I created to run the application jar file
ExecStart=/data/acloud/repository/lib/program.sh
SuccessExitStatus=143
TimeoutStopSec=10
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Below is program.sh I used in the service file above:
#!/bin/bash
sudo java -XX:+UseG1GC -Xmx1g -Xms512m -jar abc-program-0.0.1-SNAPSHOT.jar
If you are wondering why I created a single-line bash script, it's because I do not know where and how to put the -XX, -Xms and -Xmx parameters in the .service file.
Even when the service is inactive, the Main PID status is shown as status=0/SUCCESS, and at the end of the systemctl status command output it says "Started A program service"?? Below is how it's shown:
ubuntu@ip-172-**-**-***:/data/acloud/repository/lib$ sudo systemctl status program
program.service - A program service
Loaded: loaded (/etc/systemd/system/program.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2019-02-22 13:08:21 UTC; 45s ago
Process: 27711 ExecStart=/data/acloud/repository/lib/program.sh (code=exited, status=0/SUCCESS)
Main PID: 27711 (code=exited, status=0/SUCCESS)
Feb 22 13:08:21 ip-172-**-**-*** systemd[1]: Started A program service.
I believe that using bash is causing this problem, as the exit codes are not handled here. How do I get this to stop?
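For what it's worth (a sketch, not a verified answer): JVM flags can be passed directly on the ExecStart line, which removes the need for the wrapper script. /usr/bin/java is an assumed path here, so check it with `which java`; the sudo in the wrapper is redundant anyway, since the unit already runs as root.

```
[Service]
User=root
WorkingDirectory=/data/acloud/repository/lib
# JVM options simply become arguments on the ExecStart line:
ExecStart=/usr/bin/java -XX:+UseG1GC -Xmx1g -Xms512m -jar abc-program-0.0.1-SNAPSHOT.jar
SuccessExitStatus=143
```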

custom systemd service can't start on Ubuntu 18.04

and thanks in advance for any assistance
I run original QT wallets (command-line based) for various cryptocurrencies. Earlier this year, I set them up as a custom systemd service, and that has been invaluable. It starts them up and shuts them down with the system just like all the normal services. I recently discovered an issue with one in particular, blackcoin.
This service worked fine in the past (I don't know how long it was down for before I found it)
If I run the command after ExecStart= manually, everything works just fine. If I try to start the service (via systemctl start blackcoin), it fails with the following service status:
blackcoin.service - blackcoin wallet daemon
Loaded: loaded (/etc/systemd/system/blackcoin.service; enabled; vendor preset: enabled)
Active: failed (Result: core-dump) since Tue 2018-11-20 10:44:01 MST; 2h 51min ago
Process: 12272 ExecStart=/usr/bin/blackcoind -datadir=/coindaemon-rundirectory/blackcoin/ -conf=/coindaemon-rundirectory/blackcoin/blackcoin.conf -daemon (code=exited, status=0/SUCCESS)
Main PID: 12283 (code=dumped, signal=ABRT)
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Service hold-off time over, scheduling restart.
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Scheduled restart job, restart counter is at 5.
Nov 20 10:44:01 knox systemd[1]: Stopped blackcoin wallet daemon.
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Start request repeated too quickly.
Nov 20 10:44:01 knox systemd[1]: blackcoin.service: Failed with result 'core-dump'.
Nov 20 10:44:01 knox systemd[1]: Failed to start blackcoin wallet daemon.
Here is the body of the systemd service:
##################################################################
## Blackcoin Systemd service ##
##################################################################
[Unit]
Description=blackcoin wallet daemon
After=network.target
[Service]
Type=forking
User=somedude
RuntimeDirectory=blackcoind
PIDFile=/run/blackcoind/blackcoind.pid
Restart=on-failure
ExecStart=/usr/bin/blackcoind \
-datadir=/home/somedude/blackcoin/ \
-conf=/home/somedude/blackcoin/blackcoin.conf \
-daemon
ExecStop=/usr/bin/blackcoind \
-datadir=/home/somedude/blackcoin/ \
-conf=/home/somedude/blackcoin/blackcoin.conf \
stop
# Recommended hardening
# Provide a private /tmp and /var/tmp.
PrivateTmp=true
# Mount /usr, /boot/ and /etc read-only for the process.
ProtectSystem=full
# Disallow the process and all of its children to gain
# new privileges through execve().
NoNewPrivileges=true
# Use a new /dev namespace only populated with API pseudo devices
# such as /dev/null, /dev/zero and /dev/random.
PrivateDevices=true
# Deny the creation of writable and executable memory mappings.
MemoryDenyWriteExecute=true
[Install]
WantedBy=multi-user.target
And this is what blackcoin.conf contains:
rpcuser=somedude
rpcpassword=12345 (please don't rob my coins!)
# Wallets
wallet=wallet-blackcoin.dat
pid=/run/blackcoind/blackcoind.pid
rpcport=56111
port=56112
I'm going to keep testing and will post anything new that I find. Thanks for looking!
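One avenue worth testing (an assumption on my part, not a confirmed fix): hardening options such as MemoryDenyWriteExecute=true are known to abort daemons that need writable-and-executable memory mappings, so commenting them out one at a time in the [Service] section can narrow down the culprit:

```
# Temporarily relax hardening while bisecting the core dump;
# re-enable each option once the daemon starts cleanly.
#MemoryDenyWriteExecute=true
#PrivateDevices=true
#ProtectSystem=full
#PrivateTmp=true
#NoNewPrivileges=true
```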

Cannot start prometheus by using systemd

OS level: CentOS Linux release 7.4.1708
Prometheus level: 2.4.2
prometheus.service:
[Unit]
Description=Prometheus
[Service]
User=prometheus
ExecStart=/usr/local/prometheus/prometheus
[Install]
WantedBy=default.targeti
When I use systemctl start prometheus to start the prometheus service, the main process always exits by itself, and systemctl's log shows this:
● prometheus.service - Prometheus
Loaded: loaded (/etc/systemd/system/prometheus.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2018-09-25 10:43:56 CST; 6s ago
Process: 5174 ExecStart=/usr/local/prometheus/prometheus (code=exited, status=1/FAILURE)
Main PID: 5174 (code=exited, status=1/FAILURE)
Sep 25 10:43:56 devtestserver systemd[1]: Started Prometheus.
Sep 25 10:43:56 devtestserver systemd[1]: Starting Prometheus...
Sep 25 10:43:56 devtestserver prometheus[5174]: level=info ts=2018-09-25T02:43:56.736457704Z caller=main.go:238 msg="Starting Prometheus" version="(version=2.4.2, branch=HE...13b1190a0)"
Sep 25 10:43:56 devtestserver systemd[1]: prometheus.service: main process exited, code=exited, status=1/FAILURE
Sep 25 10:43:56 devtestserver systemd[1]: Unit prometheus.service entered failed state.
Sep 25 10:43:56 devtestserver systemd[1]: prometheus.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
I have no idea how to solve this problem. I use the same kind of config for node_exporter, and node_exporter starts normally. Please help. Thanks a lot.
You have not added the configuration file, i.e. prometheus.yml.
Considering the [Service] part of your prometheus.service file:
ExecStart=/usr/local/prometheus/prometheus \
--config.file /prometheus-2.26.0.linux-amd64/prometheus.yml
Here, my .yml file is in the /prometheus-2.26.0.linux-amd64/ location; yours might be different. Before running, check both paths, i.e. that your executable file is at the path given in ExecStart and that the yml file is at the path given to --config.file.
Then reload systemd and start the service:
systemctl daemon-reload
systemctl start prometheus
systemctl enable prometheus
then check the status using,
systemctl status prometheus
It should be active (running).
This should solve your problem. Let me know if it helped : )
There is an extra "i" at the end of WantedBy=default.target.
To get more details about services failing to start, try sudo journalctl -ex
My guess is it's either the extra "i" or Prometheus might not be able to parse your scrape rules or alerts files. It comes with "promtool" to check your configuration files and is installed in the same directory as prometheus. Your first step should be to try "promtool check config /path/to/prometheus.yml"
I encountered the same issue with Ubuntu 16.04. It turned out to be a permissions issue.
You should check that your user owns the directories in which you installed the binaries, and the files inside those directories.
Where is the config file located? systemd runs services from / by default, and prometheus reads ./prometheus.yml by default. Perhaps you need to add the config option to the unit file as follows.
[Unit]
Description=Prometheus
[Service]
User=prometheus
ExecStart=/usr/local/prometheus/prometheus --config.file /path/to/your/config
[Install]
WantedBy=default.target
This problem is caused by the Prometheus user not having permission on the data storage directory (here, /data):
chown -R prometheus:prometheus /data
Copy and paste this into your command line:
sudo tee /etc/systemd/system/prometheus.service<<EOF
[Unit]
Description=Prometheus
Documentation=https://prometheus.io/docs/introduction/overview/
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=prometheus
Group=prometheus
ExecReload=/bin/kill -HUP \$MAINPID
ExecStart=/usr/local/bin/prometheus \
--config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/var/lib/prometheus \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.external-url=
SyslogIdentifier=prometheus
Restart=always
[Install]
WantedBy=multi-user.target
EOF
I encountered a similar issue on Red Hat/CentOS. I solved it by temporarily running "sudo setenforce 0". You can also edit the /etc/selinux/config file and set SELINUX to disabled.
