bash script launched by systemd at boot fails with exit status 2 - bash

Hello, in preparation for using a Raspberry Pi 4 (running Ubuntu Server), I am trying to have a bash script that is kicked off on boot and relaunched if killed. I have included the steps below along with the content of the file. Any clue on the error code or why it is not working would be greatly appreciated.
Any idea on the exit code with a status of 2?
Thank you.
ubuntu@ubuntu:/etc/systemd/system$ cat prysmbeacon_altona.service
[Unit]
Description=PrysmBeacon--Altona
Wants=network.target
After=network.target
[Service]
Type=simple
DynamicUser=yes
ExecStart=/home/ubuntu/Desktop/prysm/prysm.sh beacon-chain --altona --datadir=/home/ubuntu/.eth2
WorkingDirectory=/home/ubuntu/Desktop/prysm
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
ubuntu@ubuntu:/etc/systemd/system$ systemctl daemon-reload
==== AUTHENTICATING FOR org.freedesktop.systemd1.reload-daemon ===
Authentication is required to reload the systemd state.
Authenticating as: Ubuntu (ubuntu)
Password:
==== AUTHENTICATION COMPLETE ===
ubuntu@ubuntu:/etc/systemd/system$ systemctl start prysmbeacon_altona
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'prysmbeacon_altona.service'.
Authenticating as: Ubuntu (ubuntu)
Password:
==== AUTHENTICATION COMPLETE ===
ubuntu@ubuntu:/etc/systemd/system$ systemctl status prysmbeacon_altona.service
● prysmbeacon_altona.service - PrysmBeacon--Altona
Loaded: loaded (/etc/systemd/system/prysmbeacon_altona.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2020-07-23 15:51:48 CEST; 111ms ago
Process: 3407 ExecStart=/home/ubuntu/Desktop/prysm/prysm.sh beacon-chain --altona --datadir=/home/ubuntu/.eth2 (code=exited, status=2)
Main PID: 3407 (code=exited, status=2)
ubuntu@ubuntu:/etc/systemd/system$
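Exit status 2 from bash conventionally means a syntax error or misuse of a shell builtin, which points at prysm.sh itself (or how it is invoked) rather than at systemd. A minimal sketch reproducing that status code:

```shell
# Bash exits with status 2 when a script contains a syntax error.
printf 'if true\n' > /tmp/broken.sh   # "if" with no "then"/"fi"
bash /tmp/broken.sh                    # prints a syntax-error message
echo "exit status: $?"                 # prints "exit status: 2"
```

Checking `journalctl -u prysmbeacon_altona.service` for the script's own error output, or running the ExecStart command by hand, usually pinpoints the failing line. Note also that DynamicUser=yes implies ProtectHome, so even once the script parses, the unit as shown would not be able to write to /home/ubuntu/.eth2.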

Related

Stopped Laravel queue worker

Using AWS Elastic Beanstalk.
My deploy config file is set like below:
08_queue_service_restart:
  command: "systemctl restart laravel_worker"
files:
  /opt/elasticbeanstalk/tasks/taillogs.d/laravel-logs.conf:
    content: /var/app/current/storage/logs/laravel.log
    group: root
    mode: "000755"
    owner: root
  /etc/systemd/system/laravel_worker.service:
    mode: "000755"
    owner: root
    group: root
    content: |
      # Laravel queue worker using systemd
      # ----------------------------------
      #
      # /lib/systemd/system/queue.service
      #
      # run this command to enable service:
      # systemctl enable queue.service
      [Unit]
      Description=Laravel queue worker
      [Service]
      User=nginx
      Group=nginx
      Restart=always
      ExecStart=/usr/bin/nohup /usr/bin/php /var/app/current/artisan queue:work --tries=3
      [Install]
      WantedBy=multi-user.target
And it returns errors like below:
Aug 17 04:21:34 ip-blabla systemd: laravel_worker.service: main process exited, code=exited, status=1/FAILURE
Aug 17 04:21:34 ip-blabla systemd: Unit laravel_worker.service entered failed state.
Aug 17 04:21:34 ip-blabla systemd: laravel_worker.service failed.
Aug 17 04:21:34 ip-blabla systemd: laravel_worker.service holdoff time over, scheduling restart.
Aug 17 04:21:34 ip-blabla systemd: Stopped Laravel queue worker.
Aug 17 04:21:34 ip-blabla systemd: Started Laravel queue worker.
It was working with no errors for months. But this morning it started to return errors like that. I tried rebuilding, but nothing changed.
It might have stopped working when the server restarted. You have to manually run a command in the project directory via a terminal to restart the Laravel queue worker.
For me this command worked:
nohup php artisan queue:work --daemon &
Run this command in the Laravel project root directory.
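As a side note, the nohup in the unit's ExecStart is redundant under systemd (the service manager already detaches the process from any terminal), and adding RestartSec avoids a tight restart loop when the worker keeps failing. A minimal sketch of the unit, with the paths and user taken from the question:

```ini
[Unit]
Description=Laravel queue worker
After=network.target

[Service]
User=nginx
Group=nginx
Restart=always
RestartSec=3
# nohup is unnecessary under systemd; invoke php directly
ExecStart=/usr/bin/php /var/app/current/artisan queue:work --tries=3

[Install]
WantedBy=multi-user.target
```

After changing the unit, run systemctl daemon-reload before restarting it.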

Red Hat CodeReady Containers failed to start (crc start error): .crcbundle not found

I am receiving the following error when executing the 'crc start -p .\pull-secret.txt' command:
/home/admin/.crc/cache/crc_libvirt_4.9.0.crcbundle not found, please provide the path to a valid bundle using the -b option
Debug output of 'crc setup --log-level debug' below:
INFO Checking if libvirt daemon is running
DEBU Checking if libvirtd service is running
DEBU Running 'systemctl status virtqemud.socket'
DEBU Command failed: exit status 3
DEBU stdout: * virtqemud.socket - Libvirt qemu local socket
Loaded: loaded (/usr/lib/systemd/system/virtqemud.socket; disabled; vendor preset: disabled)
Active: inactive (dead)
Listen: /run/libvirt/virtqemud-sock (Stream)
DEBU stderr:
DEBU virtqemud.socket is neither running nor listening
DEBU Running 'systemctl status libvirtd.socket'
DEBU libvirtd.socket is running
INFO Checking if systemd-networkd is running
DEBU Checking if systemd-networkd.service is running
DEBU Running 'systemctl status systemd-networkd.service'
DEBU Command failed: exit status 4
DEBU stdout:
DEBU stderr: Unit systemd-networkd.service could not be found.
DEBU systemd-networkd.service is not running
INFO Checking crc daemon systemd service
DEBU Checking crc daemon systemd service
DEBU Checking if crc-daemon.service is running
DEBU Running 'systemctl --user status crc-daemon.service'
DEBU Command failed: exit status 3
DEBU stdout: * crc-daemon.service - CodeReady Containers daemon
Loaded: loaded (/home/admin/.config/systemd/user/crc-daemon.service; static; vendor preset: enabled)
Active: inactive (dead)
DEBU stderr:
DEBU crc-daemon.service is neither running nor listening
DEBU Checking if crc-daemon.service has the expected content
INFO Checking if systemd-networkd is running
DEBU Checking if systemd-networkd.service is running
DEBU Running 'systemctl status systemd-networkd.service'
DEBU Command failed: exit status 4
DEBU stdout:
DEBU stderr: Unit systemd-networkd.service could not be found.
DEBU systemd-networkd.service is not running
I then ran the following commands:
sudo yum install qemu qemu-kvm libvirt-clients libvirt-daemon-system virtinst bridge-utils
Output:
>> Error: Unable to find a match: qemu libvirt-clients libvirt-daemon-system virtinst bridge-utils
>> [admin@localhost ~]$ systemctl status libvirtd.service
- Test systemctl
>> systemctl status libvirtd.service
crc setup output:
[admin@localhost ~]$ crc setup
INFO Checking if running as non-root
INFO Checking if running inside WSL2
.......... Details removed ..........
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
Your system is correctly setup for using CodeReady Containers, you can now run 'crc start -b $bundlename' to start the OpenShift cluster
I cannot seem to find .crcbundle file despite setup completing successfully.
Nothing found under:
#This seems to be an issue as I cannot find '.crcbundle'
[admin@localhost ~]$ tree --noreport .crc
.crc
├── bin
│   ├── crc -> /home/admin/bin/crc
│   ├── crc-admin-helper-linux
│   └── crc-driver-libvirt
├── crc-http.sock
├── crc.json
└── crc.log
OS info:
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.5"
Thanks in advance.
To resolve the problem, execute the following commands as root:
yum install qemu-kvm libvirt libvirt-daemon-kvm
systemctl start libvirtd
systemctl enable libvirtd
systemctl start virtnetworkd
systemctl enable virtnetworkd
systemctl start virtstoraged
systemctl enable virtstoraged
Then, in a non-root user session, you can execute the following commands:
crc setup
crc start -p pull-secret.txt

systemd: two processes are running together

I use a simple unit file:
[Unit]
Description = description here
After = multi-user.target
[Service]
Type=simple
ExecStart = /usr/lib/name_deamon/CP_linux/CP_linux
Restart = on-failure
TimeoutStopSec = infinity
[Install]
WantedBy=custom.target
Then, when typing
systemctl --user status name.service
I get two identical processes running in parallel:
● name.service - description here
Loaded: loaded (/home/ubuntu/.config/systemd/user/name.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-11-02 11:03:47 CET; 13min ago
Main PID: 1625 (CP_linux_test.e)
Tasks: 2 (limit: 4384)
Memory: 14.2M
CPU: 122ms
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/name.service
├─1625 /usr/lib/name_deamon/CP_linux_test/CP_linux_test.exe
└─1627 /usr/lib/name_deamon/CP_linux_test/CP_linux_test.exe
Since I have one ExecStart, I don't understand why I get two processes running in parallel.
The main process (PID 1625) is most probably forking another process (PID 1627).
To check that the parent process of 1627 is 1625, run: ps -o ppid= -p 1627
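The parent-child check can be sketched generically (here `sleep` stands in for the real daemon):

```shell
# Start a background child, then ask ps for its parent PID; it should
# match the PID of the shell that forked it.
sleep 30 &
child=$!
parent=$(ps -o ppid= -p "$child" | tr -d ' ')
echo "child $child was forked by $parent (this shell is $$)"
kill "$child"
```

If 1627's parent turns out to be 1625, the second process is the program's own doing, not systemd starting the service twice.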

systemd service not being triggered by its timer unit

Here is my service unit definition
[Unit]
Description=My Service
[Service]
ExecStart=/bin/bash -lc /usr/local/bin//myservice
# ExecStop=/bin/kill -15 $MAINPID
EnvironmentFile=/etc/myservice/config
User=myuser
Group=mygroup
and its timer unit file
[Unit]
Description=Timer for myservice
[Timer]
Unit=myservice.service
OnCalendar=*-*-* 10:33:00
[Install]
WantedBy=timers.target
I have tentatively set the OnCalendar to *-*-* 10:33:00 (followed by a sudo systemctl daemon-reload), but as I was watching my machine, I didn't see the service firing. I had also set it for 5AM, but this morning I saw no evidence of execution.
When I perform a manual sudo systemctl start myservice it works as expected.
What might be preventing the service from executing according to its timer schedule?
You did not start the timer.
sudo systemctl start myservice.timer
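A sketch of the usual sequence: enable --now both starts the timer and makes it persist across reboots, and list-timers confirms the next scheduled run. systemd-analyze can also sanity-check the OnCalendar expression:

```shell
# Start the timer now and enable it at boot
sudo systemctl enable --now myservice.timer

# Confirm the timer is loaded and see when it will next fire
systemctl list-timers myservice.timer

# Verify the calendar expression parses and shows the expected next elapse
systemd-analyze calendar "*-*-* 10:33:00"
```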

NFS Vagrant on Fedora 22

I'm trying to run Vagrant using libvirt as my provider. Using rsync is unbearable since I'm working with a huge shared directory, but vagrant does succeed when the nfs setting is commented out and the standard rsync config is set.
config.vm.synced_folder ".", "/vagrant", mount_options: ['dmode=777','fmode=777']
Vagrant hangs forever on this step here after running vagrant up
==> default: Mounting NFS shared folders...
In my Vagrantfile I have this uncommented and the rsync config commented out, which turns NFS on.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
When Vagrant is running it echos this out to the terminal.
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Redirecting to /bin/systemctl start nfs-server.service
Job for nfs-server.service failed. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
Results of systemctl status nfs-server.service
dillon@localhost ~ $ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2015-05-29 22:24:47 PDT; 22s ago
Process: 3044 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=1/FAILURE)
Process: 3040 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3044 (code=exited, status=1/FAILURE)
May 29 22:24:47 localhost.sulfur systemd[1]: Starting NFS server and services...
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: unable to set any sockets for nfsd
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service: main process exited, code=exited, status=1/FAILURE
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS server and services.
May 29 22:24:47 localhost.sulfur systemd[1]: Unit nfs-server.service entered failed state.
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service failed.
The journalctl -xe log has a ton of stuff in it so I won't post all of it here, but there are some lines in bold red.
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.statd[3028]: failed to create RPC listeners, exiting
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Before I ran vagrant up I looked to see if there were any process binding to port 98 with netstat -tulpn and did not see anything and in fact while vagrant is hanging I ran netstat -tulpn again to see what was binding to port 98 and didn't see anything. (checked for both current user and root)
UPDATE: Haven't gotten any responses.
I wasn't able to figure out the current issue I'm having. I tried using lxc instead, but it gets stuck on booting. I'd also prefer not to use VirtualBox, but the issue seems to lie with NFS, not the hypervisor. I'm going to try the rsync-auto feature Vagrant provides, but I'd prefer to get NFS working.
Looks like when using libvirt the user is given control over nfs and rpcbind, and Vagrant doesn't even try to touch those things like I had assumed it did. Running these solved my issue:
service rpcbind start
service nfs stop
service nfs start
The systemd unit dependencies of nfs-server.service contain rpcbind.target but not rpcbind.service.
One simple solution is to create a file /etc/systemd/system/nfs-server.service containing:
.include /usr/lib/systemd/system/nfs-server.service
[Unit]
Requires=rpcbind.service
After=rpcbind.service
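The .include directive is deprecated in current systemd versions; a drop-in directory achieves the same merge without copying or replacing the stock unit (path follows the systemd.unit drop-in convention):

```ini
# /etc/systemd/system/nfs-server.service.d/rpcbind.conf
# Drop-in fragments merge with the packaged unit file.
[Unit]
Requires=rpcbind.service
After=rpcbind.service
```

After creating the drop-in, run systemctl daemon-reload so the change takes effect.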
On CentOS 7, all I needed to do
was install the missing rpcbind, like this:
yum -y install rpcbind
systemctl enable rpcbind
systemctl start rpcbind
systemctl restart nfs-server
Took me over an hour to find out and try this though :)
Michel
I've had issues with NFS mounts using both the libvirt and the VirtualBox provider on Fedora 22. After a lot of gnashing of teeth, I managed to figure out that it was a firewall issue. Fedora seems to ship with a firewalld service by default. Stopping that service - sudo systemctl stop firewalld - did the trick for me.
Of course, ideally you would configure this firewall rather than disable it entirely, but I don't know how to do that.
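For reference, firewalld can be told to allow the NFS-related traffic instead of being stopped outright. These predefined service names ship with stock firewalld, though which zone is active varies by setup:

```shell
# Permanently allow NFS, the portmapper, and mountd, then reload rules
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --reload
```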
