Red Hat CodeReady Containers failed to start (crc start error): .crcbundle not found - RHEL 8

I am receiving the following error when executing the 'crc start -p .\pull-secret.txt' command:
/home/admin/.crc/cache/crc_libvirt_4.9.0.crcbundle not found, please provide the path to a valid bundle using the -b option
Debug output from 'crc setup --log-level debug' below:
INFO Checking if libvirt daemon is running
DEBU Checking if libvirtd service is running
DEBU Running 'systemctl status virtqemud.socket'
DEBU Command failed: exit status 3
DEBU stdout: * virtqemud.socket - Libvirt qemu local socket
Loaded: loaded (/usr/lib/systemd/system/virtqemud.socket; disabled; vendor preset: disabled)
Active: inactive (dead)
Listen: /run/libvirt/virtqemud-sock (Stream)
DEBU stderr:
DEBU virtqemud.socket is neither running nor listening
DEBU Running 'systemctl status libvirtd.socket'
DEBU libvirtd.socket is running
INFO Checking if systemd-networkd is running
DEBU Checking if systemd-networkd.service is running
DEBU Running 'systemctl status systemd-networkd.service'
DEBU Command failed: exit status 4
DEBU stdout:
DEBU stderr: Unit systemd-networkd.service could not be found.
DEBU systemd-networkd.service is not running
INFO Checking crc daemon systemd service
DEBU Checking crc daemon systemd service
DEBU Checking if crc-daemon.service is running
DEBU Running 'systemctl --user status crc-daemon.service'
DEBU Command failed: exit status 3
DEBU stdout: * crc-daemon.service - CodeReady Containers daemon
Loaded: loaded (/home/admin/.config/systemd/user/crc-daemon.service; static; vendor preset: enabled)
Active: inactive (dead)
DEBU stderr:
DEBU crc-daemon.service is neither running nor listening
DEBU Checking if crc-daemon.service has the expected content
INFO Checking if systemd-networkd is running
DEBU Checking if systemd-networkd.service is running
DEBU Running 'systemctl status systemd-networkd.service'
DEBU Command failed: exit status 4
DEBU stdout:
DEBU stderr: Unit systemd-networkd.service could not be found.
DEBU systemd-networkd.service is not running
I then ran the following command:
sudo yum install qemu qemu-kvm libvirt-clients libvirt-daemon-system virtinst bridge-utils
Output:
Error: Unable to find a match: qemu libvirt-clients libvirt-daemon-system virtinst bridge-utils
I also tested systemctl:
[admin@localhost ~]$ systemctl status libvirtd.service
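The 'Unable to find a match' error is most likely because libvirt-clients, libvirt-daemon-system, virtinst and bridge-utils are Debian/Ubuntu package names. A hedged guess at the RHEL 8 equivalents (these are the usual RHEL 8 AppStream package names; adjust as needed):
sudo yum install qemu-kvm libvirt virt-install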
crc setup output:
[admin@localhost ~]$ crc setup
INFO Checking if running as non-root
INFO Checking if running inside WSL2
.......... Details removed ..........
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
Your system is correctly setup for using CodeReady Containers, you can now run 'crc start -b $bundlename' to start the OpenShift cluster
I cannot seem to find the .crcbundle file despite setup completing successfully.
Nothing is found under ~/.crc:
# This seems to be an issue, as I cannot find '.crcbundle'
[admin@localhost ~]$ tree --noreport .crc
.crc
├── bin
│   ├── crc -> /home/admin/bin/crc
│   ├── crc-admin-helper-linux
│   └── crc-driver-libvirt
├── crc-http.sock
├── crc.json
└── crc.log
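If the bundle has been downloaded manually (newer crc releases fetch it during crc setup, while older ones expect a manual download from the Red Hat console), it can be passed explicitly with -b. The path below is only an example:
crc start -b ~/Downloads/crc_libvirt_4.9.0.crcbundle -p pull-secret.txt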
OS info:
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.5"
Thanks in advance.

To resolve the problem, execute the following commands as the root user:
yum install qemu-kvm libvirt libvirt-daemon-kvm
systemctl start libvirtd
systemctl enable libvirtd
systemctl start virtnetworkd
systemctl enable virtnetworkd
systemctl start virtstoraged
systemctl enable virtstoraged
Then, from a non-root user session, execute the following commands:
crc setup
crc start -p pull-secret.txt
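As a quick sanity check before starting, libvirt's state can be verified (virsh comes from the libvirt-client package on RHEL):
systemctl is-active libvirtd
sudo virsh net-list --all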

Related

systemd: two services are running together

I use a simple service file:
[Unit]
Description = description here
After = multi-user.target
[Service]
Type=simple
ExecStart = /usr/lib/name_deamon/CP_linux/CP_linux
Restart = on-failure
TimeoutStopSec = infinity
[Install]
WantedBy=custom.target
Then when typing
systemctl --user status name.service
I get two identical processes running in parallel:
● name.service - description here
Loaded: loaded (/home/ubuntu/.config/systemd/user/name.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-11-02 11:03:47 CET; 13min ago
Main PID: 1625 (CP_linux_test.e)
Tasks: 2 (limit: 4384)
Memory: 14.2M
CPU: 122ms
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/name.service
├─1625 /usr/lib/name_deamon/CP_linux_test/CP_linux_test.exe
└─1627 /usr/lib/name_deamon/CP_linux_test/CP_linux_test.exe
Since I have one ExecStart, I don't understand why I get two processes running in parallel.
The main process (PID 1625) is most probably forking another process (PID 1627).
To check that the parent process of 1627 is 1625: ps -o ppid= -p 1627
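A process tree view also makes the relationship visible at a glance; a minimal check using the PIDs from the question (pstree is in the psmisc package):
pstree -p 1625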

How can I access Elasticsearch from a static IP?

I created an Ubuntu virtual machine using Hyper-V and set up Elasticsearch in it. How can I run Elasticsearch on a static IP?
sudo apt-get install openjdk-8-jdk
sudo apt-get install nginx
sudo systemctl enable nginx
sudo apt-get install apt-transport-https
sudo apt-get update
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-amd64.deb
Then I installed Elasticsearch and edited its configuration:
sudo nano /etc/elasticsearch/elasticsearch.yml
I am able to run Elasticsearch on the local machine:
# ---------- Network ----------
#
# Set the bind address to a specific IP
network.host: localhost
http.port: 9200
When I go to the browser and connect to localhost:9200, it successfully finds Elasticsearch.
I want to access Elasticsearch through a static IP instead. For example: 192.168.1.150:9200
I tried
# ---------- Network ----------
#
# Set the bind address to a specific IP
network.host: 192.168.1.150
http.port: 9200
I run 'sudo systemctl start elasticsearch.service'
I get this error:
Job for elasticsearch.service failed because the control process exited with error code
See "systemctl status elasticsearch.service" and "journalctl -xe" for details
Is there a way to accomplish this? I want to be able to run Elasticsearch, Logstash, and Kibana on the static IP.
journalctl -xe
-- An ExecStart= process belonging to unit elasticsearch.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Nov 20 14:22:52 ateet-Virtual-Machine systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit elasticsearch.service has entered the 'failed' state with result 'exit-code'.
Nov 20 14:22:52 ateet-Virtual-Machine systemd[1]: Failed to start Elasticsearch.
-- Subject: A start job for unit elasticsearch.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit elasticsearch.service has finished with a failure.
--
-- The job identifier is 9061 and the job result is failed.
Nov 20 14:22:52 ateet-Virtual-Machine sudo[45417]: pam_unix(sudo:session): session closed for user root
elasticsearch.log
org.elasticsearch.transport.BindTransportException: Failed to bind to 192.168.1.150:[9300-9400]; nested: BindException[Cannot assign requested address];
Thanks
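The BindException ('Cannot assign requested address') usually means the address is not actually assigned to any interface inside the VM. A minimal sketch of how to check, plus the commonly used fallback of binding to all interfaces (assuming a single-node setup; 0.0.0.0 exposes Elasticsearch to the whole network, so restrict access with a firewall):
ip addr show   # confirm whether 192.168.1.150 is on an interface
# in /etc/elasticsearch/elasticsearch.yml:
#   network.host: 0.0.0.0
#   discovery.type: single-node
sudo systemctl restart elasticsearch.service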

Bash script in systemd: exit status 2 when launched at start

Hello, in preparation for using an RPi 4 (running Ubuntu Server), I am trying to have a bash script that is kicked off on boot and relaunched if killed. I have included the steps below along with the content of the file. Any clue on the error code, or why it is not working, would be greatly appreciated.
Any idea on the exit code with a status of 2?
Thank you.
ubuntu@ubuntu:/etc/systemd/system$ cat prysmbeacon_altona.service
[Unit]
Description=PrysmBeacon--Altona
Wants=network.target
After=network.target
[Service]
Type=simple
DynamicUser=yes
ExecStart=/home/ubuntu/Desktop/prysm/prysm.sh beacon-chain --altona --datadir=/home/ubuntu/.eth2
WorkingDirectory=/home/ubuntu/Desktop/prysm
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
ubuntu@ubuntu:/etc/systemd/system$ systemctl daemon-reload
==== AUTHENTICATING FOR org.freedesktop.systemd1.reload-daemon ===
Authentication is required to reload the systemd state.
Authenticating as: Ubuntu (ubuntu)
Password:
==== AUTHENTICATION COMPLETE ===
ubuntu@ubuntu:/etc/systemd/system$ systemctl start prysmbeacon_altona
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'prysmbeacon_altona.service'.
Authenticating as: Ubuntu (ubuntu)
Password:
==== AUTHENTICATION COMPLETE ===
ubuntu@ubuntu:/etc/systemd/system$ systemctl status prysmbeacon_altona.service
● prysmbeacon_altona.service - PrysmBeacon--Altona
Loaded: loaded (/etc/systemd/system/prysmbeacon_altona.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2020-07-23 15:51:48 CEST; 111ms ago
Process: 3407 ExecStart=/home/ubuntu/Desktop/prysm/prysm.sh beacon-chain --altona --datadir=/home/ubuntu/.eth2 (code=exited, status=2)
Main PID: 3407 (code=exited, status=2)
ubuntu@ubuntu:/etc/systemd/system$
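Exit status 2 is whatever the script itself returned, so the script's error output in the journal is the first place to look. One hedged suspicion: DynamicUser=yes runs the service as a throwaway user (and implies ProtectHome=read-only), so writes to --datadir=/home/ubuntu/.eth2 would be refused. A minimal check, using the unit name from the question:
journalctl -u prysmbeacon_altona.service -n 50 --no-pager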

Fedora 24 Vagrant issue. mount.nfs access denied by server

I started using Fedora 24 last year on my study/work computer. This is the first time I have run into an issue I cannot figure out within a reasonable amount of time.
We need to use Vagrant for a project, and I'm trying to get it running on my computer. The command vagrant up fails at mounting NFS. Here's the output after the command:
Bringing machine 'default' up with 'libvirt' provider...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
==> default: Creating shared folders metadata...
==> default: Exporting NFS shared folders...
==> default: Preparing to edit /etc/exports. Administrator privileges will be required...
[sudo] password for feilz:
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/etc/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Wed 2017-02-15 15:17:58 EET; 19h ago
Main PID: 16889 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 512)
CGroup: /system.slice/nfs-server.service
Feb 15 15:17:58 feilz systemd[1]: Starting NFS server and services...
Feb 15 15:17:58 feilz systemd[1]: Started NFS server and services.
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o 'vers=4' 192.168.121.1:'/home/feilz/env/debian64' /vagrant
Stdout from the command:
Stderr from the command:
stdin: is not a tty
mount.nfs: access denied by server while mounting 192.168.121.1:/home/feilz/env/debian64
My Vagrantfile looks like: (I skipped the commented out lines)
Vagrant.configure(2) do |config|
  config.vm.box = "debian/jessie64"
  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "qemu"
  end
end
I can run the vagrant ssh command to log in, and write the command
sudo mount -o 'vers=4' 192.168.121.1:'/home/feilz/env/debian64' /vagrant
inside the VM to try again. Then the output becomes
mount.nfs: access denied by server while mounting 192.168.121.1:/home/feilz/env/debian64
I've gone through loads of webpages. I fixed missing Ruby gems (nokogiri and libffi). I tried modifying the /etc/exports file; it doesn't work, and it gets reset after I run vagrant halt / vagrant up.
I have installed the vagrant plugin vagrant-libvirt
What haven't I tried yet that would allow me to use NFS file sharing with Vagrant?
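Before retrying, it is worth inspecting what the NFS server is actually exporting, which usually shows whether Vagrant's /etc/exports edit took effect. A minimal check from the host (showmount comes from nfs-utils; 192.168.121.1 is the libvirt host address from the error above):
sudo exportfs -v
showmount -e 192.168.121.1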

NFS Vagrant on Fedora 22

I'm trying to run Vagrant using libvirt as my provider. Vagrant does succeed when the NFS setting is commented out and the standard rsync config is set, but using rsync is unbearable since I'm working with a huge shared directory.
config.vm.synced_folder ".", "/vagrant", mount_options: ['dmode=777','fmode=777']
Vagrant hangs forever on this step after running vagrant up:
==> default: Mounting NFS shared folders...
In my Vagrantfile I have this uncommented and the rsync config commented out, which turns NFS on.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
When Vagrant is running it echoes this out to the terminal.
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Redirecting to /bin/systemctl start nfs-server.service
Job for nfs-server.service failed. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
Results of systemctl status nfs-server.service
dillon@localhost ~ $ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2015-05-29 22:24:47 PDT; 22s ago
Process: 3044 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=1/FAILURE)
Process: 3040 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3044 (code=exited, status=1/FAILURE)
May 29 22:24:47 localhost.sulfur systemd[1]: Starting NFS server and services...
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: unable to set any sockets for nfsd
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service: main process exited, code=exited, status=1/FAILURE
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS server and services.
May 29 22:24:47 localhost.sulfur systemd[1]: Unit nfs-server.service entered failed state.
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service failed.
The journalctl -xe log has a ton of stuff in it, so I won't post all of it here, but there are some lines in bold red.
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.statd[3028]: failed to create RPC listeners, exiting
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Before I ran vagrant up, I looked with netstat -tulpn to see if any process was binding to port 98 and did not see anything; while Vagrant was hanging I ran netstat -tulpn again and still didn't see anything (checked for both the current user and root).
UPDATE: I haven't gotten any responses.
I wasn't able to figure out the current issue I'm having. I tried using lxc instead, but it gets stuck on booting. I'd also prefer not to use VirtualBox, but the issue seems to lie within NFS, not the hypervisor. I'm going to try the rsync-auto feature Vagrant provides, but I'd prefer to get NFS working.
Looks like when using libvirt the user is given control over nfs and rpcbind, and Vagrant doesn't even try to touch those things like I had assumed it did. Running these solved my issue:
service rpcbind start
service nfs stop
service nfs start
The systemd unit dependencies of nfs-server.service contain rpcbind.target but not rpcbind.service.
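One way to confirm which dependencies the unit actually declares:
systemctl show -p Requires -p After nfs-server.service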
One simple solution is to create a file /etc/systemd/system/nfs-server.service containing:
.include /usr/lib/systemd/system/nfs-server.service
[Unit]
Requires=rpcbind.service
After=rpcbind.service
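On current systemd versions the .include directive is deprecated; the same override is usually done with a drop-in instead. A sketch (the drop-in filename rpcbind.conf is arbitrary):
sudo mkdir -p /etc/systemd/system/nfs-server.service.d
printf '[Unit]\nRequires=rpcbind.service\nAfter=rpcbind.service\n' | sudo tee /etc/systemd/system/nfs-server.service.d/rpcbind.conf
sudo systemctl daemon-reload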
On CentOS 7, all I needed to do
was install the missing rpcbind, like this:
yum -y install rpcbind
systemctl enable rpcbind
systemctl start rpcbind
systemctl restart nfs-server
Took me over an hour to find out and try this though :)
Michel
I've had issues with NFS mounts using both the libvirt and the VirtualBox provider on Fedora 22. After a lot of gnashing of teeth, I managed to figure out that it was a firewall issue. Fedora seems to ship with a firewalld service by default. Stopping that service - sudo systemctl stop firewalld - did the trick for me.
Of course, ideally you would configure this firewall rather than disable it entirely, but I don't know how to do that.
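For what it's worth, a hedged sketch of allowing NFS through firewalld instead of stopping it (nfs, rpc-bind and mountd are the standard firewalld service names):
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --reload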
