Problem installing snapd on Oracle Linux Server 7.6 (based on RHEL 7.6) using yum

The first step is: sudo yum install snapd
This seems to work fine: it downloads the dependencies and the setup completes.
Installed version: snapd.x86_64 0:2.45-1.el7
The second step is: sudo systemctl enable --now snapd.socket
This gives the output: Created symlink from /etc/systemd/system/sockets.target.wants/snapd.socket to /usr/lib/systemd/system/snapd.socket.
Now checking the status with sudo systemctl status snapd gives:
● snapd.service - Snap Daemon
Loaded: loaded (/usr/lib/systemd/system/snapd.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2020-12-07 11:17:28 IST; 16min ago
Main PID: 5726 (code=exited, status=1/FAILURE)
systemd[1]: snapd.service holdoff time over, scheduling restart.
systemd[1]: Stopped Snap Daemon.
systemd[1]: start request repeated too quickly for snapd.service
systemd[1]: Failed to start Snap Daemon.
systemd[1]: Unit snapd.service entered failed state.
systemd[1]: Triggering OnFailure= dependencies of snapd.service.
systemd[1]: snapd.service failed.
systemd[1]: start request repeated too quickly for snapd.service
systemd[1]: Failed to start Snap Daemon.
systemd[1]: snapd.service failed.
Possible solutions tried: 1) reinstalling after a full removal, 2) enabling the socket and the service again after a reboot.
journalctl -u snapd.service output:
-- Logs begin at Wed 2020-12-09 11:21:36 IST, end at Thu 2020-12-17 12:40:16 IST. --
Dec 17 12:36:45 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:36:45 whf00jfw snapd[4639]: AppArmor status: apparmor not enabled
Dec 17 12:36:45 whf00jfw snapd[4639]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:36:46 whf00jfw snapd[4639]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:36:46 whf00jfw snapd[4639]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:36:46 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:36:46 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service failed.
[... the identical start/fail cycle repeats with new PIDs until systemd gives up ...]
Dec 17 12:40:16 whf00jfw systemd[1]: start request repeated too quickly for snapd.service
Dec 17 12:40:16 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service failed.

It appears to be an issue with strict confinement not being supported on operating systems that don't use AppArmor (Oracle Linux, CentOS, RHEL). A few extra steps are needed to make it work. Did you create the symbolic link?
sudo ln -s /var/lib/snapd/snap /snap
This worked for me:
sudo yum install snapd
sudo ln -s /var/lib/snapd/snap /snap
sudo systemctl enable --now snapd.socket
sudo systemctl restart snapd
<restart session>
sudo snap install firefox
sudo snap install --classic nano
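Put together, the working sequence above as one script. This is a hedged sketch: the package name and paths are taken from the answer, so verify them against the snapd documentation for your release.

```shell
# snapd recovery on an EL7-family host without AppArmor.
sudo yum install -y snapd
# snapd expects /snap to point at its mount directory; create the
# symlink only if it does not already exist:
[ -e /snap ] || sudo ln -s /var/lib/snapd/snap /snap
sudo systemctl enable --now snapd.socket
sudo systemctl restart snapd.service
# Log out and back in so the snap paths take effect, then e.g.:
#   sudo snap install --classic nano
```

The `[ -e /snap ] ||` guard makes the script safe to re-run: `ln -s` would otherwise fail on the second invocation because the link already exists.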

Related

ElasticSearch is being constantly killed

I am tearing my hair out trying to figure out why Elasticsearch is not starting. These are my first days with the ES stack, so I am completely helpless.
I am running
sudo systemctl start elasticsearch
and get error saying that
Job for elasticsearch.service failed because a fatal signal was
delivered to the control process. See "systemctl status
elasticsearch.service" and "journalctl -xe" for details.
When I run journalctl -xe, I see lines of messages that are completely unclear to me:
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered blocking state
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered disabled state
Apr 21 19:16:24 my-pc-15IKB kernel: device veth19babe9 entered promiscuous mode
Apr 21 19:16:24 my-pc-15IKB charon[1828]: 10[KNL] interface veth19babe9 activated
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.2920] manager: (veth8fd9f3d): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.2934] manager: (veth19babe9): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered blocking state
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered forwarding state
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: Using default interface naming scheme 'v245'.
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: veth19babe9: Could not generate persistent MAC: No data available
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 21 19:16:24 my-pc-15IKB dockerd[1804]: time="2021-04-21T19:16:24.298236681+03:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using def>
Apr 21 19:16:24 my-pc-15IKB dockerd[1804]: time="2021-04-21T19:16:24.298287953+03:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 200>
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: Using default interface naming scheme 'v245'.
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: veth8fd9f3d: Could not generate persistent MAC: No data available
Apr 21 19:16:24 my-pc-15IKB containerd[851]: time="2021-04-21T19:16:24.328926349+03:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.cont>
Apr 21 19:16:24 my-pc-15IKB charon[1828]: 11[KNL] interface veth8fd9f3d deleted
Apr 21 19:16:24 my-pc-15IKB kernel: eth0: renamed from veth8fd9f3d
Apr 21 19:16:24 my-pc-15IKB kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth19babe9: link becomes ready
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.8637] device (veth19babe9): carrier: link connected
Apr 21 19:16:24 my-pc-15IKB gnome-shell[5088]: Removing a network device that was not added
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: Joining mDNS multicast group on interface veth19babe9.IPv6 with address fe80::a0c2:a3ff:feb8:587a.
Apr 21 19:16:26 my-pc-15IKB charon[1828]: 12[KNL] fe80::a0c2:a3ff:feb8:587a appeared on veth19babe9
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: New relevant interface veth19babe9.IPv6 for mDNS.
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: Registering new address record for fe80::a0c2:a3ff:feb8:587a on veth19babe9.*.
My elasticsearch.yml file looks like this:
cluster.name: petlon-app
node.name: my-app-node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
http.port: 9200
discovery.type: single-node
Could you please give a hint as to what this means?
UPDATE:
The output of sudo journalctl -u elasticsearch.service
-- Logs begin at Thu 2021-03-11 13:10:55 MSK, end at Wed 2021-04-21 21:11:37 MSK. --
Mar 16 21:23:20 my-pc systemd[1]: Starting Elasticsearch...
Mar 16 21:23:41 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Mar 16 21:23:41 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Mar 16 21:23:41 my-pc systemd[1]: Failed to start Elasticsearch.
-- Reboot --
Apr 11 15:55:31 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 15:55:46 my-pc systemd[1]: Started Elasticsearch.
Apr 11 15:56:41 my-pc systemd[1]: Stopping Elasticsearch...
Apr 11 15:56:41 my-pc systemd[1]: elasticsearch.service: Succeeded.
Apr 11 15:56:41 my-pc systemd[1]: Stopped Elasticsearch.
Apr 11 15:56:41 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 15:57:01 my-pc systemd[1]: Started Elasticsearch.
Apr 11 16:11:06 my-pc systemd[1]: Stopping Elasticsearch...
Apr 11 16:11:07 my-pc systemd[1]: elasticsearch.service: Succeeded.
Apr 11 16:11:07 my-pc systemd[1]: Stopped Elasticsearch.
-- Reboot --
Apr 11 16:12:53 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 16:13:13 my-pc systemd[1]: Started Elasticsearch.
Apr 11 18:51:08 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 18:51:08 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
[... the same "Starting" / "code=killed, status=9/KILL" / "Failed" cycle repeats eight more times between 21:31 and 21:49 ...]
Apr 11 21:49:32 my-pc systemd[1]: Starting Elasticsearch...
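A main process that exits with code=killed, status=9/KILL during startup is often the kernel OOM killer ending the JVM before it finishes booting, since the default Elasticsearch heap is large. A hedged diagnostic sketch; the heap override file name is an assumption based on the usual packaged layout, not something from this question:

```shell
# Check whether the kernel OOM killer ended a Java process recently
# (may need root; '|| true' keeps the script going if nothing matches):
dmesg | grep -i -E 'killed process|out of memory' || true
# See how much memory is actually available:
free -h
# If memory is tight, cap the heap via a jvm.options override, e.g. in
# /etc/elasticsearch/jvm.options.d/heap.options (illustrative values):
#   -Xms1g
#   -Xmx1g
```

If the grep shows the OOM killer reaping the Elasticsearch PID, lowering the heap (or adding RAM/swap) is the usual remedy.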

Elasticsearch won't start and no logs centOS

Hi, after downloading the latest RPM for CentOS and installing it for the first time, I am getting this error in the logs:
Jun 22 09:47:31 ssd316r.simpleservers.co.uk systemd[1]: Starting Elasticsearch...
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd-entrypoint[2501]: ERROR: Temporary file directory [/usr/share/elasticsearch/tmp] does not exist or is not accessible
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: elasticsearch.service: main process exited, code=exited, status=78/n/a
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: Failed to start Elasticsearch.
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: Unit elasticsearch.service entered failed state.
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: elasticsearch.service failed.
The error is due to this line in the log:
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd-entrypoint[2501]:
ERROR: Temporary file directory [/usr/share/elasticsearch/tmp] does
not exist or is not accessible
Can you check whether /usr/share/elasticsearch/tmp is present on your server? If not, please create the folder at that location and make sure your Elasticsearch process has write access to it.
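A minimal sketch of that fix. The path comes from the error message; the ownership assumes the default elasticsearch service user that the RPM creates:

```shell
# Recreate the temp directory named in the error, hand it to the
# service user, and restart:
sudo mkdir -p /usr/share/elasticsearch/tmp
sudo chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp
sudo chmod 755 /usr/share/elasticsearch/tmp
sudo systemctl restart elasticsearch
```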

Kibana failed to start

Elasticsearch is working with no issues on http://localhost:9200, and the operating system is Ubuntu 18.04.
Here is the error log for Kibana:
root@syed-MS-7B17:/var/log# journalctl -fu kibana.service
-- Logs begin at Sat 2020-01-04 18:30:58 IST. --
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: {"type":"log","#timestamp":"2020-04-03T14:52:49Z","tags":["fatal","root"],"pid":7165,"message":"{ Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601\n at Server.setupListenHandle [as _listen2] (net.js:1263:19)\n at listenInCluster (net.js:1328:12)\n at GetAddrInfoReqWrap.doListen (net.js:1461:7)\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:61:10)\n code: 'EADDRNOTAVAIL',\n errno: 'EADDRNOTAVAIL',\n syscall: 'listen',\n address: '7.0.0.1',\n port: 5601 }"}
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: FATAL Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Scheduled restart job, restart counter is at 2.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Stopped Kibana.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Start request repeated too quickly.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Failed to start Kibana.
I resolved it myself after checking the /etc/hosts file.
It had been edited by mistake to read:
7.0.0.1 localhost
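For reference, the fix is restoring the standard loopback entry; Kibana's default listen address resolves to localhost, so this line must map to 127.0.0.1:

```shell
# /etc/hosts — correct loopback entry (7.0.0.1 is not a local address,
# hence the EADDRNOTAVAIL when Kibana tries to listen on it):
127.0.0.1   localhost
```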

I moved Elasticsearch to a new folder and set the new path in the elasticsearch.yml file, but it's giving an error

This is the error I am getting:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-07-11 08:23:29 UTC; 1h 29min ago
Docs: http://www.elastic.co
Process: 1579 ExecStart=/usr/local/elasticsearch/bin/elasticsearch (code=exited, status=78)
Main PID: 1579 (code=exited, status=78)
This is the log output from the 'journalctl -u elasticsearch.service' command:
Jul 11 06:06:26 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 06:08:28 vyakar-stage-elastic systemd[1]: Stopping Elasticsearch...
Jul 11 06:08:28 vyakar-stage-elastic systemd[1]: Stopped Elasticsearch.
Jul 11 06:34:49 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 06:35:09 vyakar-stage-elastic systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/n/a
Jul 11 06:35:09 vyakar-stage-elastic systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
[... the same "Started" / "Main process exited, code=exited, status=78/n/a" / "Failed" cycle repeats three more times ...]
Jul 11 07:46:36 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,490][WARN ][o.e.b.JNANatives ] [fmcn] Unable to lock JVM Memory: error=12, r
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,501][WARN ][o.e.b.JNANatives ] [fmcn] This can result in part of the JVM bei
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,503][WARN ][o.e.b.JNANatives ] [fmcn] Increase RLIMIT_MEMLOCK, soft limit: 1
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,504][WARN ][o.e.b.JNANatives ] [fmcn] These can be adjusted by modifying /et
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: # allow user 'elasticsearch' mlockall

Configure kibana with SSL

I want to configure Kibana so that I can access it over HTTPS.
I made the following changes in the Kibana config file (/etc/kibana/kibana.yml):
server.host: 0.0.0.0
server.ssl.enabled: true
server.ssl.key: /etc/elasticsearch/privkey.pem    # using the same SSL key that I created for Elasticsearch
server.ssl.certificate: /etc/elasticsearch/cert.pem    # using the same SSL cert that I created for Elasticsearch
elasticsearch.url: https://127.0.0.1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
When I start/restart Kibana, it gives me the error below:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Wed 2019-06-05 14:20:12 UTC; 382ms ago
Process: 32505 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 32505 (code=exited, status=1/FAILURE)
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Failed with result 'exit-code'.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Jun 05 14:20:12 mts-elk-test systemd[1]: Stopped Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Start request repeated too quickly.
Jun 05 14:20:12 mts-elk-test systemd[1]: Failed to start Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Failed with result 'start-limit-hit'.
root@mts-elk-test:/home/ronak# vi /etc/kibana/kibana.yml
I found the solution: it was a file-permission problem.
I copied the cert.pem and privkey.pem files from the elasticsearch directory to the kibana directory and changed their owner to the kibana user:
chown kibana:kibana /etc/kibana/cert.pem
chown kibana:kibana /etc/kibana/privkey.pem
Changed path in kibana.yml file:
server.ssl.key: /etc/kibana/privkey.pem
server.ssl.certificate: /etc/kibana/cert.pem
Restart Kibana: service kibana restart
And it worked!
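Put together, a sketch of that fix. The paths are as in the answer above; the chmod on the private key is an extra hardening step I've added, not part of the original:

```shell
# Copy the PEM files to where Kibana can own them:
sudo cp /etc/elasticsearch/cert.pem /etc/elasticsearch/privkey.pem /etc/kibana/
sudo chown kibana:kibana /etc/kibana/cert.pem /etc/kibana/privkey.pem
# Restrict the private key to its owner (assumption: good practice,
# not from the original answer):
sudo chmod 600 /etc/kibana/privkey.pem
# Point kibana.yml at the copies:
#   server.ssl.key: /etc/kibana/privkey.pem
#   server.ssl.certificate: /etc/kibana/cert.pem
sudo service kibana restart
```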
