Varnish crashes under overload - caching

I'm using Varnish 4.1.1. In our system, after a while, Varnish suddenly crashes, iowait rises sharply, and we have to restart Varnish to fix it.
This is the journalctl output after Varnish crashes (shown below).
Question 2:
I have decided to install a newer version of Varnish. Which version is better:
varnish_6.3.1-1xenial_amd64.deb or
varnish_6.0.5-1xenial_amd64.deb?
Thanks for your answer.
-- Logs begin at Tue ********************************** --
Nov 19 17:29:58 varnish-06 systemd[1]: Started Varnish HTTP accelerator.
Nov 19 17:29:59 varnish-06 varnishd[9205]: Debug: Platform: Linux,4.4.0-154-generic,x86_64,-junix,-smalloc,-smalloc,-hcritbit
Nov 19 17:29:59 varnish-06 varnishd[9205]: Platform: Linux,4.4.0-154-generic,x86_64,-junix,-smalloc,-smalloc,-hcritbit
Nov 19 17:29:59 varnish-06 varnishd[9205]: Debug: Child (9342) Started
Nov 19 17:29:59 varnish-06 varnishd[9205]: Child (9342) Started
Nov 19 17:29:59 varnish-06 varnishd[9205]: Info: Child (9342) said Child starts
Nov 19 17:29:59 varnish-06 varnishd[9205]: Child (9342) said Child starts
Nov 19 17:30:10 varnish-06 varnishd[9205]: Error: Manager got SIGINT
Nov 19 17:30:10 varnish-06 varnishd[9205]: Debug: Stopping Child
Nov 19 17:30:10 varnish-06 systemd[1]: Stopping Varnish HTTP accelerator...
Nov 19 17:30:10 varnish-06 varnishd[9205]: Manager got SIGINT
Nov 19 17:30:10 varnish-06 varnishd[9205]: Stopping Child
Nov 19 17:30:11 varnish-06 varnishd[9205]: Error: Child (9342) died signal=15
Nov 19 17:30:11 varnish-06 varnishd[9205]: Debug: Child cleanup complete
Nov 19 17:30:11 varnish-06 systemd[1]: Stopped Varnish HTTP accelerator.
Nov 19 17:30:11 varnish-06 systemd[1]: Started Varnish HTTP accelerator.
Nov 19 17:30:11 varnish-06 varnishd[10479]: Debug: Platform: Linux,4.4.0-154-generic,x86_64,-junix,-smalloc,-smalloc,-hcritbit
Nov 19 17:30:11 varnish-06 varnishd/varnish[10479]: Platform: Linux,4.4.0-154-generic,x86_64,-junix,-smalloc,-smalloc,-hcritbit
Nov 19 17:30:11 varnish-06 varnishd[10479]: Debug: Child (10513) Started
Nov 19 17:30:11 varnish-06 varnishd/varnish[10479]: Child (10513) Started
Nov 19 17:30:11 varnish-06 varnishd[10479]: Info: Child (10513) said Child starts
Nov 19 17:30:11 varnish-06 varnishd/varnish[10479]: Child (10513) said Child starts
Nov 20 17:22:11 varnish-06 systemd[1]: Stopping Varnish HTTP accelerator...
Nov 20 17:22:18 varnish-06 varnishd[10479]: Error: Child (10513) not responding to CLI, killing it.
Nov 20 17:22:18 varnish-06 varnishd/varnish[10479]: Child (10513) not responding to CLI, killing it.
Nov 20 17:22:18 varnish-06 varnishd[10479]: Error: Child (10513) not responding to CLI, killing it.
Nov 20 17:22:18 varnish-06 varnishd[10479]: Error: Manager got SIGINT
Nov 20 17:22:18 varnish-06 varnishd/varnish[10479]: Child (10513) not responding to CLI, killing it.
Nov 20 17:22:18 varnish-06 varnishd[10479]: Debug: Stopping Child
Nov 20 17:22:18 varnish-06 varnishd/varnish[10479]: Manager got SIGINT
Nov 20 17:22:18 varnish-06 varnishd/varnish[10479]: Stopping Child
Nov 20 17:22:18 varnish-06 varnishd[10479]: Error: Child (10513) died signal=15
Nov 20 17:22:18 varnish-06 varnishd/varnish[10479]: Child (10513) died signal=15
Nov 20 17:22:18 varnish-06 varnishd[10479]: Debug: Child cleanup complete

Check dmesg for OOM messages; if you find any, your system is running out of memory, which is why Varnish is being killed.
free --human also gives you a good overview of available memory.
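For example, something like this will usually surface OOM kills and give a quick memory overview (the exact dmesg wording varies by kernel version):
dmesg -T | grep -iE 'out of memory|killed process'
free --human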

Related

ElasticSearch is being constantly killed

I am tearing my hair out trying to figure out why Elasticsearch is not starting. These are my first days with the ES stack, so I am completely helpless.
I am running
sudo systemctl start elasticsearch
and get an error saying:
Job for elasticsearch.service failed because a fatal signal was delivered to the control process. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
When I run journalctl -xe, I see lines of messages that are completely unclear to me:
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered blocking state
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered disabled state
Apr 21 19:16:24 my-pc-15IKB kernel: device veth19babe9 entered promiscuous mode
Apr 21 19:16:24 my-pc-15IKB charon[1828]: 10[KNL] interface veth19babe9 activated
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.2920] manager: (veth8fd9f3d): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.2934] manager: (veth19babe9): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered blocking state
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered forwarding state
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: Using default interface naming scheme 'v245'.
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: veth19babe9: Could not generate persistent MAC: No data available
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 21 19:16:24 my-pc-15IKB dockerd[1804]: time="2021-04-21T19:16:24.298236681+03:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using def>
Apr 21 19:16:24 my-pc-15IKB dockerd[1804]: time="2021-04-21T19:16:24.298287953+03:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 200>
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: Using default interface naming scheme 'v245'.
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: veth8fd9f3d: Could not generate persistent MAC: No data available
Apr 21 19:16:24 my-pc-15IKB containerd[851]: time="2021-04-21T19:16:24.328926349+03:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.cont>
Apr 21 19:16:24 my-pc-15IKB charon[1828]: 11[KNL] interface veth8fd9f3d deleted
Apr 21 19:16:24 my-pc-15IKB kernel: eth0: renamed from veth8fd9f3d
Apr 21 19:16:24 my-pc-15IKB kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth19babe9: link becomes ready
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.8637] device (veth19babe9): carrier: link connected
Apr 21 19:16:24 my-pc-15IKB gnome-shell[5088]: Removing a network device that was not added
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: Joining mDNS multicast group on interface veth19babe9.IPv6 with address fe80::a0c2:a3ff:feb8:587a.
Apr 21 19:16:26 my-pc-15IKB charon[1828]: 12[KNL] fe80::a0c2:a3ff:feb8:587a appeared on veth19babe9
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: New relevant interface veth19babe9.IPv6 for mDNS.
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: Registering new address record for fe80::a0c2:a3ff:feb8:587a on veth19babe9.*.
My elasticsearch.yml file looks like this:
cluster.name: petlon-app
node.name: my-app-node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
http.port: 9200
discovery.type: single-node
Could you please give me a hint as to what this means?
UPDATE:
Here is the output of sudo journalctl -u elasticsearch.service:
-- Logs begin at Thu 2021-03-11 13:10:55 MSK, end at Wed 2021-04-21 21:11:37 MSK. --
Mar 16 21:23:20 my-pc systemd[1]: Starting Elasticsearch...
Mar 16 21:23:41 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Mar 16 21:23:41 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Mar 16 21:23:41 my-pc systemd[1]: Failed to start Elasticsearch.
-- Reboot --
Apr 11 15:55:31 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 15:55:46 my-pc systemd[1]: Started Elasticsearch.
Apr 11 15:56:41 my-pc systemd[1]: Stopping Elasticsearch...
Apr 11 15:56:41 my-pc systemd[1]: elasticsearch.service: Succeeded.
Apr 11 15:56:41 my-pc systemd[1]: Stopped Elasticsearch.
Apr 11 15:56:41 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 15:57:01 my-pc systemd[1]: Started Elasticsearch.
Apr 11 16:11:06 my-pc systemd[1]: Stopping Elasticsearch...
Apr 11 16:11:07 my-pc systemd[1]: elasticsearch.service: Succeeded.
Apr 11 16:11:07 my-pc systemd[1]: Stopped Elasticsearch.
-- Reboot --
Apr 11 16:12:53 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 16:13:13 my-pc systemd[1]: Started Elasticsearch.
Apr 11 18:51:08 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 18:51:08 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:31:42 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:31:47 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:31:47 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:31:47 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:32:14 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:32:16 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:32:16 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:32:16 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:35:33 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:35:37 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:35:37 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:35:37 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:37:53 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:37:57 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:37:57 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:37:57 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:38:02 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:38:06 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:38:06 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:38:06 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:41:57 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:42:00 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:42:00 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:42:00 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:46:53 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:46:59 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:46:59 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:46:59 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:49:00 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:49:03 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:49:03 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:49:03 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:49:32 my-pc systemd[1]: Starting Elasticsearch...

Kibana installation error "Kibana server is not ready yet" (CentOS)

I am working on a Kibana deployment. After installing Kibana & Elasticsearch, I get the error 'Kibana server is not ready yet'. I followed this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-centos-7
[opc@homer7 etc]$ sudo systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-02-26 13:56:07 CET; 37s ago
Docs: https://www.elastic.co
Main PID: 18215 (node)
Memory: 208.3M
CGroup: /system.slice/kibana.service
└─18215 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist --logging.dest="/var/log/kibana/kibana.log" --pid.file="/run/kibana/kibana.pid"
Feb 26 13:56:07 homer7 systemd[1]: kibana.service failed.
Feb 26 13:56:07 homer7 systemd[1]: Started Kibana.
[opc@homer7 etc]$ sudo journalctl --unit kibana
-- Logs begin at Fri 2021-02-26 11:31:02 CET, end at Fri 2021-02-26 13:56:57 CET. --
Feb 26 12:15:38 homer7 systemd[1]: Started Kibana.
Feb 26 13:21:25 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:22:55 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:22:55 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:22:55 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:22:55 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:22:55 homer7 systemd[1]: kibana.service failed.
Feb 26 13:25:05 homer7 systemd[1]: Started Kibana.
Feb 26 13:25:29 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:26:59 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:26:59 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:26:59 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:26:59 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:26:59 homer7 systemd[1]: kibana.service failed.
Feb 26 13:27:56 homer7 systemd[1]: Started Kibana.
Feb 26 13:40:53 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:42:23 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:42:23 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:42:23 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:42:23 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:42:23 homer7 systemd[1]: kibana.service failed.
Feb 26 13:42:23 homer7 systemd[1]: Started Kibana.
Feb 26 13:44:09 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:45:40 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:45:40 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:45:40 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:45:40 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:45:40 homer7 systemd[1]: kibana.service failed.
Feb 26 13:45:40 homer7 systemd[1]: Started Kibana.
Feb 26 13:54:37 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:56:07 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:56:07 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:56:07 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:56:07 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:56:07 homer7 systemd[1]: kibana.service failed.
Feb 26 13:56:07 homer7 systemd[1]: Started Kibana.
Check systemctl status elasticsearch. I am guessing your Elasticsearch service has not started yet.
There are many factors that need to be checked. First of all, go to the config directory of your Kibana installation and open kibana.yml (e.g. with sudo vi kibana.yml), then check the port of the Elasticsearch server that Kibana tries to connect to (the default is 9200).
Here is an example of the default configuration.
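Roughly, the relevant defaults look like this (assuming Kibana 7.x; in 6.x the last setting is elasticsearch.url instead of elasticsearch.hosts):
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]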
After matching this configuration to your needs, go to the systemd unit file that you saved for the Kibana service and check its [Unit] part to see whether it activates the Elasticsearch service first; if you did not add a Requires= line for the Elasticsearch server, make sure Elasticsearch is up and running before starting Kibana as a service. You can also launch Kibana from a shell by going to Kibana's bin directory and running it directly.
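As a sketch, the dependency lines in the unit file (commonly /etc/systemd/system/kibana.service, though the path may differ on your install) would look like:
[Unit]
Requires=elasticsearch.service
After=elasticsearch.service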
Maybe the issue happened because Kibana was unable to access Elasticsearch locally.
I think you may have enabled the xpack.security plugin for security purposes in elasticsearch.yml by adding a new line:
xpack.security.enabled: true
If so, you need to uncomment these two lines in kibana.yml:
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"
and set them to the credentials you configured, for example:
elasticsearch.username: "kibana_system"
elasticsearch.password: "your-password"
After saving the changes, restart the Kibana service:
sudo service kibana restart
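After the restart, you can also check that Elasticsearch itself accepts those credentials before blaming Kibana, for example (assuming the default localhost:9200 from the config above):
curl -u kibana_system:your-password http://localhost:9200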

Problem installing snapd on Oracle Linux Server 7.6 (based on RHEL 7.6) using yum

The first step is: sudo yum install snapd
This seems to work fine; it downloads the dependencies and the setup completes.
Installed version: snapd.x86_64 0:2.45-1.el7
The second step is: sudo systemctl enable --now snapd.socket
This gives the output: Created symlink from /etc/systemd/system/sockets.target.wants/snapd.socket to /usr/lib/systemd/system/snapd.socket.
Now checking the status with sudo systemctl status snapd gives:
● snapd.service - Snap Daemon
Loaded: loaded (/usr/lib/systemd/system/snapd.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2020-12-07 11:17:28 IST; 16min ago
Main PID: 5726 (code=exited, status=1/FAILURE)
systemd[1]: snapd.service holdoff time over, scheduling restart.
systemd[1]: Stopped Snap Daemon.
systemd[1]: start request repeated too quickly for snapd.service
systemd[1]: Failed to start Snap Daemon.
systemd[1]: Unit snapd.service entered failed state.
systemd[1]: Triggering OnFailure= dependencies of snapd.service.
systemd[1]: snapd.service failed.
systemd[1]: start request repeated too quickly for snapd.service
systemd[1]: Failed to start Snap Daemon.
systemd[1]: snapd.service failed.
Possible solutions tried: 1) re-installation after a purge; 2) enabling the socket and service again after a reboot.
journalctl -u snapd.service output:
-- Logs begin at Wed 2020-12-09 11:21:36 IST, end at Thu 2020-12-17 12:40:16 IST. --
Dec 17 12:36:45 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:36:45 whf00jfw snapd[4639]: AppArmor status: apparmor not enabled
Dec 17 12:36:45 whf00jfw snapd[4639]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:36:46 whf00jfw snapd[4639]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:36:46 whf00jfw snapd[4639]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:36:46 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:36:46 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:36:46 whf00jfw snapd[4673]: AppArmor status: apparmor not enabled
Dec 17 12:36:46 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:36:46 whf00jfw snapd[4673]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:36:46 whf00jfw snapd[4673]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:36:46 whf00jfw snapd[4673]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:36:46 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:36:46 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:36:46 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:36:46 whf00jfw snapd[4699]: AppArmor status: apparmor not enabled
Dec 17 12:36:46 whf00jfw snapd[4699]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:36:46 whf00jfw snapd[4699]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:36:46 whf00jfw snapd[4699]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:36:46 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:36:46 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:36:46 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:36:46 whf00jfw snapd[4745]: AppArmor status: apparmor not enabled
Dec 17 12:36:46 whf00jfw snapd[4745]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:36:46 whf00jfw snapd[4745]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:36:46 whf00jfw snapd[4745]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:36:46 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:46 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:36:46 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:36:46 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:36:47 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:36:47 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:36:47 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:36:47 whf00jfw snapd[4775]: AppArmor status: apparmor not enabled
Dec 17 12:36:47 whf00jfw snapd[4775]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:36:47 whf00jfw snapd[4775]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:36:47 whf00jfw snapd[4775]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:36:47 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:36:47 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:47 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:36:47 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:36:47 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:36:47 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:36:47 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:36:47 whf00jfw systemd[1]: start request repeated too quickly for snapd.service
Dec 17 12:36:47 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:47 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:36:47 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:36:47 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:36:47 whf00jfw systemd[1]: start request repeated too quickly for snapd.service
Dec 17 12:36:47 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:36:47 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:40:15 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:40:15 whf00jfw snapd[5826]: AppArmor status: apparmor not enabled
Dec 17 12:40:15 whf00jfw snapd[5826]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:40:15 whf00jfw snapd[5826]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:40:15 whf00jfw snapd[5826]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:40:15 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:40:15 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:15 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:40:15 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:40:15 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:40:15 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:40:15 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:40:15 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:40:15 whf00jfw snapd[5855]: AppArmor status: apparmor not enabled
Dec 17 12:40:15 whf00jfw snapd[5855]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:40:15 whf00jfw snapd[5855]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:40:15 whf00jfw snapd[5855]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:40:15 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:40:15 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:15 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:40:15 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:40:15 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:40:16 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:40:16 whf00jfw snapd[5952]: AppArmor status: apparmor not enabled
Dec 17 12:40:16 whf00jfw snapd[5952]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:40:16 whf00jfw snapd[5952]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:40:16 whf00jfw snapd[5952]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:40:16 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:40:16 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:40:16 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:40:16 whf00jfw snapd[6154]: AppArmor status: apparmor not enabled
Dec 17 12:40:16 whf00jfw snapd[6154]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:40:16 whf00jfw snapd[6154]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:40:16 whf00jfw snapd[6154]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:40:16 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:40:16 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:40:16 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: Starting Snap Daemon...
Dec 17 12:40:16 whf00jfw snapd[6260]: AppArmor status: apparmor not enabled
Dec 17 12:40:16 whf00jfw snapd[6260]: daemon.go:343: started snapd/2.45-1.el7 (series 16; classic; devmode) ol/7.6 (amd64) linux/4.14.35-1902.304.6.el7uek.
Dec 17 12:40:16 whf00jfw snapd[6260]: daemon.go:436: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 17 12:40:16 whf00jfw snapd[6260]: cannot run daemon: state startup errors: [cannot obtain snap-seccomp version information: fork/exec /usr/lib/snapd/snap
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service: main process exited, code=exited, status=1/FAILURE
Dec 17 12:40:16 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:40:16 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service holdoff time over, scheduling restart.
Dec 17 12:40:16 whf00jfw systemd[1]: Stopped Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: start request repeated too quickly for snapd.service
Dec 17 12:40:16 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: Unit snapd.service entered failed state.
Dec 17 12:40:16 whf00jfw systemd[1]: Triggering OnFailure= dependencies of snapd.service.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service failed.
Dec 17 12:40:16 whf00jfw systemd[1]: start request repeated too quickly for snapd.service
Dec 17 12:40:16 whf00jfw systemd[1]: Failed to start Snap Daemon.
Dec 17 12:40:16 whf00jfw systemd[1]: snapd.service failed.
It appears to be an issue with strict confinement not being supported on operating systems that don't use AppArmor (OL, CentOS, RHEL). There are steps needed to make that work. Did you create the symbolic link?
sudo ln -s /var/lib/snapd/snap /snap
This worked for me:
sudo yum install snapd
sudo ln -s /var/lib/snapd/snap /snap
sudo systemctl enable --now snapd.socket
sudo systemctl restart snapd
<restart session>
sudo snap install firefox
sudo snap install --classic nano
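Once the session is restarted, something like snap version should confirm the daemon is reachable:
snap version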

Elasticsearch won't start and no logs (CentOS)

Hi, after downloading the latest RPM for CentOS and installing it for the first time, I am getting this error in the logs:
Jun 22 09:47:31 ssd316r.simpleservers.co.uk systemd[1]: Starting Elasticsearch...
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd-entrypoint[2501]: ERROR: Temporary file directory [/usr/share/elasticsearch/tmp] does not exist or is not accessible
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: elasticsearch.service: main process exited, code=exited, status=78/n/a
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: Failed to start Elasticsearch.
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: Unit elasticsearch.service entered failed state.
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: elasticsearch.service failed.
The error is due to this log line:
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd-entrypoint[2501]: ERROR: Temporary file directory [/usr/share/elasticsearch/tmp] does not exist or is not accessible
Check whether /usr/share/elasticsearch/tmp is present on your server; if not, create this folder at that location and make sure your Elasticsearch process has write access to it.
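A minimal sketch, assuming the service runs as the default elasticsearch user created by the RPM:
sudo mkdir -p /usr/share/elasticsearch/tmp
sudo chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp
sudo systemctl restart elasticsearch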

Can't start Elasticsearch (fileInputStream Fail)

I'm currently building a test environment for HPE ALM Octane for my company. This application uses Elasticsearch. Now I have the problem that I can't start my Elasticsearch server, and I'm a bit at the end of my nerves ;).
Because Octane works with Elasticsearch version 2.4.0, I'm forced to work with this version.
I get the following Error:
Error (console screenshot; transcribed below):
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service;
enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-02-21 09:40:50 CET; 1h 9min ago
Process: 954 ExecStart=/usr/share/elasticsearch/bin/elasticsearch
-Des.pidfile=${PID_DIR}/elasticsearch.pid
-Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR}
-Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR}
(code=exited, status=1/FAILURE)
Process: 949 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 954 (code=exited, status=1/FAILURE)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at java.nio.file.Files.newInputStream(Files.java:152)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1067)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:88)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:218)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:257)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: Refer to the log for complete error details.
Feb 21 09:40:50 linux-rfw5 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Feb 21 09:40:50 linux-rfw5 systemd[1]: elasticsearch.service: Unit entered failed state.
Feb 21 09:40:50 linux-rfw5 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
I configured the absolute minimum that is possible. My configurations:
1. elasticsearch.yml (/etc/elasticsearch/)
1.1 cluster.name: octane_test
1.2 node.name: elasticNode
1.3 network.host: 127.0.0.1 (yes, localhost, because I'm running the Octane server on the same host)
1.4 http.port: 9200
2. elasticsearch (/etc/sysconfig/)
2.1 ES_HEAP_SIZE=4g (4 GB is 50% of the total memory)
I appreciate your help ;)
Joel
