As part of VPS creation, I have a script that creates a default user, secures SSH, etc. One step, however, keeps failing: the installation of Docker. I minimized the script so it handles only the Docker installation, and it still fails. Here is the script.
#!/bin/bash
set -e
{
# Run apt non-interactively so nothing prompts during provisioning
export DEBIAN_FRONTEND=noninteractive
DOCKER_COMPOSE_VERSION="1.24.1"
# Prerequisites for adding Docker's apt repository
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
# Add Docker's GPG key and repository, then install the engine
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get -y update
apt-get -y install docker-ce docker-ce-cli containerd.io
# Install docker-compose and its bash completion
curl -L "https://github.com/docker/compose/releases/download/$DOCKER_COMPOSE_VERSION/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
curl -L https://raw.githubusercontent.com/docker/compose/$DOCKER_COMPOSE_VERSION/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
} > /init-script.log 2> /init-script-err.log
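To narrow down exactly which command aborts, a debugging variant of the same script could trace each step (the set -x and the ERR trap are troubleshooting additions, not part of the real script):

#!/bin/bash
# Debugging variant: trace every command and report the one that trips set -e
set -Eeuo pipefail
set -x
trap 'echo "FAILED at line $LINENO: $BASH_COMMAND" >&2' ERR
{
export DEBIAN_FRONTEND=noninteractive
# ... same apt-get / curl / add-apt-repository steps as above ...
apt-get -y install docker-ce docker-ce-cli containerd.io
} > /init-script.log 2> /init-script-err.log

Given the error log below, the aborting command is the docker-ce install itself: its post-installation script fails to start docker.service.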
And this is the trimmed content of the error log:
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
invoke-rc.d: initscript docker, action "start" failed.
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2020-03-26 18:00:35 UTC; 9ms ago
Docs: https://docs.docker.com
Process: 3216 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 3216 (code=exited, status=1/FAILURE)
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: Failed to start Docker Application Container Engine.
dpkg: error processing package docker-ce (--configure):
installed docker-ce package post-installation script subprocess returned error exit status 1
When I check journalctl, the underlying issue seems to be socket unavailability when starting docker.service: "failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd".
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 systemd[1]: Starting containerd container runtime...
-- Subject: A start job for unit containerd.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit containerd.service has begun execution.
--
-- The job identifier is 425.
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 systemd-udevd[220]: Network interface NamePolicy= disabled on kernel command line, ignoring.
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 systemd[1]: Started containerd container runtime.
-- Subject: A start job for unit containerd.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit containerd.service has finished successfully.
--
-- The job identifier is 425.
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.204032978Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=1.2.13
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.205059960Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.205311948Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.205638264Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.205845252Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.208866659Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.0-8-amd64\n": exit status 1"
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.211076859Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.211375337Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.211700190Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.212088689Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.212242393Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.212411125Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.212551115Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.0-8-amd64\n": exit status 1"
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.212710676Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215500601Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215535042Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215577789Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215594004Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215607955Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215624419Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215639471Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215655963Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215669565Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215687040Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215792942Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.215857103Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.216185682Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.216211500Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217511730Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217534636Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217547186Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217556095Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217572710Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217583819Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217593802Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217604004Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.217614317Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.218026907Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.218049421Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.218064133Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.218076770Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.218306294Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 containerd[3085]: time="2020-03-26T18:00:34.218323197Z" level=info msg="containerd successfully booted in 0.014799s"
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 systemd[1]: Reloading.
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 systemd[1]: Reloading.
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 groupadd[3141]: group added to /etc/group: name=docker, GID=998
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 groupadd[3141]: group added to /etc/gshadow: name=docker
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 groupadd[3141]: new group: name=docker, GID=998
Mar 26 18:00:34 debian-1cpu-1gb-de-fra1 systemd[1]: Reloading.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: Reloading.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: Reloading.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: Reloading.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: Starting Docker Application Container Engine...
-- Subject: A start job for unit docker.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.service has begun execution.
--
-- The job identifier is 472.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 dockerd[3216]: time="2020-03-26T18:00:35.644035999Z" level=info msg="Starting up"
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 dockerd[3216]: failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit docker.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: docker.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit docker.service has entered the 'failed' state with result 'exit-code'.
Mar 26 18:00:35 debian-1cpu-1gb-de-fra1 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: A start job for unit docker.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.service has finished with a failure.
--
-- The job identifier is 472 and the job result is failed.
Mar 26 18:00:37 debian-1cpu-1gb-de-fra1 systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Mar 26 18:00:37 debian-1cpu-1gb-de-fra1 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Automatic restarting of the unit docker.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Mar 26 18:00:37 debian-1cpu-1gb-de-fra1 systemd[1]: Stopped Docker Application Container Engine.
-- Subject: A stop job for unit docker.service has finished
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A stop job for unit docker.service has finished.
--
-- The job identifier is 473 and the job result is done.
Then the system attempts to start docker.socket, which launches successfully.
Mar 26 18:00:37 debian-1cpu-1gb-de-fra1 systemd[1]: Starting Docker Socket for the API.
-- Subject: A start job for unit docker.socket has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.socket has begun execution.
--
-- The job identifier is 474.
Mar 26 18:00:37 debian-1cpu-1gb-de-fra1 systemd[1]: Reached target Network is Online.
-- Subject: A start job for unit network-online.target has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit network-online.target has finished successfully.
--
-- The job identifier is 521.
Mar 26 18:00:37 debian-1cpu-1gb-de-fra1 systemd[1]: Listening on Docker Socket for the API.
-- Subject: A start job for unit docker.socket has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.socket has finished successfully.
--
-- The job identifier is 474.
This is followed by another attempt to start docker.service, which now finishes successfully as well.
Mar 26 18:00:37 debian-1cpu-1gb-de-fra1 systemd[1]: Starting Docker Application Container Engine...
-- Subject: A start job for unit docker.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.service has begun execution.
--
-- The job identifier is 473.
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.027461667Z" level=info msg="Starting up"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 audit[4193]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=4193 comm="apparmor_parser"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 kernel: audit: type=1400 audit(1585245638.085:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=4193 comm="apparmor_parser"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.092587890Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.095092114Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.095416449Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.095906885Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.104124475Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.107066961Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.107394704Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.107561449Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport420610117-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit var-lib-docker-check\x2doverlayfs\x2dsupport420610117-merged.mount has successfully entered the 'dead' state.
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.194551478Z" level=warning msg="Your kernel does not support swap memory limit"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.194995719Z" level=warning msg="Your kernel does not support cgroup rt period"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.195206982Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.195572472Z" level=info msg="Loading containers: start."
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 systemd-udevd[220]: Network interface NamePolicy= disabled on kernel command line, ignoring.
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 kernel: Bridge firewalling registered
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 kernel: Initializing XFRM netlink socket
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.449529108Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 systemd-udevd[4216]: Using default interface naming scheme 'v240'.
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 systemd-udevd[4216]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.552102196Z" level=info msg="Loading containers: done."
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck002229522-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit var-lib-docker-overlay2-opaque\x2dbug\x2dcheck002229522-merged.mount has successfully entered the 'dead' state.
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.583939094Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 dockerd[4160]: time="2020-03-26T18:00:38.584331232Z" level=info msg="Daemon has completed initialization"
Mar 26 18:00:38 debian-1cpu-1gb-de-fra1 systemd[1]: Started Docker Application Container Engine.
-- Subject: A start job for unit docker.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.service has finished successfully.
--
-- The job identifier is 473.
When I run the exact same script from a terminal after connecting to the VPS via SSH, everything runs without a hitch.
Edit: I checked the logs when executing the script manually from a terminal, and indeed the services are started in the correct order:
containerd.service
docker.socket
docker.service
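If the root cause is really that docker.socket is not yet listening when the package's post-install step tries to start docker.service during unattended provisioning, one possible workaround (a sketch only, not something I have settled on) would be to tolerate the initial failure and bring the units up explicitly afterwards:

# Let the install continue even if the post-install "start" fails
apt-get -y install docker-ce docker-ce-cli containerd.io || true
# Make sure the units dockerd expects are up, in the order systemd wants them
systemctl enable --now containerd.service docker.socket
# Re-run the failed configure step so dpkg no longer marks docker-ce as broken
dpkg --configure -a
systemctl enable --now docker.service
# Sanity check
docker info >/dev/null && echo "docker is up"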
Related
Working on a Kibana deployment: after installing Kibana & Elasticsearch, I get the error 'Kibana server is not ready yet'.
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-centos-7
[opc@homer7 etc]$ sudo systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-02-26 13:56:07 CET; 37s ago
Docs: https://www.elastic.co
Main PID: 18215 (node)
Memory: 208.3M
CGroup: /system.slice/kibana.service
└─18215 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist --logging.dest="/var/log/kibana/kibana.log" --pid.file="/run/kibana/kibana.pid"
Feb 26 13:56:07 homer7 systemd[1]: kibana.service failed.
Feb 26 13:56:07 homer7 systemd[1]: Started Kibana.
[opc@homer7 etc]$ sudo journalctl --unit kibana
-- Logs begin at Fri 2021-02-26 11:31:02 CET, end at Fri 2021-02-26 13:56:57 CET. --
Feb 26 12:15:38 homer7 systemd[1]: Started Kibana.
Feb 26 13:21:25 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:22:55 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:22:55 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:22:55 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:22:55 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:22:55 homer7 systemd[1]: kibana.service failed.
Feb 26 13:25:05 homer7 systemd[1]: Started Kibana.
Feb 26 13:25:29 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:26:59 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:26:59 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:26:59 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:26:59 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:26:59 homer7 systemd[1]: kibana.service failed.
Feb 26 13:27:56 homer7 systemd[1]: Started Kibana.
Feb 26 13:40:53 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:42:23 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:42:23 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:42:23 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:42:23 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:42:23 homer7 systemd[1]: kibana.service failed.
Feb 26 13:42:23 homer7 systemd[1]: Started Kibana.
Feb 26 13:44:09 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:45:40 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:45:40 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:45:40 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:45:40 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:45:40 homer7 systemd[1]: kibana.service failed.
Feb 26 13:45:40 homer7 systemd[1]: Started Kibana.
Feb 26 13:54:37 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:56:07 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:56:07 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:56:07 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:56:07 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:56:07 homer7 systemd[1]: kibana.service failed.
Feb 26 13:56:07 homer7 systemd[1]: Started Kibana.
Check systemctl status elasticsearch. I am guessing your Elasticsearch service has not started yet.
I guess there are many factors that need to be checked. First of all, go to the config directory of your Kibana installation and open kibana.yml (sudo vi kibana.yml), then check the port of the Elasticsearch server that Kibana tries to connect to (the default is 9200).
Here is an example of default configuration.
After matching this configuration to your needs, go to the unit file you saved for the Kibana service and check the [Unit] part to see whether it needs to activate the Elasticsearch service first. If you did not add a Requires= entry for the Elasticsearch server, make sure Elasticsearch is up and running before starting Kibana as a service. You can also launch Kibana from a shell by going to Kibana's bin directory and starting it directly.
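For illustration, a minimal sketch of such an ordering dependency as a systemd drop-in (assuming the unit names kibana.service and elasticsearch.service; adjust them to your installation):

# Make Kibana start only after Elasticsearch is up
sudo mkdir -p /etc/systemd/system/kibana.service.d
sudo tee /etc/systemd/system/kibana.service.d/override.conf > /dev/null <<'EOF'
[Unit]
Requires=elasticsearch.service
After=elasticsearch.service
EOF
sudo systemctl daemon-reload
sudo systemctl restart kibana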
Maybe the issue happened because Kibana was unable to access Elasticsearch locally.
I think you have enabled the xpack.security plugin for security purposes in elasticsearch.yml by adding a new line:
xpack.security.enabled: true
If so, you need to uncomment these two lines in kibana.yml and set them to valid credentials:
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"
elasticsearch.username: "kibana_system"
elasticsearch.password: "your-password"
After saving the changes, restart the Kibana service:
sudo service kibana restart
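Before restarting, the credentials can also be sanity-checked directly against Elasticsearch (the password here is the placeholder from above; adjust host and port if they differ from the defaults):

# Should return basic cluster info as JSON if the user and password are accepted
curl -u kibana_system:your-password http://localhost:9200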
Elasticsearch is working with no issues on http://localhost:9200.
The operating system is Ubuntu 18.04.
Here is the error log for Kibana:
root@syed-MS-7B17:/var/log# journalctl -fu kibana.service
-- Logs begin at Sat 2020-01-04 18:30:58 IST. --
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: {"type":"log","#timestamp":"2020-04-03T14:52:49Z","tags":["fatal","root"],"pid":7165,"message":"{ Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601\n at Server.setupListenHandle [as _listen2] (net.js:1263:19)\n at listenInCluster (net.js:1328:12)\n at GetAddrInfoReqWrap.doListen (net.js:1461:7)\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:61:10)\n code: 'EADDRNOTAVAIL',\n errno: 'EADDRNOTAVAIL',\n syscall: 'listen',\n address: '7.0.0.1',\n port: 5601 }"}
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: FATAL Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Scheduled restart job, restart counter is at 2.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Stopped Kibana.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Start request repeated too quickly.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Failed to start Kibana.
I have resolved it myself after checking the /etc/hosts file.
It had been edited by mistake, like below:
7.0.0.1 localhost
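For comparison, the loopback entry normally reads:

127.0.0.1   localhost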
This is the error I am getting:
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-07-11 08:23:29 UTC; 1h 29min ago
Docs: http://www.elastic.co
Process: 1579 ExecStart=/usr/local/elasticsearch/bin/elasticsearch (code=exited, status=78)
Main PID: 1579 (code=exited, status=78)
This is the log file; this is what I get after using the 'journalctl -u elasticsearch.service' command:
Jul 11 06:06:26 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 06:08:28 vyakar-stage-elastic systemd[1]: Stopping Elasticsearch...
Jul 11 06:08:28 vyakar-stage-elastic systemd[1]: Stopped Elasticsearch.
Jul 11 06:34:49 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 06:35:09 vyakar-stage-elastic systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/n/a
Jul 11 06:35:09 vyakar-stage-elastic systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Jul 11 06:48:00 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 06:48:20 vyakar-stage-elastic systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/n/a
Jul 11 06:48:20 vyakar-stage-elastic systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Jul 11 06:52:21 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 06:52:42 vyakar-stage-elastic systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/n/a
Jul 11 06:52:42 vyakar-stage-elastic systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Jul 11 06:57:36 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 06:57:57 vyakar-stage-elastic systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/n/a
Jul 11 06:57:57 vyakar-stage-elastic systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Jul 11 07:46:36 vyakar-stage-elastic systemd[1]: Started Elasticsearch.
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,490][WARN ][o.e.b.JNANatives ] [fmcn] Unable to lock JVM Memory: error=12, r
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,501][WARN ][o.e.b.JNANatives ] [fmcn] This can result in part of the JVM bei
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,503][WARN ][o.e.b.JNANatives ] [fmcn] Increase RLIMIT_MEMLOCK, soft limit: 1
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: [2019-07-11T07:46:40,504][WARN ][o.e.b.JNANatives ] [fmcn] These can be adjusted by modifying /et
Jul 11 07:46:40 vyakar-stage-elastic elasticsearch[726]: # allow user 'elasticsearch' mlockall
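Those truncated warnings are about memory locking. When memory locking is enabled in elasticsearch.yml, the limit is usually raised for the systemd-managed service with a drop-in like the following (a sketch of the common approach, not necessarily the cause of the status=78 exit here):

# Allow the elasticsearch service to lock memory without a limit
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo tee /etc/systemd/system/elasticsearch.service.d/override.conf > /dev/null <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch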
I want to configure Kibana so that I can access it over HTTPS.
I made the following changes in the Kibana config file (/etc/kibana/kibana.yml):
server.host: 0.0.0.0
server.ssl.enabled: true
server.ssl.key: /etc/elasticsearch/privkey.pem          # using the same SSL key that I created for Elasticsearch
server.ssl.certificate: /etc/elasticsearch/cert.pem     # using the same SSL certificate that I created for Elasticsearch
elasticsearch.url: https://127.0.0.1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
When I restart/start Kibana, it gives me the error below:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Wed 2019-06-05 14:20:12 UTC; 382ms ago
Process: 32505 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 32505 (code=exited, status=1/FAILURE)
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Failed with result 'exit-code'.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Jun 05 14:20:12 mts-elk-test systemd[1]: Stopped Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Start request repeated too quickly.
Jun 05 14:20:12 mts-elk-test systemd[1]: Failed to start Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Failed with result 'start-limit-hit'.
root@mts-elk-test:/home/ronak# vi /etc/kibana/kibana.yml
I found the solution. There was a problem with file permissions.
I copied the cert.pem and privkey.pem files from the Elasticsearch directory to the Kibana directory and changed their owner to the kibana user:
chown kibana:kibana /etc/kibana/cert.pem
chown kibana:kibana /etc/kibana/privkey.pem
Then I changed the paths in the kibana.yml file:
server.ssl.key: /etc/kibana/privkey.pem
server.ssl.certificate: /etc/kibana/cert.pem
Restart Kibana: service kibana restart
And it worked!
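As a quick check that Kibana is actually serving TLS after such a change, it can be queried with curl (-k skips certificate verification; replace localhost with your host if needed):

# Expect an HTTP response over TLS on Kibana's default port 5601
curl -k -I https://localhost:5601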
I'm currently building a test environment for HPE ALM Octane for my company. This application uses Elasticsearch. Now I have the problem that I can't start my Elasticsearch server, and I'm a bit at the end of my nerves ;).
Because Octane works with Elasticsearch version 2.4.0, I'm forced to work with this version.
I get the following error (transcribed from a console screenshot):
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service;
enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-02-21 09:40:50 CET; 1h 9min ago
Process: 954 ExecStart=/usr/share/elasticsearch/bin/elasticsearch
-Des.pidfile=${PID_DIR}/elasticsearch.pid
-Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR}
-Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR}
(code=exited, status=1/FAILURE)
Process: 949 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 954 (code=exited, status=1/FAILURE)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at java.nio.file.Files.newInputStream(Files.java:152)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1067)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:88)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:218)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:257)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Feb 21 09:40:50 linux-rfw5 elasticsearch[954]: Refer to the log for complete error details.
Feb 21 09:40:50 linux-rfw5 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Feb 21 09:40:50 linux-rfw5 systemd[1]: elasticsearch.service: Unit entered failed state.
Feb 21 09:40:50 linux-rfw5 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
I configured the absolute minimum that is possible. My configuration:
elasticsearch.yml (/etc/elasticsearch/)
1.1 cluster.name: octane_test
1.2 node.name: elasticNode
1.3 network.host: 127.0.0.1 (yes, localhost, because I'm running the Octane server on the same host)
1.4 http.port: 9200
elasticsearch (/etc/sysconfig/)
2.1 ES_HEAP_SIZE=4g (4 GB is 50% of the maximum memory)
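Since the stack trace above dies while loading settings from a path (Settings$Builder.loadFromPath via Files.newInputStream), one thing I still plan to double-check is whether the elasticsearch user can actually read the configuration, for example:

ls -l /etc/elasticsearch/elasticsearch.yml
sudo -u elasticsearch cat /etc/elasticsearch/elasticsearch.yml > /dev/null && echo "config is readable"
journalctl -u elasticsearch.service --no-pager | tail -n 50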
I appreciate your help ;)
Joel