Description:
We have services running on Google Container Engine, built on the golang library go-micro. These services run fine, except for random restarts during the day.
Problem:
Pods restart fairly often during the day. This affects our services as well as core services like kube-dns and nginx-ingress. After checking the logs, it looks like a networking problem; when it occurs, the docker daemon and kubelet restart, which in turn restarts our services. It might happen 10 times per day or 2 times per day; it is not constant.
Details:
Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
OS:
uname -a
Linux microservices-g1-small-25eedb64-w265 4.4.21+ #1 SMP Thu Nov 10 02:50:15 PST 2016 x86_64 Intel(R) Xeon(R) CPU @ 2.30GHz GenuineIntel GNU/Linux
cat /etc/lsb-release
CHROMEOS_AUSERVER=https://tools.google.com/service/update2
CHROMEOS_RELEASE_BOARD=lakitu-signed-mpkeys
CHROMEOS_RELEASE_BRANCH_NUMBER=0
CHROMEOS_RELEASE_BUILDER_PATH=lakitu-release/R56-8977.0.0
CHROMEOS_RELEASE_BUILD_NUMBER=8977
CHROMEOS_RELEASE_BUILD_TYPE=Official Build
CHROMEOS_RELEASE_CHROME_MILESTONE=56
CHROMEOS_RELEASE_DESCRIPTION=8977.0.0 (Official Build) dev-channel lakitu
CHROMEOS_RELEASE_NAME=Chrome OS
CHROMEOS_RELEASE_PATCH_NUMBER=0
CHROMEOS_RELEASE_TRACK=dev-channel
CHROMEOS_RELEASE_VERSION=8977.0.0
DEVICETYPE=OTHER
GOOGLE_RELEASE=8977.0.0
HWID_OVERRIDE=LAKITU DOGFOOD
Golang microservice framework
go-micro
I checked the logs to figure out what is happening, and this is what I found:
rvices-g1-small-25eedb64-s0p6 update_engine[899]: [0310/064853:INFO:update_manager-inl.h(52)] ChromeOSPolicy::UpdateCheckAllowed: START
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 update_engine[899]: [0310/064908:WARNING:evaluation_context-inl.h(43)] Error reading Variable update_disabled: "No value set for update_disabled"
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 update_engine[899]: [0310/064932:WARNING:evaluation_context-inl.h(43)] Error reading Variable release_channel_delegated: "No value set for release_channel_delegated"
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 update_engine[899]: [0310/065015:INFO:chromeos_policy.cc(314)] Periodic check interval not satisfied, blocking until 3/10/2017 6:58:27 GMT
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 update_engine[899]: [0310/065025:INFO:update_manager-inl.h(74)] ChromeOSPolicy::UpdateCheckAllowed: END
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1435]: Docker daemon failed!
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1435]: Docker daemon failed!
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1435]: Docker daemon failed!
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1435]: Docker daemon failed!
Mar 10 06:53:28 gke-microservices-g1-small-25eedb64-s0p6 metrics_daemon[903]: [INFO:upload_service.cc(103)] Metrics disabled. Don't upload metrics samples.
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1432]: okKubelet is unhealthy!
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:05.302107123Z" level=error msg="Force shutdown daemon"
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:17.997217 30078 helpers.go:101] Unable to get network stats from pid 27012: couldn't read network stats: failure opening /proc/27012/net/d
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.134978 30078 helpers.go:101] Unable to get network stats from pid 26236: couldn't read network stats: failure opening /proc/26236/net/d
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.135389 30078 helpers.go:101] Unable to get network stats from pid 27581: couldn't read network stats: failure opening /proc/27581/net/d
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.135801 30078 helpers.go:101] Unable to get network stats from pid 27581: couldn't read network stats: failure opening /proc/27581/net/d
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:18.430715 30078 prober.go:98] No ref for container "docker://4a90f704319f64738915bc353515403263a60ad04d5859174b50bb47c255db12" (social-syn
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.430740 30078 prober.go:106] Liveness probe for "social-sync-deployment-2745944389-rftmf_on-deploy-dev(80a79ba8-04b6-11e7-be05-42010
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:18.431064 30078 prober.go:98] No ref for container "docker://964f8ef2da5de63196f5ddfaec156f6b93fb05671be3dd7f2d90e4efb91cbd34" (heapster-v
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.431076 30078 prober.go:106] Liveness probe for "heapster-v1.2.0.1-1382115970-l9h4q_kube-system(7f0f2677-04b6-11e7-be05-42010af00129):he
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1432]: % Total % Received % Xferd Average Speed Time Time Time Current
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1432]: Dload Upload Total Spent Left Speed
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:12Z" level=info msg="stopping containerd after receiving terminated"
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:18.525414 30078 prober.go:98] No ref for container "docker://6fa84a9c20b7c8600048a98d06974817e85652b3b66b8c64d6390735de5bbf19" (kube-dns-4
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.525458 30078 prober.go:106] Readiness probe for "kube-dns-4101612645-bkt6z_kube-system(7f12f616-04b6-11e7-be05-42010af00129):kubedns" f
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: E0310 06:53:18.631190 30078 generic.go:197] GenericPLEG: Unable to retrieve pods: operation timeout: context deadline exceeded
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: E0310 06:53:18.646004 30078 container_manager_linux.go:625] error opening pid file /var/run/docker.pid: open /var/run/docker.pid: no such file or dire
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: E0310 06:53:18.893042 30078 kubelet_pods.go:710] Error listing containers: dockertools.operationTimeout{err:context.deadlineExceededError{}}
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: E0310 06:53:18.893091 30078 kubelet.go:1860] Failed cleaning pods: operation timeout: context deadline exceeded
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.947556 30078 logs.go:41] http: TLS handshake error from 127.0.0.1:39224: EOF
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:18.990182 30078 prober.go:98] No ref for container "docker://964f8ef2da5de63196f5ddfaec156f6b93fb05671be3dd7f2d90e4efb91cbd34" (heapster-v
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.990207 30078 prober.go:106] Liveness probe for "heapster-v1.2.0.1-1382115970-l9h4q_kube-system(7f0f2677-04b6-11e7-be05-42010af00129):he
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:18.990268 30078 prober.go:98] No ref for container "docker://4a90f704319f64738915bc353515403263a60ad04d5859174b50bb47c255db12" (social-syn
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1432]: [1.9K blob data]
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.043529322Z" level=error msg="Stop container error: Stop container d0c295d50409a171745524d6171a845fc3d29fd6db26da3fc883653fce1e4
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.077975854Z" level=error msg="Stop container error: Stop container 4712afe5f084cf3163bef94ac21e3d63a5179190e73a8a0fa906a59630b80
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.078034531Z" level=error msg="Stop container error: Stop container 1b18343beedfbe58403017fa532b85604c7ec2c96f15bd503747c19ac37f6
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.078074791Z" level=error msg="Stop container error: Stop container 1fb54295ff5ecc734bf12c576880131cb98011cb98e37b5fa982bdd257b69
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.078113450Z" level=error msg="Stop container error: Stop container b8e52eafa29a8b02263894b3d0d1371a92f1656fea981a6b9842c42b5d939
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.078150890Z" level=error msg="Stop container error: Stop container 9b9021078f15bc3ea03770c0c135e978326f8e279e60e9663885218070026
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:18.990280 30078 prober.go:106] Liveness probe for "social-sync-deployment-2745944389-rftmf_on-deploy-dev(80a79ba8-04b6-11e7-be05-42010
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: E0310 06:53:19.219709 30078 eviction_manager.go:204] eviction manager: unexpected err: failed ImageStats: failed to list docker images - operation tim
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:19.285843 30078 logs.go:41] http: TLS handshake error from 127.0.0.1:39414: write tcp 127.0.0.1:10250->127.0.0.1:39414: write: broken pipe
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:19.400005 30078 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:19.400065 30078 prober.go:98] No ref for container "docker://6d63f67520d9b76446a00e1f6d81422f12f2fa93a1a9f85a656c0b49e457ba0c" (social-acc
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:19.400079 30078 prober.go:106] Liveness probe for "social-accounts-deployment-983093656-h9frj_on-deploy-dev(8071bfd6-04b6-11e7-be05-42
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:19.400318 30078 prober.go:98] No ref for container "docker://963021c2befd5e53a61c16ba2f7c97446b4c045bbf92f723e3b899c4fb2cde21" (post-metri
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:19.400333 30078 prober.go:106] Liveness probe for "post-metrics-deployment-556584274-z3p67_on-deploy-dev(7f9d4125-04b6-11e7-be05-42010
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: W0310 06:53:19.400476 30078 prober.go:98] No ref for container "docker://dc65f853b22eb25bdfaf1ce5bf1d0d6f48e57379caffa526f80a71b086d5247f" (notificati
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 health-monitor.sh[1432]: [1.9K blob data]
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.078188154Z" level=error msg="Stop container error: Stop container 8ee3de7c4dd56136b8c8a444f9b58316d190d2dad496472e233f23bf27596
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.078226785Z" level=error msg="Stop container error: Stop container a9fefcd23efb7f6472b209d6e383b8050da054c3f4b1ad2c6bf531f3b1475
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.078276076Z" level=error msg="Stop container error: Stop container 874fdb93aafc0a13bcbeada66f8f031cd52c01f0cec59913a49bf93917ce5
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.565783448Z" level=error msg="Stop container error: Stop container 42b9b796470a3a0a345229227cb7fa223967c56ce3b8e2765c3d9a48e963c
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.565846865Z" level=error msg="Stop container error: Stop container add6806333a7185aa4944b9bde0c9b2be973a09e59d2b80c09e98e549b180
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 docker[24076]: time="2017-03-10T06:53:13.565886676Z" level=error msg="Stop container error: Stop container 5631ba532f8b2a4ac262b97fabd2df07a8fe6b0202879e1347a763a5a8921
Mar 10 06:53:29 gke-microservices-g1-small-25eedb64-s0p6 kubelet[30078]: I0310 06:53:19.400485 30078 prober.go:106] Liveness probe for "notifications-deployment-3662335406-r668m_on-deploy-dev(880c38dc-0425-11e7-be05-420
Every time it tries to update ChromeOS, the docker daemon issues, networking issues, etc. start to occur.
kube-proxy.log
I0310 06:53:17.392671 5 proxier.go:750] Deleting connection tracking state for service IP 10.3.240.10, endpoint IP 10.0.5.223
Flag --resource-container has been deprecated, This feature will be removed in a later release.
I0310 06:54:12.615435 5 iptables.go:176] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I0310 06:54:12.615488 5 server.go:168] setting OOM scores is unsupported in this build
I0310 06:54:12.687932 5 server.go:215] Using iptables Proxier.
I0310 06:54:12.690596 5 server.go:227] Tearing down userspace rules.
I0310 06:54:12.690844 5 healthcheck.go:119] Initializing kube-proxy health checker
I0310 06:54:12.702034 5 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0310 06:54:12.702366 5 conntrack.go:66] Setting conntrack hashsize to 32768
I0310 06:54:12.702927 5 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0310 06:54:12.702951 5 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0310 06:54:12.714134 5 proxier.go:802] Not syncing iptables until Services and Endpoints have been received from master
More logs:
g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:45.445978 3344 docker_manager.go:1975] Need to restart pod infra container for "roles-deployment-1745993421-qxf7z_on-a
Mar 10 06:50:45 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:45.574227 3344 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/e257aff1-055d-1
Mar 10 06:50:45 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:45.575943 3344 docker_manager.go:1975] Need to restart pod infra container for "social-accounts-deployment-983093656-v
Mar 10 06:50:45 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:45.774316 3344 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/e2762a4c-055d-1
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:46.056277 3344 docker_manager.go:1975] Need to restart pod infra container for "tags-srv-deployment-626769860-js4h5_on
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-udevd[6680]: Could not generate persistent MAC address for veth37abc82a: No such file or directory
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: device veth37abc82a entered promiscuous mode
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 3(veth37abc82a) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 3(veth37abc82a) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-networkd[611]: veth37abc82a: Gained carrier
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:46.626937 3344 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:46.627371 3344 conversion.go:134] failed to handle multiple devices for container. Skipping Filesystem stats
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-udevd[6745]: Could not generate persistent MAC address for veth07d02159: No such file or directory
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-networkd[611]: veth07d02159: Gained carrier
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: device veth07d02159 entered promiscuous mode
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 12(veth07d02159) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 12(veth07d02159) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-udevd[6771]: Could not generate persistent MAC address for veth2b02253d: No such file or directory
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-networkd[611]: veth2b02253d: Gained carrier
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: device veth2b02253d entered promiscuous mode
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 23(veth2b02253d) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 23(veth2b02253d) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-udevd[6796]: Could not generate persistent MAC address for veth55143c6b: No such file or directory
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-networkd[611]: veth55143c6b: Gained carrier
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: device veth55143c6b entered promiscuous mode
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 30(veth55143c6b) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 30(veth55143c6b) entered forwarding state
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-udevd[6821]: Could not generate persistent MAC address for vethe38b8eee: No such file or directory
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:46 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-networkd[611]: vethe38b8eee: Gained carrier
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kernel: device vethe38b8eee entered promiscuous mode
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 31(vethe38b8eee) entered forwarding state
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kernel: cbr0: port 31(vethe38b8eee) entered forwarding state
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:47.113442 3344 docker_manager.go:2236] Determined pod ip after infra change: "roles-deployment-1745993421-qxf7z
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:47.115417 3344 kubelet.go:1816] SyncLoop (PLEG): "social-accounts-deployment-983093656-vh2xt-deploy-dev(e257aff
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 docker[3264]: time="2017-03-10T06:50:47.118506356Z" level=error msg="Handler for GET /v1.23/images/b.gcr.io-container-registry/microservice
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kubelet[3344]: I0310 06:50:47.194220 3344 provider.go:119] Refreshing cache for provider: *gcp_credentials.dockerConfigKeyProvider
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-udevd[6847]: Could not generate persistent MAC address for veth2228e3ba: No such file or directory
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Network configuration changed, trying to establish connection.
Mar 10 06:50:47 gke-microservices-g1-small-25eedb64-w265 systemd-timesyncd[570]: Synchronized to time server 169.254.169.254:123 (169.254.169.254).
Question:
Is it possible to avoid or reduce the number of restarts and solve the networking issues to make our system more stable?
This is pretty interesting. While not a solution, I would recommend:
Open a support ticket
Start some nodes from the other system image, container-vm and observe the difference in behaviour
You are using shared-core (g1-small) instances. Network performance is heavily affected by CPU and storage usage on these instances. See more info at https://cloud.google.com/compute/docs/networks-and-firewalls (egress caps):
Instances that have 0.5 or fewer cores, such as shared-core machine types, are treated as having 0.5 cores, and a network throughput cap of 1 Gbit/sec.
Both persistent disk write I/O and network traffic count towards the instance's network cap. Depending on your needs, ensure your instance can support any desired persistent disk throughput for your applications. For more information, see the persistent disk specifications.
Start more kube-dns and nginx-ingress-controller replicas so you are less affected by single-node failures.
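The last two suggestions can be done from the command line. As a rough sketch (the deployment names come from the logs above, and the cluster name, pool name, and flag values are assumptions — adjust them to your setup, and note that GKE add-on managers may revert manual scaling of kube-system deployments):

```shell
# Scale core add-ons so a single node restart is less disruptive.
kubectl --namespace=kube-system scale deployment kube-dns --replicas=3
kubectl scale deployment nginx-ingress-controller --replicas=2

# Optionally add a node pool on the older container-vm image to compare behaviour.
# Check `gcloud container node-pools create --help` for the exact flags on your SDK version.
gcloud container node-pools create containervm-pool \
    --cluster=microservices --image-type=container_vm --num-nodes=2
```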
Related
After upgrading via the mc command, I get this error when I try to log in to the (kind of new) minio console:
Post "https://fqdn.org/": dial tcp 127.0.1.1:443: connect: connection refused
I have a signed and valid SSL Certificate.
Downgrading minio (i.e., restoring a snapshot of the VM) solves the problem.
Any ideas?
This is my config:
MINIO_SERVER_URL="https://fqdn.org"
MINIO_ACCESS_KEY="key"
MINIO_VOLUMES="/mnt/hdd2/minio/"
MINIO_OPTS="-C /etc/minio --address :9000 --console-address :9001"
MINIO_SECRET_KEY="minio"
This is my minio startup log:
● minio.service - MinIO
Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-11 08:41:14 CET; 4min 50s ago
Docs: https://docs.min.io
Process: 3567 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCESS)
Main PID: 3568 (minio)
Tasks: 9 (limit: 2351)
Memory: 101.9M
CGroup: /system.slice/minio.service
└─3568 /home/minio/minio server -C /etc/minio --address :9000 --console-address :9001 /mnt/hdd2/minio/
Nov 11 08:41:14 pmit-minio-test systemd[1]: Starting MinIO...
Nov 11 08:41:14 pmit-minio-test systemd[1]: Started MinIO.
Nov 11 08:41:17 pmit-minio-test minio[3568]: WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Nov 11 08:41:17 pmit-minio-test minio[3568]: Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Nov 11 08:41:17 pmit-minio-test minio[3568]: API: https://fqdn.org
Nov 11 08:41:17 pmit-minio-test minio[3568]: Console: https://191.164.213.7:9001 https://127.0.0.1:9001
Nov 11 08:41:17 pmit-minio-test minio[3568]: Documentation: https://docs.min.io
Please see the answer here:
https://github.com/minio/minio/issues/13639#issuecomment-966244704
I had to change this line:
MINIO_SERVER_URL="https://fqdn.org:9000"
I have installed the Dante proxy server using the following methods from the website, but the server doesn't start and shows the following error. I have tried the steps from other websites as well. I searched StackOverflow and saw the same issue in one question, but it hasn't been solved yet. Can anyone solve it or suggest another alternative for a SOCKS5 proxy server?
Job for danted.service failed because the control process exited with error code. See "systemctl status danted.service" and "journalctl -xe" for details.
Error shown in systemctl status danted.service & journalctl -xe
steven@steven-VirtualBox:~$ systemctl status danted.service
● danted.service - LSB: SOCKS (v4 and v5) proxy daemon (danted)
Loaded: loaded (/etc/init.d/danted; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-03-10 18:12:42 IST; 2min 59s ago
Docs: man:systemd-sysv-generator(8)
Process: 3400 ExecStart=/etc/init.d/danted start (code=exited, status=1/FAILURE)
Mar 10 18:12:41 steven-VirtualBox systemd[1]: Starting LSB: SOCKS (v4 and v5) proxy daemon (danted)...
Mar 10 18:12:42 steven-VirtualBox danted[3405]: error: /etc/danted.conf: problem on line 11 near token "eth0": could not resolve hostname "eth0
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Control process exited, code=exited status=1
Mar 10 18:12:42 steven-VirtualBox danted[3400]: Starting Dante SOCKS daemon:
Mar 10 18:12:42 steven-VirtualBox systemd[1]: Failed to start LSB: SOCKS (v4 and v5) proxy daemon (danted).
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Unit entered failed state.
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Failed with result 'exit-code'.
steven@steven-VirtualBox:~$ journalctl -xe
-- The result is failed.
Mar 10 18:11:40 steven-VirtualBox systemd[1]: danted.service: Unit entered failed state.
Mar 10 18:11:40 steven-VirtualBox systemd[1]: danted.service: Failed with result 'exit-code'.
Mar 10 18:12:40 steven-VirtualBox sudo[3397]: steven : TTY=pts/18 ; PWD=/home/steven ; USER=root ; COMMAND=/bin/systemctl restart danted
Mar 10 18:12:41 steven-VirtualBox sudo[3397]: pam_unix(sudo:session): session opened for user root by (uid=0)
Mar 10 18:12:41 steven-VirtualBox systemd[1]: Stopped LSB: SOCKS (v4 and v5) proxy daemon (danted).
-- Subject: Unit danted.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit danted.service has finished shutting down.
Mar 10 18:12:41 steven-VirtualBox systemd[1]: Starting LSB: SOCKS (v4 and v5) proxy daemon (danted)...
-- Subject: Unit danted.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit danted.service has begun starting up.
Mar 10 18:12:42 steven-VirtualBox danted[3405]: error: /etc/danted.conf: problem on line 11 near token "eth0": could not resolve hostname "eth0
Mar 10 18:12:42 steven-VirtualBox danted[3405]: alert: mother[1/1]: shutting down
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Control process exited, code=exited status=1
Mar 10 18:12:42 steven-VirtualBox danted[3400]: Starting Dante SOCKS daemon:
Mar 10 18:12:42 steven-VirtualBox sudo[3397]: pam_unix(sudo:session): session closed for user root
Mar 10 18:12:42 steven-VirtualBox systemd[1]: Failed to start LSB: SOCKS (v4 and v5) proxy daemon (danted).
-- Subject: Unit danted.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit danted.service has failed.
--
-- The result is failed.
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Unit entered failed state.
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Failed with result 'exit-code'.
Mar 10 18:12:50 steven-VirtualBox sudo[3407]: steven : TTY=pts/18 ; PWD=/home/steven ; USER=root ; COMMAND=/bin/systemctl status danted
Mar 10 18:12:50 steven-VirtualBox sudo[3407]: pam_unix(sudo:session): session opened for user root by (uid=0)
Mar 10 18:14:38 steven-VirtualBox sudo[3407]: pam_unix(sudo:session): session closed for user root
I had the same issue and came across your question. I fixed it by adding a systemd dependency on network-online.target to danted.service, based on reading this: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
Here's how:
sudo systemctl edit danted.service
add this:
[Unit]
After=network-online.target
Wants=network-online.target
Save and exit, then run these for good measure:
sudo systemctl daemon-reload
sudo systemctl enable danted.service
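For reference, systemctl edit writes those two lines to a drop-in file, by default /etc/systemd/system/danted.service.d/override.conf, so after saving you should end up with something like this (you can confirm the merged unit with systemctl cat danted.service):

```ini
# /etc/systemd/system/danted.service.d/override.conf
# (created by 'sudo systemctl edit danted.service')
[Unit]
After=network-online.target
Wants=network-online.target
```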
This line is the telltale:
Mar 10 18:12:42 steven-VirtualBox danted[3405]: error: /etc/danted.conf: problem on line 11 near token "eth0": could not resolve hostname "eth0
It looks like there is no interface called eth0.
I had the same issue; I found out what the actual interface is called using ifconfig and swapped out eth0 for it.
Find your device's interface from the terminal with netstat -rn and look at the Iface column. Install netstat with sudo apt install net-tools if you don't have it. Then, in the file /etc/danted.conf, change external: eth0 to external: xxxx, where xxxx is your Iface value.
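If you'd rather not install net-tools, the kernel also exposes interface names directly under /sys/class/net, so a quick way to list candidates for the external: value is:

```shell
# List every network interface name known to the kernel.
# 'lo' is the loopback; the external interface is usually something
# like enp0s3, ens33, or wlan0 on modern systems.
ls /sys/class/net
```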
If you're just starting out and there are no saved rules in danted.conf yet, you can simply delete the file with sudo rm /etc/danted.conf and then create a new one with sudo nano /etc/danted.conf. If you are using a firewall, you must open port 1080 with sudo ufw allow 1080. In the new empty danted.conf, paste in:
logoutput: syslog
user.privileged: root
user.unprivileged: nobody
# The listening network interface or address.
internal: 0.0.0.0 port=1080
# The proxying network interface or address.
external: xxxx #Replace xxxx with the device's Iface
# socks-rules determine what is proxied through the external interface.
socksmethod: username
# client-rules determine who can connect to the internal interface.
clientmethod: none
client pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
}
socks pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
}
Save the file and run
sudo systemctl restart danted.service
sudo systemctl status danted.service
I am using a Raspberry Pi. To reduce I/O on my SD card I symlink all important log files to an external USB-mounted hard drive.
Example:
ln -s /media/usb-device/logs/auth.log /var/log/auth.log
The logging works fine, but fail2ban doesn't seem to like it. When I enable SSH monitoring in my /etc/fail2ban/jail.local file,
# [sshd]
enabled = true
bantime = 3600
fail2ban crashes when I run systemctl restart fail2ban.service.
I have tried to hardcode the path:
# logpath = %(sshd_log)s
logpath = /media/usb-devive/logs/auth.log
But fail2ban throws the same error:
fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-04-28 20:42:33 CEST; 45s ago
Docs: man:fail2ban(1)
Process: 3014 ExecStop=/usr/bin/fail2ban-client stop (code=exited, status=0/SUCCESS)
Process: 3045 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
Main PID: 658 (code=killed, signal=TERM)
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
Apr 28 20:42:33 raspberrypi systemd[1]: Stopped Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Start request repeated too quickly.
Apr 28 20:42:33 raspberrypi systemd[1]: Failed to start Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Unit entered failed state.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Failed with result 'exit-code'.
Any ideas?
"devive" in the logpath is spelt incorrectly
I'm a newbie with Docker/Docker Swarm and I want to create a custom mosquitto service on Swarm. I created a custom mosquitto image
lcsf/mosquitto3 from ubuntu:latest, then added some tools (ping, ipconfig). I can run a single container with docker run and /bin/bash, but I can't create a Swarm service with that image. The service isn't created successfully. Some outputs are below.
Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update
RUN apt-get install -y mosquitto mosquitto-clients
EXPOSE 80 443 1883 8083 8883
Docker service create output:
overall progress: 0 out of 1 tasks
1/1: preparing [========> ]
verify: Detected task failure
This output is shown in a loop; when I stop it with Ctrl+C, the service is created but doesn't run, with 0/1 replicas.
Output of docker service ps mqtt (my custom name); there are 3 nodes:
ID NAME IMAGE NODE
DESIRED STATE CURRENT STATE ERROR PORTS
llqr0gysz4bj mqtt.1 lcsf/mosquitto3:latest Docker02 Ready Ready 2 seconds ago
kcwfqovyn2mp \_ mqtt.1 lcsf/mosquitto3:latest Docker03 Shutdown Complete 2 seconds ago
ruisy599nbt4 \_ mqtt.1 lcsf/mosquitto3:latest Docker03 Shutdown Complete 7 seconds ago
xg1lib5x8vt9 \_ mqtt.1 lcsf/mosquitto3:latest Docker02 Shutdown Complete 13 seconds ago
fgm9wu25t0lj \_ mqtt.1 lcsf/mosquitto3:latest Docker03 Shutdown Complete 18 seconds ago
That's it; I hope someone can help me. Thanks in advance, and I'm sorry about my English and Stack Overflow skills.
UPDATE #1
Output from the journalctl -f -n10 command after trying to create the service:
Sep 25 09:01:03 Docker01 dockerd[1230]: time="2017-09-25T09:01:03.692391553-04:00" level=info msg="Node join event for Docker02-a9b6d39043d3/192.168.222.51"
Sep 25 09:01:15 Docker01 systemd-udevd[31966]: Could not generate persistent MAC address for veth8e5ebcb: No such file or directory
Sep 25 09:01:15 Docker01 systemd-udevd[31967]: Could not generate persistent MAC address for vethaf2978b: No such file or directory
Sep 25 09:01:15 Docker01 kernel: docker0: port 1(vethaf2978b) entered blocking state
Sep 25 09:01:15 Docker01 kernel: docker0: port 1(vethaf2978b) entered disabled state
Sep 25 09:01:15 Docker01 kernel: device vethaf2978b entered promiscuous mode
Sep 25 09:01:15 Docker01 kernel: IPv6: ADDRCONF(NETDEV_UP): vethaf2978b: link is not ready
Sep 25 09:01:15 Docker01 kernel: eth0: renamed from veth8e5ebcb
Sep 25 09:01:15 Docker01 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethaf2978b: link becomes ready
Sep 25 09:01:15 Docker01 kernel: docker0: port 1(vethaf2978b) entered blocking state
Sep 25 09:01:15 Docker01 kernel: docker0: port 1(vethaf2978b) entered forwarding state
Sep 25 09:01:15 Docker01 kernel: docker0: port 1(vethaf2978b) entered disabled state
Sep 25 09:01:15 Docker01 kernel: veth8e5ebcb: renamed from eth0
Sep 25 09:01:15 Docker01 kernel: docker0: port 1(vethaf2978b) entered disabled state
Sep 25 09:01:15 Docker01 kernel: device vethaf2978b left promiscuous mode
Sep 25 09:01:15 Docker01 kernel: docker0: port 1(vethaf2978b) entered disabled state
Sep 25 09:01:33 Docker01 dockerd[1230]: time="2017-09-25T09:01:33.693508463-04:00" level=info msg="Node join event for Docker03-f71a448c54c7/192.168.222.52"
Sep 25 09:01:46 Docker01 dockerd[1230]: time="2017-09-25T09:01:46.541311475-04:00" level=info msg="Node join event for Docker02-a9b6d39043d3/192.168.222.51"
Sep 25 09:01:57 Docker01 dockerd[1230]: sync duration of 3.001217113s, expected less than 1s
Sep 25 09:02:03 Docker01 dockerd[1230]: time="2017-09-25T09:02:03.694876667-04:00" level=info msg="Node join event for Docker03-f71a448c54c7/192.168.222.52"
Sep 25 09:02:33 Docker01 dockerd[1230]: time="2017-09-25T09:02:33.695993259-04:00" level=info msg="Node join event for Docker03-f71a448c54c7/192.168.222.52"
UPDATE #2
This is the output from the docker service ps --no-trunc mqtt command:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
bour693j8jbbrt799fz0nkpwr mqtt.1 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker03 Ready Ready 4 seconds ago
wro6254cs94gkijs8s4v9cvim \_ mqtt.1 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker03 Shutdown Complete 4 seconds ago
7vgx2mehaxki2p680fesn5jww \_ mqtt.1 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker03 Shutdown Complete 10 seconds ago
52hv6da6mj72s64po3hze4ham \_ mqtt.1 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker03 Shutdown Complete 15 seconds ago
e3s383vtg0idw8ryxwh2y3gmu \_ mqtt.1 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker03 Shutdown Complete 21 seconds ago
90i30f3riwka8xs187xi7uxt2 mqtt.2 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker02 Ready Ready less than a second ago
p2lzd04tinjdjkwkr26umlh9a \_ mqtt.2 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker02 Shutdown Complete less than a second ago
q8awoj8uu7gad6hvonhl4t9f1 \_ mqtt.2 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker02 Shutdown Complete 6 seconds ago
1fuqt0et7vw1vntd8p62jiiut \_ mqtt.2 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker02 Shutdown Complete 11 seconds ago
k3vlusok792zw0v3yddxqlmg3 \_ mqtt.2 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker02 Shutdown Complete 17 seconds ago
i4tywshqv4pxsyz5tz0z0evkz mqtt.3 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker01 Ready Ready less than a second ago
44ee4iqqpkeome4lokx9ykmbo \_ mqtt.3 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker01 Shutdown Complete less than a second ago
kdx273e9fkpqkafztif1dz35q \_ mqtt.3 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker01 Shutdown Complete 6 seconds ago
l2oewfnwbkia94r6rifbcfi4h \_ mqtt.3 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker01 Shutdown Complete 11 seconds ago
dyekgkd0swsualssw4dtvk681 \_ mqtt.3 lcsf/mosquitto3:latest@sha256:beca44e5f916d08730dd19d9d10dd2dcbd3502866f69316806a63bc094a179a9 Docker01 Shutdown Complete 17 seconds ago
Your issue is your Dockerfile: it defines no long-running command, so each task's container exits immediately after starting and Swarm keeps rescheduling it. You need to run a command which doesn't exit, for example:
FROM ubuntu:latest
RUN apt-get -y update
RUN apt-get install -y mosquitto mosquitto-clients
EXPOSE 80 443 1883 8083 8883
CMD ["tail", "-f", "/dev/null"]
This is an infinite tail command, which makes sure your container doesn't exit. When deploying to Swarm, run a command in the image that doesn't wait for user input.
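The tail trick keeps the container alive but doesn't actually serve MQTT. Since the image already installs mosquitto, a more useful variant is to run the broker itself in the foreground as the service command (a sketch, assuming the Debian/Ubuntu package's default config path):

```dockerfile
FROM ubuntu:latest
RUN apt-get -y update && apt-get install -y mosquitto mosquitto-clients
EXPOSE 80 443 1883 8083 8883
# Run the broker in the foreground (no -d flag); if the main process
# exits, Swarm treats the task as failed and reschedules it.
CMD ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
```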
Just reinstalled MongoDB on my Mac (fresh install of Mountain Lion 10.8) and now my apps are taking ~3 minutes to connect.
I put together a simple node script to test this:
var start = (new Date()).getTime();
var mongoose = require('mongoose');
var db = mongoose.connect('mongodb://localhost/passport-mongox',function(err){
var stop = (new Date()).getTime();
console.log('Took this long: ',(stop-start) / 1000 );
});
Both runs took 175.273 and 175.316 seconds.
When I connect to an external, hosted MongoDB it connects in less than a second.
Any idea why this would happen? Here is my mongo.log:
Fri Feb 1 12:43:25 [initandlisten] MongoDB starting : pid=2262 port=27017 dbpath=/usr/local/var/mongodb 64-bit host=w
Fri Feb 1 12:43:25 [initandlisten] db version v2.2.2, pdfile version 4.5
Fri Feb 1 12:43:25 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
Fri Feb 1 12:43:25 [initandlisten] build info: Darwin bs-osx-106-x86-64-1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49
Fri Feb 1 12:43:25 [initandlisten] options: { bind_ip: "127.0.0.1", config: "/usr/local/etc/mongod.conf", dbpath: "/usr/local/var/mongodb", logappend: "true", logpath: "/usr/local/var/log/mongodb/mongo.log" }
Fri Feb 1 12:43:25 [initandlisten] journal dir=/usr/local/var/mongodb/journal
Fri Feb 1 12:43:25 [initandlisten] recover : no journal files present, no recovery needed
Fri Feb 1 12:43:26 [websvr] admin web console waiting for connections on port 28017
Fri Feb 1 12:43:26 [initandlisten] waiting for connections on port 27017
Fri Feb 1 12:44:05 [initandlisten] connection accepted from 127.0.0.1:52137 #1 (1 connection now open)
Fri Feb 1 12:44:40 [initandlisten] connection accepted from 127.0.0.1:52152 #2 (2 connections now open)
Fri Feb 1 12:45:15 [initandlisten] connection accepted from 127.0.0.1:52201 #3 (3 connections now open)
Fri Feb 1 12:45:50 [initandlisten] connection accepted from 127.0.0.1:52298 #4 (4 connections now open)
Fri Feb 1 12:46:25 [initandlisten] connection accepted from 127.0.0.1:52325 #5 (5 connections now open)
Fri Feb 1 12:51:26 [conn5] end connection 127.0.0.1:52325 (4 connections now open)
Fri Feb 1 12:51:26 [conn3] end connection 127.0.0.1:52201 (4 connections now open)
Fri Feb 1 12:51:26 [conn4] end connection 127.0.0.1:52298 (4 connections now open)
Fri Feb 1 12:51:26 [conn1] end connection 127.0.0.1:52137 (4 connections now open)
Fri Feb 1 12:51:26 [conn2] end connection 127.0.0.1:52152 (4 connections now open)
Answer from the mongoose.js documentation:
Cause:
The underlying MongoDB driver defaults to looking for IPv6 addresses,
so the most likely cause is that your localhost DNS mapping isn't configured to handle IPv6.
Solution:
Use 127.0.0.1 instead of localhost, or use the family option as shown in the connection docs.
mongoose.connect(url, { family: 4 }, function(err) {
  // the active connection is available as mongoose.connection
});
So the answer came from @AdamMeghji on Twitter.
My hosts file has always looked like this:
127.0.0.1 localhost
127.0.0.1 test.com
127.0.0.1 wes.dev
I switched that to:
127.0.0.1 localhost test.com wes.dev
and connections went back to 0.015 seconds.
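On Linux you can see the same effect without touching Mongo at all: getent shows what localhost actually resolves to, and whether an IPv6 entry (::1) comes back (on macOS, dscacheutil -q host -a name localhost gives a similar view). A quick diagnostic:

```shell
# Show the address(es) the resolver returns for 'localhost'.
# If ::1 is returned first, clients that try IPv6 before IPv4
# can stall exactly as described above.
getent hosts localhost
```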