The error I get is:
Job for apache2.service failed because the control process exited with error code.
See "systemctl status apache2.service" and "journalctl -xe" for details.
After running systemctl status apache2.service I get:
apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset:
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Fri 2020-05-08 13:27:50 IST; 2min 12
Process: 12058 ExecStart=/usr/sbin/apachectl start (code=exited, status=127)
May 08 13:27:50 kaushal systemd[1]: Starting The Apache HTTP Server...
May 08 13:27:50 kaushal apachectl[12058]: /usr/sbin/apachectl: 174: /usr/sbin/ap
May 08 13:27:50 kaushal apachectl[12058]: Action 'start' failed.
May 08 13:27:50 kaushal apachectl[12058]: The Apache error log may have more inf
May 08 13:27:50 kaushal systemd[1]: apache2.service: Control process exited, cod
May 08 13:27:50 kaushal systemd[1]: apache2.service: Failed with result 'exit-co
May 08 13:27:50 kaushal systemd[1]: Failed to start The Apache HTTP Server.
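Exit status 127 from the control process usually means "command not found". A few hedged checks, assuming a stock Debian/Ubuntu Apache layout (the exact failing script line is truncated above, but the log points at line 174 of apachectl):
# See what /usr/sbin/apachectl tries to execute around the failing line 174
sed -n '170,178p' /usr/sbin/apachectl
# Verify the httpd binary the script expects actually exists
ls -l /usr/sbin/apache2
# Run the config test directly for a clearer message
sudo apachectl configtest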
After running journalctl -xe I get:
May 08 13:27:50 kaushal systemd[1]: apache2.service: Failed with result 'exit-co
May 08 13:27:50 kaushal sudo[12024]: pam_unix(sudo:session): session closed for
May 08 13:27:50 kaushal systemd[1]: Failed to start The Apache HTTP Server.
-- Subject: Unit apache2.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit apache2.service has failed.
--
-- The result is RESULT.
May 08 13:27:53 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
May 08 13:27:54 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
May 08 13:27:55 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
May 08 13:27:56 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
May 08 13:28:21 kaushal /usr/lib/gdm3/gdm-x-session[2772]: (EE) client bug: time
May 08 13:28:26 kaushal kernel: psmouse serio4: Touchpad at isa0060/serio4/input
May 08 13:28:26 kaushal kernel: psmouse serio4: Touchpad at isa0060/serio4/input
May 08 13:28:36 kaushal kernel: psmouse serio4: Touchpad at isa0060/serio4/input
May 08 13:28:36 kaushal kernel: psmouse serio4: Touchpad at isa0060/serio4/input
May 08 13:29:53 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
May 08 13:29:54 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
May 08 13:29:55 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
May 08 13:29:56 kaushal kernel: [UFW BLOCK] IN=wlp3s0 OUT= MAC=00:08:ca:f0:27:04
Related
I am new to Kibana and how it's set up.
We are testing setting up Kibana on an Azure VM with Ansible playbooks. All seemed to be fine, but unfortunately I think we made a mistake somewhere during our troubleshooting, and now the Kibana service will not start. The VM is running CentOS; the error we get is
Dec 07 09:47:06 es-vm1 systemd[1]: Started Kibana.
Dec 07 09:47:06 es-vm1 systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
Dec 07 09:47:06 es-vm1 systemd[1]: Unit kibana.service entered failed state.
Dec 07 09:47:06 es-vm1 systemd[1]: kibana.service failed.
Dec 07 09:47:07 es-vm1 systemd[1]: Stopped Kibana.
Dec 07 09:47:07 es-vm1 systemd[1]: Started Kibana.
Dec 07 09:47:07 es-vm1 kibana[11134]: internal/fs/utils.js:332
Dec 07 09:47:07 es-vm1 kibana[11134]: throw err;
Dec 07 09:47:07 es-vm1 kibana[11134]: ^
Dec 07 09:47:07 es-vm1 kibana[11134]: Error: EACCES: permission denied, open '/etc/kibana/kibana.yml'
Dec 07 09:47:07 es-vm1 kibana[11134]: at Object.openSync (fs.js:497:3)
Dec 07 09:47:07 es-vm1 kibana[11134]: at readFileSync (fs.js:393:35)
Dec 07 09:47:07 es-vm1 kibana[11134]: at readYaml (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target_node/utils/read_config.js:25:69)
Dec 07 09:47:07 es-vm1 kibana[11134]: at getConfigFromFiles (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target_node/utils/read_config.js:57:18)
Dec 07 09:47:07 es-vm1 kibana[11134]: at loadConfiguration (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target_node/config_loader.js:30:58)
Dec 07 09:47:07 es-vm1 kibana[11134]: at initApm (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target_node/init_apm.js:18:64)
Dec 07 09:47:07 es-vm1 kibana[11134]: at module.exports (/usr/share/kibana/src/cli/apm.js:27:3)
Dec 07 09:47:07 es-vm1 kibana[11134]: at Object.<anonymous> (/usr/share/kibana/src/cli/dist.js:10:17)
Dec 07 09:47:07 es-vm1 kibana[11134]: at Module._compile (internal/modules/cjs/loader.js:1085:14)
Dec 07 09:47:07 es-vm1 kibana[11134]: at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10) {
Dec 07 09:47:07 es-vm1 kibana[11134]: errno: -13,
Dec 07 09:47:07 es-vm1 kibana[11134]: syscall: 'open',
Dec 07 09:47:07 es-vm1 kibana[11134]: code: 'EACCES',
Dec 07 09:47:07 es-vm1 kibana[11134]: path: '/etc/kibana/kibana.yml'
Dec 07 09:47:07 es-vm1 kibana[11134]: }
Dec 07 09:47:07 es-vm1 systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
Dec 07 09:47:07 es-vm1 systemd[1]: Unit kibana.service entered failed state.
Dec 07 09:47:07 es-vm1 systemd[1]: kibana.service failed.
Dec 07 09:47:10 es-vm1 systemd[1]: kibana.service holdoff time over, scheduling restart.
Dec 07 09:47:10 es-vm1 systemd[1]: Stopped Kibana.
Dec 07 09:47:10 es-vm1 systemd[1]: Started Kibana.
Dec 07 09:47:11 es-vm1 kibana[11149]: internal/fs/utils.js:332
Dec 07 09:47:11 es-vm1 kibana[11149]: throw err;
Dec 07 09:47:11 es-vm1 kibana[11149]: ^
Dec 07 09:47:11 es-vm1 kibana[11149]: Error: EACCES: permission denied, open '/etc/kibana/kibana.yml'
Dec 07 09:47:11 es-vm1 kibana[11149]: at Object.openSync (fs.js:497:3)
Dec 07 09:47:11 es-vm1 kibana[11149]: at readFileSync (fs.js:393:35)
Dec 07 09:47:11 es-vm1 kibana[11149]: at readYaml (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target_node/utils/read_config.js:25:69)
Dec 07 09:47:11 es-vm1 systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
Dec 07 09:47:11 es-vm1 systemd[1]: Unit kibana.service entered failed state.
Dec 07 09:47:11 es-vm1 systemd[1]: kibana.service failed.
Dec 07 09:47:14 es-vm1 systemd[1]: kibana.service holdoff time over, scheduling restart.
Dec 07 09:47:14 es-vm1 systemd[1]: Stopped Kibana.
Dec 07 09:47:14 es-vm1 systemd[1]: start request repeated too quickly for kibana.service
Dec 07 09:47:14 es-vm1 systemd[1]: Failed to start Kibana.
Dec 07 09:47:14 es-vm1 systemd[1]: Unit kibana.service entered failed state.
Dec 07 09:47:14 es-vm1 systemd[1]: kibana.service failed.
The permissions on the .yml file are as follows:
-rwxrwx---. 1 root kibana 130 Dec 2 14:04 kibana.keystore
-rw-r--r--. 1 root root 5089 Dec 7 09:47 kibana.yml
-rw-r--r--. 1 root kibana 216 Nov 4 13:30 node.options
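The file itself is world-readable (-rw-r--r--), so the containing directory is the more likely culprit. Two hedged checks, assuming the service runs as the kibana user:
# EACCES on a world-readable file usually means the directory
# holding it is not traversable by the service user
ls -ld /etc/kibana
# Reproduce the read exactly as the kibana user would
sudo -u kibana cat /etc/kibana/kibana.yml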
The systemctl status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2021-12-07 09:47:14 UTC; 9min ago
Docs: https://www.elastic.co
Process: 11149 ExecStart=/usr/share/kibana/bin/kibana --logging.dest="/var/log/kibana/kibana.log" --pid.file="/run/kibana/kibana.pid" (code=exited, status=1/FAILURE)
Main PID: 11149 (code=exited, status=1/FAILURE)
Dec 07 09:47:11 es-vm1 systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
Dec 07 09:47:11 es-vm1 systemd[1]: Unit kibana.service entered failed state.
Dec 07 09:47:11 es-vm1 systemd[1]: kibana.service failed.
Dec 07 09:47:14 es-vm1 systemd[1]: kibana.service holdoff time over, scheduling restart.
Dec 07 09:47:14 es-vm1 systemd[1]: Stopped Kibana.
Dec 07 09:47:14 es-vm1 systemd[1]: start request repeated too quickly for kibana.service
Dec 07 09:47:14 es-vm1 systemd[1]: Failed to start Kibana.
Dec 07 09:47:14 es-vm1 systemd[1]: Unit kibana.service entered failed state.
Dec 07 09:47:14 es-vm1 systemd[1]: kibana.service failed.
The kibana user does exist as well:
uid=995(kibana) gid=991(kibana) groups=991(kibana)
Could anyone point me in the right direction? What should I do here? I've tried playing around with permissions on the file but the error always seems to be the same.
Solution update: I was able to resolve this by running chmod 2750 kibana from the /etc/ directory. It was an error on my part while troubleshooting in the /etc/kibana directory.
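For anyone following along, the fix amounts to something like this (a sketch; 2750 restores rwxr-s--- on the directory so the kibana group can traverse it):
cd /etc
sudo chmod 2750 kibana
ls -ld kibana          # should now show drwxr-s---. root kibana
sudo systemctl restart kibana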
I am tearing my hair out trying to figure out why Elasticsearch is not starting. These are my first days with the ES stack, so I am completely helpless.
I am running
sudo systemctl start elasticsearch
and get an error saying:
Job for elasticsearch.service failed because a fatal signal was
delivered to the control process. See "systemctl status
elasticsearch.service" and "journalctl -xe" for details.
When I run journalctl -xe, I see lines of messages that are completely unclear to me:
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered blocking state
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered disabled state
Apr 21 19:16:24 my-pc-15IKB kernel: device veth19babe9 entered promiscuous mode
Apr 21 19:16:24 my-pc-15IKB charon[1828]: 10[KNL] interface veth19babe9 activated
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.2920] manager: (veth8fd9f3d): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.2934] manager: (veth19babe9): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered blocking state
Apr 21 19:16:24 my-pc-15IKB kernel: br-6f050fa6218c: port 2(veth19babe9) entered forwarding state
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: Using default interface naming scheme 'v245'.
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39880]: veth19babe9: Could not generate persistent MAC: No data available
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 21 19:16:24 my-pc-15IKB dockerd[1804]: time="2021-04-21T19:16:24.298236681+03:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using def>
Apr 21 19:16:24 my-pc-15IKB dockerd[1804]: time="2021-04-21T19:16:24.298287953+03:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 200>
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: Using default interface naming scheme 'v245'.
Apr 21 19:16:24 my-pc-15IKB systemd-udevd[39877]: veth8fd9f3d: Could not generate persistent MAC: No data available
Apr 21 19:16:24 my-pc-15IKB containerd[851]: time="2021-04-21T19:16:24.328926349+03:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.cont>
Apr 21 19:16:24 my-pc-15IKB charon[1828]: 11[KNL] interface veth8fd9f3d deleted
Apr 21 19:16:24 my-pc-15IKB kernel: eth0: renamed from veth8fd9f3d
Apr 21 19:16:24 my-pc-15IKB kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth19babe9: link becomes ready
Apr 21 19:16:24 my-pc-15IKB NetworkManager[757]: <info> [1619021784.8637] device (veth19babe9): carrier: link connected
Apr 21 19:16:24 my-pc-15IKB gnome-shell[5088]: Removing a network device that was not added
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: Joining mDNS multicast group on interface veth19babe9.IPv6 with address fe80::a0c2:a3ff:feb8:587a.
Apr 21 19:16:26 my-pc-15IKB charon[1828]: 12[KNL] fe80::a0c2:a3ff:feb8:587a appeared on veth19babe9
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: New relevant interface veth19babe9.IPv6 for mDNS.
Apr 21 19:16:26 my-pc-15IKB avahi-daemon[748]: Registering new address record for fe80::a0c2:a3ff:feb8:587a on veth19babe9.*.
My .yml file looks like this:
cluster.name: petlon-app
node.name: my-app-node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
http.port: 9200
discovery.type: single-node
Could you please give a hint as to what this means?
UPDATE:
The output of sudo journalctl -u elasticsearch.service
-- Logs begin at Thu 2021-03-11 13:10:55 MSK, end at Wed 2021-04-21 21:11:37 MSK. --
Mar 16 21:23:20 my-pc systemd[1]: Starting Elasticsearch...
Mar 16 21:23:41 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Mar 16 21:23:41 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Mar 16 21:23:41 my-pc systemd[1]: Failed to start Elasticsearch.
-- Reboot --
Apr 11 15:55:31 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 15:55:46 my-pc systemd[1]: Started Elasticsearch.
Apr 11 15:56:41 my-pc systemd[1]: Stopping Elasticsearch...
Apr 11 15:56:41 my-pc systemd[1]: elasticsearch.service: Succeeded.
Apr 11 15:56:41 my-pc systemd[1]: Stopped Elasticsearch.
Apr 11 15:56:41 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 15:57:01 my-pc systemd[1]: Started Elasticsearch.
Apr 11 16:11:06 my-pc systemd[1]: Stopping Elasticsearch...
Apr 11 16:11:07 my-pc systemd[1]: elasticsearch.service: Succeeded.
Apr 11 16:11:07 my-pc systemd[1]: Stopped Elasticsearch.
-- Reboot --
Apr 11 16:12:53 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 16:13:13 my-pc systemd[1]: Started Elasticsearch.
Apr 11 18:51:08 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 18:51:08 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:31:42 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:31:47 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:31:47 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:31:47 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:32:14 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:32:16 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:32:16 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:32:16 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:35:33 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:35:37 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:35:37 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:35:37 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:37:53 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:37:57 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:37:57 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:37:57 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:38:02 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:38:06 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:38:06 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:38:06 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:41:57 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:42:00 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:42:00 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:42:00 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:46:53 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:46:59 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:46:59 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:46:59 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:49:00 my-pc systemd[1]: Starting Elasticsearch...
Apr 11 21:49:03 my-pc systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
Apr 11 21:49:03 my-pc systemd[1]: elasticsearch.service: Failed with result 'signal'.
Apr 11 21:49:03 my-pc systemd[1]: Failed to start Elasticsearch.
Apr 11 21:49:32 my-pc systemd[1]: Starting Elasticsearch...
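The repeated code=killed, status=9/KILL entries mean the JVM is being killed from outside, most commonly by the kernel OOM killer on a low-memory machine. A couple of hedged checks, assuming a standard package install:
# Look for OOM-killer activity around the failed starts
dmesg -T | grep -i -E 'out of memory|killed process'
# If memory is the cause, cap the heap in /etc/elasticsearch/jvm.options, e.g.:
#   -Xms512m
#   -Xmx512m
sudo systemctl restart elasticsearch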
Elasticsearch is working with no issues on http://localhost:9200, and the operating system is Ubuntu 18.04.
Here is the error log for Kibana:
root@syed-MS-7B17:/var/log# journalctl -fu kibana.service
-- Logs begin at Sat 2020-01-04 18:30:58 IST. --
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: {"type":"log","@timestamp":"2020-04-03T14:52:49Z","tags":["fatal","root"],"pid":7165,"message":"{ Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601\n at Server.setupListenHandle [as _listen2] (net.js:1263:19)\n at listenInCluster (net.js:1328:12)\n at GetAddrInfoReqWrap.doListen (net.js:1461:7)\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:61:10)\n code: 'EADDRNOTAVAIL',\n errno: 'EADDRNOTAVAIL',\n syscall: 'listen',\n address: '7.0.0.1',\n port: 5601 }"}
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: FATAL Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Scheduled restart job, restart counter is at 2.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Stopped Kibana.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Start request repeated too quickly.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Failed to start Kibana.
I have resolved it myself after checking the /etc/hosts file.
It had been edited by mistake, like below:
7.0.0.1 localhost
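Restoring the standard loopback entry fixes it. A minimal sketch (corrected first line of /etc/hosts, then a restart):
127.0.0.1       localhost
sudo systemctl restart kibana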
I am trying to install the google-fluentd Stackdriver logging agent on my AWS EC2 instance running Ubuntu 16.04.5 LTS, and it fails with the following error. Could anyone help?
Job for google-fluentd.service failed because the control process exited with error code. See "systemctl status google-fluentd.service" and "journalctl -xe" for details.
invoke-rc.d: initscript google-fluentd, action "start" failed.
google-fluentd.service - LSB: data collector for Treasure Data
Loaded: loaded (/etc/init.d/google-fluentd; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2018-10-08 08:18:33 IST; 6ms ago
Docs: man:systemd-sysv-generator(8)
Process: 7778 ExecStart=/etc/init.d/google-fluentd start (code=exited, status=1/FAILURE)
Oct 08 08:18:33 ip-172-31-23-180 google-fluentd[7778]: from /opt/google-fluentd/embedded/lib/ruby/gems/2.4.0/gems/fluentd-0.14...red)>'
Oct 08 08:18:33 ip-172-31-23-180 google-fluentd[7778]: from /opt/google-fluentd/embedded/bin/fluentd:23:in `load'
Oct 08 08:18:33 ip-172-31-23-180 google-fluentd[7778]: from /opt/google-fluentd/embedded/bin/fluentd:23:in `<top (required)>'
Oct 08 08:18:33 ip-172-31-23-180 google-fluentd[7778]: from /usr/sbin/google-fluentd:7:in `load'
Oct 08 08:18:33 ip-172-31-23-180 google-fluentd[7778]: from /usr/sbin/google-fluentd:7:in `<main>'
Oct 08 08:18:33 ip-172-31-23-180 google-fluentd[7778]: * google-fluentd
Oct 08 08:18:33 ip-172-31-23-180 systemd[1]: google-fluentd.service: Control process exited, code=exited status=1
Oct 08 08:18:33 ip-172-31-23-180 systemd[1]: Failed to start LSB: data collector for Treasure Data.
Oct 08 08:18:33 ip-172-31-23-180 systemd[1]: google-fluentd.service: Unit entered failed state.
Oct 08 08:18:33 ip-172-31-23-180 systemd[1]: google-fluentd.service: Failed with result 'exit-code'.
Hint: Some lines were ellipsized, use -l to show in full.
dpkg: error processing package google-fluentd (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of google-fluentd-catch-all-config:
google-fluentd-catch-all-config depends on google-fluentd (>= 1.3.0); however:
Package google-fluentd is not configured yet.
dpkg: error processing package google-fluentd-catch-all-config (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Errors were encountered while processing:
google-fluentd
google-fluentd-catch-all-config
E: Sub-process /usr/bin/dpkg returned an error code (1)
Resolved the issue.
There was an issue with the authentication configuration which was not mentioned in the Stackdriver configuration guide. I followed the link to resolve the issue.
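For reference, on a non-GCP VM such as EC2 the agent authenticates with a service-account key; to the best of my knowledge the documented default location is /etc/google/auth/application_default_credentials.json (the key file name below is a placeholder):
sudo mkdir -p /etc/google/auth
sudo cp YOUR-SERVICE-ACCOUNT-KEY.json /etc/google/auth/application_default_credentials.json
sudo systemctl restart google-fluentd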
I'm trying to run/install Docker on my vServer and can't find information on whether it's even possible. I have tried CentOS (6 & 7), Ubuntu, Debian, and Fedora, and I'm just not able to get the Docker daemon to run.
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: failed (Result: exit-code) since So 2015-04-05 17:12:23 EDT; 16s ago
Docs: http://docs.docker.com
Process: 956 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 956 (code=exited, status=1/FAILURE)
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Starting Docker Applicati...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: inappropriate ioctl for ...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: docker.service: main proc...
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Failed to start Docker Ap...
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Unit docker.service enter...
Hint: Some lines were ellipsized, use -l to show in full.
[root@vvs ~]# systemctl status docker.service -l
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: failed (Result: exit-code) since So 2015-04-05 17:12:23 EDT; 33s ago
Docs: http://docs.docker.com
Process: 956 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 956 (code=exited, status=1/FAILURE)
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Starting Docker Application Container Engine...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="+job serveapi(unix:///var/run/docker.sock)"
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="WARNING: You are running linux kernel version 2.6.32-042stab094.8, which might be unstable running docker. Please upgrade your kernel to 3.8.0."
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="+job init_networkdriver()"
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: inappropriate ioctl for device
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="-job init_networkdriver() = ERR (1)"
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="fatal" msg="inappropriate ioctl for device"
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Failed to start Docker Application Container Engine.
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Unit docker.service entered failed state.
On every system there is a different problem, and I'm wasting hours and hours not solving them.
http://kb.odin.com/en/125115
This post suggests that it might not work at all on a vServer with an old kernel, as in my case.
Did anybody actually manage to use Docker on a vServer, and if so, which kernel does your host system have?
I have a cheap server at https://www.netcix.de, if that's important.
The installation page has a "Check kernel dependencies" section which clearly states the minimum kernel version required for Docker to run:
Docker in daemon mode has specific kernel requirements. For details, check your distribution in Installation.
A 3.10 Linux kernel is the minimum requirement for Docker. Kernels older than 3.10 lack some of the features required to run Docker containers. These older versions are known to have bugs which cause data loss and frequently panic under certain conditions.
The latest minor version (3.x.y) of the 3.10 (or a newer maintained version) Linux kernel is recommended. Keeping the kernel up to date with the latest minor version will ensure critical kernel bugs get fixed.
So if your distro's kernel is too old, or some other requirement is not met (as listed in Installation), that would explain why the Docker daemon fails.
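A quick way to check this up front on the affected machine (the logs above report 2.6.32, well below the minimum):
# Docker needs at least kernel 3.10; container-based vServers (OpenVZ etc.)
# report the host's kernel, which you cannot upgrade yourself
uname -r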