Help, my Kapacitor is not running. I'm running InfluxDB on the same server as Kapacitor and Telegraf, but Kapacitor won't start:
kapacitor.service - Time series data processing engine.
Loaded: loaded (/lib/systemd/system/kapacitor.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-01-03 17:56:38 UTC; 3s ago
Docs: https://github.com/influxdb/kapacitor
Process: 2502 ExecStart=/usr/bin/kapacitord -config /etc/kapacitor/kapacitor.conf $KAPACITOR_OPTS (code=exited, status=1/FAILURE)
Main PID: 2502 (code=exited, status=1/FAILURE)
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Service hold-off time over, scheduling restart.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Scheduled restart job, restart counter is at 5.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: Stopped Time series data processing engine..
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Start request repeated too quickly.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Failed with result 'exit-code'.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: Failed to start Time series data processing engine..
I found the solution myself:
[[influxdb]]
enabled = true
name = "localhost"
default = true
urls = ["http://localhost:8086"]
username = "user"
password = "password"
Keep in mind that you need to have a user created in InfluxDB beforehand.
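For reference, that user can be created from the influx shell beforehand; a minimal sketch using the placeholder credentials above (InfluxDB 1.x syntax, creating an admin user for simplicity):
CREATE USER "user" WITH PASSWORD 'password' WITH ALL PRIVILEGES
A non-admin user plus GRANT statements on the databases Kapacitor reads from also works.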
Related
I am installing the latest MinIO on Ubuntu 18.04, following the MinIO installation instructions from here.
After the installation, I tried to run it with sudo systemctl start minio.service,
but it failed with this message:
● minio.service - MinIO
Loaded: loaded (/etc/systemd/system/minio.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-12-08 17:03:45 CST; 2min 1s ago
Docs: https://docs.min.io
Process: 5072 ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES (code=exited, status=1/FAILURE)
Process: 5050 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCES
Main PID: 5072 (code=exited, status=1/FAILURE)
Dec 08 17:03:45 nky systemd[1]: minio.service: Service hold-off time over, scheduling restart.
Dec 08 17:03:45 nky systemd[1]: minio.service: Scheduled restart job, restart counter is at 5.
Dec 08 17:03:45 nky systemd[1]: Stopped MinIO.
Dec 08 17:03:45 nky systemd[1]: minio.service: Start request repeated too quickly.
Dec 08 17:03:45 nky systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 08 17:03:45 nky systemd[1]: Failed to start MinIO.
The log suggests something is wrong with 'MINIO_VOLUMES', but I have set that variable in /etc/default/minio:
MINIO_ROOT_USER=myminioadmin
MINIO_ROOT_PASSWORD=minio-secret-key-change-me
# MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.
MINIO_VOLUMES="/mnt/data"
What is wrong with my configuration?
There is nothing obviously wrong with your configuration, but you did not post your service file. This is almost always a permissions issue; you can change the systemd service user to root to test. Other common causes are that the binary is not present at the location specified in the service file, or that it is not executable.
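For reference, the relevant parts of the stock MinIO unit usually look roughly like this (the minio-user name comes from the standard install instructions, not from your system, so treat it as an assumption):
[Service]
User=minio-user
Group=minio-user
EnvironmentFile=-/etc/default/minio
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
To test the permissions theory, temporarily set User=root and Group=root (for example via sudo systemctl edit minio.service, followed by sudo systemctl daemon-reload and sudo systemctl restart minio.service), or give the service user ownership of the data path:
chown -R minio-user:minio-user /mnt/data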
Elasticsearch is working with no issues on http://localhost:9200,
and the operating system is Ubuntu 18.04.
Here is the error log for Kibana:
root@syed-MS-7B17:/var/log# journalctl -fu kibana.service
-- Logs begin at Sat 2020-01-04 18:30:58 IST. --
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: {"type":"log","@timestamp":"2020-04-03T14:52:49Z","tags":["fatal","root"],"pid":7165,"message":"{ Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601\n at Server.setupListenHandle [as _listen2] (net.js:1263:19)\n at listenInCluster (net.js:1328:12)\n at GetAddrInfoReqWrap.doListen (net.js:1461:7)\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:61:10)\n code: 'EADDRNOTAVAIL',\n errno: 'EADDRNOTAVAIL',\n syscall: 'listen',\n address: '7.0.0.1',\n port: 5601 }"}
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: FATAL Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Scheduled restart job, restart counter is at 2.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Stopped Kibana.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Start request repeated too quickly.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Failed to start Kibana.
I resolved it myself after checking the /etc/hosts file.
It had been edited by mistake, like below:
7.0.0.1 localhost
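The entry should map localhost to the standard loopback address instead:
127.0.0.1 localhost
With that restored, Kibana can resolve localhost correctly and bind to port 5601 again.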
I want to configure Kibana so that I can access it over HTTPS.
I made the following changes in the Kibana config file (/etc/kibana/kibana.yml):
server.host: 0.0.0.0
server.ssl.enabled: true
server.ssl.key: /etc/elasticsearch/privkey.pem   # using the same SSL files that I created for Elasticsearch
server.ssl.certificate: /etc/elasticsearch/cert.pem   # using the same SSL files that I created for Elasticsearch
elasticsearch.url: https://127.0.0.1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
When I start or restart Kibana, it gives me the error below:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Wed 2019-06-05 14:20:12 UTC; 382ms ago
Process: 32505 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 32505 (code=exited, status=1/FAILURE)
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Failed with result 'exit-code'.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Jun 05 14:20:12 mts-elk-test systemd[1]: Stopped Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Start request repeated too quickly.
Jun 05 14:20:12 mts-elk-test systemd[1]: Failed to start Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Failed with result 'start-limit-hit'.
root@mts-elk-test:/home/ronak# vi /etc/kibana/kibana.yml
I found the solution. There was a problem with file permissions.
I copied the cert.pem and privkey.pem files from the Elasticsearch directory to the Kibana directory and changed their owner to the kibana user:
chown kibana:kibana /etc/kibana/cert.pem
chown kibana:kibana /etc/kibana/privkey.pem
Then I changed the paths in the kibana.yml file:
server.ssl.key: /etc/kibana/privkey.pem
server.ssl.certificate: /etc/kibana/cert.pem
Restarted Kibana: service kibana restart
And it worked!
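If you hit the same symptom, a quick way to confirm it is a file-permission problem is to check whether the kibana user can actually read the key and certificate (a simple check; adjust the paths to your layout):
ls -l /etc/kibana/privkey.pem /etc/kibana/cert.pem
sudo -u kibana cat /etc/kibana/privkey.pem > /dev/null
If the second command fails with "Permission denied", the chown above is the fix.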
I installed Elasticsearch and Kibana using this guide:
https://opendistro.github.io/for-elasticsearch-docs/docs/install/
I created an SSL certificate for my domain and am using it in the kibana.yml config:
server.ssl.enabled: true
server.ssl.key: /etc/elasticsearch/key.pem
server.ssl.certificate: /etc/elasticsearch/cert.pem
But when I restart the service, I get the error below:
sudo service kibana status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Tue 2019-05-14 19:39:21 UTC; 833ms ago
Process: 50944 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 50944 (code=exited, status=1/FAILURE)
May 14 19:39:21 mts-elk systemd[1]: kibana.service: Unit entered failed state.
May 14 19:39:21 mts-elk systemd[1]: kibana.service: Failed with result 'exit-code'.
May 14 19:39:21 mts-elk systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
May 14 19:39:21 mts-elk systemd[1]: Stopped Kibana.
May 14 19:39:21 mts-elk systemd[1]: kibana.service: Start request repeated too quickly.
May 14 19:39:21 mts-elk systemd[1]: Failed to start Kibana.
May 14 19:39:21 mts-elk systemd[1]: kibana.service: Unit entered failed state.
May 14 19:39:21 mts-elk systemd[1]: kibana.service: Failed with result 'start-limit-hit'.
I don't know where to look for Kibana logs other than this.
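Since Kibana is managed by systemd here, its startup errors normally end up in the journal, so the same command used in the earlier post should show the underlying error:
journalctl -fu kibana.service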
I am using a Raspberry Pi. To reduce I/O on my SD card, I symlink all important log files to an external USB-mounted hard drive.
Example:
ln -s /media/usb-device/logs/auth.log /var/log/auth.log
The logging works fine, but fail2ban does not seem to like that. When I enable SSH monitoring in my /etc/fail2ban/jail.local file,
# [sshd]
enabled = true
bantime = 3600
fail2ban crashes when I run systemctl restart fail2ban.service.
I have tried to hardcode the path:
# logpath = %(sshd_log)s
logpath = /media/usb-devive/logs/auth.log
But fail2ban throws the same error:
fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-04-28 20:42:33 CEST; 45s ago
Docs: man:fail2ban(1)
Process: 3014 ExecStop=/usr/bin/fail2ban-client stop (code=exited, status=0/SUCCESS)
Process: 3045 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
Main PID: 658 (code=killed, signal=TERM)
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
Apr 28 20:42:33 raspberrypi systemd[1]: Stopped Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Start request repeated too quickly.
Apr 28 20:42:33 raspberrypi systemd[1]: Failed to start Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Unit entered failed state.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Failed with result 'exit-code'.
Any ideas?
"devive" in the logpath is spelt incorrectly