Logstash pipeline stopped inserting data into Elasticsearch

The Logstash pipeline is no longer ingesting data into the Elasticsearch index, even though the pipeline is running. This pipeline was deployed
a year ago and had been running well since then, but on 24th May 2021 it stopped ingesting data. Restarting Logstash fixed the issue.
We checked the Logstash logs but found nothing there. Please see the log below.
Aug 26 12:08:37 xyz.com logstash[827]: [2020-08-26T12:08:37,503][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>http://localhost:9200/}
Aug 26 12:08:37 xyz.com logstash[827]: [2020-08-26T12:08:37,619][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Aug 26 12:08:37 xyz.com logstash[827]: [2020-08-26T12:08:37,623][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Aug 26 12:24:22 xyz.com systemd[1]: Stopping logstash...
Aug 26 12:24:22 xyz.com logstash[827]: [2020-08-26T12:24:22,665][WARN ][logstash.runner ] SIGTERM received. Shutting down.
Aug 26 12:24:27 xyz.com logstash[827]: [2020-08-26T12:24:27,864][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>33, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-5.1.9-java/lib/logstash/inputs/beats.rb:212:in `run'"}, {"thread_id"=>25, "name"=>"[main]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>26, "name"=>"[main]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>27, "name"=>"[main]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>28, "name"=>"[main]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>29, "name"=>"[main]>worker4", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>30, "name"=>"[main]>worker5", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>31, "name"=>"[main]>worker6", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>32, "name"=>"[main]>worker7", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}]}}
Aug 26 12:24:27 xyz.com logstash[827]: [2020-08-26T12:24:27,866][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
Aug 26 12:24:29 xyz.com logstash[827]: [2020-08-26T12:24:29,852][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x2c879c55 run>"}
Aug 26 12:24:29 xyz.com logstash[827]: [2020-08-26T12:24:29,855][INFO ][logstash.runner ] Logstash shut down.
Aug 26 12:24:29 xyz.com systemd[1]: Stopped logstash.
May 24 08:11:46 xyz.com systemd[1]: Started logstash.
May 24 08:12:11 xyz.com logstash[19174]: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
May 24 08:12:11 xyz.com logstash[19174]: 2021-05-24 08:12:11,794 main ERROR RollingFileManager (/var/log/logstash/logstash-plain.log) java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied) java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.open0(Native Method)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.open(FileOutputStream.java:270)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
May 24 08:12:11 xyz.com logstash[19174]: at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:640)
May 24 08:12:25 xyz.com logstash[19174]: [2021-05-24T08:12:25,120][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>
May 24 08:12:25 xyz.com logstash[19174]: [2021-05-24T08:12:25,363][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Main process exited, code=killed, status=9/KILL
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Failed with result 'signal'.
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Scheduled restart job, restart counter is at 1.
May 24 15:38:50 xyz.com systemd[1]: Stopped logstash.
May 24 15:38:50 xyz.com systemd[1]: Started logstash.
May 24 15:39:06 xyz.com logstash[25666]: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
May 24 15:39:07 xyz.com logstash[25666]: 2021-05-24 15:39:07,101 main ERROR RollingFileManager (/var/log/logstash/logstash-plain.log) java.io.FileNotFo
May 24 15:39:07 xyz.com logstash[25666]: at java.io.FileOutputStream.open0(Native Method)
May 24 15:39:07 xyz.com logstash[25666]: at java.io.FileOutputStream.open(FileOutputStream.java:270)
Server OS: Ubuntu 18.04
ELK Version: 7.11.1
We need your help to find the exact cause.

Did you try updating the bundles to newer versions?

Related

Data ingestion got stuck though the Logstash pipelines are running

We deployed 10 Logstash config files last year and started them all at once as a service, pointing at the folder that holds the config files. On 24th May we found (via Kibana Discover) that a few of them were not pushing data into Elasticsearch, while the rest were working fine. We first checked the status of all 3 ELK components with systemctl status and found all 3 services running. We then checked the Logstash logs with journalctl but found nothing that explained the issue. Finally, we stopped all running Logstash PIDs and started the pipelines again, which fixed the issue.
Please find below the portion of the logs from journalctl:
Aug 26 12:08:37 xyz.com logstash[827]: [2020-08-26T12:08:37,503][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>http://localhost:9200/}
Aug 26 12:08:37 xyz.com logstash[827]: [2020-08-26T12:08:37,619][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Aug 26 12:08:37 xyz.com logstash[827]: [2020-08-26T12:08:37,623][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Aug 26 12:24:22 xyz.com systemd[1]: Stopping logstash...
Aug 26 12:24:22 xyz.com logstash[827]: [2020-08-26T12:24:22,665][WARN ][logstash.runner ] SIGTERM received. Shutting down.
Aug 26 12:24:27 xyz.com logstash[827]: [2020-08-26T12:24:27,864][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>33, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-5.1.9-java/lib/logstash/inputs/beats.rb:212:in `run'"}, {"thread_id"=>25, "name"=>"[main]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>26, "name"=>"[main]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>27, "name"=>"[main]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>28, "name"=>"[main]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>29, "name"=>"[main]>worker4", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>30, "name"=>"[main]>worker5", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>31, "name"=>"[main]>worker6", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}, {"thread_id"=>32, "name"=>"[main]>worker7", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in `read_batch'"}]}}
Aug 26 12:24:27 xyz.com logstash[827]: [2020-08-26T12:24:27,866][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
Aug 26 12:24:29 xyz.com logstash[827]: [2020-08-26T12:24:29,852][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x2c879c55 run>"}
Aug 26 12:24:29 xyz.com logstash[827]: [2020-08-26T12:24:29,855][INFO ][logstash.runner ] Logstash shut down.
Aug 26 12:24:29 xyz.com systemd[1]: Stopped logstash.
May 24 08:11:46 xyz.com systemd[1]: Started logstash.
May 24 08:12:11 xyz.com logstash[19174]: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
May 24 08:12:11 xyz.com logstash[19174]: 2021-05-24 08:12:11,794 main ERROR RollingFileManager (/var/log/logstash/logstash-plain.log) java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied) java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.open0(Native Method)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.open(FileOutputStream.java:270)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
May 24 08:12:11 xyz.com logstash[19174]: at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
May 24 08:12:11 xyz.com logstash[19174]: at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:640)
May 24 08:12:25 xyz.com logstash[19174]: [2021-05-24T08:12:25,120][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>
May 24 08:12:25 xyz.com logstash[19174]: [2021-05-24T08:12:25,363][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Main process exited, code=killed, status=9/KILL
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Failed with result 'signal'.
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
May 24 15:38:50 xyz.com systemd[1]: logstash.service: Scheduled restart job, restart counter is at 1.
May 24 15:38:50 xyz.com systemd[1]: Stopped logstash.
May 24 15:38:50 xyz.com systemd[1]: Started logstash.
May 24 15:39:06 xyz.com logstash[25666]: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
May 24 15:39:07 xyz.com logstash[25666]: 2021-05-24 15:39:07,101 main ERROR RollingFileManager (/var/log/logstash/logstash-plain.log) java.io.FileNotFo
May 24 15:39:07 xyz.com logstash[25666]: at java.io.FileOutputStream.open0(Native Method)
May 24 15:39:07 xyz.com logstash[25666]: at java.io.FileOutputStream.open(FileOutputStream.java:270)
Server OS: Ubuntu 18.04
ELK Version: 7.11.1
Please help us find the cause of the issue, as we are struggling to locate it.
Focus on:
java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied)
Check your file/folder permissions (read, write, execute). As root, check them with ls -l:
ls -l /path/to/file
ls -l /var/log/logstash/logstash-plain.log
See more at https://askubuntu.com/a/528433/299516
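If the listing shows the log file owned by root rather than the service user (which can happen after running Logstash manually as root once), the usual fix is to hand the directory back to the service user with chown and restart the service. A minimal sketch of the permission diagnosis, shown on a scratch directory since the real path is /var/log/logstash (and the assumed default service user is named logstash):

```shell
# On the real host, run as root:
#   chown -R logstash:logstash /var/log/logstash && systemctl restart logstash
logdir=$(mktemp -d)                         # stand-in for /var/log/logstash
touch "$logdir/logstash-plain.log"
chmod 000 "$logdir/logstash-plain.log"      # simulate "Permission denied"
ls -l "$logdir/logstash-plain.log"
chmod 644 "$logdir/logstash-plain.log"      # owner rw; readable again
stat -c '%a' "$logdir/logstash-plain.log"   # prints the new mode
```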

Kibana installation error "Kibana server is not ready yet" (CentOS)

Working on a Kibana deployment: after installing Kibana and Elasticsearch, I get the error 'Kibana server is not ready yet'.
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-centos-7
[opc@homer7 etc]$
[opc@homer7 etc]$ sudo systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-02-26 13:56:07 CET; 37s ago
Docs: https://www.elastic.co
Main PID: 18215 (node)
Memory: 208.3M
CGroup: /system.slice/kibana.service
└─18215 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist --logging.dest="/var/log/kibana/kibana.log" --pid.file="/run/kibana/kibana.pid"
Feb 26 13:56:07 homer7 systemd[1]: kibana.service failed.
Feb 26 13:56:07 homer7 systemd[1]: Started Kibana.
[opc@homer7 etc]$
[opc@homer7 etc]$ sudo journalctl --unit kibana
-- Logs begin at Fri 2021-02-26 11:31:02 CET, end at Fri 2021-02-26 13:56:57 CET. --
Feb 26 12:15:38 homer7 systemd[1]: Started Kibana.
Feb 26 13:21:25 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:22:55 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:22:55 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:22:55 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:22:55 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:22:55 homer7 systemd[1]: kibana.service failed.
Feb 26 13:25:05 homer7 systemd[1]: Started Kibana.
Feb 26 13:25:29 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:26:59 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:26:59 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:26:59 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:26:59 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:26:59 homer7 systemd[1]: kibana.service failed.
Feb 26 13:27:56 homer7 systemd[1]: Started Kibana.
Feb 26 13:40:53 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:42:23 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:42:23 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:42:23 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:42:23 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:42:23 homer7 systemd[1]: kibana.service failed.
Feb 26 13:42:23 homer7 systemd[1]: Started Kibana.
Feb 26 13:44:09 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:45:40 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:45:40 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:45:40 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:45:40 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:45:40 homer7 systemd[1]: kibana.service failed.
Feb 26 13:45:40 homer7 systemd[1]: Started Kibana.
Feb 26 13:54:37 homer7 systemd[1]: Stopping Kibana...
Feb 26 13:56:07 homer7 systemd[1]: kibana.service stop-sigterm timed out. Killing.
Feb 26 13:56:07 homer7 systemd[1]: kibana.service: main process exited, code=killed, status=9/KILL
Feb 26 13:56:07 homer7 systemd[1]: Stopped Kibana.
Feb 26 13:56:07 homer7 systemd[1]: Unit kibana.service entered failed state.
Feb 26 13:56:07 homer7 systemd[1]: kibana.service failed.
Feb 26 13:56:07 homer7 systemd[1]: Started Kibana.
[opc@homer7 etc]$
Check systemctl status elasticsearch. I am guessing your Elasticsearch service has not started yet.
There are many factors to check. First, go to the config directory of your Kibana installation, open kibana.yml (sudo vi kibana.yml), and check the port of the Elasticsearch server that Kibana tries to connect to (the default is 9200).
Here is an example of the default configuration.
After matching this configuration to your needs, go to the systemd unit file you saved for the Kibana service and check the [Unit] section to see whether it needs to activate the Elasticsearch service first. If you did not add a Requires= line for the Elasticsearch server, make sure Elasticsearch is up and running before running Kibana as a service. You can also launch Kibana from a shell by going to Kibana's bin directory and launching it directly.
Maybe the issue happened because Kibana was unable to access Elasticsearch locally.
I think you have enabled the xpack.security plugin for security purposes in elasticsearch.yml by adding the line:
xpack.security.enabled: true
If so, you need to uncomment these two lines in kibana.yml:
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"
and set them to:
elasticsearch.username: "kibana_system"
elasticsearch.password: "your-password"
After saving the changes, restart the Kibana service:
sudo service kibana restart
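Put together, the relevant kibana.yml section would end up looking like this (a sketch; kibana_system is the built-in user in recent versions, and the password is whatever you set for that user):

```yaml
# kibana.yml — credentials Kibana uses to reach Elasticsearch
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "your-password"
```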

Updated Elasticsearch on a DigitalOcean Droplet; Elasticsearch will no longer start

The error I am receiving when I try to start up Elasticsearch:
-- Unit elasticsearch.service has begun starting up.
Oct 08 23:54:05 ElasticSearch logstash[1064]: [2020-10-08T23:54:05,137][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=
Oct 08 23:54:05 ElasticSearch logstash[1064]: [2020-10-08T23:54:05,138][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=
Oct 08 23:54:05 ElasticSearch kernel: [UFW BLOCK] IN=eth0 OUT= MAC=76:67:e9:46:24:b8:fe:00:00:00:01:01:08:00 SRC=79.124.62.110 DST=206.189.196.214 LEN=40 TOS=0x00 PREC=0x00 TTL=244 ID=52316 PROTO=
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: output:
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: error:
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: Unrecognized VM option 'UseConcMarkSweepGC'
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: Error: Could not create the Java Virtual Machine.
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: Error: A fatal exception has occurred. Program will exit.
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
Oct 08 23:54:05 ElasticSearch systemd-entrypoint[14701]: at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
It looks a lot like this reported issue and this one.
In your jvm.options file, if you replace this
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
with this
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly
it should work again. The 8-13: prefix applies each option only on JVM versions 8 through 13; newer JVMs, which removed the CMS collector, will skip these flags instead of failing to start.

Cannot change path.data in Elasticsearch config

I cannot change the local ES index location: I cannot modify path.data.
It's probably some elementary mistake, but I am stuck and would greatly appreciate any assistance.
So:
Fresh local installation of ES 7.8.1 under CentOS 7; everything runs correctly if no changes are made to elasticsearch.yml.
But if I change elasticsearch.yml to:
# path.data: /var/lib/elasticsearch
path.data: /run/media/admin/bvv2/elasticsearch/
(i.e. try to point it at an external disk), then after systemctl start elasticsearch I get:
Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
where in "systemctl status elasticsearch.service" :
● elasticsearch.service - Elasticsearch
Loaded: loaded (/etc/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2020-08-17 16:23:16 MSK; 5min ago
Docs: https://www.elastic.co
Process: 12951 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 12951 (code=exited, status=1/FAILURE)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.cli.Command.main(Command.java:90)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
Aug 17 16:23:16 bvvcomp systemd-entrypoint[12951]: For complete error details, refer to the log at /var/log/elasticsearch/elasticsearch.log
Aug 17 16:23:16 bvvcomp systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Aug 17 16:23:16 bvvcomp systemd[1]: Failed to start Elasticsearch.
Aug 17 16:23:16 bvvcomp systemd[1]: Unit elasticsearch.service entered failed state.
Aug 17 16:23:16 bvvcomp systemd[1]: elasticsearch.service failed.
And in journalctl -xe:
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1568] dhcp4 (wlp2s0): gateway 192.168.1.1
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1569] dhcp4 (wlp2s0): lease time 25200
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1569] dhcp4 (wlp2s0): nameserver '192.168.1.1'
Aug 17 16:29:20 bvvcomp NetworkManager[1112]: <info> [1597670960.1569] dhcp4 (wlp2s0): state changed bound -> bound
Aug 17 16:29:20 bvvcomp dbus[904]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Aug 17 16:29:20 bvvcomp dhclient[1325]: bound to 192.168.1.141 -- renewal in 12352 seconds.
Aug 17 16:29:20 bvvcomp systemd[1]: Starting Network Manager Script Dispatcher Service...
-- Subject: Unit NetworkManager-dispatcher.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit NetworkManager-dispatcher.service has begun starting up.
Aug 17 16:29:20 bvvcomp dbus[904]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Aug 17 16:29:20 bvvcomp systemd[1]: Started Network Manager Script Dispatcher Service.
-- Subject: Unit NetworkManager-dispatcher.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit NetworkManager-dispatcher.service has finished starting up.
--
-- The start-up result is done.
Aug 17 16:29:20 bvvcomp nm-dispatcher[13569]: req:1 'dhcp4-change' [wlp2s0]: new request (4 scripts)
Aug 17 16:29:20 bvvcomp nm-dispatcher[13569]: req:1 'dhcp4-change' [wlp2s0]: start running ordered scripts...
Unfortunately, this advice did not help:
How to move elasticsearch data directory? ;
elasticsearch changing path.logs and/or path.data - fails to start ;
Elasticsearch after change path.data, unable to access 'default.path.data' ;
Is this perhaps a new issue, specific to version 7.x?
Thank you
Update 1 - error log (/var/log/elasticsearch/elasticsearch.log):
[2020-08-18T01:30:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [bvvcomp] triggering scheduled [ML] maintenance tasks
[2020-08-18T01:30:00,014][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [bvvcomp] Deleting expired data
[2020-08-18T01:30:00,052][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [bvvcomp] Completed deletion of expired ML data
[2020-08-18T01:30:00,053][INFO ][o.e.x.m.MlDailyMaintenanceService] [bvvcomp] Successfully completed [ML] maintenance tasks
[2020-08-18T04:30:00,017][INFO ][o.e.x.s.SnapshotRetentionTask] [bvvcomp] starting SLM retention snapshot cleanup task
[2020-08-18T04:30:00,025][INFO ][o.e.x.s.SnapshotRetentionTask] [bvvcomp] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2020-08-18T05:27:08,457][INFO ][o.e.n.Node ] [bvvcomp] stopping ...
[2020-08-18T05:27:08,482][INFO ][o.e.x.w.WatcherService ] [bvvcomp] stopping watch service, reason [shutdown initiated]
[2020-08-18T05:27:08,483][INFO ][o.e.x.w.WatcherLifeCycleService] [bvvcomp] watcher has stopped and shutdown
[2020-08-18T05:27:08,495][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [bvvcomp] [controller/21903] [Main.cc#155] ML controller exiting
[2020-08-18T05:27:08,497][INFO ][o.e.x.m.p.NativeController] [bvvcomp] Native controller process has stopped - no new native processes can be started
[2020-08-18T05:27:08,540][INFO ][o.e.n.Node ] [bvvcomp] stopped
[2020-08-18T05:27:08,541][INFO ][o.e.n.Node ] [bvvcomp] closing ...
[2020-08-18T05:27:08,585][INFO ][o.e.n.Node ] [bvvcomp] closed
[2020-08-18T05:27:19,077][ERROR][o.e.b.Bootstrap ] [bvvcomp] Exception
java.lang.IllegalStateException: Unable to access 'path.data' (/run/media/admin/bvv2/elasticsearch)
at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:70) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:297) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:252) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Security.configure(Security.java:121) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:222) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) [elasticsearch-cli-7.8.1.jar:7.8.1]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) [elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-7.8.1.jar:7.8.1]
Caused by: java.nio.file.AccessDeniedException: /run/media/admin/bvv2
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:313) ~[?:?]
at java.nio.file.Files.createDirectories(Files.java:766) ~[?:?]
at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:389) ~[elasticsearch-7.8.1.jar:7.8.1]
at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:68) ~[elasticsearch-7.8.1.jar:7.8.1]
... 12 more
Permissions:
ls -l /run/media/admin/bvv2
drwxrwsrwx 3 elasticsearch elasticsearch 4096 Aug 17 17:26 elasticsearch
ls -l /run/media/admin
total 4
drwxr-xr-x 11 admin admin 4096 Aug 17 13:22 bvv2
I encountered a similar error, and it was caused by incorrect parent directory permissions.
One of the parent directories didn't allow other unix users to traverse it; specifically, its permissions were drwx--x---+. Elasticsearch started after I changed the permissions to drwx--x--x+ (chmod 711). You can try the same.
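Every directory component on the way to path.data needs the execute (traverse) bit for the elasticsearch user. One way to audit the whole chain at once is util-linux's namei, sketched here on a scratch path since the real argument would be /run/media/admin/bvv2/elasticsearch:

```shell
# Build a scratch path and deliberately block traversal on one parent
base=$(mktemp -d)
mkdir -p "$base/bvv2/elasticsearch"
chmod 710 "$base/bvv2"             # no execute bit for "other" users

# namei -l prints owner and mode of every path component,
# making the blocking directory easy to spot
namei -l "$base/bvv2/elasticsearch"

# The fix reported above: grant everyone the traverse bit on the parent
chmod 711 "$base/bvv2"
```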

Elasticsearch won't start and no logs (CentOS)

Hi, after downloading the latest rpm for CentOS and installing it for the first time, I am getting this error in the logs:
Jun 22 09:47:31 ssd316r.simpleservers.co.uk systemd[1]: Starting Elasticsearch...
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd-entrypoint[2501]: ERROR: Temporary file directory [/usr/share/elasticsearch/tmp] does not exist or is not accessible
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: elasticsearch.service: main process exited, code=exited, status=78/n/a
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: Failed to start Elasticsearch.
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: Unit elasticsearch.service entered failed state.
Jun 22 09:47:32 ssd316r.simpleservers.co.uk systemd[1]: elasticsearch.service failed.
The error is due to this log line:
ERROR: Temporary file directory [/usr/share/elasticsearch/tmp] does not exist or is not accessible
Check whether /usr/share/elasticsearch/tmp is present on your server. If not, create the folder at that location and make sure your Elasticsearch process has write access to it.
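The steps above can be sketched as follows, shown on a scratch prefix since the real path is /usr/share/elasticsearch/tmp (and assuming the default service user is named elasticsearch):

```shell
# On the real host, run as root:
#   mkdir -p /usr/share/elasticsearch/tmp
#   chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp
#   systemctl restart elasticsearch
prefix=$(mktemp -d)        # stand-in for /usr/share/elasticsearch
mkdir -p "$prefix/tmp"
chmod 755 "$prefix/tmp"    # writable by its owner (the service user), traversable by all
ls -ld "$prefix/tmp"
```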
