ELK configuration: forwarding application logs to Elasticsearch using Logstash - elasticsearch

I am new to ELK configuration.
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-16-04
I have configured it on my local machine and it works fine.
I want to forward my application log file to Elasticsearch using Logstash or Filebeat.
Everything I have configured works fine for system logs,
but I am not able to store my application logs in Elasticsearch.
Please help me.
This is my log file:
service.log
{"name":"service name", "hostname":"abc", "pid":4474, "userId":"123", "school_id":"123", "role":"student", "username":"mahi123", "serviceName":"loginService", "level":40, "msg":"successFully fetch trail log", "time":"2019-06-01T10:55:46.482Z","v":0}

Some troubleshooting steps to take when logs do not reach Elasticsearch:
Check your log parsing configuration file (usually created with the extension .conf). Make sure it has the right path to scan logs from, the right set of filters, etc. To see whether this .conf file actually works, you can try:
logstash -f <path to your .conf file>
If this doesn't throw any error on the console, you are good at this point and can move to the next step.
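For reference, here is a minimal sketch of what such a .conf could look like for the JSON log line in the question; the file path and index name are placeholders, not values from the original setup:

input {
  file {
    path => "/path/to/service.log"            # placeholder: absolute path to the application log
    start_position => "beginning"
    codec => "json"                           # each line of service.log is one JSON event
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "service-logs-%{+YYYY.MM.dd}"    # placeholder index name
  }
}

Newer Logstash versions can also validate a config without running it, via logstash -t -f <path to your .conf file>.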
Check if Kibana indices are getting created. Run:
curl http://<host IP address or localhost>:9200/_cat/indices?v
If yes, go to Kibana Management and create index patterns.
If not, check whether your system has enough available memory to serve Logstash and Elasticsearch. free -m is helpful once you start the Logstash and Elasticsearch services.
Many times I have seen people trying an ELK setup on a machine with insufficient RAM (4 GB sounds good for a standalone setup).
Check that your Logstash and Elasticsearch services are up and running. If Elasticsearch is going down or restarting during log parsing or index creation, that is most probably due to lack of system resources.
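A compact way to confirm both services at once, as a sketch (systemctl accepts several unit names in one call):

systemctl is-active elasticsearch logstash
free -m    # check available memory while both services are running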
-bash-4.2# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-06-05 14:08:26 UTC; 1 weeks 0 days ago
Docs: http://www.elastic.co
Main PID: 1396 (java)
CGroup: /system.slice/elasticsearch.service
└─1396 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMS...
Jun 05 14:08:26 cue-bldsvr4 systemd[1]: Started Elasticsearch.
Jun 05 14:08:26 cue-bldsvr4 systemd[1]: Starting Elasticsearch...
-bash-4.2# systemctl status logstash
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-06-05 14:50:52 UTC; 1 weeks 0 days ago
Main PID: 4320 (java)
CGroup: /system.slice/logstash.service
└─4320 /bin/java -Xms256m -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFrac...
Jun 05 14:50:52 cue-bldsvr4 systemd[1]: Started logstash.
Jun 05 14:50:52 cue-bldsvr4 systemd[1]: Starting logstash...
Jun 05 14:51:08 cue-bldsvr4 logstash[4320]: Sending Logstash's logs to /var/log/logstash which is now configur...rties
Hint: Some lines were ellipsized, use -l to show in full.
-bash-4.2#

Related

can't start minio server in ubuntu with systemctl start minio

I configured a MinIO server instance on Ubuntu 18.04 following the guide at https://www.digitalocean.com/community/tutorials/how-to-set-up-an-object-storage-server-using-minio-on-ubuntu-18-04.
After the installation, the server failed to start with the command "sudo systemctl start minio". The error says:
root@iZbp1icuzly3aac0dmjz9aZ:~# sudo systemctl status minio
● minio.service - MinIO
Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2021-12-23 17:11:56 CST; 4s ago
Docs: https://docs.min.io
Process: 9085 ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES (code=exited, status=1/FAILURE)
Process: 9084 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCESS)
Main PID: 9085 (code=exited, status=1/FAILURE)
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Main process exited, code=exited, status=1/FAILURE
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Service hold-off time over, scheduling restart.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Scheduled restart job, restart counter is at 5.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: Stopped MinIO.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Start request repeated too quickly.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: Failed to start MinIO.
It looks like the reason is "Variable MINIO_VOLUMES not set in /etc/default/minio".
However, I double-checked the file /etc/default/minio:
MINIO_ACCESS_KEY="minioadmin"
MINIO_VOLUMES="/usr/local/share/minio/"
MINIO_OPTS="-C /etc/minio --address localhost:9001"
MINIO_SECRET_KEY="minioadmin"
I have set the value MINIO_VOLUMES.
I tried to start it manually with minio server --address :9001 /usr/local/share/minio/, and it works.
Now I don't know what goes wrong when starting the MinIO server using systemctl start minio.
I'd recommend sticking to the official documentation wherever possible. It's intended for distributed deployments, but the only real change is that your MINIO_VOLUMES will be for a single node/drive.
I would recommend trying a combination of things here:
Review minio.service and ensure the user/group it references exists
Review file path permissions on the MINIO_VOLUMES value (see the sketch below)
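A minimal sketch of those two checks; minio-user is the account used in the DigitalOcean guide, so treat it as an assumption here:

grep -E '^(User|Group)=' /etc/systemd/system/minio.service     # which account the unit runs as
ls -al /usr/local/share/minio                                  # who owns the data path
sudo chown -R minio-user:minio-user /usr/local/share/minio     # align ownership with the service user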
Now for the why:
My guess without seeing further logs (journalctl -u minio would have been helpful here) is that this is a combination of two things:
the minio.service user/group doesn't have rwx permissions on the /usr/local/share/minio path,
you are missing an environment variable we recently introduced to prevent users from pointing at their root drive (this was intended as a safety measure, but it somewhat complicates these kinds of smaller setups).
Take a look at these lines in the minio.service file - I'm assuming that is what you are using, based on the instructions in the DO guide.
If you ls -al /usr/local/share/minio, I would venture it has root ownership for user and group, and limited write access if any.
Hope this helps - for further troubleshooting, having at least 10-20 lines from journalctl is invaluable, as it shows the actual error and not just the final quit message.
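For example, a quick way to pull that stretch of the unit's log:

journalctl -u minio -n 20 --no-pager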

Install of elastic 7.5 on RHEL 7.8 makes memory violation sig=6 due to JNA

I am installing a brand new Elasticsearch 7.5 on OS: Red Hat Enterprise Linux Server release 7.8 (Maipo).
At startup of the service, I get a hard failure. Here is what the service info provides:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
Active: failed (Result: signal) since Tue 2020-08-25 11:34:39 CEST; 7min ago
Docs: http://www.elastic.co
Process: 102777 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=killed, signal=ABRT)
Main PID: 102777 (code=killed, signal=ABRT)
CGroup: /system.slice/elasticsearch.service
Aug 25 11:34:34 sv-1348lvd44.esante.local systemd[1]: Starting Elasticsearch...
Aug 25 11:34:35 sv-1348lvd44.esante.local elasticsearch[102777]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated...lease.
Aug 25 11:34:39 sv-1348lvd44.esante.local systemd[1]: elasticsearch.service: main process exited, code=killed, status=6/ABRT
Aug 25 11:34:39 sv-1348lvd44.esante.local systemd[1]: Failed to start Elasticsearch.
Aug 25 11:34:39 sv-1348lvd44.esante.local systemd[1]: Unit elasticsearch.service entered failed state.
Aug 25 11:34:39 sv-1348lvd44.esante.local systemd[1]: elasticsearch.service failed.
When using journalctl -xe:
Aug 25 11:34:38 sv-1348lvd44.esante.local audispd[824]: node=sv-1348lvd44.esante.local type=ANOM_ABEND msg=audit(1598348078.836:208066): auid=429496 uid=995 gid=991 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 pid=102777 comm="java" reason="memory violation" sig=6
Aug 25 11:34:39 sv-1348lvd44.esante.local systemd[1]: elasticsearch.service: main process exited, code=killed, status=6/ABRT
Aug 25 11:34:39 sv-1348lvd44.esante.local systemd[1]: Failed to start Elasticsearch.
When looking into the dump hs_err_pidXXXX, I see:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f4818939b85, pid=52870, tid=52933
#
# JRE version: OpenJDK Runtime Environment (13.0.1+9) (build 13.0.1+9)
# Java VM: OpenJDK 64-Bit Server VM (13.0.1+9, mixed mode, sharing, tiered, compressed oops, concurrent mark sweep gc, linux-amd64)
# Problematic frame:
# C [jna515356041985641679.tmp+0x12b85] ffi_prep_closure_loc+0x15
OS: Red Hat Enterprise Linux Server release 7.8 (Maipo)
uname:Linux 3.10.0-1127.10.1.el7.x86_64 #1 SMP Tue May 26 15:05:43 EDT 2020 x86_64
libc:glibc 2.17 NPTL 2.17
rlimit: STACK 8192k, CORE 0k, NPROC 4096, NOFILE 65535, AS infinity, DATA infinity, FSIZE infinity
load average:0.08 0.03 0.05
.../...
It works like a charm on CentOS without doing anything.
For RHEL, I already fixed the JNA issue by adding ES_TMPDIR=/var/es-temp to /etc/sysconfig/elasticsearch.
Memory seems fine; this is a brand new VM (no application logs in /var/log).
This version seems to be supported.
I tested with -Xms2g -Xmx2g, -Xms1g -Xmx1g, and -Xms512m -Xmx512m, but got the same error.
I don't get what is going wrong. My next step is to test with another version 7 of Elasticsearch.
After 1 day of struggling, I found the solution at https://discuss.elastic.co/t/elasticsearch-v7-6-2-failed-to-start-killed-by-sigabrt-on-rhel-7-7-urgent/231039/11 from Ivan_A_Carrazana_C
I put here a copy of the steps to perform:
Hi
If you are applying security compliance in your RHEL installation, you must change the path of the TMP directory that Elasticsearch, as a Java application, will use.
Uncomment in /etc/elasticsearch/jvm.options:
-Djava.io.tmpdir=${ES_TMPDIR}
Add in /etc/sysconfig/elasticsearch:
ES_TMPDIR=/usr/share/elasticsearch/tmp
Create the /usr/share/elasticsearch/tmp directory and make sure that the owner and group are elasticsearch and the permissions are 0755.
Lastly, make sure that /dev/shm doesn't have the noexec attribute, with the command:
mount | grep tmpfs | grep '/dev/shm'
Expected result:
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
If you get output like this:
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,seclabel)
Add or modify in /etc/fstab the following line:
tmpfs /dev/shm tmpfs defaults,nodev,nosuid 0 0
I had the same problem and this worked for me. Hope I can help you.
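Put together, the directory step from those instructions might look like this (a sketch assuming the elasticsearch user and group created by the package install):

sudo mkdir -p /usr/share/elasticsearch/tmp
sudo chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp
sudo chmod 0755 /usr/share/elasticsearch/tmp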
This seems to be known by Elastic but not documented correctly. I don't understand why tmpfs being mounted noexec matters here; presumably JNA extracts a native library into the temp directory and needs to map it executable, which noexec forbids. It would be good to have a JNA expert's feedback about it.
For some reason, adding a TMPDIR variable to /etc/sysconfig/elasticsearch (on 7.7.1), pointing it to the same location as -Djava.io.tmpdir, worked.
i.e.
TMPDIR="/usr/share/elasticsearch/tmp"
(In my case I actually used /var/lib/elasticsearch/tmp with 0755 permissions on it.)
I can't say why, and it doesn't change the command line shown by ps -aef, but just having -Djava.io.tmpdir wasn't enough.
This allowed me to get it to work without removing noexec on /tmp and /dev/shm.

Cassandra: Cannot achieve consistency level QUORUM on a specific keyspace

Actually, I'm using Elassandra, which is a combination of Cassandra and Elasticsearch,
but the issue might come from Cassandra (judging from the logs).
I have two nodes joined as a single datacenter, DC1, and I'm trying to install Kibana on one of the nodes. My Kibana server always says "Kibana server is not ready yet", and I've found that the error is something around the Cassandra consistency level.
My Cassandra system_auth keyspace is set to:
system_auth WITH REPLICATION = {'class' : 'SimpleStrategy', 'DC1' : 2};
And here is the log from manually triggering the Kibana service with /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml:
FATAL [exception] org.apache.cassandra.exceptions.UnavailableException: Cannot achieve
consistency level QUORUM :: {"path":"/.kibana_1","query":{"include_type_name":true},"body":"
{\"mappings\":{\"doc\":{\"dynamic\":\"strict\",\"properties\":{\"config\":
{\"dynamic\":\"true\",\"properties\":{\"buildNum\":
{\"type\":\"keyword\"}}},\"migrationVersion\":
{\"dynamic\":\"true\",\"type\":\"object\"},\"type\":{\"type\":\"keyword\"},\"namespace\":
{\"type\":\"keyword\"},\"updated_at\":{\"type\":\"date\"},\"index-pattern\":{\"properties\":
{\"fieldFormatMap\":{\"type\":\"text\"},\"fields\":{\"type\":\"text\"},\"intervalName\":
{\"type\":\"keyword\"},\"notExpandable\":{\"type\":\"boolean\"},\"sourceFilters\":
{\"type\":\"text\"},\"timeFieldName\":{\"type\":\"keyword\"},\"title\":
{\"type\":\"text\"},\"type\":{\"type\":\"keyword\"},\"typeMeta\":
{\"type\":\"keyword\"}}},\"visualization\":{\"properties\":{\"description\":
{\"type\":\"text\"},\"kibanaSavedObjectMeta\":{\"properties\":{\"searchSourceJSON\":
{\"type\":\"text\"}}},\"savedSearchId\":{\"type\":\"keyword\"},\"title\":
{\"type\":\"text\"},\"uiStateJSON\":{\"type\":\"text\"},\"version\":
{\"type\":\"integer\"},\"visState\":{\"type\":\"text\"}}},\"search\":{\"properties\":
{\"columns\":{\"type\":\"keyword\"},\"description\":{\"type\":\"text\"},\"hits\":
{\"type\":\"integer\"},\"kibanaSavedObjectMeta\":{\"properties\":{\"searchSourceJSON\":
{\"type\":\"text\"}}},\"sort\":{\"type\":\"keyword\"},\"title\":{\"type\":\"text\"},\"version\":
{\"type\":\"integer\"}}},\"dashboard\":{\"properties\":{\"description\":
{\"type\":\"text\"},\"hits\":{\"type\":\"integer\"},\"kibanaSavedObjectMeta\":{\"properties\":
{\"searchSourceJSON\":{\"type\":\"text\"}}},\"optionsJSON\":{\"type\":\"text\"},\"panelsJSON\":
{\"type\":\"text\"},\"refreshInterval\":{\"properties\":{\"display\":
{\"type\":\"keyword\"},\"pause\":{\"type\":\"boolean\"},\"section\":
{\"type\":\"integer\"},\"value\":{\"type\":\"integer\"}}},\"timeFrom\":
{\"type\":\"keyword\"},\"timeRestore\":{\"type\":\"boolean\"},\"timeTo\":
{\"type\":\"keyword\"},\"title\":{\"type\":\"text\"},\"uiStateJSON\":
{\"type\":\"text\"},\"version\":{\"type\":\"integer\"}}},\"url\":{\"properties\":
{\"accessCount\":{\"type\":\"long\"},\"accessDate\":{\"type\":\"date\"},\"createDate\":
{\"type\":\"date\"},\"url\":{\"type\":\"text\",\"fields\":{\"keyword\":
{\"type\":\"keyword\",\"ignore_above\":2048}}}}},\"server\":{\"properties\":{\"uuid\":
{\"type\":\"keyword\"}}},\"kql-telemetry\":{\"properties\":{\"optInCount\":
{\"type\":\"long\"},\"optOutCount\":{\"type\":\"long\"}}},\"timelion-sheet\":{\"properties\":
{\"description\":{\"type\":\"text\"},\"hits\":{\"type\":\"integer\"},\"kibanaSavedObjectMeta\":
{\"properties\":{\"searchSourceJSON\":{\"type\":\"text\"}}},\"timelion_chart_height\":
{\"type\":\"integer\"},\"timelion_columns\":{\"type\":\"integer\"},\"timelion_interval\":
{\"type\":\"keyword\"},\"timelion_other_interval\":{\"type\":\"keyword\"},\"timelion_rows\":
{\"type\":\"integer\"},\"timelion_sheet\":{\"type\":\"text\"},\"title\":
{\"type\":\"text\"},\"version\":{\"type\":\"integer\"}}}}}},\"settings\":
{\"number_of_shards\":1,\"auto_expand_replicas\":\"0-1\"}}","statusCode":500,"response":"
{\"error\":{\"root_cause\":
[{\"type\":\"exception\",\"reason\":\"org.apache.cassandra.exceptions.UnavailableException:
Cannot achieve consistency level
QUORUM\"}],\"type\":\"exception\",\"reason\":\"org.apache.cassandra.exceptions.UnavailableExcept
ion: Cannot achieve consistency level QUORUM\",\"caused_by\":
{\"type\":\"unavailable_exception\",\"reason\":\"Cannot achieve consistency level
QUORUM\"}},\"status\":500}"}
There are no indices named 'kibana_1', nor any indices containing the word kibana, but there are keyspaces named "_kibana_1" and "_kibana",
and that causes the Kibana service to fail to start:
systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-09-10 16:26:14 CEST; 2s ago
Process: 16942 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1
Main PID: 16942 (code=exited, status=1/FAILURE)
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Scheduled restart job, restart counter is at 3.
Sep 10 16:26:14 ns3053180 systemd[1]: Stopped Kibana.
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Start request repeated too quickly.
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Failed with result 'exit-code'.
Sep 10 16:26:14 ns3053180 systemd[1]: Failed to start Kibana.
I think this is your problem:
system_auth WITH REPLICATION= {'class' : 'SimpleStrategy', 'DC1' :2 };
The SimpleStrategy class does not accept datacenter/RF pairs as parameters. It has one parameter, which is simply replication_factor:
ALTER KEYSPACE system_auth WITH REPLICATION= {'class' : 'SimpleStrategy', 'replication_factor' :2 };
By contrast, the NetworkTopologyStrategy takes the parameters you have provided above:
ALTER KEYSPACE system_auth WITH REPLICATION= {'class' : 'NetworkTopologyStrategy', 'DC1' :2 };
IMO, there really isn't much of a need for SimpleStrategy. I never use it.
Note: If you're going to query at LOCAL_QUORUM, you should have at least 3 replicas. Or at the very least, an odd number capable of computing a majority. Because quorum of 2 is, well, 2. So querying at quorum with only 2 replicas doesn't really help you.
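The quorum arithmetic makes this concrete: quorum = floor(RF / 2) + 1, so
RF = 2  ->  quorum = 2  (no node may be down)
RF = 3  ->  quorum = 2  (one node may be down)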

Elasticsearch won't start and no logs

I've been trying to start ES for hours and can't manage to do so.
The command sudo service elasticsearch status prints out:
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since ven. 2019-01-11 12:22:33 CET; 5min ago
Docs: http://www.elastic.co
Process: 16713 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=$PID_DIR/elasticsearch.pid -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.confi...
Main PID: 16713 (code=exited, status=1/FAILURE)
janv. 11 12:22:33 glamuse systemd[1]: Started Elasticsearch.
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Unit entered failed state.
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
I've increased the memory and done all the fixes I could find on the internet, but I can't seem to figure out what's going on. There isn't even a single log generated today, so I don't have any trace of where the error could be.
I'm using ES version 1.7.2 (yes, it's old, but that shouldn't be a problem as it does work; and no, I can't upgrade, because my Elastica uses this version, anyway...).
I'm using a Vagrant machine, so it's a Unix-based system.
My config is as follows (I removed all the useless comments):
index.number_of_shards: 10
index.number_of_replicas: 1
bootstrap.mlockall: true
network.bind_host: 0
network.host: 0.0.0.0
indices.recovery.max_bytes_per_sec: 200mb
indices.store.throttle.max_bytes_per_sec : 200mb
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on
index.query.bool.max_clause_count: 100000
I also have this configuration:
ES_HEAP_SIZE=4g
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
START_DAEMON=true
ES_USER=elasticsearch
ES_GROUP=elasticsearch
LOG_DIR=/var/log/elasticsearch
DATA_DIR=/var/lib/elasticsearch
WORK_DIR=/tmp/elasticsearch
CONF_DIR=/etc/elasticsearch
CONF_FILE=/etc/elasticsearch/elasticsearch.yml
RESTART_ON_UPGRADE=true
Any idea how I can debug this?
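One way to surface the startup error when nothing reaches the log directory is to run Elasticsearch in the foreground and read the systemd journal; a sketch, assuming the standard package layout:

sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch    # 1.x runs in the foreground by default; errors print to the console
sudo journalctl -u elasticsearch -n 50 --no-pager                   # whatever systemd captured before the exit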

elasticsearch changing path.logs and/or path.data - fails to start

Here's my config
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /mulelogs/elasticsearch
path.logs: /mulelogs/elasticsearch
When I restart Elasticsearch, this is what I get:
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2016-01-25 06:33:40 UTC; 9s ago
Docs: http://www.elastic.co
Process: 22213 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=${PID_DIR}/elasticsearch.pid -Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR} -Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 22212 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 22213 (code=exited, status=1/FAILURE)
elasticsearch[22213]: at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1074)
elasticsearch[22213]: at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1061)
elasticsearch[22213]: at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:88)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:217)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:256)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
elasticsearch[22213]: Refer to the log for complete error details.
systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
systemd[1]: Unit elasticsearch.service entered failed state.
systemd[1]: elasticsearch.service failed.
The path is an attached volume accessible via /mulelogs/:
drwxrwxrwx. 4 root root 4096 Jan 25 05:11 .
dr-xr-xr-x. 18 root root 4096 Jan 25 06:24 ..
drwxrwxrwx. 4 elasticsearch elasticsearch 4096 Jan 25 05:21 elasticsearch
drwxrwxrwx. 2 root root 16384 Jan 20 01:20 lost+found
I tried chown and chmod just to see if permissions were the problem, but it still didn't work.
How do I fix this?
Thanks in advance.
Notes:
OS: CentOS 7
Elasticsearch: 2.1
I have installed ELK following these steps:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
Try changing the paths
path.data: /mulelogs/elasticsearch
path.logs: /mulelogs/elasticsearch
to absolute paths.
I had a fresh install and had the same error.
Check if you have a folder in your path.data directory with the name of your cluster. If yes, try to delete it (if possible and you won't lose data).
After deleting this and restarting the service, it went OK (another folder called nodes was created).
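For example, a sketch (<cluster_name> stands for whatever cluster.name is set to in elasticsearch.yml):

ls /mulelogs/elasticsearch/                                    # look for a directory matching your cluster.name
sudo mv /mulelogs/elasticsearch/<cluster_name> /tmp/es-old     # move it aside rather than deleting outright
sudo systemctl restart elasticsearch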
Change the mode to 777 for the new lib and log directories and files.
Check the log file. If it shows an error message like:
java.lang.IllegalStateException: detected index data in
default.path.data [/var/lib/elasticsearch] where there should not be
any; check the logs for details
then you have to delete the nodes directory in the old lib folder. (Back up first; the index data will be gone.)
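For completeness, a common way to move the data directory without hitting this error; a sketch, where the rsync step is an assumption about how the old data gets copied:

sudo systemctl stop elasticsearch
sudo rsync -a /var/lib/elasticsearch/ /mulelogs/elasticsearch/           # copy old data, preserving ownership
sudo chown -R elasticsearch:elasticsearch /mulelogs/elasticsearch
sudo mv /var/lib/elasticsearch/nodes /var/lib/elasticsearch/nodes.bak    # so ES no longer detects index data in the default path
sudo systemctl start elasticsearch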
