Error when changing the log directory for Elasticsearch

I am changing
path.data: /var/log/elasticsearch
to
path.data: /data/elasticsearchdata/log/elasticsearch/
in the elasticsearch.yml file, after creating the folder and moving the files/folders from ../elasticsearch to /data/elasticsearchdata/log/.
After making the change in elasticsearch.yml I ran:
sudo systemctl restart elasticsearch
but I am getting this error:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-12-15 14:53:14 UTC; 7s ago
Docs: https://www.elastic.co
Process: 1678664 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 1678664 (code=exited, status=1/FAILURE)
Dec 15 14:53:14 ip-10-10-6-161 systemd-entrypoint[1678664]: path.logs: /data/elasticsearchda ...
Can anyone let me know what I am missing?
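For reference, a minimal sketch of the sequence described above, assuming the stock Debian/RPM package layout; the chown step is an assumption about what is commonly missed, since the package runs the process as the elasticsearch user:
sudo systemctl stop elasticsearch
sudo mkdir -p /data/elasticsearchdata/log/elasticsearch
sudo cp -a /var/log/elasticsearch/. /data/elasticsearchdata/log/elasticsearch/
# the service user must own the new directory, or startup will fail
sudo chown -R elasticsearch:elasticsearch /data/elasticsearchdata/log/elasticsearch
# then point path.logs (or path.data) at the new directory in /etc/elasticsearch/elasticsearch.yml and restart:
sudo systemctl restart elasticsearch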

The ONLY WAY to move your data is:
1. Set up a repository (snapshot/restore).
2. Create a snapshot of all indices.
3. Shut down the ELK cluster and edit path.data in elasticsearch.yml.
4. Start the ELK cluster.
5. Restore the snapshot.
The data should appear in the new location; see the sketch after this list.
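A hedged sketch of those steps using the snapshot REST API (the repository name and location here are assumptions; the location must be listed under path.repo in elasticsearch.yml):
# 1. Register a filesystem repository
curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d'
{ "type": "fs", "settings": { "location": "/mnt/es_backups" } }'
# 2. Snapshot all indices
curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
# 3. Stop the cluster, edit path.data, start it again, then restore:
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"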

Related

ELK configuration to forward my application logs to Elasticsearch using Logstash

I am new to ELK configuration.
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-16-04
I have configured it on my local machine and it works fine.
I want to forward my application log file to Elasticsearch using Logstash or Filebeat.
Everything works fine when I configure it for system logs, but I am not able to store my application logs in Elasticsearch.
Please help me.
This is my log file:
service.log
{"name":"service name", "hostname":"abc", "pid":4474, "userId":"123", "school_id":"123", "role":"student", "username":"mahi123", "serviceName":"loginService", "level":40, "msg":"successFully fetch trail log", "time":"2019-06-01T10:55:46.482Z","v":0}
Some troubleshooting steps to take when logs do not reach Elasticsearch:
1. Check your log parsing configuration file (usually with a .conf extension). Make sure it has the right path to scan logs from, the right set of filters, etc. To see whether this .conf file actually works, you can try:
logstash -f <elasticsearch.conf file path>
If this doesn't throw any errors on the console, you are good at this point and can move to the next step. (A minimal pipeline sketch appears after the status output below.)
2. Check whether the indices are getting created. Run:
curl http://<hostipaddress or localhost>:9200/_cat/indices?v
If yes, go to Kibana Management and create index patterns.
If not, check whether your system has enough available memory to serve Logstash and Elasticsearch; free -m is helpful once you start the Logstash and Elasticsearch services. Many times I have seen people trying an ELK setup on a machine with insufficient RAM (4 GB sounds good for a standalone setup).
3. Check that your Logstash and Elasticsearch services are up and running. If Elasticsearch goes down or keeps restarting during log parsing or index creation, that is most probably due to a lack of system resources.
-bash-4.2# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-06-05 14:08:26 UTC; 1 weeks 0 days ago
Docs: http://www.elastic.co
Main PID: 1396 (java)
CGroup: /system.slice/elasticsearch.service
└─1396 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMS...
Jun 05 14:08:26 cue-bldsvr4 systemd[1]: Started Elasticsearch.
Jun 05 14:08:26 cue-bldsvr4 systemd[1]: Starting Elasticsearch...
-bash-4.2# systemctl status logstash
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-06-05 14:50:52 UTC; 1 weeks 0 days ago
Main PID: 4320 (java)
CGroup: /system.slice/logstash.service
└─4320 /bin/java -Xms256m -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFrac...
Jun 05 14:50:52 cue-bldsvr4 systemd[1]: Started logstash.
Jun 05 14:50:52 cue-bldsvr4 systemd[1]: Starting logstash...
Jun 05 14:51:08 cue-bldsvr4 logstash[4320]: Sending Logstash's logs to /var/log/logstash which is now configur...rties
Hint: Some lines were ellipsized, use -l to show in full.
-bash-4.2#
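Regarding step 1 above: a minimal Logstash pipeline for a JSON log like the service.log sample might look like this (the file paths and index name are assumptions; adjust them to your setup):
# /etc/logstash/conf.d/service-log.conf (hypothetical file name)
input {
  file {
    path => "/var/log/myapp/service.log"   # assumption: your application log path
    start_position => "beginning"
    codec => "json"                        # each line of the sample log is one JSON object
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "service-logs"                # hypothetical index name
  }
}
Then validate the pipeline without starting the service:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/service-log.conf --config.test_and_exit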

Elasticsearch won't start and no logs

I've been trying to start ES for hours and I haven't been able to.
The command sudo service elasticsearch status prints out:
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since ven. 2019-01-11 12:22:33 CET; 5min ago
Docs: http://www.elastic.co
Process: 16713 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=$PID_DIR/elasticsearch.pid -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.confi
Main PID: 16713 (code=exited, status=1/FAILURE)
janv. 11 12:22:33 glamuse systemd[1]: Started Elasticsearch.
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Unit entered failed state.
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
I've increased the memory and applied all the fixes I could find on the internet, but I can't figure out what's going on; there isn't even a single log generated today, so I don't have any trace of where the error could be.
I'm using ES version 1.7.2 (yes, it's old, but that shouldn't be a problem as it did work, and no, I can't upgrade because my Elastica uses this version).
I'm using a Vagrant machine, so it's a Unix-based system.
My config is as follows (all the useless comments removed):
index.number_of_shards: 10
index.number_of_replicas: 1
bootstrap.mlockall: true
network.bind_host: 0
network.host: 0.0.0.0
indices.recovery.max_bytes_per_sec: 200mb
indices.store.throttle.max_bytes_per_sec : 200mb
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on
index.query.bool.max_clause_count: 100000
I also have this conf :
ES_HEAP_SIZE=4g
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
START_DAEMON=true
ES_USER=elasticsearch
ES_GROUP=elasticsearch
LOG_DIR=/var/log/elasticsearch
DATA_DIR=/var/lib/elasticsearch
WORK_DIR=/tmp/elasticsearch
CONF_DIR=/etc/elasticsearch
CONF_FILE=/etc/elasticsearch/elasticsearch.yml
RESTART_ON_UPGRADE=true
Any idea how I can debug this?
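When systemd reports only status=1/FAILURE and nothing lands in the log directory, two generic checks usually surface the real startup error (a diagnostic sketch; paths per the config above):
# anything the process wrote to stdout/stderr ends up in the journal
sudo journalctl -u elasticsearch --no-pager | tail -n 50
# or run the binary in the foreground as the service user so the error prints to the console
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch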

Elasticsearch: Node suddenly stops working with no shards available exception

I have a single-node ELK installation which was working fine, up until I discovered that Kibana was unable to connect to Elasticsearch.
systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2018-12-28 14:21:33 EET; 5min ago
Docs: http://www.elastic.co
Main PID: 1193 (java)
Tasks: 87
Memory: 3.0G
CPU: 5min 39.675s
CGroup: /system.slice/elasticsearch.service
├─1193 /usr/bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=
└─1548 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
But here is a gist with the exceptions; for some weird reason, after a restart the node seems to recover.
How can this behavior be explained?
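For a "no shards available" state, these queries usually show which shards are unassigned and why (a diagnostic sketch; _cluster/allocation/explain exists on 5.x and later, which matches the x-pack-ml process visible in the status output above):
curl -s "localhost:9200/_cluster/health?pretty"
curl -s "localhost:9200/_cat/shards?v" | grep -v STARTED
curl -s "localhost:9200/_cluster/allocation/explain?pretty"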

After Tor project installation, I got an error

I am installing Tor on Ubuntu 18.04 as per the link. After completing all the steps, I am getting this error:
$ sudo service tor status
● tor.service - Anonymizing overlay network for TCP (multi-instance-master)
Loaded: loaded (/lib/systemd/system/tor.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2018-07-06 11:47:19 IST; 13min ago
Main PID: 10894 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4554)
CGroup: /system.slice/tor.service
Jul 06 11:47:19 aks-Vostro-1550 systemd[1]: Starting Anonymizing overlay network for TCP (multi-instance-master)...
Jul 06 11:47:19 aks-Vostro-1550 systemd[1]: Started Anonymizing overlay network for TCP (multi-instance-master).
My /lib/systemd/system/tor.service file is:
# This service is actually a systemd target,
# but we are using a service since targets cannot be reloaded.
[Unit]
Description=Anonymizing overlay network for TCP (multi-instance-master)
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecReload=/bin/true
[Install]
WantedBy=multi-user.target
I will be thankful for your help and support.
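Given the unit file above (Type=oneshot with ExecStart=/bin/true), "active (exited)" with status=0/SUCCESS appears to be the expected state for this master unit rather than an error; on Debian/Ubuntu the actual daemon typically runs as an instance unit, so that is the one worth checking (assuming the default instance name):
systemctl status tor@default
# and the daemon's own log
sudo journalctl -u tor@default --no-pager | tail -n 50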
I have solved my problem on Ubuntu 18.04 using the suggestion given by the link.

elasticsearch changing path.logs and/or path.data - fails to start

Here's my config
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /mulelogs/elasticsearch
path.logs: /mulelogs/elasticsearch
When I restart Elasticsearch this is what I get:
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2016-01-25 06:33:40 UTC; 9s ago
Docs: http://www.elastic.co
Process: 22213 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=${PID_DIR}/elasticsearch.pid -Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR} -Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 22212 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 22213 (code=exited, status=1/FAILURE)
elasticsearch[22213]: at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1074)
elasticsearch[22213]: at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1061)
elasticsearch[22213]: at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:88)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:217)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:256)
elasticsearch[22213]: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
elasticsearch[22213]: Refer to the log for complete error details.
systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
systemd[1]: Unit elasticsearch.service entered failed state.
systemd[1]: elasticsearch.service failed.
The path is on an attached volume, accessible via /mulelogs/:
drwxrwxrwx. 4 root root 4096 Jan 25 05:11 .
dr-xr-xr-x. 18 root root 4096 Jan 25 06:24 ..
drwxrwxrwx. 4 elasticsearch elasticsearch 4096 Jan 25 05:21 elasticsearch
drwxrwxrwx. 2 root root 16384 Jan 20 01:20 lost+found
I tried chown and chmod just to see whether permissions were the problem, but it still didn't work.
How do I fix this?
Thanks in advance.
Notes:
OS: CentOS 7
Elasticsearch: 2.1
I have installed ELK following these steps:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
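A quick way to check whether permissions are actually the problem is to try writing to the new location as the service user (the test file name here is hypothetical):
sudo -u elasticsearch touch /mulelogs/elasticsearch/write_test && echo "writable"
sudo rm -f /mulelogs/elasticsearch/write_test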
Try changing the paths
path.data: /mulelogs/elasticsearch
path.logs: /mulelogs/elasticsearch
to absolute paths.
I had a fresh install and got the same error.
Check whether you have a folder in your path.data directory with the name of your cluster. If yes, try to delete it (if possible and you won't lose data).
After deleting it and restarting the service, everything went OK (another folder called nodes was created).
Change the mode to 777 for the new lib and log directories and files.
Check the log file. If it shows an error message like:
java.lang.IllegalStateException: detected index data in
default.path.data [/var/lib/elasticsearch] where there should not be
any; check the logs for details
then you have to delete the nodes directory in the old lib folder. (Back up first; the index data will be gone. A sketch of the steps follows below.)
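A sketch of that cleanup (the path comes from the error message above; the backup file name is an assumption, and the index data in the removed directory is lost):
sudo systemctl stop elasticsearch
# back up the old data directory before deleting it
sudo tar czf /root/es-nodes-backup.tar.gz /var/lib/elasticsearch/nodes
sudo rm -rf /var/lib/elasticsearch/nodes
sudo systemctl start elasticsearch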
