Could anybody help me shut down Elasticsearch completely? It starts automatically when the system starts.
Yes, it should, because you are initializing it in the config file, which fires every time the system starts.
To answer your question, I believe this answer should help.
It probably runs as a service. If it is on Linux, remove the service file, usually found at /etc/init.d/elasticsearch.
If it is on Windows, there is a service.bat file in the installation's bin folder; you can uninstall the service using:
service.bat remove
If you are using Ubuntu 15.04+
systemctl disable elasticsearch
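On systemd-based systems you can then verify that Elasticsearch will no longer start at boot:
# should print "disabled" once the unit is turned off
systemctl is-enabled elasticsearch
# also stop the instance that is currently running
sudo systemctl stop elasticsearch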
For Ubuntu < 15.04
To permanently stop a service from starting automatically, you would need to:
echo manual | sudo tee /etc/init/SERVICE.override
where the manual stanza will stop Upstart from automatically loading the service on next boot. Any service with the .override ending takes precedence over the original service file. You will only be able to start the service manually afterwards. If you do not want this, simply delete the .override file.
For more details, you may check this.
I need to change the server_name of a running RethinkDB instance. I stopped the server, updated the /etc/rethinkdb/instances.d/default.conf file, and then removed metadata & rethinkdb_data from the /var/lib/rethinkdb/default/data location. Finally, when I executed rethinkdb --config-file /etc/rethinkdb/instances.d/default.conf, it showed that the server was ready but didn't come back to the bash shell prompt.
Can someone clarify this?
Thanks in advance.
To start the rethinkdb daemon in the background, use:
/etc/init.d/rethinkdb start
It will automatically read the configuration file at /etc/rethinkdb/instances.d/default.conf.
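If you prefer to keep running it directly with your config file, RethinkDB can also detach from the terminal itself; a minimal sketch, assuming your build supports the --daemon flag (check rethinkdb --help):
# run in the background instead of blocking the shell
rethinkdb --config-file /etc/rethinkdb/instances.d/default.conf --daemon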
Elasticsearch won't start using ./bin/elasticsearch.
It raises the following exception:
- ElasticsearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/home/user1/elasticsearch-1.4.4/data/elasticsearch]
I checked the permissions on the same location and the location has 777 permissions on it and is owned by user1.
ls -al /home/user1/elasticsearch-1.4.4/data/elasticsearch
drwxrwxrwx 3 user1 wheel 4096 Mar 8 13:24 .
drwxrwxrwx 3 user1 wheel 4096 Mar 8 13:00 ..
drwxrwxrwx 52 user1 wheel 4096 Mar 8 13:51 nodes
What is the problem?
I am trying to run Elasticsearch 1.4.4 on Linux without root access.
I had an orphaned Java process related to Elasticsearch. Killing it solved the lock issue.
ps aux | grep 'java'
kill -9 <PID>
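Equivalently, assuming the stray process has "elasticsearch" somewhere on its command line, pkill can find and kill it in one step:
# kill any process whose full command line matches "elasticsearch"
pkill -9 -f elasticsearch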
I got this same error message, but things were mounted fine and the permissions were all correctly assigned.
Turns out that I had an 'orphaned' elasticsearch process that was not being killed by the normal stop command.
I had to kill the process manually, and then restarting Elasticsearch worked again.
The reason is that another instance is already running!
First, find the PID of the running Elasticsearch process:
ps aux | grep 'elastic'
Then kill it using kill -9 <PID_OF_RUNNING_ELASTIC>.
Some answers suggest removing the node.lock file, but that didn't help, since the running instance recreates it!
In my situation, I had wrong permissions on the ES data folder. Setting the correct owner solved it.
# change owner
chown -R elasticsearch:elasticsearch /data/elasticsearch/
# to validate
ls /data/elasticsearch/ -la
# prints
# drwxr-xr-x 2 elasticsearch elasticsearch 4096 Apr 30 14:54 CLUSTER_NAME
After I upgraded the Elasticsearch Docker image from version 5.6.x to 6.3.y, the container would not start anymore because of the aforementioned error:
Failed to obtain node lock
In my case, the root cause of the error was missing file permissions.
The data folder used by Elasticsearch was mounted from the host system into the container (declared in docker-compose.yml):
volumes:
- /var/docker_folders/common/experimental-upgrade:/usr/share/elasticsearch/data
Elasticsearch could no longer access this folder, for reasons I did not understand at all. After I set very permissive file permissions on this folder and all sub-folders, the container started again.
I do not want to reproduce the command I used to set those very permissive access rights on the mounted Docker folder, because it is most likely bad practice and a security issue. I just wanted to share that the cause might not be a second Elasticsearch process running, but simply missing access rights on the mounted folder.
Maybe someone could elaborate on the appropriate rights to set for a mounted folder in a Docker container?
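One commonly suggested middle ground, assuming the official Elasticsearch image (where the elasticsearch user inside the container has UID 1000, matching the chown 1000:1000 answer further down): give that UID ownership of the host folder instead of opening it up to everyone:
# grant the container's elasticsearch user (UID 1000) ownership of the mount
sudo chown -R 1000:1000 /var/docker_folders/common/experimental-upgrade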
As with many others here replying, this was caused by wrong permissions on the directory (not owned by the elasticsearch user). In our case it was caused by uninstalling Elasticsearch and reinstalling it (via yum, using the official repositories).
As of this moment, the repos do not delete the nodes directory when they are uninstalled, but they do delete the elasticsearch user/group that owns it. So then when Elasticsearch is reinstalled, a new, different elasticsearch user/group is created, leaving the old nodes directory still present, but owned by the old UID/GID. This then conflicts and causes the error.
A recursive chown, as mentioned by @oleksii, is the solution.
You already have ES running. To prove that, type:
curl 'localhost:9200/_cat/indices?v'
If you want to run another instance on the same box, you can set node.max_local_storage_nodes in elasticsearch.yml to a value larger than 1.
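For example, in elasticsearch.yml (the same setting appears in a later answer):
# allow up to two nodes to share this machine's data path
node.max_local_storage_nodes: 2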
Try the following:
1. Find the process using port 9200, e.g.: lsof -i:9200
This will show you which processes are using port 9200.
2. Kill the PID(s), e.g. repeat kill -9 <PID> for each PID that the output of lsof showed in step 1 (see the one-liner after this list).
3. Restart Elasticsearch, e.g. elasticsearch.
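Steps 1 and 2 can be combined into one command, since lsof -t prints just the PIDs:
# kill every process listening on port 9200
kill -9 $(lsof -t -i:9200)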
I had another Elasticsearch instance running on the same machine.
Command to check: netstat -nlp | grep 9200 (9200 is the Elasticsearch port)
Result: tcp 0 0 :::9210 :::* LISTEN 27462/java
Kill the process:
kill -9 27462
(27462 is the PID of the Elasticsearch instance.)
Start Elasticsearch again and it should run now.
In my case, this error was caused by forgetting to mount (via "sudo mount") the devices used for the configured data directories.
The error directly says that Elasticsearch lacks permission to obtain the lock, so you need to grant it ownership of the data directory:
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
In my case, /var/lib/elasticsearch was the directory with missing permissions (CentOS 8):
error: java.io.IOException: failed to obtain lock on /var/lib/elasticsearch/nodes/0
To fix it, use:
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
To add to the above answers, there could be other scenarios in which you get this error. I had done an upgrade from 5.5 to 6.3 for Elasticsearch, using a docker-compose setup with named volumes for the data directories. I had to do a docker volume prune to remove the stale ones. After doing that, I no longer faced the issue.
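For reference, the cleanup in a docker-compose setup like that could look as follows; note that docker volume prune removes every volume not referenced by at least one container, so review the list first:
# list volumes to spot stale Elasticsearch data volumes
docker volume ls
# remove all volumes not used by any container (asks for confirmation)
docker volume prune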
If anyone is seeing this being caused by:
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/docker/es]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
The solution is to set node.max_local_storage_nodes in your elasticsearch.yml:
node.max_local_storage_nodes: 2
The docs say to set this to a number greater than one on your development machine:
By default, Elasticsearch is configured to prevent more than one node from sharing the same data path. To allow for more than one node (e.g., on your development machine), use the setting node.max_local_storage_nodes and set this to a positive integer larger than one.
I think that Elasticsearch needs to have a second node available so that a new instance can start. This happens to me whenever I try to restart Elasticsearch inside my Docker container. If I relaunch my container then Elasticsearch will start properly the first time without this setting.
This error mostly occurs when you kill the process abruptly: the node.lock file may not get cleared. You can manually remove the node.lock file and start the process again; it should then work.
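A sketch, assuming the default Debian/RPM data path (adjust for your path.data setting, and only do this when no Elasticsearch process is running):
# locate the stale lock file, then remove it
find /var/lib/elasticsearch -name node.lock
sudo rm /var/lib/elasticsearch/nodes/0/node.lock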
For me the error was a simple one: I created a new data directory /mnt/elkdata and changed the ownership to the elastic user. I then copied the files over and forgot to change the ownership again afterwards.
After fixing that and restarting the Elasticsearch node, it worked.
Check these options:
sudo chown 1000:1000 <directory you wish to mount>
# With docker
sudo chown 1000:1000 /data/elasticsearch/
OR
# With VM
sudo chown elasticsearch:elasticsearch /data/elasticsearch/
If you are on Windows, then try this:
Kill any Java processes.
If the start batch script gets interrupted, press Ctrl+C to properly stop the Elasticsearch service before you exit, rather than just closing the terminal.
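One way to find and kill stray Java processes on Windows; be careful, as taskkill /IM java.exe terminates every Java process, not just Elasticsearch:
REM list running Java processes
tasklist | findstr java
REM force-kill all of them
taskkill /F /IM java.exe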
I checked this topic: Sphinx error: unknown local index "INDEX_NAME" in search request, but it's closed and doesn't resolve the problem in my Rails application.
I updated Thinking Sphinx to 3.0.2 and included it in deploy.rb. I also changed code in the model. Now it works in development, and some examples work in test while some don't. But after a successful deployment I get this error:
ThinkingSphinx::SphinxError (unknown local index 'user_core' in search request):
I tried rebuilding, restarting, and other things, but it doesn't work :(
Can anybody help me?
Thanks!
It looks like there's already a Sphinx daemon running that Thinking Sphinx hasn't stopped (if you're still getting the same error), so I'd recommend killing that rogue searchd process. You should be able to find it via ps aux | grep searchd, and if the permissions are fine, killall searchd will stop that Sphinx process.
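A typical recovery sequence, assuming Thinking Sphinx 3's standard rake tasks (task names can vary between versions):
# stop the rogue daemon, then regenerate config, reindex and restart
killall searchd
rake ts:rebuild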
I have Sphinx running as a Service on Windows Server 2003
I also have the following cron job running every 2 minutes to update the index:
C:\sphinx\bin\indexer.exe -c C:\sphinx\bin\sphinx.conf --rotate delta
and every 12 hours:
C:\sphinx\bin\indexer.exe -c C:\sphinx\bin\sphinx.conf --rotate --all
However, the scheduled task ran and the reindex completed successfully, yet there was no update on my website at all.
The only time the website picks up the updates is when I restart the service.
What could be the problem here? I can't just create a scheduled job that restarts the service for every update, since that could seriously affect search operations.
Try changing the setting preopen_indexes to 0 (zero).
I had the same problem. If you run the searchd service in debug mode, you can see it gives a 'Broken pipe' error. This is caused by the process keeping its index files open at all times.
If you set preopen_indexes to 0, the index files are only opened when you search (yes, it's a bit slower than opening them once).
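In sphinx.conf this goes in the searchd section (directive name per the Sphinx documentation):
searchd
{
    # do not keep index files permanently open between queries
    preopen_indexes = 0
}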
I found the answer at the sphinx forum, http://sphinxsearch.com/forum/view.html?id=572