How do I make Elasticsearch apply a new configuration?
I changed one line in ~ES_HOME/config/elasticsearch.yml:
# Disable HTTP completely:
#
http.enabled: false
Then I tried to reload Elasticsearch:
elasticsearch reload
Then I tried to restart Elasticsearch:
elasticsearch restart
Then I checked and saw that HTTP requests are still accepted by Elasticsearch.
So my settings were not applied.
My OS is OS X. The Elasticsearch version is 1.2.0.
Strangely or not, the intended way to do it is just to stop the service and start it again :)
I.e. get its PID (by running ps axww | grep elastic), and then kill that PID; just be sure to use the TERM signal, to give it a chance to shut down properly.
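For example, a rough sketch assuming the tarball layout from the question (<ESpid> is a placeholder for the PID you find):
ps axww | grep elastic              # find the Elasticsearch java process and note its PID
kill -TERM <ESpid>                  # TERM lets it shut down cleanly
~ES_HOME/bin/elasticsearch -d       # start it again in the background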
Some *nix Elasticsearch distros have control script wrappers for start/stop, but I don't think OS X does.
And on a side note, you have probably found the Cluster Update Settings API, and though it provides quite a few options, regretfully it can't be used to change that particular setting.
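(For reference, a dynamic settings change via that API looks roughly like the call below, here with an arbitrary setting that is dynamic, unlike http.enabled.)
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient" : { "indices.recovery.max_bytes_per_sec" : "50mb" }
}'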
HTH
P.S. And yep, on Windows services.msc is the way to do it, but I doubt that is helpful for you :)
When you have installed the current version (7.4 at the time of writing) of Elasticsearch on macOS with Homebrew, you can run:
brew services restart elastic/tap/elasticsearch-full
This will restart Elasticsearch and reload the configuration.
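You can verify that the service came back up with:
brew services list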
If Elasticsearch is installed as a Windows service, then you have to restart the Elasticsearch Windows service.
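If the service was installed with the bundled service script, a restart from the Elasticsearch installation directory looks roughly like this (a sketch; paths depend on your install):
bin\elasticsearch-service.bat stop
bin\elasticsearch-service.bat start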
Thanks.
Related
I installed the OpenDistro alerting plugin in my Kibana, which runs as a Kubernetes deployment, from the lifecycle postStart hook. The installation is successful, but in the Kibana UI I can't see the plugin buttons. After searching, it appears that I have to restart the Kibana that is running in a pod. How can I achieve that without losing the image that has the plugin installed? Restarting the pod makes me lose the previous state and the installation happens again. I am running Kibana 7.10.2.
You could try to achieve this using init containers and related machinery: basically, find out what the install is doing and mimic those changes. A cleaner approach, and also the one recommended by Elastic, is to bring your own image:
https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-kibana-plugins.html
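A minimal sketch of the bring-your-own-image route, where the plugin zip URL and image name are placeholders you would replace with the real OpenDistro alerting artifact for 7.10.2:
cat > Dockerfile <<'EOF'
FROM docker.elastic.co/kibana/kibana:7.10.2
# Placeholder URL: point this at the actual OpenDistro alerting plugin zip for Kibana 7.10.2
RUN bin/kibana-plugin install https://example.com/opendistro-alerting-kibana-7.10.2.zip
EOF
docker build -t my-registry/kibana-with-alerting:7.10.2 .
docker push my-registry/kibana-with-alerting:7.10.2
Then reference that image in your Kibana deployment (or the ECK Kibana spec) instead of installing the plugin in a postStart hook.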
Can Elasticsearch 6.3.x be run as a user other than elasticsearch on CentOS 7? If yes, how do I configure it?
https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html#_installation_example_with_tar
If you use this installation method (the tar.gz archive), you can just give ownership of the folder to whatever user you need.
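A rough sketch of that on CentOS 7 (the user name, version and paths are only examples):
sudo useradd --system esuser
sudo tar -xzf elasticsearch-6.3.2.tar.gz -C /opt
sudo chown -R esuser:esuser /opt/elasticsearch-6.3.2
sudo -u esuser /opt/elasticsearch-6.3.2/bin/elasticsearch -d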
If you use RPM, then I suspect the answer is no - https://unix.stackexchange.com/questions/295944/is-is-possible-to-override-an-rpms-service-user-account-during-installation
I want to use ElasticSearch instead of MongoDB. How can I achieve this?
Is there a way to install everything from scratch and configure it? Configuration is the challenging part. I'm looking for tutorials explaining how to replace MongoDB with Elasticsearch.
There's not an easy way to substitute MongoDB with Elasticsearch on the stack.
However, you can easily install a Bitnami Elasticsearch stack (https://bitnami.com/stack/elasticsearch) on a different directory. For instance, if you have your MEAN stack on the default directory (/opt/bitnami/), you can install the Elasticsearch stack at /opt/elasticsearch/ and then edit the environment/control scripts of the original stack so you disable MongoDB and add the ability to control Elasticsearch.
If you want to have everything in the same VM, then I advise you to use our Elasticsearch installer: https://bitnami.com/stack/elasticsearch/installer
This way you would have your MEAN stack and, in addition, an Elasticsearch stack. Then you can disable MongoDB if you don't plan to use it at all.
sudo /opt/bitnami/ctlscript.sh stop mongodb
sudo mv /opt/bitnami/mongodb/scripts/ctl.sh /opt/bitnami/mongodb/scripts/ctl.sh.disabled
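And, assuming the Elasticsearch stack was installed at /opt/elasticsearch/ as described above, you would control it with that stack's own control script, something like the following (the exact service name may vary; ctlscript.sh status lists what is available):
sudo /opt/elasticsearch/ctlscript.sh start elasticsearch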
Credits: jsalmeron (Bitnami)
I have a running Elasticsearch node on a machine. It is healthy and works fine; I can run any request from the browser without issues. I have a Zeppelin notebook where I set up the Elasticsearch interpreter. The following is what the interpreter configuration looks like:
elasticsearch.basicauth.password
elasticsearch.basicauth.username
elasticsearch.client.type transport
elasticsearch.cluster.name elasticsearch
elasticsearch.host 127.0.0.1
elasticsearch.port 9300
elasticsearch.result.size 10
zeppelin.interpreter.localRepo /path/to/repo
After I open a notebook, I write:
%elasticsearch
GET /
And the result is this
Bad URL (it should be /index/type/id)
Or
%elasticsearch
GET /
Error : None of the configured nodes are available: [{#transport#-1}{ip}{ip:port}]
Even though
GET host:port/
works just fine in a browser.
What have I done wrong?
Edit:
In addition, I am using Zeppelin 0.7.1 and Elastic 5.4
The Elasticsearch interpreter in Zeppelin works differently from a browser URL query.
You can think of the Elasticsearch interpreter as a DSL converter which uses its own language. For example, to send a GET query for a document, you need to specify the index, type and id.
Here is the documentation for the elasticsearch interpreter in Zeppelin 0.7.1
http://zeppelin.apache.org/docs/0.7.1/interpreter/elasticsearch.html#using-the-elasticsearch-interpreter
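For example, assuming an index named logs containing a type doc with a document id 1 (all placeholders), valid paragraphs look roughly like:
%elasticsearch
get /logs/doc/1
%elasticsearch
search /logs { "query": { "match_all": { } } }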
Try to go to the configuration of the elasticsearch interpreter (under the top right menu) and change the value of the parameter
elasticsearch.client.type to http
Also check if the cluster name is correct.
Then you can find here samples of how to create the requests: https://zeppelin.apache.org/docs/0.6.1/interpreter/elasticsearch.html
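With the HTTP client, the interpreter should also point at the REST port rather than the transport port, so (assuming a default local install) the relevant settings would look something like:
elasticsearch.client.type http
elasticsearch.host 127.0.0.1
elasticsearch.port 9200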
I installed the New Relic Java agent with Elasticsearch (not the Elasticsearch New Relic plugin, mind you).
When running Elasticsearch with either:
sudo /usr/share/elasticsearch/elasticsearch
or
sudo service elasticsearch start
It works fine, and the data flows to my dashboard.
However, when running as a service, the logfile in /usr/share/elasticsearch/newrelic/log is not written to, so I cannot debug what is happening to New Relic.
Any idea why?
But why do you need to see the NewRelic agent's log in the first place? For what it's worth, you may want to be aware of http://sematext.com/spm/index.html