Why won't a PUT call to the Elasticsearch /_cluster/settings endpoint respect an update of settings? - elasticsearch

I'm running Elasticsearch 2.3.3 and am looking to set up a cluster so that I can simulate a production-ready setup. This is set up on two Azure VMs with Docker.
I'm looking at the /_cluster/settings API to allow myself to update settings. According to the Elasticsearch documentation, it should be possible to update settings on clusters.
I've run the following command on each machine:
docker run -d --name elastic -p 9200:9200 -p 9300:9300 elasticsearch --cluster.name=api-update-test
so now each machine sees itself as the one master and data node in a one-machine cluster. I have then made a PUT request to one of them to tell it where to find the discovery.zen.ping.unicast.hosts, and to update discovery.zen.minimum_master_nodes, with the following command (in PowerShell):
curl -Method PUT `
  -Body '{"persistent":
    {"discovery.zen.minimum_master_nodes":2,
     "discovery.zen.ping.unicast.hosts":["<machine-one-ip>:9300"]}
  }' `
  -ContentType "application/json" `
  -Uri http://<machine-two-ip>:9200/_cluster/settings
The response invariably comes back with a 200 status, but merely confirms the original settings: {"acknowledged":true,"persistent":{},"transient":{}}
Why won't Elasticsearch respect this request and update these settings? It should be noted that this also happens when I use the precise content of the sample request in the documentation.

I always used this approach:
curl -XPUT "http://localhost:9200/_cluster/settings" -d'
{
"persistent": {
"discovery.zen.minimum_master_nodes": 2
}
}'
And also, only discovery.zen.minimum_master_nodes is dynamically updatable. The other one, discovery.zen.ping.unicast.hosts, is a static setting and is not.
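Since discovery.zen.ping.unicast.hosts is static in 2.x, it has to be supplied at node startup rather than through the settings API. A minimal sketch, reusing the docker run invocation from the question (the IP placeholder is yours to fill in):
docker run -d --name elastic -p 9200:9200 -p 9300:9300 elasticsearch \
  --cluster.name=api-update-test \
  --discovery.zen.minimum_master_nodes=2 \
  --discovery.zen.ping.unicast.hosts=<machine-one-ip>:9300
After both nodes are up, you can confirm what actually got applied with curl -XGET http://<machine-two-ip>:9200/_cluster/settings.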

Related

ERROR: Failed to determine the health of the cluster

I am running Elasticsearch and Kibana. I am not sure of the status of my Elasticsearch cluster (whether it's red, yellow, or green), but it seems I need to get a token generated by Elasticsearch, as in the screenshot. When I ran bin/elasticsearch-create-enrollment-token --scope kibana from the right directory, it errored out with ERROR: Failed to determine the health of the cluster.
According to Ioannis Kakavas on discuss.elastic, "CLI tools extending BaseRunAsSuperuserCommand should only connect to the local node". When I ran on a local node, it worked. But when I ran in the Elasticsearch container in a cluster, it didn't. The solution was to execute the elasticsearch-reset-password and elasticsearch-create-enrollment-token scripts, respectively, like this (inside the Elasticsearch container):
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200
I encountered the same problem and just redid the process: unzipped the ES and Kibana zip files again, and ran bin/elasticsearch in the newly created directory. Look for a message enclosed in a formatted box that contains both the password for the elastic user and the enrollment token for Kibana (the token is only valid for 30 minutes). This message appears only once, the first time you run Elasticsearch.
I then ran bin/kibana for Kibana and configured it in the browser, and everything worked from there. Hope this helps!
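If you'd rather skip the browser prompt, newer Kibana versions also ship a CLI for the enrollment step. A hedged sketch, assuming Kibana 8.x and run from the Kibana directory (the token value is a placeholder):
bin/kibana-setup --enrollment-token <paste-token-here>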
I had the exact same issue:
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
ERROR: Failed to determine the health of the cluster.
But after I restart the elasticsearch service:
$ sudo systemctl restart elasticsearch.service
then it works:
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: xxxxxx
Two possible solutions:
Make sure that you have enough disk space.
Your VPN might be causing the issue.
The enrollment token will be present in the terminal itself. You just need to scroll up till you find it, from when you were installing.
The reason for the error ERROR: Failed to determine the health of the cluster is that Elasticsearch is not installed and running yet; running that command is like calling a function without defining it.
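On that note, a quick sanity check before running the enrollment or password-reset tools is to confirm the node is actually up. A hedged sketch, assuming a systemd-based install with security enabled (adjust paths and URLs for your setup):
sudo systemctl status elasticsearch
curl -k https://localhost:9200
Even an authentication error from the second command proves the node is listening; no response at all means the CLI tools will fail with this same health-check error.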

Elasticsearch REST search API

I have a problem with the remote address of Elasticsearch and the REST API (when getting search results).
I'm using an ELK stack created by JHipster (Logstash + Elasticsearch + Kibana). When I use the REST search API (via cURL) with the external server address, I get fewer results than when I use localhost:
$ curl -X GET "http://localhost:9200/logstash-*/_search?q=Method:location"
{"took":993,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8994099,"max_score":5.0447145,"hits":[..]}}
When executed from a different server, it returns a smaller number of shards and hits:
$ curl -X GET "http://SERVER_URL/logstash-*/_search?q=Method:location"
{"took":10,"timed_out":false,"_shards":
{"total":120,"successful":120,"skipped":0,"failed":0},"hits":
{"total":43,"max_score":7.5393815,"hits":[..]}}
If I create an SSH tunnel, it works:
ssh -L 9201:SERVER_URL:9200 elk-stack
and now:
$ curl -X GET "localhost:9201/logstash-*/_search?q=Method:location"
{"took":640,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8995082,"max_score":5.0447145,"hits":[..]}}
So there must be some problem with accessing data from outside localhost, but I can't find the configuration option to change it (maybe some kind of default behaviour to prevent data leakage when accessed remotely?).
You should configure your host.
To do this, put this line in the config/elasticsearch.yml file:
network.host: 0.0.0.0
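It is also worth noting that the remote responses in the question report a completely different shard count (120 vs 6370), which suggests the external address may be answered by a different node or cluster entirely. A hedged way to check, using the same two endpoints from the question:
curl http://localhost:9200/
curl http://SERVER_URL:9200/
If the cluster_name (or cluster_uuid, where present) differs between the two responses, the requests are not hitting the same cluster, and no network.host change will reconcile the result counts.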

API command to restart the services that require a restart

After adding a new parameter and value to the Ambari cluster, we need to restart the service for it to take effect.
From the Ambari GUI, restarting the service is required, and we can see that because the restart button is colored orange.
So my question is:
is there an API command that restarts only the services that require a restart?
In order to restart all relevant services (those that need a restart), the following syntax is the answer:
curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"RequestInfo":{"command":"RESTART","context":"Restart all required services","operation_level":"host_component"},"Requests/resource_filters":[{"hosts_predicate":"HostRoles/stale_configs=true"}]}' http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/requests
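To preview what that request would restart, a hedged sketch (same host, cluster, and credentials as above) lists the host components currently flagged with stale configs; the restart request's hosts_predicate matches on the same HostRoles/stale_configs=true condition:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/host_components?HostRoles/stale_configs=true&fields=HostRoles/service_name,HostRoles/component_name,HostRoles/host_name"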

How to completely remove Ranger Admin Server and Ranger KMS from an Ambari 2.3 cluster

I have added the Ranger service (Ranger Admin Server, Ranger Usersync, and Ranger KMS) to an existing Ambari 2.3 cluster (4 nodes) running on Ubuntu 14.04 servers. All services are on the Master node. However, it didn't install correctly, and it now shows 'Install Failed' in the left-hand column of available services on the main Ambari page, which I believe is what is putting the Master node down. I can't find any option to delete the service in the Ambari Web UI. I followed this tutorial, but without success. Every time I try to delete the whole service with the following command
curl -u admin:admin -X DELETE http://AMBARI_SERVER_HOST:8080/api/v1/clusters/c1/services/SERVICENAME it ends up with an error: "400"... something...
you need to add the "X-Requested-By" header.
So my commands, according to our system, were:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://localhost:8080/api/v1/clusters/cluster1/services/ranger
I've also tried:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://localhost:8080/api/v1/clusters/cluster1/services/rangeradmin
And finally:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://localhost:8080/api/v1/clusters/cluster1/services/RangerAdmin
My guess is that since these services were not installed properly, the system can't see them. Or maybe it's some other issue.
However, I still cannot figure out what the actual command is, and whether it is possible at all to remove the service. I know I can hide this issue with the 'Turn On Maintenance Mode' option, in which case the Master node will run as normal, but I want to completely get rid of this service, as I don't need it anymore. Any help appreciated, as I've spent half a day trying to remove it with no success.
Sorted. If anyone is interested in deleting Ambari services (in my case it was RANGER) from the command line, run the following:
// get the service
curl -u admin:admin -X GET http://HOST_NAME:8080/api/v1/clusters/CLUSTER_NAME/services/RANGER
// stop the service
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://HOST_NAME:8080/api/v1/clusters/CLUSTER_NAME/services/RANGER
// delete the service
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://HOST_NAME:8080/api/v1/clusters/CLUSTER_NAME/services/RANGER
P.S. Simply put your hostname instead of HOST_NAME and your cluster name instead of CLUSTER_NAME.
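One hedged addition: the DELETE only succeeds once the service is fully stopped, so it can be worth confirming the state between the stop and delete steps (same placeholders as above):
curl -u admin:admin -X GET "http://HOST_NAME:8080/api/v1/clusters/CLUSTER_NAME/services/RANGER?fields=ServiceInfo/state"
The returned ServiceInfo/state should read INSTALLED (i.e. stopped) before the DELETE is issued.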
Hope it helps anyone with the same issues.

Kill a framework in Mesos

I have a Mesos cluster and was running a Spark shell connected to it. I shut down the client, but Mesos still believes the framework should be active.
I am trying to have Mesos drop the framework by using DELETE with curl
(https://issues.apache.org/jira/browse/MESOS-1390)
but I am getting no response from the server. Also, I am not sure how exactly to connect to the master: I have a multi-master setup managed by ZooKeeper, and I was trying to connect just to the active master:
curl -X DELETE http://<active master url>:5050/frameworks/<framework id>
Can anyone verify if the above is the correct request?
I am using mesos-0.20.0.
Thanks
There is a RESTful option: call the URL http://your_mesos:5050/master/teardown via POST, passing a frameworkId parameter:
curl -d@/tmp/post.txt -X POST http://your_mesos:5050/master/teardown
/tmp/post.txt is a file with the following content:
frameworkId=23423-23423-234234-234234
I know it's late, but this is for future askers.
EDIT: The endpoint is now called teardown.
Example (thanks @Jeff): curl -X POST http://your_mesos:5050/master/teardown -d 'frameworkId=23423-23423-234234-234234'
Just to keep this up to date: the master endpoint was renamed to teardown, i.e. http://localhost:5050/master/teardown is the new way to go.
TEARDOWN Request (form-encoded):
POST /master/teardown HTTP/1.1
Host: masterhost:5050
Content-Type: application/x-www-form-urlencoded

frameworkId=12220-3440-12532-2345
TEARDOWN Response:
HTTP/1.1 200 OK
Riffing on @montells' work, a one-liner would be:
echo "frameworkId=23423-23423-234234-234234" | curl -d@- -X POST http://localhost:5050/master/teardown
Even though that JIRA issue mentions DELETE (in the comments), that is not how framework shutdown is implemented. You need to do a POST request to the /master/shutdown endpoint (renamed to /master/teardown in later releases).
Examples: https://github.com/apache/mesos/blob/master/src/tests/teardown_tests.cpp
Regarding why the Spark framework is not removed after you shut down the client: I'm guessing it is because Spark uses a high failover timeout? Nonetheless, I'm surprised that the Mesos UI shows it as active instead of inactive.
Add this to your .bashrc:
# Mesos
killtask(){ curl -X POST http://mesos_url:5050/master/teardown -d "frameworkId=$1"; }
Sample usage:
killtask 123
