Elasticsearch 2.4.6 issue with delete and create index - shell

I am using Elasticsearch 2.4.6 installed on CentOS 7, and it was working fine. But suddenly it has started behaving weirdly.
I had initially created an index. When I delete it, the call is acknowledged, but if I delete it again it is still acknowledged, when it should return a "no such index" error.
The test server works fine, as shown below:
[root@localhost ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"acknowledged":true}
[root@localhost ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"taxsutra","index":"taxsutra"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"taxsutra","index":"taxsutra"},"status":404}
But in production the result is as below, which is incorrect:
[root@server-03 ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"acknowledged":true}
[root@server-03 ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"acknowledged":true}
Similarly, even while creating an index on the production server, the output shows the index exists even when it does not.
Any help appreciated.
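Not part of the original post, but a diagnostic sketch that may help narrow this down (assuming the default HTTP port 9200 on the production host): check which cluster actually answers, and list its indices before and after the DELETE. A DELETE that is acknowledged twice often means the production URL is not answered by the node you think it is (for example a proxy or a different cluster).

```shell
# Diagnostic sketch (assumes the default HTTP port 9200, needs a live cluster):
curl -s 'http://localhost:9200/'                 # shows cluster_name and version
curl -s 'http://localhost:9200/_cat/indices?v'   # list indices before the delete
curl -s -XDELETE 'http://localhost:9200/taxsutra'
curl -s 'http://localhost:9200/_cat/indices?v'   # the index should now be gone
```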

Related

Is it possible to POST kibana index patterns into the .kibana index of elasticsearch?

I have a Kibana instance that is brought up using Docker and Ansible. By the time this Kibana instance comes up, the Elasticsearch instance it connects to is already running. I apply some index templates using curl and want to do something similar for index patterns, and later on for visualizations and dashboards.
I've succeeded in doing this via the Kibana API, but in my scenario it needs to happen automatically, before the Kibana instance is up and running, so I get a connection refused since Kibana obviously isn't running yet.
Both ES and Kibana are running version 6.2.x.
This curl command should work for you:
curl -XPOST "http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name" -H 'Content-Type: application/json' -d'
{
  "type" : "index-pattern",
  "index-pattern" : {
    "title": "my-index-pattern-name*",
    "timeFieldName": "execution_time"
  }
}'
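Since this is meant to run during automated provisioning, it may also help to wait until Elasticsearch answers before posting, and to verify the document afterwards. A minimal sketch, assuming the same host, port, and document ID as in the command above:

```shell
# Wait for Elasticsearch to come up before posting (sketch only).
until curl -s -o /dev/null "http://localhost:9200/_cluster/health"; do
  sleep 2
done
# After running the POST above, confirm the index-pattern document exists:
curl -s "http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name?pretty"
```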

Elasticsearch REST search API

I have a problem with the remote address of Elasticsearch and the REST API (when getting search results).
I'm using the ELK stack created by JHipster (Logstash + Elasticsearch + Kibana). When I use the REST search API (via cURL) with the external server address, I get fewer results than when I use localhost:
$ curl -X GET "http://localhost:9200/logstash-*/_search?q=Method:location"
{"took":993,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8994099,"max_score":5.0447145,"hits":[..]}}
When executed from a different server, it returns a smaller number of shards and hits:
$ curl -X GET "http://SERVER_URL/logstash-*/_search?q=Method:location"
{"took":10,"timed_out":false,"_shards":
{"total":120,"successful":120,"skipped":0,"failed":0},"hits":
{"total":43,"max_score":7.5393815,"hits":[..]}}
If I create an SSH tunnel, it works:
ssh -L 9201:SERVER_URL:9200 elk-stack
and now:
$ curl -X GET "localhost:9201/logstash-*/_search?q=Method:location"
{"took":640,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8995082,"max_score":5.0447145,"hits":[..]}}
So there must be some problem with accessing the data from outside localhost, but I can't find in the configuration how to change it (maybe some kind of default behaviour to prevent data leakage on remote access?).
You should configure your host. In the config/elasticsearch.yml file, put this line:
network.host: 0.0.0.0
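Note that network.host only takes effect after restarting the node. Also, since the remote URL does return results (just far fewer), it is worth confirming that both addresses are actually answered by the same cluster. A hedged check, assuming the root endpoint is reachable on both:

```shell
# A different cluster_name (or cluster_uuid) in these two responses means
# SERVER_URL is hitting a different Elasticsearch instance (e.g. a proxy
# or another node), not the one serving localhost.
curl -s "http://localhost:9200/"
curl -s "http://SERVER_URL:9200/"
```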

Not able to create new user for elasticsearch using elasticsearch-http-user-auth

I am not able to create a new user for Elasticsearch using the elasticsearch-http-user-auth plugin. I want to create one user per index, so that each index is accessible only to that particular user.
Elasticsearch v5.1.2
Elasticsearch-http-user-auth plugin v5.1.2
I added the configuration in elasticsearch.yml according to the doc:
elasticfence.disabled: false
elasticfence.root.password: rootpassword
I ran the command below to get the user list:
curl -u root:rootpassword http://localhost:9200/_httpuserauth?mode=list
[]
To create a user I ran the command below, but got an error:
curl -u root:rootpassword http://localhost:9200/_httpuserauth?mode=adduser&username=admin&password=somepassword123
[1] 28647
[2] 28648
[vagrant@localhost ~]$ User already exists : null
Please help to solve this issue.
Run the command below:
curl -u root:rootpassword "http://localhost:9200/_httpuserauth?mode=adduser&username=admin&password=somepassword123"
instead of
curl -u root:rootpassword http://localhost:9200/_httpuserauth?mode=adduser&username=admin&password=somepassword123
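The unquoted form fails because & is the shell's background operator: the line is split into three background jobs (that is where the [1] 28647 / [2] 28648 output comes from), and curl only receives the URL up to the first &, so the plugin never sees a proper username. A small illustration of what each form hands to curl:

```shell
# With quotes, the whole query string stays in one argument; without
# them, the shell cuts the command at the first '&' and backgrounds it.
quoted='http://localhost:9200/_httpuserauth?mode=adduser&username=admin&password=somepassword123'
unquoted_part='http://localhost:9200/_httpuserauth?mode=adduser'
echo "quoted curl receives:   $quoted"
echo "unquoted curl receives: $unquoted_part"
```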

Why won't a put call to the elasticsearch /_cluster/settings endpoint respect an update of settings?

I'm running Elasticsearch 2.3.3 and am looking to set up a cluster so that I can simulate a production-ready setup. It is set up on two Azure VMs with Docker.
I'm looking at the /_cluster/settings API to allow myself to update settings. According to the Elasticsearch documentation, it should be possible to update settings on clusters.
I've run on each machine the command:
docker run -d --name elastic -p 9200:9200 -p 9300:9300 elasticsearch --cluster.name=api-update-test
so now each machine sees itself as the sole master and data node in a one-machine cluster. I have then made a PUT request to one of them to tell it where to find the discovery.zen.ping.unicast.hosts, and to update discovery.zen.minimum_master_nodes, with the following command (in PowerShell):
curl `
  -Method PUT `
  -Body '{"persistent":
    {"discovery.zen.minimum_master_nodes":2,
     "discovery.zen.ping.unicast.hosts":["<machine-one-ip>:9300"]}
  }' `
  -ContentType application/json `
  -Uri http://<machine-two-ip>:9200/_cluster/settings
The response invariably comes back with a 200 status, but confirms the original settings: {"acknowledged":true,"persistent":{},"transient":{}}
Why won't Elasticsearch respect this request and update these settings? It should be noted that this also happens when I use the precise content of the sample request in the documentation.
I have always used this approach:
curl -XPUT "http://localhost:9200/_cluster/settings" -d'
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}'
Also, note that only discovery.zen.minimum_master_nodes is dynamically updatable; discovery.zen.ping.unicast.hosts is not.
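Because discovery.zen.ping.unicast.hosts is a static setting in 2.x, it has to be supplied at node startup rather than through the settings API, for example on the same docker command line the question already uses. A sketch (the placeholder IP is kept from the question; 2.x accepts settings as --name=value arguments):

```shell
docker run -d --name elastic -p 9200:9200 -p 9300:9300 elasticsearch \
  --cluster.name=api-update-test \
  --discovery.zen.ping.unicast.hosts=<machine-one-ip>:9300 \
  --discovery.zen.minimum_master_nodes=2
```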

How to completely remove Ranger Admin Server and Ranger KMS from an Ambari 2.3 cluster

I have added the Ranger service (Ranger Admin Server, Ranger Usersync, and Ranger KMS) to an existing Ambari 2.3 cluster (4 nodes) running on Ubuntu 14.04 servers. All services are on the master node. However, it didn't install correctly and now shows 'Install Failed' in the left-hand column of available services on the main Ambari page, and I believe this is what is putting the master node down. I can't find any option to delete the service in the Ambari web UI. I followed this tutorial, but without success. Every time I try to delete the whole service with the following command:
curl -u admin:admin -X DELETE http://AMBARI_SERVER_HOST:8080/api/v1/clusters/c1/services/SERVICENAME
it ends up with an error: "400"... something...
I found that you need to add "X-Requested-By ... ", so the commands I ran on our system were:
curl -u admin:admin -X "X-Requested-By: ambari" DELETE http://localhost:8080/api/v1/clusters/cluster1/services/ranger
Also I've tried:
curl -u admin:admin -X "X-Requested-By: ambari" DELETE http://localhost:8080/api/v1/clusters/cluster1/services/rangeradmin
And finally:
curl -u admin:admin -X "X-Requested-By: ambari" DELETE http://localhost:8080/api/v1/clusters/cluster1/services/RangerAdmin
My thought is that since these services were not installed properly, the system can't see them. Or maybe it is some other issue.
However, I still cannot figure out what the actual command is, and whether it is possible at all to remove the service. I know I can hide this issue with the 'Turn On Maintenance Mode' option, in which case the master node will run as normal, but I want to completely get rid of this service, as I don't need it anymore. Any help appreciated, as I have spent half a day trying to remove it with no success.
Sorted. If anyone is interested in deleting Ambari services (in my case it was RANGER) from the command line, run the following:
// get the service
curl -u admin:admin -X GET http://HOST_NAME:8080/api/v1/clusters/CLUSTER_NAME/services/RANGER
// stop the service
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://HOST_NAME:8080/api/v1/clusters/CLUSTER_NAME/services/RANGER
// delete the service
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://HOST_NAME:8080/api/v1/clusters/CLUSTER_NAME/services/RANGER
P.S. Simply put your hostname instead of HOST_NAME and your cluster name instead of CLUSTER_NAME.
Hope this helps anyone with the same issue.
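The same stop-then-delete steps, parameterized in case the script needs to be reused (the values below are placeholders, as in the answer above):

```shell
AMBARI=HOST_NAME:8080
CLUSTER=CLUSTER_NAME
SERVICE=RANGER

# Stop the service (set it to INSTALLED state), then delete it.
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://$AMBARI/api/v1/clusters/$CLUSTER/services/$SERVICE"
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
  "http://$AMBARI/api/v1/clusters/$CLUSTER/services/$SERVICE"
```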
