Is it possible to POST kibana index patterns into the .kibana index of elasticsearch? - bash

I have a Kibana instance that is brought up using Docker and Ansible. When this Kibana instance is being brought up, the Elasticsearch instance it's connected to is already running. I apply some index templates using curl and want to do something similar for index patterns, and later on for visualizations and dashboards.
I've succeeded in using the Kibana API to do this, but in my scenario it needs to happen automatically, before the Kibana instance is up and running, so I get a connection refused since Kibana obviously isn't running yet.
Both ES and kibana are running on version 6.2.x
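For completeness, if waiting for Kibana were acceptable, the connection-refused problem could also be worked around by polling Kibana's status endpoint before calling its saved-objects API. A rough sketch (the URL and retry interval are illustrative assumptions):

until curl -s -o /dev/null "http://localhost:5601/api/status"; do
  echo "Waiting for Kibana..."
  sleep 5
done
# once this loop exits, the Kibana API calls in the provisioning script can run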

This curl command should work for you:
curl -XPOST "http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name" -H 'Content-Type: application/json' -d'
{
  "type": "index-pattern",
  "index-pattern": {
    "title": "my-index-pattern-name*",
    "timeFieldName": "execution_time"
  }
}'
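To confirm the saved object landed, it can be read back with a plain document GET against the same ID (a quick sanity check, not required):

curl -XGET "http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name"

Note that writing straight into .kibana bypasses Kibana's own validation, so it's worth spot-checking the pattern in the Kibana UI once it's up.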

Related

Elasticsearch REST search API

I have a problem with the remote address of Elasticsearch and the REST API (when getting search results).
I'm using an ELK stack created by JHipster (Logstash + Elasticsearch + Kibana). When I use the REST search API (via curl) with the external server address, I get fewer results than when I use localhost:
$ curl -X GET "http://localhost:9200/logstash-*/_search?q=Method:location"
{"took":993,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8994099,"max_score":5.0447145,"hits":[..]}}
When executed from a different server, it returns a smaller number of shards and hits:
$ curl -X GET "http://SERVER_URL/logstash-*/_search?q=Method:location"
{"took":10,"timed_out":false,"_shards":
{"total":120,"successful":120,"skipped":0,"failed":0},"hits":
{"total":43,"max_score":7.5393815,"hits":[..]}}
If I create an SSH tunnel, it works:
ssh -L 9201:SERVER_URL:9200 elk-stack
and now:
$ curl -X GET "localhost:9201/logstash-*/_search?q=Method:location"
{"took":640,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8995082,"max_score":5.0447145,"hits":[..]}}
So there must be some problem with accessing the data from outside localhost, but I can't find the relevant setting in the configuration (maybe some kind of default behaviour to prevent data leakage when accessing from remote?).
You should configure the network host. In the config/elasticsearch.yml file, put this line:
network.host: 0.0.0.0
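Given that the remote address returned a completely different shard and hit count, it may also be worth confirming that both URLs reach the same cluster before changing the bind address. Comparing cluster_uuid in the root response is a quick check (hostnames as in the question):

curl -s "http://localhost:9200/"
curl -s "http://SERVER_URL:9200/"

If the two cluster_uuid values differ, the external URL is being served by a different Elasticsearch instance (e.g. a proxy or a second node) rather than the cluster holding the logstash-* data.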

Elasticsearch 2.4.6 issue with delete and create index

I am using Elasticsearch 2.4.6 installed on CentOS 7, and it was working fine. But suddenly it has started behaving weirdly.
I had initially created an index. Now when I delete it, the call is acknowledged, but if I delete it again it is still acknowledged, when it should throw a no-such-index error.
The test server is working fine, as below:
[root@localhost ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"acknowledged":true}
[root@localhost ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"taxsutra","index":"taxsutra"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"taxsutra","index":"taxsutra"},"status":404}
But in production the result is as below, which is incorrect:
[root@server-03 ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"acknowledged":true}
[root@server-03 ~]# curl -XDELETE 'http://localhost:9200/taxsutra'
{"acknowledged":true}
Similarly, when creating an index on the production server, the output shows the index already exists even when it does not.
Any help is appreciated.
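One way to narrow this down is to list what the production node actually holds before and after the delete, using the standard cat API (available in 2.x):

curl 'http://localhost:9200/_cat/indices?v'

If taxsutra still shows up after an acknowledged delete, or the index list isn't what you expect, the requests may be reaching a different node or cluster than the one you think you are inspecting.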

Graylog cannot connect to Elasticsearch in Kubernetes cluster

I deployed Graylog on a Kubernetes cluster and everything was working fine, until I decided to add an environment variable and update the graylog deployment.
Now, some things have stopped working. I can see that all inputs are running and accepting messages.
However, if I try to view the received messages, the request fails with a 500 error.
The docs say that the Graylog container needs a service called elasticsearch:
docker run --link some-mongo:mongo --link some-elasticsearch:elasticsearch -p 9000:9000 -e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" -d graylog2/server
And if I attach to the graylog pod and curl elasticsearch:9200, I see a successful result:
{
  "name" : "Vixen",
  "cluster_name" : "graylog",
  "cluster_uuid" : "TkZtckzGTnSu3JjERQNf4g",
  "version" : {
    "number" : "2.4.4",
    "build_hash" : "fcbb46dfd45562a9cf00c604b30849a6dec6b017",
    "build_timestamp" : "2017-01-03T11:33:16Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
But the Graylog logs show that it is trying to connect to localhost.
Again, everything was working until today. Why is it trying to connect to localhost and not the elasticsearch service?
It looks like it was a version problem. I downgraded the Graylog container to the previous stable version, 2.2.3-1, and it started working again.
My guess is that when I updated the images today, it pulled the latest version, which broke some things.
You may want to try adding elasticsearch_hosts to graylog.conf:
https://github.com/Graylog2/graylog2-server/blob/master/misc/graylog.conf
See around line 172:
# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
#
# Default: http://127.0.0.1:9200
#elasticsearch_hosts = http://node1:9200,http://user:password@node2:19200
You can create your own graylog.conf with this setting and add it to your Dockerfile, then build the image with it.
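With the in-cluster service name from the question, the relevant line would presumably be:

elasticsearch_hosts = http://elasticsearch:9200

With the official Docker image, the same setting can also be supplied as an environment variable instead of baking a config file into the image, since GRAYLOG_-prefixed variables are mapped onto graylog.conf settings, e.g. GRAYLOG_ELASTICSEARCH_HOSTS=http://elasticsearch:9200.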
Actually, Graylog shifted to the Elasticsearch HTTP API in Graylog 2.3, so the method of connecting to the Elasticsearch cluster has changed: you now just provide the URIs of the ES nodes instead of zen_ping_unicast_hosts. This is the commit that changed this setting: https://github.com/Graylog2/graylog2-server/commit/4213a2257429b6a0803ab1b52c39a6a35fbde889.
This also makes it possible to connect to the AWS Elasticsearch service, which was not possible earlier. See this discussion thread for more insight: https://github.com/Graylog2/graylog2-server/issues/1473

Why won't a put call to the elasticsearch /_cluster/settings endpoint respect an update of settings?

I'm running Elasticsearch 2.3.3 and am looking to set up a cluster so that I can have a simulation of a production-ready setup. This is set up on two Azure VMs with Docker.
I'm looking at the /_cluster/settings API to allow myself to update settings. According to the Elasticsearch documentation, it should be possible to update settings on clusters.
I've run on each machine the command:
docker run -d --name elastic -p 9200:9200 -p 9300:9300 elasticsearch --cluster.name=api-update-test
so now each machine sees itself as the one master and data node in a one-machine cluster. I have then made a PUT request to one of them to tell it where to find discovery.zen.ping.unicast.hosts and to update discovery.zen.minimum_master_nodes, with the following command (in PowerShell):
curl `
  -Method PUT `
  -Body '{"persistent":
    {"discovery.zen.minimum_master_nodes":2,
    "discovery.zen.ping.unicast.hosts":["<machine-one-ip>:9300"]}
  }' `
  -ContentType application/json `
  -Uri http://<machine-two-ip>:9200/_cluster/settings
The response invariably comes back with a 200 status, but confirms the original settings: {"acknowledged":true,"persistent":{},"transient":{}}
Why won't Elasticsearch respect this request and update these settings? It should be noted that this also happens when I use the exact content of the sample request in the documentation.
I always used this approach:
curl -XPUT "http://localhost:9200/_cluster/settings" -d'
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}'
Also, note that only discovery.zen.minimum_master_nodes is dynamically updatable; discovery.zen.ping.unicast.hosts is not.
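Since discovery.zen.ping.unicast.hosts is a static setting in 2.x, it has to be provided at node startup, either in elasticsearch.yml or on the command line, and takes effect only after a restart. Roughly (IP placeholder as in the question):

discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["<machine-one-ip>:9300"]

With the Docker setup from the question, the same values can be appended as extra arguments, e.g. docker run ... elasticsearch --cluster.name=api-update-test --discovery.zen.ping.unicast.hosts=<machine-one-ip>:9300.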

How to open console/window to use CURL command in elastic search?

I am using Windows 8.1.
I have installed Elasticsearch, and now I want to create an index in it.
Which tool can I use, or how do I open a console to run a curl command like the one below?
curl -XPUT 'http://localhost:9200/depst/semployee/11' -d '{ "name": "xxxx"}'
How do I open a window that allows me to type the above command?
You can use Cygwin with the curl package installed to run such commands manually.
You can also install the head plugin for Elasticsearch, which lets you inspect and test your indices; more information about the head plugin here:
https://github.com/mobz/elasticsearch-head
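If you'd rather stay in the standard Windows command prompt, a curl binary on the PATH works too; the main catch is quoting, since cmd.exe only understands double quotes, so the quotes inside the JSON body must be escaped:

curl -XPUT "http://localhost:9200/depst/semployee/11" -d "{ \"name\": \"xxxx\" }"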
