Programmatically set Kibana's default index pattern - elasticsearch

A Kibana newbie would like to know how to set the default index pattern programmatically, rather than setting it through the Kibana UI in a web browser when viewing Kibana for the first time, as described at https://www.elastic.co/guide/en/kibana/current/setup.html

Elasticsearch stores all Kibana metadata under the .kibana index. Kibana configuration such as defaultIndex and the advanced settings is stored under index/type/id .kibana/config/4.5.0, where 4.5.0 is the version of your Kibana.
So you can set or change defaultIndex with the following steps:
Add the index that you want to set as defaultIndex to Kibana. You can do that by executing the following command:
curl -XPUT http://<es node>:9200/.kibana/index-pattern/your_index_name -d '{"title" : "your_index_name", "timeFieldName": "timestampFieldNameInYourInputData"}'
Change your Kibana config to set the index added above as defaultIndex:
curl -XPUT http://<es node>:9200/.kibana/config/4.5.0 -d '{"defaultIndex" : "your_index_name"}'
Note: Make sure you use the correct index name everywhere, a valid timestamp field name, and your actual Kibana version; for example, if you are using Kibana 4.1.1, replace 4.5.0 with 4.1.1.
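To verify that the change took effect, you can read the config document back; a minimal sketch, assuming the same node and Kibana version as above:
curl -XGET http://<es node>:9200/.kibana/config/4.5.0?pretty
The _source of the response should contain "defaultIndex" : "your_index_name".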

In Kibana 6.5.3 this can be achieved by calling the Kibana API.
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern/logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d'
{
"attributes": {
"title": "logstash-*",
"timeFieldName": "#timestamp"
}
}
'
The docs are here; they do mention that this feature is experimental.
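To also make that pattern the default, Kibana's advanced-settings endpoint can be called next. A minimal sketch, not from the original answer: it assumes the same Kibana host and the saved-object id logstash created above, and this settings endpoint was not formally documented at the time, so treat it as an assumption:
curl -X POST "http://localhost:5601/api/kibana/settings/defaultIndex" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d'
{
  "value": "logstash"
}
'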

Related

How to migrate Elasticsearch to a new server?

The old server will be removed; its Elasticsearch version is v6.8, and the new server has the same version installed. Now I want to migrate all the data to the new server. Is my procedure correct?
On the old server, add path.repo to elasticsearch.yml, for example:
path.repo: ["/data/backup"]
Restart the Elasticsearch service.
On the old server, register the snapshot repository:
curl -H "Content-Type: application/json" \
  -XPUT http://192.168.50.247:9200/_snapshot/my_backup \
  -d '{ "type": "fs", "settings": { "location": "/data/backup", "compress": true }}'
Create the backup:
curl -XPUT http://192.168.50.247:9200/_snapshot/my_backup/news0618
Restore on the new server (IP 192.168.10.49):
curl -XPOST http://192.168.10.49:9200/_snapshot/my_backup/news0618/_restore
Can these operations migrate all the data?
If you are using fs as the snapshot repository type, this will not work: your new instance is hosted on a different host, so it will not have access to the old host's file system. You would need a shared location such as a mounted volume, S3, or Azure Blob storage.
You should use reindexing rather than snapshot and restore; it's much simpler. Refer to this link for remote reindexing:
Steps:
Whitelist the remote host in elasticsearch.yml using the reindex.remote.whitelist property on your new Elasticsearch instance:
reindex.remote.whitelist: "192.168.50.247:9200"
Restart the new Elasticsearch instance.
Reindexing:
curl -X POST "http://192.168.10.49:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
"source": {
"remote": {
"host": "http://192.168.50.247:9200"
},
"index": "source-index-name",
"query": {
"match_all": {}
}
},
"dest": {
"index": "dest-index-name"
}
}
'
Refer to this link for reindexing many indices.
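If you have many indices, one simple approach is a shell loop over the index names; a minimal sketch, not from the original answer, assuming the same two hosts and that each index keeps its name on the destination (you may want to filter out system indices whose names start with a dot):
# List all index names on the old host, then reindex each one remotely.
for idx in $(curl -s "http://192.168.50.247:9200/_cat/indices?h=index"); do
  curl -s -X POST "http://192.168.10.49:9200/_reindex?pretty" -H 'Content-Type: application/json' -d"
{
  \"source\": { \"remote\": { \"host\": \"http://192.168.50.247:9200\" }, \"index\": \"$idx\" },
  \"dest\": { \"index\": \"$idx\" }
}"
done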
Warning: The destination should be configured as desired before calling _reindex. Reindex does not copy the settings from the source index or its associated template; mappings, shard counts, replicas, and so on must be configured ahead of time.
Hope this helps!

Elasticsearch snapshot restore to another cluster

How can I restore an Elasticsearch snapshot to another cluster, without repository-s3, repository-hdfs, repository-azure, or repository-gcs?
This answer is with respect to Elasticsearch 7.14. It is possible to host a snapshot repository on an NFS share. Since you would like to restore a snapshot of one cluster to another, you need to meet the following prerequisites:
The NFS should be accessible from both source and destination cluster.
The version of the source and destination clusters should be the same. At most, the destination cluster can be one major version higher than the source cluster. E.g. you can restore a 5.x snapshot to a 6.x cluster, but not to a 7.x cluster.
Ensure that the shared NFS directory is owned by uid:gid = 1000:0 (the elasticsearch user) and that appropriate permissions are given (chmod -R 777 <appropriate NFS directory> as the elasticsearch user); see the sketch after this list.
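As a sketch of that prerequisite setup (not from the original answer; the server name nfs-server and export path /export/essnapshots are hypothetical placeholders):
# On every node of both clusters: mount the shared export, then set ownership and permissions.
sudo mount -t nfs nfs-server:/export/essnapshots /usr/share/elasticsearch/snapshotrepo
sudo chown -R 1000:0 /usr/share/elasticsearch/snapshotrepo
sudo chmod -R 777 /usr/share/elasticsearch/snapshotrepo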
Now, I am detailing the steps that you could take to copy the data.
Create a repository of type fs on the source cluster:
PUT http://10.29.61.189:9200/_snapshot/registry1
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/snapshotrepo",
    "compress": true
  }
}
Take a snapshot into the created repository:
PUT http://10.29.61.189:9200/_snapshot/registry1/snapshot_1?wait_for_completion=true
{
  "indices": "employee,manager",
  "ignore_unavailable": true,
  "include_global_state": false,
  "metadata": {
    "taken_by": "binita",
    "taken_because": "test snapshot restore"
  }
}
Create a repository of type url on the destination cluster. The url type ensures that the same repository (in terms of the shared NFS path) is read-only with respect to the destination cluster: the destination cluster can only read snapshot info and restore from it, but cannot write snapshots.
PUT http://10.29.59.165:9200/_snapshot/registry1
{
  "type": "url",
  "settings": {
    "url": "file:/usr/share/elasticsearch/snapshotrepo"
  }
}
Restore the snapshot generated on the source cluster (in step 2) to the destination cluster:
POST http://10.29.59.165:9200/_snapshot/registry1/snapshot_1/_restore
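If you only want some of the indices back, the _restore call also accepts a body; a minimal sketch reusing the index names from the snapshot above:
POST http://10.29.59.165:9200/_snapshot/registry1/snapshot_1/_restore
{
  "indices": "employee,manager",
  "include_global_state": false
}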
For more info, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshots-restore-snapshot.html
Finally I found the solution; it works fine. Please read carefully and follow the steps.
I have two Elasticsearch clusters. I want to migrate elastic_01's data to elastic_02, i.e. take a snapshot on elastic_01 and restore it to elastic_02. Let's go.
Important
Verify that elastic_01 and elastic_02 both have the folder "/home/snapshot/".
If it does not exist, create this folder first.
Set the correct permissions on this folder.
Please verify that the elastic_01 and elastic_02 versions are the same.
Elasticsearch snapshot documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
(01) Add the snapshot location to elasticsearch.yml (elastic_01)
Edit the elasticsearch.yml file, add this line, and save; the path must be registered here before the repository can be created:
path.repo: ["/home/snapshot/"]
Then restart Elasticsearch.
(02) Set the elastic_01 snapshot settings
$ curl -XPUT 'http://localhost:9200/_snapshot/first_backup' -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/home/snapshot/",
    "compress": true
  }
}'
(03) Create the snapshot (elastic_01)
$ curl -XPUT 'http://localhost:9200/_snapshot/first_backup/snapshot_1?wait_for_completion=true'
(04) Add the snapshot location to elasticsearch.yml (elastic_02)
Edit the elasticsearch.yml file, add this line, save, and restart Elasticsearch:
path.repo: ["/home/snapshot/"]
(05) Set the elastic_02 snapshot settings
$ curl -XPUT 'http://localhost:9200/_snapshot/first_backup' -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/home/snapshot/",
    "compress": true
  }
}'
(06) Create a snapshot (elastic_02)
$ curl -XPUT 'http://localhost:9200/_snapshot/first_backup/snapshot_1?wait_for_completion=true'
(07) Copy the elastic_01 snapshot to elastic_02
Delete the elastic_02 snapshot folder content: $ rm -rf /home/snapshot/*
Copy the elastic_01 snapshot folder content to the elastic_02 snapshot folder (see the sketch below).
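One way to do that copy is rsync over SSH; a minimal sketch, where the hostname elastic_02 and the user are hypothetical placeholders with write access to the folder:
# Push the whole repository directory from elastic_01 to elastic_02.
rsync -av /home/snapshot/ user@elastic_02:/home/snapshot/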
(08) List the snapshots
$ curl -XGET 'http://localhost:9200/_snapshot/first_backup/_all?pretty'
This will show the backed-up indices and snapshot-related data.
(09) Restore the Elasticsearch snapshot
$ curl -XPOST 'http://localhost:9200/_snapshot/first_backup/snapshot_1/_restore?wait_for_completion=true'
NOTE: We need to set the parameter include_global_state to true to restore the templates as well, as per https://www.elastic.co/guide/en/elasticsearch/client/curator/current/option_include_gs.html:
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"include_global_state": true
}
'
{
"accepted" : true
}
Your idea is to first create a snapshot on nodeB, then delete its data and overwrite that location with nodeA's data? But according to Elastic's documentation, nodeB should mount the NFS directory in a read-only manner, so that it does not have write permissions, for example by using a repository of type url:
PUT _snapshot/local
{
  "type": "url",
  "settings": {
    "url": "file:/home/esdata/snapshot"
  }
}

Content-type header not supported

I am following this link for Elasticsearch:
https://www.elastic.co/blog/a-practical-introduction-to-elasticsearch
I am trying the following curl command to post the JSON data:
curl -XPOST "http://localhost:9200/shakespeare/_bulk?pretty" --data-binary @D:\data\shakespeare.json
But I am getting an error like the one below:
{
  "error" : "Content-Type header [application/x-www-form-urlencoded] is not supported",
  "status" : 406
}
You need to set the content type in the header to application/json:
curl -XPOST -H "Content-Type: application/json" "http://localhost:9200/shakespeare/_bulk?pretty" --data-binary @D:\data\shakespeare.json
I received the same error after updating from an older version to 6.x.x. I'm not using curl statements directly; I'm using Python's elasticsearch client.
In my case, I needed to install the newer version of the client that corresponds to the updated Elasticsearch server:
pip install elasticsearch==6.3.1
Make sure you run it in the same Python environment that your code is executing in.
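A quick way to confirm the two versions line up; a minimal sketch, assuming a local node on the default port:
# Client version installed by pip, then the server version reported by the node.
pip show elasticsearch | grep Version
curl -s http://localhost:9200 | grep number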
Hope this saves someone some headache.
I found a solution: with the Java REST client you can set a default header that will override the previous header:
import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.client.RestClient;
// Build a client whose requests send an explicit JSON Content-Type by default.
RestClient restClient = RestClient
        .builder(new HttpHost(url, port, scheme))
        .setDefaultHeaders(new Header[]{
                new BasicHeader("Content-type", "application/json")
        })
        .build();

create project for sonarqube with the rest-api / web-api

We are trying to automate the creation of projects (including user/group management) in SonarQube, and I have already found the Web API documentation in our SonarQube 5.6 installation. But if I try to create a project with the following settings
JSON file create_project.json:
{"key": "test1", "name": "Testprojekt1"}
curl request:
curl --noproxy '*' -D -X POST -k -u admin:admin -H 'content-type: application/json' -d create_project.json http://localhost:9000/api/projects/create
I get the error:
{"err_code":400,"err_msg":"Missing parameter: key"}
It's a bit strange because if I try e.g. the URL:
http://localhost:9000/api/projects/index
I get the list of the projects I created manually, and if I try a request like
curl -u admin:admin -X POST 'http://localhost:9000/api/projects/create?key=myKey&name=myProject'
it works too, but I would like to use the new API because it looks like it supports many more functions than the 4.x API of SonarQube.
Maybe someone here can help me with this problem; I would be very thankful for every useful hint.
best regards
Dan
I found this question because I got the same "parameter missing" error message.
So what we both did not understand: the SonarQube API expects the parameters as plain URL parameters, not as JSON-formatted parameters the way most REST APIs do today.
PS: Would be nice if this could be added to the SQ documentation.
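For reference, a minimal sketch of a working call reusing the key and name from the question; --data-urlencode sends them as form parameters, which SonarQube treats like URL parameters:
curl -u admin:admin -X POST "http://localhost:9000/api/projects/create" \
  --data-urlencode "key=test1" \
  --data-urlencode "name=Testprojekt1"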

elasticsearch bulk script does not work even with elasticsearch.yml change

When I try to run a curl command like:
curl -s -XPOST localhost:9200/_bulk --data-binary "@bulk_prova.elastic"; echo
Where bulk_prova.elastic is:
{ "update" : {"_id" : "1", "_type" : "type1", "_index" : "indexName"} }{ "script" : "ctx._source.topic = \"topicValue\""}
I got this error
{"took":19872,"errors":true,"items":[{"update":{"_index":"indexName","_type":"type1","_id":"1","status":400,"error":{"type":"illegal_argument_exception","reason":"failed to execute script","caused_by":{"type":"script_exception","reason":"scripts of type [inline], operation [update] and lang [groovy] are disabled"}}}}]}
I searched for a way to solve this and edited the elasticsearch.yml file to enable dynamic scripting, but every time I change the file and stop Elasticsearch, the service does not start again when I restart it.
Due to this strange behavior I do not know how to solve the issue.
I have version 2.2.0, and my intention is to add a field to one index (for now) or to several indices (once the problem is solved).
In Elasticsearch 2.x the old setting:
script.disable_dynamic: false
has been replaced by fine-grained settings; to enable inline and indexed scripts, add:
script.inline: true
script.indexed: true
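After adding those lines and restarting, inline update scripts can be sanity-checked outside of _bulk; a minimal sketch reusing the index, type, and id from the question:
curl -XPOST "localhost:9200/indexName/type1/1/_update" -d '{
  "script": "ctx._source.topic = \"topicValue\""
}'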
