When I wanted to view the logs in Kibana, I received this error:
1 of 37 shards failed. The data you are seeing might be incomplete or wrong.
This is the response:
{
  "took": 10,
  "timed_out": false,
  "_shards": {
    "total": 21,
    "successful": 20,
    "skipped": 20,
    "failed": 1,
    "failures": [
      {
        "shard": 0,
        "index": "tourism-2022.12.11",
        "node": null,
        "reason": {
          "type": "no_shard_available_action_exception",
          "reason": null,
          "index_uuid": "j2J6dUvTQ_q7qeyyU56bag",
          "shard": "0",
          "index": "tourism-2022.12.11"
        }
      }
    ]
  },
  "hits": {
    "total": 0,
    "max_score": 0,
    "hits": []
  }
}
I deleted some indexes and expanded the PVC, but nothing worked.
Check the cluster status via the Kibana console; if you don't have Kibana available, convert the commands to curl requests.
Check the Elasticsearch cluster status:
GET /_cluster/health?wait_for_status=yellow&timeout=50s
Check index status:
GET /_cat/indices/tourism-2022.12.11?v=true&s=index
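Converted to curl, the same checks look like this; a sketch, assuming Elasticsearch is reachable on localhost:9200 (the last two calls are extra suggestions to see which shard is unassigned and why, given the no_shard_available_action_exception above):
curl -s 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s&pretty'
curl -s 'http://localhost:9200/_cat/indices/tourism-2022.12.11?v=true&s=index'
curl -s 'http://localhost:9200/_cat/shards/tourism-2022.12.11?v=true'
curl -s 'http://localhost:9200/_cluster/allocation/explain?pretty'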
If all shards are green, have you got documents available in your index?
Executing the following:
http://172.21.21.151:9200/printer-stats-*/_doc/_count
I get the following response:
{
  "count": 19299,
  "_shards": {
    "total": 44,
    "successful": 44,
    "skipped": 0,
    "failed": 0
  }
}
How can I modify the query to return only:
{
  "count": 19299
}
On _search queries we can use filter_path to get only the desired output, but it seems this does not work on _count.
I also tried adding a body like the following:
{
  "_shards": false
}
But it throws the following error:
{
  "error": {
    "root_cause": [
      {
        "type": "parsing_exception",
        "reason": "request does not support [_shards]",
        "line": 2,
        "col": 3
      }
    ],
    "type": "parsing_exception",
    "reason": "request does not support [_shards]",
    "line": 2,
    "col": 3
  },
  "status": 400
}
My version is 7.9.2
Probably this has been asked before, but I have not found a relevant question.
filter_path definitely works on the _count endpoint:
http://172.21.21.151:9200/printer-stats-*/_doc/_count?filter_path=count
Response =>
{
  "count": 19299
}
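For reference, the same call as a curl request (same host as in the question). Note that the typed _doc segment in the URL is deprecated in 7.x, so the plain _count endpoint works as well:
curl 'http://172.21.21.151:9200/printer-stats-*/_count?filter_path=count'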
I'm trying to move from one ES cluster to another in order to plan an upgrade.
Both clusters are the same version (6.4). To achieve this, I'm using this command:
curl -XPOST -H "Content-Type: application/json" http://new_cluster/_reindex -d @reindex.json
And reindex.json looks like this:
{
  "source": {
    "remote": {
      "host": "http://old_cluster:9199"
    },
    "index": "megabase.33.2",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "megabase.33.2"
  }
}
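For context, remote reindex requires the source cluster to be whitelisted in elasticsearch.yml on the destination cluster; a minimal sketch, assuming the host and port from above:
reindex.remote.whitelist: "old_cluster:9199"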
I whitelisted the old cluster on the new cluster, and it works, but the data migration can't run to completion because I get this error, and I don't understand what it means here:
{
  "took": 1762,
  "timed_out": false,
  "total": 8263428,
  "updated": 5998,
  "created": 5001,
  "deleted": 0,
  "batches": 11,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures": [
    {
      "index": "megabase.33.2",
      "type": "persona",
      "id": "noYOA3IBTWbNbLJUqk6T",
      "cause": {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse [adr_inse]",
        "caused_by": {
          "type": "illegal_argument_exception",
          "reason": "For input string: \"2A004\""
        }
      },
      "status": 400
    }
  ]
}
The record in the original cluster looks like this:
{
  "took": 993,
  "timed_out": false,
  "_shards": {
    "total": 4,
    "successful": 4,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 0,
    "hits": [
      {
        "_index": "megabase.33.2",
        "_type": "persona",
        "_id": "noYOA3IBTWbNbLJUqk6T",
        "_score": 0,
        "_source": {
          "address": "Obfuscated",
          "adr_inse": "2A004",
          "age": 10,
          "base": "Obfuscated",
          "city": "Obfuscated",
          "cp": 20167,
          "email_md5": "Obfuscated",
          "fraicheur": "2020-01-12T19:39:04+01:00",
          "group": 1,
          "latlon": "Obfuscated",
          "partner": "Obfuscated",
          "partnerbase": 2,
          "sex": 2,
          "sms_md5": "Obfuscated"
        }
      }
    ]
  }
}
Any clue on what I'm doing wrong?
Thanks a lot.
Found out: the mapping is not created properly when using only the reindex method. Reindex does not copy the source index's mappings, so the destination index's mapping was inferred dynamically, and adr_inse was presumably inferred as a numeric field, which then fails on a value like "2A004".
So I dropped the new index and recreated the mapping using elasticdump:
elasticdump --input=http://oldcluster/megabase.33.2 --output=http://newcluster/megabase.33.2 --type=mapping
Then I ran the previous script and everything worked flawlessly (and was rather quick).
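If you'd rather stay with the stock APIs, the same fix should be achievable by copying the mapping by hand before reindexing; a sketch, assuming the old index's mapping can be reused as-is (the { ... } placeholder stands for the mappings object copied from the first call's output):
curl http://old_cluster:9199/megabase.33.2/_mapping
curl -XPUT -H "Content-Type: application/json" http://new_cluster/megabase.33.2 -d '{"mappings": { ... }}'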
I want to apply document-level security in Elasticsearch, but once I provide more than one value in the user metadata I get no matches.
I am creating a role and a user in Elasticsearch and passing values inside the user metadata to the role, on whose basis the search should happen. It works fine if I give one value.
For creating the role:
PUT _xpack/security/role/my_policy
{
  "indices": [{
    "names": ["my_index"],
    "privileges": ["read"],
    "query": {
      "template": {
        "source": "{\"bool\": {\"filter\": [{\"terms_set\": {\"country_name\": {\"terms\": {{#toJson}}_user.metadata.country_name{{/toJson}},\"minimum_should_match_script\":{\"source\":\"params.num_terms\"}}}}]}}"
      }
    }
  }]
}
And for the user:
PUT _xpack/security/user/jack_black
{
  "username": "jack_black",
  "password": "testtest",
  "roles": ["my_policy"],
  "full_name": "Jack Black",
  "email": "jb@tenaciousd.com",
  "metadata": {
    "country_name": ["india", "japan"]
  }
}
I expect the output to be results for india and japan only. If the user searches for anything else, they should get no results.
However, I do not see any results at all:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}
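For what it's worth, with that metadata the role's query template should render to roughly the following. In a terms_set query, params.num_terms is the number of supplied terms, so this minimum_should_match_script makes a document match only if its country_name contains all of the user's values (here: both india and japan), which would explain zero hits when each document carries a single country:
{
  "bool": {
    "filter": [{
      "terms_set": {
        "country_name": {
          "terms": ["india", "japan"],
          "minimum_should_match_script": {
            "source": "params.num_terms"
          }
        }
      }
    }]
  }
}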
I have the following Elasticsearch query and I want to apply a timeout, so I used the "timeout" param:
GET testdata-2016.04.14/_search
{
  "size": 10000,
  "timeout": "1ms"
}
I have set the timeout to 1ms, but I observed that the query takes more than 5000ms. I have also tried the query as below:
GET testdata-2016.04.14/_search?timeout=1ms
{
  "size": 10000
}
In both cases, I get the response below after approx. 5000ms.
{
  "took": 126,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 26536,
    "max_score": 1,
    "hits": [
      {
        ...................
        ...................
      }
    ]
  }
}
I am not sure what is happening here. Is anything missing in the above queries?
Please help.
I have tried to find a solution on Google but didn't find any working one.
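A note on what may be happening, based on how the search timeout is documented to behave: it is best-effort, not a hard cancel. Each shard checks the time budget only at certain points while collecting documents and returns whatever it has gathered so far (with "timed_out": true in the response), and some phases do not honor it at all, so the overall request can take far longer than the value you set. If the goal is to bound the work done, terminate_after, which caps the number of documents collected per shard, is a possible alternative; a sketch:
GET testdata-2016.04.14/_search
{
  "size": 10000,
  "timeout": "1ms",
  "terminate_after": 1000
}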
I'm trying to make ElasticSearch run over my MongoDB server. Everything looks fine, but every query I run returns 0 hits. Always.
My installation and configuration log:
1. Installed MongoDB 2.6.4
Up and running. No problems here. I have around 7000 products inside the "products" collection.
2. Created a replica set
Confirmed with rs.status() in the Mongo shell that it's created and it's the primary replica. Changed mongod.conf with:
replSet = rs0
oplogSize = 100
3. Restarted MongoDB
4. Initiated the replica set
In the Mongo shell: rs.initiate(). Everything fine.
5. Installed ElasticSearch 1.3.2
{
  "status": 200,
  "name": "Franz Kafka",
  "version": {
    "number": "1.3.2",
    "build_hash": "dee175dbe2f254f3f26992f5d7591939aaefd12f",
    "build_timestamp": "2014-08-13T14:29:30Z",
    "build_snapshot": false,
    "lucene_version": "4.9"
  },
  "tagline": "You Know, for Search"
}
6. Installed the Mapper plugin
7. Installed the River plugin (install commands sketched below)
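For reproducibility, the install commands would look roughly like this; the exact plugin versions are assumptions and must match the ES release (1.3.2 here):
bin/plugin --install elasticsearch/elasticsearch-mapper-attachments/2.3.2
bin/plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.2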
8. Created the index
curl -XPUT 'http://localhost:9200/indexprueba/products/_meta?pretty=true' -d '{
  "type": "mongodb",
  "mongodb": {
    "db": "test",
    "collection": "products"
  },
  "index": {
    "name": "probando1",
    "type": "products"
  }
}'
It returns:
{
  "_index": "indexprueba",
  "_type": "products",
  "_id": "_meta",
  "_version": 1,
  "created": true
}
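Worth noting when comparing with the river plugin's documentation: a MongoDB river is normally registered under the special _river index, not under your own index; a sketch, with mongodb_river as a placeholder river name:
curl -XPUT 'http://localhost:9200/_river/mongodb_river/_meta' -d '{
  "type": "mongodb",
  "mongodb": {
    "db": "test",
    "collection": "products"
  },
  "index": {
    "name": "probando1",
    "type": "products"
  }
}'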
--------EDIT---------
8.5. Restored the database
I didn't do this before. Once I had created the index, I restored my database with mongorestore and this is what I got:
connected to: 127.0.0.1:27017
2014-09-08T08:17:17.773+0000 /var/backup/bikebud/products.bson
2014-09-08T08:17:17.773+0000 going into namespace [test.products]
Restoring to test.products without dropping. Restored data will be inserted without raising errors; check your server log
6947 objects found
2014-09-08T08:17:18.043+0000 Creating index: { key: { _id: 1 }, name: "_id_", ns: "test.products" }
2014-09-08T08:17:18.456+0000 /var/backup/bikebud/retailers.bson
2014-09-08T08:17:18.457+0000 going into namespace [test.retailers]
Restoring to test.retailers without dropping. Restored data will be inserted without raising errors; check your server log
20 objects found
2014-09-08T08:17:18.457+0000 Creating index: { key: { _id: 1 }, name: "_id_", ns: "test.retailers" }
So I understand from this that my indexes are created and linked to the database.
--------EDIT---------
9. Created a simple query
curl -XGET 'http://127.0.0.1:9200/indexprueba/_search?pretty=true&q=*:*'
Always returns:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}
----------------EDIT-------------------
After the edit, this is what I get:
{
  "took": 14,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1.0,
    "hits": [
      {
        "_index": "testindex1",
        "_type": "products",
        "_id": "1",
        "_score": 1.0,
        "_source": {
          "type": "mongodb",
          "mongodb": {
            "servers": [
              {
                "host": "127.0.0.1",
                "port": 27017
              }
            ],
            "options": {
              "secondary_read_preference": true
            },
            "db": "test",
            "collection": "products"
          }
        }
      }
    ]
  }
}
So now I get hits, but the hit is the configuration document itself. I was expecting to get all the products from my database. I'm starting to think I don't understand at all what ElasticSearch does. Any clue?
----------------EDIT-------------------
I don't know what I'm missing here. Please, any advice?
----------------EDIT-------------------
It looks like it was a version problem. I had to downgrade ES to 1.2.2 (I was using 1.3.2).
"Resolved"