Efficient way to retrieve all _ids in ElasticSearch - elasticsearch

What is the fastest way to get all _ids of a certain index from ElasticSearch? Is it possible by using a simple query? One of my indices has around 20,000 documents.

Edit: Please also read the answer from Aleck Landgraf
You just want the elasticsearch-internal _id field? Or an id field from within your documents?
For the former, try
curl http://localhost:9200/index/type/_search?pretty=true -d '
{
  "query" : {
    "match_all" : {}
  },
  "stored_fields": []
}
'
Note (2017 update): this answer originally used "fields": [], but the parameter has since been renamed; "stored_fields" is the current name.
The result will contain only the "metadata" of your documents
{
  "took" : 7,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 4,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "index",
      "_type" : "type",
      "_id" : "36",
      "_score" : 1.0
    }, {
      "_index" : "index",
      "_type" : "type",
      "_id" : "38",
      "_score" : 1.0
    }, {
      "_index" : "index",
      "_type" : "type",
      "_id" : "39",
      "_score" : 1.0
    }, {
      "_index" : "index",
      "_type" : "type",
      "_id" : "34",
      "_score" : 1.0
    } ]
  }
}
For the latter, if you want to include a field from your document, simply add it to the fields array
curl http://localhost:9200/index/type/_search?pretty=true -d '
{
  "query" : {
    "match_all" : {}
  },
  "fields": ["document_field_to_be_returned"]
}
'
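On newer versions, where fields was renamed (see the note above), the same thing can be done with _source filtering. A minimal sketch with the elasticsearch-py client (7.x-style body argument), reusing the placeholder field name from the curl example:
from elasticsearch import Elasticsearch

es = Elasticsearch()
res = es.search(index="index", body={
    "query": {"match_all": {}},
    "_source": ["document_field_to_be_returned"]
})
for hit in res["hits"]["hits"]:
    print(hit["_id"], hit["_source"])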

It's better to use scroll and scan to get the result list, so Elasticsearch doesn't have to rank and sort the results.
With the elasticsearch-dsl python lib this can be accomplished by:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
es = Elasticsearch()
s = Search(using=es, index=ES_INDEX, doc_type=DOC_TYPE)
s = s.fields([]) # only get ids, otherwise `fields` takes a list of field names
ids = [h.meta.id for h in s.scan()]
Console log:
GET http://localhost:9200/my_index/my_doc/_search?search_type=scan&scroll=5m [status:200 request:0.003s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.005s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.005s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.003s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.005s]
...
Note: scroll pulls batches of results from a query and keeps the cursor open for a given amount of time (1 minute, 2 minutes, etc., which you can extend on each call); scan disables sorting. The scan helper function returns a Python generator which can be safely iterated through.
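For reference, here is a minimal sketch of the raw scroll calls that the helper wraps, using the plain elasticsearch-py client (sorting on _doc replaces the old scan search type; the index name my_index is just taken from the console log above):
from elasticsearch import Elasticsearch

es = Elasticsearch()
page = es.search(index="my_index", scroll="5m", body={
    "query": {"match_all": {}},
    "sort": ["_doc"],    # no scoring or sorting work, like the old scan
    "_source": False,
    "size": 1000
})
sid = page["_scroll_id"]
ids = []
while page["hits"]["hits"]:
    ids.extend(hit["_id"] for hit in page["hits"]["hits"])
    page = es.scroll(scroll_id=sid, scroll="5m")
    sid = page["_scroll_id"]
es.clear_scroll(scroll_id=sid)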

For elasticsearch 5.x, you can use the "_source" field.
GET /_search
{
  "_source": false,
  "query" : {
    "term" : { "user" : "kimchy" }
  }
}
"fields" has been deprecated.
(Error: "The field [fields] is no longer supported, please use [stored_fields] to retrieve stored fields or _source filtering if the field is not stored")

Elaborating on answers by Robert Lujo and Aleck Landgraf,
if you want the IDs in a list from the returned generator, here is what I use:
from elasticsearch import Elasticsearch
from elasticsearch import helpers
es = Elasticsearch(hosts=[YOUR_ES_HOST])
hits = helpers.scan(
    es,
    query={"query": {"match_all": {}}},
    scroll='1m',
    index=INDEX_NAME
)
ids = [hit['_id'] for hit in hits]

Another option
curl 'http://localhost:9200/index/type/_search?pretty=true&fields='
will return _index, _type, _id and _score.

I know this post has a lot of answers, but I want to combine several to document what I've found to be fastest (in Python anyway). I'm dealing with hundreds of millions of documents, rather than thousands.
The helpers module can be used with sliced scroll, which allows multi-threaded execution. In my case, I also have a high-cardinality field (acquired_at) to slice on. You'll see I set max_workers to 14, but you may want to vary this depending on your machine.
Additionally, I store the doc ids in compressed format. If you're curious, you can check how many bytes your doc ids will be and estimate the final dump size.
import gzip
from concurrent import futures
from elasticsearch import helpers

# note below I have es, index, and cluster_name variables already set
max_workers = 14
scroll_slice_ids = list(range(0, max_workers))

def get_doc_ids(scroll_slice_id):
    count = 0
    with gzip.open('/tmp/doc_ids_%i.txt.gz' % scroll_slice_id, 'wt') as results_file:
        # "max" must equal the total number of slices; slice ids run from 0 to max-1
        query = {"sort": ["_doc"],
                 "slice": {"field": "acquired_at", "id": scroll_slice_id, "max": len(scroll_slice_ids)},
                 "_source": False}
        scan = helpers.scan(es, index=index, query=query, scroll='10m', size=10000, request_timeout=600)
        for doc in scan:
            count += 1
            results_file.write(doc['_id'] + '\n')
        results_file.flush()
    return count

if __name__ == '__main__':
    print('attempting to dump doc ids from %s in %i slices' % (cluster_name, len(scroll_slice_ids)))
    with futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        doc_counts = executor.map(get_doc_ids, scroll_slice_ids)
If you want to follow along with how many ids are in the files, you can use unpigz -c /tmp/doc_ids_4.txt.gz | wc -l.
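If you do want to estimate the final dump size up front, a rough sketch (reusing the same es and index variables as above) is to sample some ids, average their length, and multiply by the total document count:
sample = es.search(index=index, body={"size": 1000, "_source": False,
                                      "query": {"match_all": {}}})
hits = sample["hits"]["hits"]
avg_id_bytes = sum(len(h["_id"].encode("utf-8")) + 1 for h in hits) / len(hits)  # +1 for the newline
total_docs = es.count(index=index)["count"]
print("estimated uncompressed dump size: %.1f MB" % (avg_id_bytes * total_docs / 1e6))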

For Python users: the Python Elasticsearch client provides a convenient abstraction for the scroll API:
from elasticsearch import Elasticsearch, helpers
client = Elasticsearch()
query = {
    "query": {
        "match_all": {}
    }
}
scan = helpers.scan(client, index=index, query=query, scroll='1m', size=100)
for doc in scan:
# do something

You can also do it in Python, which gives you a proper list:
import elasticsearch
es = elasticsearch.Elasticsearch()
res = es.search(
    index=your_index,
    body={"query": {"match_all": {}}, "size": 30000, "fields": ["_id"]})
ids = [d['_id'] for d in res['hits']['hits']]

Inspired by Aleck Landgraf's answer, it worked for me by directly using the scan function from the standard Elasticsearch Python API:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan
es = Elasticsearch()
for dobj in scan(es,
                 query={"query": {"match_all": {}}, "fields": []},
                 index="your-index-name", doc_type="your-doc-type"):
    print(dobj["_id"])

This is working!
def select_ids(self, **kwargs):
    """
    :param kwargs: params from modules
    :return: array of incidents
    """
    index = kwargs.get('index')
    if not index:
        return None

    # print("Params", kwargs)
    query = self._build_query(**kwargs)
    # print("Query", query)

    # get results
    results = self._db_client.search(body=query, index=index, stored_fields=[], filter_path="hits.hits._id")
    print(results)
    ids = [_['_id'] for _ in results['hits']['hits']]
    return ids

Url -> http://localhost:9200/<index>/<type>/_search
HTTP method -> GET
Query -> {"query": {"match_all": {}}, "size": 30000, "fields": ["_id"]}

Related

Elasticsearch Deduplication

I have a collection of documents where each document looks like
{
  "_id": ...,
  "Author": ...,
  "Content": ...,
  "DateTime": ...
}
I would like to issue one query to the collection so that I get in response the oldest document from each author. I am considering using a terms aggregation but when I do that I get a list of buckets, being the unique Author values, telling me nothing about which of their documents is the oldest. Furthermore, that approach requires a subsequent call to ES, which is undesirable.
Any advice you could offer would be greatly appreciated. Thanks.
You can use collapse in Elasticsearch. It will return the top record per author, sorted on DateTime:
{
  "size": 10,
  "collapse": {
    "field": "Author.keyword"
  },
  "sort": [
    {
      "DateTime": {
        "order": "desc"
      }
    }
  ]
}
Result
"hits" : [
{
"_index" : "index83",
"_type" : "_doc",
"_id" : "e1QwrnABAWOsYG7tvNrB",
"_score" : null,
"_source" : {
"Author" : "b",
"Content" : "ADSAD",
"DateTime" : "2019-03-11"
},
"fields" : {
"Author.keyword" : [
"b"
]
},
"sort" : [
1552262400000
]
},
{
"_index" : "index83",
"_type" : "_doc",
"_id" : "elQwrnABAWOsYG7to9oS",
"_score" : null,
"_source" : {
"Author" : "a",
"Content" : "ADSAD",
"DateTime" : "2019-03-10"
},
"fields" : {
"Author.keyword" : [
"a"
]
},
"sort" : [
1552176000000
]
}
]
}
EDIT 1:
{
  "size": 10,
  "collapse": {
    "field": "Author.keyword"
  },
  "sort": [
    {
      "DateTime": {
        "order": "desc"
      }
    }
  ],
  "aggs": {
    "authors": {
      "terms": {
        "field": "Author.keyword",
        "size": 10
      },
      "aggs": {
        "doc_count": {
          "value_count": {
            "field": "Author.keyword"
          }
        }
      }
    }
  }
}
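Note that the question asks for the oldest document per author, while the example above sorts descending (newest first); flipping the sort order to asc is enough. A sketch with the Python client, reusing the index and field names from the result above:
from elasticsearch import Elasticsearch

es = Elasticsearch()
resp = es.search(index="index83", body={
    "size": 10,
    "collapse": {"field": "Author.keyword"},
    "sort": [{"DateTime": {"order": "asc"}}]   # oldest document per author
})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["Author"], hit["_source"]["DateTime"])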
There's no simple way of doing it directly with one call to Elasticsearch. Fortunately, there's a nice article on the Elastic Blog showing some methods of doing it.
One of these methods is using Logstash to remove duplicates. Another method involves a Python script that can be found in this GitHub repository:
#!/usr/local/bin/python3
import hashlib
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])
dict_of_duplicate_docs = {}

# The following line defines the fields that will be
# used to determine if a document is a duplicate
keys_to_include_in_hash = ["CAC", "FTSE", "SMI"]

# Process documents returned by the current search/scroll
def populate_dict_of_duplicate_docs(hits):
    for item in hits:
        combined_key = ""
        for mykey in keys_to_include_in_hash:
            combined_key += str(item['_source'][mykey])
        _id = item["_id"]
        hashval = hashlib.md5(combined_key.encode('utf-8')).digest()
        # If the hashval is new, then we will create a new key
        # in the dict_of_duplicate_docs, which will be
        # assigned a value of an empty array.
        # We then immediately push the _id onto the array.
        # If hashval already exists, then
        # we will just push the new _id onto the existing array
        dict_of_duplicate_docs.setdefault(hashval, []).append(_id)

# Loop over all documents in the index, and populate the
# dict_of_duplicate_docs data structure.
def scroll_over_all_docs():
    data = es.search(index="stocks", scroll='1m', body={"query": {"match_all": {}}})
    # Get the scroll ID
    sid = data['_scroll_id']
    scroll_size = len(data['hits']['hits'])
    # Before scroll, process current batch of hits
    populate_dict_of_duplicate_docs(data['hits']['hits'])
    while scroll_size > 0:
        data = es.scroll(scroll_id=sid, scroll='2m')
        # Process current batch of hits
        populate_dict_of_duplicate_docs(data['hits']['hits'])
        # Update the scroll ID
        sid = data['_scroll_id']
        # Get the number of results that returned in the last scroll
        scroll_size = len(data['hits']['hits'])

def loop_over_hashes_and_remove_duplicates():
    # Search through the hash of doc values to see if any
    # duplicate hashes have been found
    for hashval, array_of_ids in dict_of_duplicate_docs.items():
        if len(array_of_ids) > 1:
            print("********** Duplicate docs hash=%s **********" % hashval)
            # Get the documents that have mapped to the current hashval
            matching_docs = es.mget(index="stocks", doc_type="doc", body={"ids": array_of_ids})
            for doc in matching_docs['docs']:
                # In this example, we just print the duplicate docs.
                # This code could be easily modified to delete duplicates
                # here instead of printing them
                print("doc=%s\n" % doc)

def main():
    scroll_over_all_docs()
    loop_over_hashes_and_remove_duplicates()

main()

How to Query just all the documents name of index in elasticsearch

PS: I'm new to elasticsearch
http://localhost:9200/indexname/domains/<mydocname>
Let's suppose we have indexname as our index, and I'm uploading a lot of documents at <mydocname> with domain names, e.g.:
http://localhost:9200/indexname/domains/google.com
http://localhost:9200/indexname/domains/company.com
Looking at http://localhost:9200/indexname/_count, it says that we have "count": 119687 documents.
I just want Elasticsearch to return the document names of all 119,687 entries, which are domain names.
How do I achieve that and is it possible to achieve that in one single query?
Looking at the example http://localhost:9200/indexname/domains/google.com, I am assuming your doc_type is domains and your doc id / "document name" is google.com.
_id is the document name here, and it is always part of the response. You can use source filtering to disable the source, and the response will show only something like below:
GET indexname/_search
{
  "_source": false
}
Output
{
  ...
  "hits" : [
    {
      "_index" : "indexname",
      "_type" : "domains",
      "_id" : "google.com",
      "_score" : 1.0
    }
  ]
  ...
}
If documentname is a field that is mapped, then you can still use source filtering to include only that field.
GET indexname/_search
{
  "_source": ["documentname"]
}
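Since a plain _search only returns 10 hits by default, pulling all ~119,687 document names in one pass is easiest with the scan helper shown in the answers above. A minimal sketch, assuming the index name from the question:
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
domain_names = [
    hit["_id"]
    for hit in helpers.scan(es, index="indexname",
                            query={"query": {"match_all": {}}, "_source": False})
]
print(len(domain_names))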

In Elasticsearch, how to get the "_id" value of a document by providing unique content present in the document

I have a few documents ingested in Elasticsearch. A sample document is below.
"_index": "author_index",
"_type": "_doc",
"_id": "cOPf2wrYBik0KF",      -- automatically generated by Elasticsearch after ingestion
"_score": 0.13956004,
"_source": {
  "author_data": {
    "author": "xyz",
    "author_id": "123",       -- this is the unique id for each document
    "publish_year": "2016"
  }
}
Is there a way to get the auto-generated _id by sending author_id from the High-Level REST Client?
I tried researching solutions, but all of them only fetch the document using _id. I need the reverse operation.
Actual output expected: cOPf2wrYBik0KF
The SearchHit provides access to basic information like the index, document ID and score of each search hit, so with the Search API you can do it this way in Java:
String index = hit.getIndex();
String id = hit.getId();
OR something like this:
SearchResponse searchResponse =
    client.prepareSearch().setQuery(matchAllQuery()).get();
for (SearchHit hit : searchResponse.getHits()) {
    String yourId = hit.getId();
}
SEE HERE: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-search.html#java-rest-high-search-response
You can use source filtering. You can turn off _source retrieval, as you are interested in just the _id. The _source parameter accepts one or more wildcard patterns to control what parts of the _source should be returned (https://www.elastic.co/guide/en/elasticsearch/reference/7.0/search-request-source-filtering.html):
GET /author_index/_search
{
  "_source" : false,
  "query" : {
    "term" : { "author_data.author_id" : "123" }
  }
}
Another approach will also give the _id for the search. The stored_fields parameter is for fields that are explicitly marked as stored in the mapping, which is off by default and generally not recommended:
GET /author_index/_search
{
  "stored_fields" : ["author_data.author_id", "_id"],
  "query" : {
    "term" : { "author_data.author_id" : "123" }
  }
}
Output for both above queries:
"hits" : [
{
"_index" : "author_index",
"_type" : "_doc",
"_id" : "cOPf2wrYBik0KF",
"_score" : 6.4966354
}
More details here: https://www.elastic.co/guide/en/elasticsearch/reference/7.0/search-request-stored-fields.html
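For completeness, the same lookup from the Python client; the field names are the ones shown in the sample document, and the returned _id is the auto-generated one the question asks for:
from elasticsearch import Elasticsearch

es = Elasticsearch()
resp = es.search(index="author_index", body={
    "_source": False,
    "query": {"term": {"author_data.author_id": "123"}}
})
ids = [hit["_id"] for hit in resp["hits"]["hits"]]
print(ids)  # e.g. ['cOPf2wrYBik0KF']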

Attempting to use Elasticsearch Bulk API when _id is equal to a specific field

I am attempting to bulk insert documents into an index. I need to have _id equal to a specific field that I am inserting. I'm using ES v6.6
POST productv9/_bulk
{ "index" : { "_index" : "productv9", "_id": "in_stock"}}
{ "description" : "test", "in_stock" : "2001"}
GET productv9/_search
{
  "query": {
    "match": {
      "_id": "2001"
    }
  }
}
When I run the bulk statement it runs without any error. However, when I run the search statement it is not getting any hits. Additionally, I have many additional documents that I would like to insert in the same manner.
What I suggest is to create an ingest pipeline that sets the _id of your document based on the value of the in_stock field.
First create the pipeline:
PUT _ingest/pipeline/set_id
{
  "description" : "Sets the id of the document based on a field value",
  "processors" : [
    {
      "set" : {
        "field": "_id",
        "value": "{{in_stock}}"
      }
    }
  ]
}
Then you can reference the pipeline in your bulk call:
POST productv9/doc/_bulk?pipeline=set_id
{ "index" : {}}
{ "description" : "test", "in_stock" : "2001"}
By calling GET productv9/_doc/2001 you will get your document.
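An alternative to the pipeline, if you are building the bulk request from code anyway, is to set the _id from the field value yourself. A sketch with the Python bulk helper (the second document is made up for illustration):
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
docs = [
    {"description": "test", "in_stock": "2001"},
    {"description": "another test", "in_stock": "2002"},  # hypothetical extra doc
]
actions = (
    {
        "_index": "productv9",
        "_type": "doc",          # needed on 6.x; drop on 7.x and later
        "_id": doc["in_stock"],  # the field *value*, not the literal string "in_stock"
        "_source": doc,
    }
    for doc in docs
)
helpers.bulk(es, actions)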

How to get documents size(in bytes) in Elasticsearch

I am new to Elasticsearch. I need to get the size of the documents in the query results.
Example:
this is a document. (19 bytes)
this is also a document. (24 bytes)
content: {"a": "this is a document", "b": "this is also a document"} (53 bytes)
When I query for the documents in ES, I will get the above documents as the result. So, the size of both documents is 32 bytes. I need the 32 bytes in Elasticsearch as a result.
Does your document only contain a single field? I'm not sure this is 100% of what you want, but generally you can calculate the length of fields and either store them with the document or calculate them at query time (but this is a slow operation and I would avoid it if possible).
So here's an example with a test document and the calculation for the field length:
PUT test/_doc/1
{
  "content": "this is a document."
}
POST test/_update_by_query
{
  "query": {
    "bool": {
      "must_not": [
        {
          "exists": {
            "field": "content_length"
          }
        }
      ]
    }
  },
  "script": {
    "source": """
      if (ctx._source.containsKey("content")) {
        ctx._source.content_length = ctx._source.content.length();
      } else {
        ctx._source.content_length = 0;
      }
    """
  }
}
GET test/_search
The query result is then:
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test",
        "_type" : "_doc",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "content" : "this is a document.",
          "content_length" : 19
        }
      }
    ]
  }
}
BTW, there are 19 characters (including spaces and the dot) in that one. If you want to exclude those, you'll have to add some more logic to the script. I would be careful with bytes, since UTF-8 might use more than one byte per character (like in "höhe"), and this script is really only counting characters.
Then you can easily use the length in queries and aggregations.
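For example, here is a quick sketch of such an aggregation and a range query on the calculated field, via the Python client (assuming the test index from above and that content_length was mapped as a numeric field):
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Min/max/avg/sum of the stored lengths across the index
stats = es.search(index="test", body={
    "size": 0,
    "aggs": {"length_stats": {"stats": {"field": "content_length"}}}
})
print(stats["aggregations"]["length_stats"])

# Only the documents longer than 20 characters
long_docs = es.search(index="test", body={
    "query": {"range": {"content_length": {"gt": 20}}}
})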
If you want to calculate the size of all the subdocuments combined, use the following:
PUT test/_doc/2
{
  "content": {
    "a": "this is a document",
    "b": "this is also a document"
  }
}
POST test/_update_by_query
{
  "query": {
    "bool": {
      "must_not": [
        {
          "exists": {
            "field": "content_length"
          }
        }
      ]
    }
  },
  "script": {
    "source": """
      if (ctx._source.containsKey("content")) {
        ctx._source.content_length = 0;
        for (item in ctx._source.content.entrySet()) {
          ctx._source.content_length += item.getValue().length();
        }
      }
    """
  }
}
GET test/_search
Just note that content can either be of the type text or have a subdocument, but you can't mix that.
There's no way to get an Elasticsearch document's size through the API. The reason is that a doc indexed into Elasticsearch takes a different amount of space in the index, depending on whether you store _all, which fields are indexed, the mapping type of those fields, doc_values and more. Elasticsearch also uses deduplication and other methods of compaction, so the index size has no linear correlation with the original documents it contains.
One way to work around it is to calculate the document size in advance, before indexing it, and add it as another field in the doc, e.g. a doc_size field. Then you can query this calculated field and run aggregations on it.
Note, however, that as stated above this does not represent the size of the index, and it might be completely wrong; for example, if all the docs contain a very long text field with the same value, Elasticsearch would only store that long value once and reference it, so the index size would be much smaller.
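A minimal sketch of that approach with the Python client, measuring the serialized _source before indexing (this measures the JSON you send, not the space the document ends up taking inside the index; the index name and id are just examples):
import json
from elasticsearch import Elasticsearch

es = Elasticsearch()
doc = {"a": "this is a document", "b": "this is also a document"}
doc["doc_size"] = len(json.dumps(doc).encode("utf-8"))  # size in bytes of the serialized doc
es.index(index="my_index", id="1", body=doc)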
Elasticsearch now has a _size field (provided by the mapper-size plugin), which can be enabled in the mappings.
Once enabled, it gives the size of the _source in bytes.
GET <index_name>/_doc/<doc_id>?stored_fields=_size
Elasticsearch official doc
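Since the _size field comes from the mapper-size plugin, it has to be installed and enabled in the mapping first; roughly like this (a sketch assuming a 7.x-style typeless mapping, with an example index name):
from elasticsearch import Elasticsearch

es = Elasticsearch()
es.indices.create(index="my_index", body={
    "mappings": {
        "_size": {"enabled": True},
        "properties": {"content": {"type": "text"}}
    }
})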
