I've started using the significant terms aggregation to see which keywords are important in groups of documents compared to the entire set of documents I've indexed.
It works great until a lot of documents are indexed. Then, for the same query that used to work, Elasticsearch only says:
SearchPhaseExecutionException[Failed to execute phase [query],
all shards failed; shardFailures {[OIWBSjVzT1uxfxwizhS5eg][demo_paragraphs][0]:
CircuitBreakingException[Data too large, data for field [text]
would be larger than limit of [633785548/604.4mb]];
My query looks like the following:
POST /demo_paragraphs/_search
{
  "query": {
    "match": {
      "django_target_id": 1915661
    }
  },
  "aggregations": {
    "signKeywords": {
      "significant_terms": {
        "field": "text"
      }
    }
  }
}
And the document structure:
"_source": {
"django_ct": "citations.citation",
"django_target_id": 1915661,
"django_id": 3414077,
"internal_citation_id": "CR7_151",
"django_source_id": 1915654,
"text": "Mucin 1 (MUC1) is a protein heterodimer that is overexpressed in lung cancers [6]. MUC1 consists of two subunits, an N-terminal extracellular subunit (MUC1-N) and a C-terminal transmembrane subunit (MUC1-C). Overexpression of MUC1 is sufficient for the induction of anchorage independent growth and tumorigenicity [7]. Other studies have shown that the MUC1-C cytoplasmic domain is responsible for the induction of the malignant phenotype and that MUC1-N is dispensable for transformation [8]. Overexpression of",
"id": "citations.citation.3414077",
"num_distinct_citations": 0
}
The data that I index are paragraphs from scientific papers. No document is really large.
Any ideas on how to analyze or solve the problem?
If the data set is too large to compute the result on one machine, you may need more than one node.
Be thoughtful when planning shard distribution. Make sure that shards are properly distributed so each node is equally stressed when computing heavy queries. A good topology for large data sets is a master-data-search configuration, where one node acts as master (no data, no queries running on this node), a few nodes are dedicated to holding data (shards), and some nodes are dedicated to executing queries (they hold no data; they use the data nodes for partial query execution and combine the results). As a starting point, Netflix uses this topology: Netflix Raigad
Paweł Róg is right, you will need much more RAM. For a start, increase the Java heap size available to each node. See this site for details: ElasticSearch configuration
You'll have to research how much RAM is enough. Sometimes too much RAM actually slows down ES (unless that was fixed in one of the recent versions).
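For illustration, on older Elasticsearch versions the heap is raised via the ES_HEAP_SIZE environment variable (on 5.x and later you would set -Xms/-Xmx in config/jvm.options instead). The 8g value below is only an example for a 16 GB machine, giving ES half the RAM and leaving the rest to the OS file system cache:

```shell
# Illustrative only: give ES about half the machine's RAM (here 8 GB of 16 GB),
# and never more than ~31 GB, to keep compressed object pointers.
export ES_HEAP_SIZE=8g
./bin/elasticsearch
```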
I think there is a simple solution.
Please give ES more RAM :D Aggregations require a lot of memory.
Note that coming in Elasticsearch 6.0 there is the new significant_text aggregation, which doesn't require field data. See https://www.elastic.co/guide/en/elasticsearch/reference/master/search-aggregations-bucket-significanttext-aggregation.html
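Based on the linked docs, a significant_text version of the query above would look roughly like this (it re-analyzes _source text at query time instead of loading field data, which is what trips the circuit breaker):

```json
POST /demo_paragraphs/_search
{
  "query": {
    "match": { "django_target_id": 1915661 }
  },
  "aggregations": {
    "signKeywords": {
      "significant_text": { "field": "text" }
    }
  }
}
```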
I am trying to benchmark my Elasticsearch setup by posting documents against a large schema. The two variations of the schema are:
indexing enabled for each attribute.
indexing disabled for all the attributes.
My benchmark consists only of going to the ElasticHQ cluster page and checking for spikes in CPU.
However, I don't see the CPU spikes dropping when using option 2.
Question: shouldn't disabling indexing result in better performance?
Setup:
Running Elasticsearch in Docker with 1 shard and 1 replica for the index.
Schema with index enabled: https://pastebin.com/uXFkCCzY
Schema with index disabled: https://pastebin.com/FGSAFTMT
Document:
{
  "status": "open",
  "created_at": "2022-02-14",
  "long_12": 123456789,
  "division": {
    "prop_1": 112211,
    "prop_2": false,
    "currency": "a brief text"
  },
  "emails": {
    "email": "abc@gmail.com"
  }
}
Load test scenario: created 10 Java threads running on an i7 laptop; each thread posted 100,000 documents with some modification (to keep the documents distinct, the status field value was randomly generated).
More detail on why I am doing this:
So, my production Elasticsearch (ES) cluster is performing very badly, with reads going upwards of 10 seconds. Apart from all the necessary read optimization I can do, I am also noticing that the ES cluster is generally very busy. And I noticed that my ES index schema doesn't have indexing disabled for any attribute (and we have around 350 attributes).
So, my expectation was that if I disabled indexing for unnecessary attributes, I could get some wins. However, that's not happening.
Can you please shed some light on:
Should setting index: false and enabled: false improve performance?
Am I disabling the index on attributes the right way?
Is my benchmarking technique right?
NOTE: Document and schema are for reference purposes only; the actual schema and document in PROD are quite large. The result was consistent when benchmarked using a large document.
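For reference, disabling indexing is done per field in the mapping. A minimal sketch (index name is made up; field names are taken from the sample document; index: false works on leaf fields, enabled: false on object fields):

```json
PUT /benchmark_index
{
  "mappings": {
    "properties": {
      "status":     { "type": "keyword", "index": false },
      "created_at": { "type": "date",    "index": false },
      "division":   { "enabled": false }
    }
  }
}
```

Fields mapped this way can still be returned from _source but cannot be searched on.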
We are using Elasticsearch for the following use case.
Elasticsearch Version : 5.1.1
Note: We are using AWS managed ElasticSearch
We have a multi-tenanted system where each tenant stores data for multiple things, and the number of tenants will increase day by day.
e.g.: Each tenant will have the following information.
1] tickets
2] sw_inventory
3] hw_inventory
The current indexing strategy is as follows:
index name:
tenant_id (GUID), e.g. tenant_xx1234xx-5b6x-4982-889a-667a758499c8
types:
1] tickets
2] sw_inventory
3] hw_inventory
Issues we are facing:
1] Mapping conflicts for common fields, e.g. (id, name, userId), across the types (tickets, sw_inventory, hw_inventory)
2] As the number of tenants increases, the number of indices can reach up to 1000 or 2000.
Would it be a good idea to reverse the indexing strategy?
e.g.:
index names :
1] tickets
2] sw_inventory
3] hw_inventory
types:
tenant_tenant_id1
tenant_tenant_id2
tenant_tenant_id3
tenant_tenant_id4
So there will be only 3 huge indices with N number of types as tenants.
So the question in this case is which solution is better?
1] Many small indices and 3 types
OR
2] 3 huge indices and many types
Regards
I suggest a different approach: https://www.elastic.co/guide/en/elasticsearch/guide/master/faking-it.html
Meaning custom routing, where each document has a tenant_id or similar (something unique to each tenant), and you use that both for routing and for defining an alias for each tenant. Then, when querying documents for a specific tenant only, you use the alias.
You would use one index and one type this way. To size the index, consider the existing data size and the number of nodes, and try to come up with a number of shards such that they are split more or less evenly across all data-holding nodes and, based on your tests, the performance is acceptable. If, in the future, the index grows too large and the shards become too large to keep the same performance, consider creating a new index with more primary shards and reindexing everything into it. This approach is neither unheard of nor unrecommended.
1000-2000 aliases is nothing in terms of what can be handled. If you have close to 10 nodes or more, I also recommend dedicated master nodes with something like 4-6 GB heap size and at least 4 CPU cores.
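A sketch of that setup (index name and tenant id are made up for illustration): one shared index, plus a filtered and routed alias per tenant:

```json
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "tenants_v1",
        "alias": "tenant_xx1234xx",
        "filter": { "term": { "tenant_id": "xx1234xx" } },
        "routing": "xx1234xx"
      }
    }
  ]
}
```

Queries sent to the alias are then filtered to that tenant's documents and routed to only the shard holding them.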
Neither approach would work well. As others have mentioned, both approaches hurt performance and would prevent you from upgrading.
Consider having one index and type for each set of data, e.g. sw_inventory and then having a field within the mapping that differentiates between each tenant. You can then utilize document level security in a security plugin like X-Pack or Search Guard to prevent one tenant from seeing another's records (if required).
Indices created in Elasticsearch 6.0.0 or later may only contain a single mapping type, which means that doc_type (_type) is deprecated.
You can find the full explanation here, but in summary there are two solutions:
Index per document type
This approach has two benefits:
Data is more likely to be dense and so benefit from compression techniques used in Lucene.
The term statistics used for scoring in full text search are more likely to be accurate because all documents in the same index represent a single entity.
Custom type field
Of course, there is a limit to how many primary shards can exist in a cluster, so you may not want to waste an entire shard on a collection of only a few thousand documents. In this case, you can implement your own custom type field, which will work in a similar way to the old _type.
PUT twitter
{
  "mappings": {
    "_doc": {
      "properties": {
        "type":       { "type": "keyword" },
        "name":       { "type": "text" },
        "user_name":  { "type": "keyword" },
        "email":      { "type": "keyword" },
        "content":    { "type": "text" },
        "tweeted_at": { "type": "date" }
      }
    }
  }
}
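Searching a single logical type then becomes a filter on the custom type field, e.g. (the query values below are purely illustrative):

```json
GET twitter/_search
{
  "query": {
    "bool": {
      "must":   { "match": { "content": "elasticsearch" } },
      "filter": { "term":  { "type": "tweet" } }
    }
  }
}
```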
You use an older version of Elasticsearch, but the same logic applies, and it will be easier for you to move to a newer version when you decide to do so. So I think you should go with the separate index structure, or in other words 3 huge indices and many types, but with the type as a field in the mapping, not as _type.
I think both strategies have pros and cons:
Multiple Indexes:
Pros:
- Tenant data is isolated from the others and no query would return results from more than one.
- If the total number of documents is very large, several smaller indexes could give better performance
Cons: Harder to manage. If each index has few documents, you may be wasting a lot of resources.
EDITED: Avoid multiple types in the same index, as per the comments on performance and the deprecation of the feature
I have an Elasticsearch 5.2 cluster with 16 nodes (13 data nodes / 3 master, 24 GB RAM / 12 GB heap). I am performance testing a query, making 50 search calls per second against the cluster. My query looks like the following -
{
"query": {
"bool": {
"must": [
{
"term": {
"cust_id": "AC-90-464690064500"
}
},
{
"range": {
"yy_mo_no": {
"gt": 201701,
"lte": 201710
}
}
}
]
}
}
}
My index mapping is like the following -
cust_id Keyword
smry_amt Long
yy_mo_no Integer // doc_values enabled
mkt_id Keyword
. . .
. . .
currency_cd Keyword // Total 10 field with 8 Keyword type
The index contains 200 million records and for each cust_id, there may be 100s of records. Index has 2 Replicas. The record size is under 100 bytes.
When I run the performance test for 10 minutes, query response and performance seem very slow. Upon investigating in a bit more detail in the Kibana monitoring tab, it appears that there is a lot of garbage collection activity happening (see the chart below) -
I have several questions clouding my mind. I did some research on range queries but didn't find much on what can cause GC activity in scenarios similar to mine. I also researched memory usage and GC activity, but most Elastic documentation says that young generation GC is normal while indexing, while search activity mostly uses the file system cache that the OS maintains. That's why, in the chart above, the heap is not heavily used, since search was using the file system cache.
So -
What might be causing the garbage collection to happen here ?
The chart shows that heap is still available to Elasticsearch, and used heap is still very low compared to what's available. Then what is triggering GC?
Is the query type causing any internal data structure to be created that is getting disposed off, causing GC ?
The CPU spike may be due to GC activity.
Is there any other, more efficient way of running the range query in Elasticsearch versions before 5.5?
Profiling the query shows that Elastic is running a TermQuery and a BooleanQuery, with the latter costing the most.
Any idea what's going on here?
Thanks in Advance,
SGSI.
The correct answer depends on the index settings, but I guess you are using the integer type with doc_values enabled. This data structure is meant to support aggregations and sorting, but not range queries. The right data type is range.
In the case of doc values, Elastic/Lucene iterates over ALL documents (i.e. a full scan) in order to match a range query. This requires reading and decoding every value from the DV column, which is quite expensive, especially when the index cannot be cached by the operating system.
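If the field was indeed mapped with indexing disabled (doc values only), re-enabling indexing lets the range query use Lucene's BKD point tree instead of scanning doc values. An illustrative ES 5.x mapping sketch (index and type names are made up):

```json
PUT /billing_index
{
  "mappings": {
    "summary": {
      "properties": {
        "cust_id":  { "type": "keyword" },
        "yy_mo_no": { "type": "integer", "index": true, "doc_values": true }
      }
    }
  }
}
```

With index: true, the range clause is answered from the point index, and doc values remain available for sorting and aggregations.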
I have 65,000 documents (pdf, docx, txt, etc.) indexed in Elasticsearch using mapper-attachments. Now I want to search the content of those stored documents using the following query:
"from" : 0, "size" : 50,
"query": {
"match": {
"my_attachment.content": req.params.name
}
}
but it takes 20-30 seconds to return results, which is a very slow response. So what do I have to do to get a quick response? Any idea?
here is mapping:
"my_attachment": {
"type": "attachment",
"fields": {
"content": {
"type": "string",
"store": true,
"term_vector": "with_positions_offsets"
}
}
}
Since your machine has 4 CPUs and the index has 5 shards, I'd suggest switching to 4 primary shards, which means you need to reindex. The reason for this approach is that at any given time one execution of the query will use 4 cores, and for one of the shards the query needs to wait. To have an equal distribution of load at query time, use 4 primary shards (= number of CPU cores) so that when you run the query there will not be too much contention at the CPU level.
Also, from the output of curl localhost:9200/your_documents_index/_stats I saw that the "fetch" part (retrieving the documents from the shards) is taking 4.2 seconds per operation on average. This is likely the result of having very large documents or of retrieving a lot of documents. size: 50 is not a big number, but combined with large documents it will make the query take longer to return results.
The content field (the one with the actual document in it) has store: true and if you want this for highlighting, the documentation says
In order to perform highlighting, the actual content of the field is required. If the field in question is stored (has store set to true in the mapping) it will be used, otherwise, the actual _source will be loaded and the relevant field will be extracted from it.
So if you didn't disable _source for the index, then that will be used and storing the content is not necessary. Also, there is no magic for a faster fetch; it's strictly related to how large your documents are and how many you want to retrieve. Not using store: true might slightly improve the time.
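A sketch of the same mapping without the stored field (highlighting would then fall back to _source, per the quoted documentation):

```json
"my_attachment": {
  "type": "attachment",
  "fields": {
    "content": {
      "type": "string",
      "term_vector": "with_positions_offsets"
    }
  }
}
```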
From nodes stats (curl -XGET "http://localhost:9200/_nodes/stats") there was no indication that the node has memory or CPU problems, so everything boils down to my previous suggestions.
I'm planning a strategy for querying millions of docs in date and user directions.
Option 1 - indexing by user. routing by date.
Option 2 - indexing by date. routing by user.
What are the differences or advantages when using routing or indexing?
One of the design patterns that Shay Banon @ Elasticsearch recommends is: index by time range, route by user, and use aliasing.
Create an index for each day (or a date range) and route documents on user field, so you could 'retire' older logs and you don't need queries to execute on all shards:
$ curl -XPOST localhost:9200/user_logs_20140418 -d '{
"mappings" : {
"user_log" : {
"_routing": {
"required": true,
"path": "user"
},
"properties" : {
"user" : { "type" : "string" },
"log_time": { "type": "date" }
}
}
}
}'
Create an alias to filter and route on users, so you could query for documents of user_foo:
$ curl -XPOST localhost:9200/_aliases -d '{
"actions": [{
"add": {
"alias": "user_foo",
"filter": {"term": {"user": "foo"}},
"routing": "foo"
}
}]
}'
Create aliases for time windows, so you could query for documents this_week:
$ curl -XPOST localhost:9200/_aliases -d '{
"actions": [{
"add": {
"index": ["user_logs_20140418", "user_logs_20140417", "user_logs_20140416", "user_logs_20140415", "user_logs_20140414"],
"alias": "this_week"
},
"remove": {
"index": ["user_logs_20140413", "user_logs_20140412", "user_logs_20140411", "user_logs_20140410", "user_logs_20140409", "user_logs_20140408", "user_logs_20140407"],
"alias": "this_week"
}
}]
}'
Some of the advantages of this approach:
if you search using aliases for users, you hit only shards where the users' data resides
if a user's data grows, you could consider creating a separate index for that user (all you need is to point that user's alias to the new index)
no performance implications over allocation of shards
you could 'retire' older logs by simply closing (when you close indices, they consume practically no resources) or deleting an entire index (deleting an index is simpler than deleting documents within an index)
Indexing is the process of parsing (tokenizing and filtering) the documents you index into an inverted index. It's like the appendix of a textbook.
When the indexed data exceeds one server's limits, instead of upgrading the server's configuration you add another server and share the data with it. This process is called sharding.
If you search, the search runs on all shards, map-reduces the partial results, and returns the combined result. If we group similar data together and search within that specific group, it reduces processing power and increases speed.
Routing is used to store a group of data on particular shards. To select a field for routing: the field should be present in all documents, and its values should group related documents together.
Note: routing should be used in a multi-shard environment, not on a single node. If you use routing on a single node, there is no benefit.
Let's define the terms first.
Indexing, in the context of Elasticsearch, can mean many things:
indexing a document: writing a new document to Elasticsearch
indexing a field: defining a field in the mapping (schema) as indexed. All fields that you search on need to be indexed (and all fields are indexed by default)
Elasticsearch index: this is a unit of configuration (e.g. the schema/mapping) and of data (i.e. some files on disk). It's like a database, in the sense that a document is written to an index. When you search, you can reach out to one or more indices
Lucene index: an Elasticsearch index can be divided into N shards. A shard is a Lucene index. When you index a document, that document gets routed to one of the shards. When you search in the index, the search is broadcasted to a copy of each shard. Each shard replies with what it knows, then results are aggregated and sent back to the client
Judging by the context, "indexing by user" and "indexing by date" refers to having one index per user or one index per date interval (e.g. day).
Routing refers to sending documents to shards as I described earlier. By default, this is done quite randomly: a hash range is divided by the number of shards. When a document comes in, Elasticsearch hashes its _id. The hash falls into the hash range of one of the shards ==> that's where the document goes.
You can use custom routing to control this: instead of hashing the _id, Elasticsearch can hash a routing value (e.g. the user name). As a result, all documents with the same routing value (i.e. same user) land on the same shard. Routing can then be used at query time, so that Elasticsearch queries just one shard (per index) instead of N. This can bring massive query performance gains (check slide 24 in particular).
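In practice this just means passing the same routing value at index time and at search time. A minimal sketch (index, type, and user names are invented for illustration):

```json
PUT /user_logs_20140418/user_log/1?routing=foo
{ "user": "foo", "log_time": "2014-04-18T10:00:00" }

POST /user_logs_20140418/_search?routing=foo
{ "query": { "term": { "user": "foo" } } }
```

With ?routing=foo on the search, only the shard that hosts user foo's documents is queried instead of all shards of the index.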
Back to the question at hand, I would take it as "what are the differences or advantages when breaking data down by index or using routing?"
To answer, the strategy should account for:
how indexing (writing) is done. If there's heavy indexing, you need to make sure all nodes participate (i.e. write similar amounts of data on the same number of shards), otherwise there will be bottlenecks
how data is queried. If queries often refer to a single user's data, it's useful to have data already broken down by user (index per user or routing by user)
total number of shards. The more shards, nodes and fields you have, the bigger the cluster state. If the cluster state size becomes large (e.g. larger than a few 10s of MB), it becomes harder to keep in sync on all nodes, leading to cluster instability. As a rule of thumb, you'll want to stay within a few 10s of thousands of shards in a single Elasticsearch cluster
In practice, I've seen the following designs:
one index per fixed time interval. You'll see this with logs (e.g. Logstash writes to daily indices by default)
one index per time interval, rotated by size. This maintains constant index sizes even if write throughput varies
one index "series" (either 1. or 2.) per user. This works well if you have few users, because it eliminates filtering. But it won't work with many users because you'd have too many shards
one index per time interval (either 1. or 2.) with lots of shards and routing by user. This works well if you have many users. As Mahesh pointed out, it's problematic if some users have lots of data, leading to uneven shards. In this case, you need a way to reindex big users into their own indices (see 3.), and you can use aliases to hide this logic from the application.
I didn't see a design with one index per user and routing by date interval yet. The main disadvantage here is that you'll likely write to one shard at a time (the shard containing today's hash). This will limit your write throughput and your ability to balance writes. But maybe this design works well for a high-but-not-huge number of users (e.g. 1K), few writes and lots of queries for limited time intervals.
BTW, if you want to learn more about this stuff, we have an Elasticsearch Operations training, where we discuss a lot about architecture, trade-offs, how Elasticsearch works under the hood. (disclosure: I deliver this class)