I have two types of documents in my index:
doc1
{
"category":"15",
"url":"http://stackoverflow.com/questions/ask"
}
doc2
{
"url":"http://stackoverflow.com/questions/ask"
"requestsize":"231",
"logdate":"22/12/2012",
"username":"mehmetyeneryilmaz"
}
Now I need a query that filters on the same url field and returns the fields of both documents:
result:
{
"category":"15",
"url":"http://stackoverflow.com/questions/ask"
"requestsize":"231",
"logdate":"22/12/2012",
"username":"mehmetyeneryilmaz"
}
The results returned by Elasticsearch are always per document: if multiple documents satisfy your query/filter, they will always appear as separate documents in the result and are never merged into a single document. Hence merging them on the client side is one option you can use. To avoid retrieving the complete documents and to get only the relevant fields, you can use "fields" in your query.
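For example, a minimal sketch of such a query (the index name myindex is hypothetical; in 1.x, "fields" pulls values from stored fields and falls back to _source for fields that are not stored):
# myindex is a hypothetical index name
POST /myindex/_search
{
  "query": {
    "term": { "url": "http://stackoverflow.com/questions/ask" }
  },
  "fields": ["category", "requestsize", "logdate", "username", "url"]
}
Both matching documents come back as separate hits, each carrying only the listed fields, and the client merges them.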
If this is not what you need and you still want to narrow down the result within the query itself, you can use a top hits aggregation. It will give you the complete list of documents under a single bucket. But each hit would also have a _source field containing the complete document itself.
Try giving this page a read:
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/search-aggregations-metrics-top-hits-aggregation.html
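A minimal sketch of such an aggregation (myindex is again a hypothetical index name; the terms aggregation assumes url is not_analyzed so each distinct URL forms exactly one bucket):
# myindex is hypothetical; url should be not_analyzed for exact buckets
POST /myindex/_search
{
  "size": 0,
  "aggs": {
    "by_url": {
      "terms": { "field": "url" },
      "aggs": {
        "matched_docs": {
          "top_hits": { "size": 10 }
        }
      }
    }
  }
}
Each by_url bucket then holds all documents sharing that URL, which the client can merge into a single object.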
Related
I have a DB of documents. Every document has a keyword property called index (nothing to do with the Elasticsearch index) and a keyword property named superIndex. There can be multiple documents with the same index and multiple documents with the same superIndex in the DB; these fields are not unique.
I run a compound query searching free text on the text content of these documents, with sorting, and get the results I want. However, I get many documents having the same index and/or superIndex. Currently I programmatically filter the result list and take only the first result for each index and superIndex. My requirement is that at the end I'm left with the top results from the sort: the first from each index and superIndex.
Can this be done with an Elasticsearch query? If so, how?
Field collapsing allows you to collapse all search results having the same value in a field (e.g. index). (See the Elasticsearch Reference: Field Collapsing.)
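A minimal sketch, assuming the free text lives in a hypothetical content field and the index name is documents; collapse accepts a single field, but a second level of collapsing can run inside inner_hits:
# "documents" and "content" are hypothetical names; "index" and "superIndex" are the keyword fields from the question
POST /documents/_search
{
  "query": { "match": { "content": "your free text here" } },
  "collapse": {
    "field": "index",
    "inner_hits": {
      "name": "per_superIndex",
      "collapse": { "field": "superIndex" },
      "size": 1
    }
  },
  "sort": [ "_score" ]
}
Note the second-level collapse deduplicates superIndex only within each index group, so depending on the exact semantics you need, you may still have to post-process superIndex duplicates that appear under different index values.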
I have multiple queries that filter data in Elasticsearch. These queries return document ids from indexes that match the filter.
However, depending on the user's selection, I need another operation: subtracting/adding the unique document ids of the current query from/to the previous queries' combined results. The maximum number of queries in a search is 5.
Is there an option in Elasticsearch to subtract/add document ids from a previous query? Right now I am doing this part in PHP with a foreach iteration, which takes a lot of time.
Edit
Example:
OK, let's say we have one query on the same index that contains:
{"query":{"bool":{"filter":[{"wildcard":{"182_empanalyzed":"example"}}]}}}
we will need to subtract the document ids matched by the following query on the same index:
{"query":{"bool":{"must_not":[{"nested":{"path":"184","query":{"exists":{"field":"184.*"}}}}]}}}
Keep in mind that these queries are examples with only one condition each; there may be more complex queries with many fields to search on in each query. And for each subsequent query there is the option to subtract/add document ids.
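For what it's worth, one way to keep the subtraction inside Elasticsearch rather than diffing id lists in PHP is to merge the queries into a single bool query: clauses whose matches should be added go under filter (or should), and clauses whose matches should be removed go under must_not. A sketch combining the two example queries above:
{
  "query": {
    "bool": {
      "filter": [
        { "wildcard": { "182_empanalyzed": "example" } }
      ],
      "must_not": [
        { "nested": { "path": "184", "query": { "exists": { "field": "184.*" } } } }
      ]
    }
  }
}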
We have an Elasticsearch cluster that contains over half a billion documents, each of which has a url field that stores a URL.
The url field mapping currently has the settings:
{
  "index": "not_analyzed",
  "doc_values": true,
  ...
}
We want our users to be able to search URLs, or portions of URLs without having to use wildcards.
For example, taking the URL with path: /part1/user#site/part2/part3.ext
They should be able to bring back a matching document by searching:
part3.ext
user#site
part1
part2/part3.ext
The way I see it, we have two options:
Option 1: Implement an analysed version of this field (which can no longer have doc_values: true) and do match querying instead of wildcards (see the sketch after these options). This would also require a custom analyser that leverages the pattern tokeniser so the extracted terms come out right (the standard tokeniser would split user#site into user and site).
Option 2: Go through our database and for each document create a new field that is a list of URL parts. This field could still have doc_values: true, so it would be stored off-heap, and we could do term querying on exact field values instead of wildcards.
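For reference, option 1 might look something like the following sketch (the index name urls, type page, and analyser/tokeniser names are all hypothetical; splitting only on / keeps user#site as a single term):
# urls, page, url_part_tokenizer and url_part_analyzer are hypothetical names
PUT /urls
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "url_part_tokenizer": { "type": "pattern", "pattern": "/" }
      },
      "analyzer": {
        "url_part_analyzer": {
          "type": "custom",
          "tokenizer": "url_part_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "page": {
      "properties": {
        "url": { "type": "string", "analyzer": "url_part_analyzer" }
      }
    }
  }
}
A match query for part2/part3.ext would then be analysed into the terms part2 and part3.ext; a match_phrase query would additionally require them to be adjacent and in order.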
My question is this:
Which is better for performance: having a variable-length list field with doc_values on, or having an analysed field? (i.e. option 1 or option 2) Or is there an option 3 that would be even better yet?!
Thanks for your help!
Your question is about a field where you need doc_values but cannot index it with the keyword analyzer.
You did not mention why you need doc_values. But you did mention that you currently do not search in this field.
So I guess the search field does not have to have the same name: you can copy the field value into another field which is only for search ("store": false). For this new field you can use the pattern analyzer or pattern tokenizer for your use case.
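A sketch of that idea using a multi-field, reusing the hypothetical url_part_analyzer from the option 1 sketch above (the index, type and sub-field names are assumptions):
# "parts" is a hypothetical search-only sub-field; existing documents need reindexing before it is populated
PUT /urls/_mapping/page
{
  "properties": {
    "url": {
      "type": "string",
      "index": "not_analyzed",
      "doc_values": true,
      "fields": {
        "parts": {
          "type": "string",
          "analyzer": "url_part_analyzer",
          "store": false
        }
      }
    }
  }
}
Searches then go against url.parts, while sorting and aggregations keep using the doc_values of url itself.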
It seems that no-one has actually performance tested the two options, so I did.
I took a sample of 10 million documents and created two new indices:
An index with an analysed field that was setup as suggested in the other answer.
An index with a string field that would store all permutations of URL segmentation.
I ran an enrichment process over the second index to populate the fields. The field values on the first index were created when I re-indexed the sample data from my main index.
Then I created a set of Gatling tests to run against the indices and compared the Gatling results and the netdata (https://github.com/firehol/netdata) landscape for each.
The results were as follows:
Regarding the netdata landscape: the analysed field showed a spike, although only a small one, on all Elastic nodes. The not_analyzed list field tests didn't even register.
It is worth mentioning that enriching the list field with URL segmentation permutations bloated the index by about 80% in our case. So there's a trade-off: you never need to do wildcard searches for exact sub-segment matching on URLs, but you'll need a lot more disk to do it.
Update
Don't do this. Go for doc_values. Doing anything with analyzed strings that have a massive number of possible terms will mean massive field data that will, eventually, never fit in the amount of memory you can allocate to it.
I have a use case:
I need to extract pieces of information from a single URL and save each piece as a separate data unit to be shown on different pages. When a user visits a data unit on a page, I wish to list all the other data units from the same original URL.
I intend to define the original url field as a not_analyzed string field and then use exact matching to get all the pieces extracted from the original URL.
My question is:
The original URL could be very long. How efficient is Elasticsearch at exact matching on very long strings? Does Elasticsearch use some sort of hash algorithm, such as Git's, for long-string exact matching?
This use case will be heavily used, so it is quite important for me to get an answer.
Thanks in advance.
To match exact documents on a not_analyzed field you can use a term query, which will:
Find documents that contain the exact term specified in the inverted index.
For example:
POST _search
{
  "query": {
    "term": { "url": "google.com" }
  }
}
I can't really speak in terms of performance, but this query will match the URL exactly as it is; no transformation will be applied to it, since the field is not_analyzed.
I created an index myindex in Elasticsearch and loaded a few documents into it. When I visit:
localhost:9200/myindex/mytype/1023
I noticed that my particular index has the following metadata for mappings:
{
  "mappings": {
    "mappinggroupname": {
      "properties": {
        "Aproperty": { "type": "string" },
        "Bproperty": { "type": "string" }
      }
    }
  }
}
Is there some way to add "store": "yes" and "index": "analyzed" without having to reload/reindex all the documents?
Note that when I want to view a single document...
i.e. localhost:9200/myindex/mytype/1023
I can see that the _source field contains all the fields of that document, and when I go to the "Browser" section of the head plugin, all the columns appear correct and correspond to my field names. So why is "stored" not showing up in the metadata? I can even perform a _search on them.
What is the difference between "stored": "true" and the fact that I can see all my fields and values after indexing my documents via the means I mention above?
Nope, no way! That's how your documents got indexed in the underlying Lucene. The only way to change it is to reindex them all!
You see all those fields because you are seeing the content of the special _source field in Lucene, which Elasticsearch stores by default. You are not storing all the fields separately, but you do have the source document that you originally indexed through the _source: a single field that contains the whole document.
Generally the _source field is just enough, you don't usually need to configure every field as stored.
Also, the default is "index": "analyzed" for all string fields if not specified. That means those fields are indexed and analyzed using the standard analyzer unless the mapping says otherwise. Therefore, as far as I can see from your mapping, those two fields should be indexed and thus searchable.
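To illustrate, since both string fields default to analyzed, a plain match query should already find your documents (a sketch; the search value is made up):
# myindex/mytype are from the question; "some value" is a hypothetical search term
POST /myindex/mytype/_search
{
  "query": {
    "match": { "Aproperty": "some value" }
  }
}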