I am a beginner developing my very first project with Lucene.Net 3.0.3: an address search utility.
I am using the standard analyzer and query parser, and suppose I have a single field that is both stored and analyzed.
Sample data (every row is a document with a single field, the postcode and street columns concatenated):
UB6 9AH Greenford Road something
UB6 9AP Greenford Road something
UB1 3EB Greenford Road something
PR8 3JT Greenford Road something
HA1 3QD something Greenford Road
SM1 1JY something Greenford Road something
Searching
StringBuilder customQuery = new StringBuilder();

// Phrase match on the whole search term, boosted by the number of words
customQuery.Append(_searchFieldName + ":\"" + searchTerm + "\"^" + wordsCount);

// Prefix match for each individual word
foreach (var word in words.Where(word => !string.IsNullOrEmpty(word)))
{
    customQuery.Append(" +" + _searchFieldName + ":" + word + "*");
}

Query query = _parser.Parse(customQuery.ToString());
_searcher.Search(query, collector);
All of the above (searching) works fine.
Question
If I search for "Greenford road", I may want the row that has 'SM1' to come up first (that is, I want to prioritize results by postcode).
I have tested query-time boosting and it works fine.
However, I may sometimes have a long list of priority postcodes, so I don't want to loop over each postcode and set its priority at query time.
I want document-time (index-time) boosting.
But whatever document boost I set at indexing time has no effect on my search results:
doc.Add(new Field(SearchFieldName, SearchField, Field.Store.YES, Field.Index.ANALYZED));

if (condition)
{
    doc.Boost = 2; // or 5 or 200 etc. - nothing works
}
Please help.
I tried to understand Similarity and scoring, but there is too much mathematics in there for me.
I recently had this problem myself, and I think it might be due to the wildcard queries (it was in my case, at least). There is another post here that explains the issue better and provides a possible solution:
Lucene .net Boost not working when using * wildcard
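If the constant-score rewrite that Lucene applies to prefix and wildcard queries is indeed what is swallowing the boost, one possible workaround is to switch the parser to a scoring rewrite, so that index-time boosts (which are folded into the field norms) influence the score again. A minimal sketch in Java Lucene 3.x terms (the Lucene.NET 3.0.3 port mirrors this API with .NET naming; the field name "address" is illustrative):

// By default, prefix/wildcard queries are rewritten to a constant-score query,
// which ignores norms and therefore the document boost baked into them.
// Asking for a scoring BooleanQuery rewrite restores normal scoring.
QueryParser parser = new QueryParser(Version.LUCENE_30, "address",
        new StandardAnalyzer(Version.LUCENE_30));
parser.setMultiTermRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);

// Same query shape as in the question; parse() may throw ParseException.
Query query = parser.parse("address:\"Greenford Road\"^2 +address:greenford*");

Note that the scoring rewrite expands the prefix into a BooleanQuery over all matching terms, so it can be slower (and may hit the maximum clause count) on very broad prefixes.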
I have a dataset with documents that are identifiable by three fields, let's say "name", "timestamp" and "country". I use elasticsearch-dsl-py, but I can read native Elasticsearch queries, so I can accept those as answers as well.
Here's my code to get a single document by the three fields:
def get(name, timestamp, country):
    search = Item.search()
    search = search.filter("term", name=name)
    search = search.filter("term", timestamp=timestamp)
    search = search.filter("term", country=country)
    search = search[:1]
    return search.execute()[0]
This is all good, but sometimes I'll need to get 200+ items, and calling this function means 200 queries to ES.
What I'm looking for is a single query that will take a list of the three-field identifiers and return all the documents matching it, no matter the order.
I've tried using ORs + ANDs, but unfortunately the performance is still poor, although at least I'm no longer making 200 round trips to the server.
def get_batch(list_of_identifiers):
    search = Item.search()
    batch_query = None
    for ref in list_of_identifiers:
        sub_query = Q("match", name=ref["name"])
        sub_query &= Q("match", timestamp=ref["timestamp"])
        sub_query &= Q("match", country=ref["country"])
        if not batch_query:
            batch_query = sub_query
        else:
            batch_query |= sub_query
    search = search.filter(batch_query)
    return search.scan()
Is there a faster/better approach to this problem?
Is using a multi-search going to be faster than using shoulds/musts (ORs/ANDs) in a single query?
EDIT: I tried multi-search and there was virtually no difference in timing. We're talking about seconds here: for 6 items it takes 60 ms to get the result; for 200 items, about 4-5 seconds.
When benchmarking Apache Lucene v7.5 I noticed a strange behavior:
I indexed the English Wikipedia dump (5,677,776 docs) using Lucene with the SimpleAnalyzer (no stopwords, no stemming).
Then I searched the index with the following queries:
the        totalHits = 5,382,873
who        totalHits = 1,687,254
the who    totalHits = 5,411,305
"the who"  totalHits = 8,827
The number of hits for the Boolean query the who is larger than both the number of hits for the single term the and the number of hits for the single term who, when it should be smaller than both.
Is there an explanation for that?
Code snippet:
analyzer = new SimpleAnalyzer();
MultiFieldQueryParser parser = new MultiFieldQueryParser(
        new String[]{"title", "content", "domain", "url"}, analyzer);

// Parse
Query q = parser.parse(querystr);

// Top-10 results
int hitsPerPage = 10;
IndexReader indexReader = DirectoryReader.open(index);
IndexSearcher searcher = new IndexSearcher(indexReader);

// Ranker
TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage);

// Search
searcher.search(q, collector);

// Retrieve the top-10 documents
TopDocs topDocs = collector.topDocs();
ScoreDoc[] hits = topDocs.scoreDocs;
totalHits = topDocs.totalHits;
System.out.println("query: " + querystr + " " + hits.length + " " + String.format("%,d", totalHits));
The explanation is that the default operator is OR, not AND as you assume. Searching for the who returns documents that contain either the or who (or both).
the - 5,382,873
who - 1,687,254
the OR who - 5,411,305
That is, most documents that contain who also contain the, except for 28,432 documents, which are added to the result set when you search for both terms.
You can change this behavior by changing the default operator:
parser.setDefaultOperator(QueryParserBase.AND_OPERATOR);
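For illustration, here is roughly what that looks like with the parser from the question (the same effect can also be had without changing the default by requiring both terms with +):

// With AND as the default operator, "the who" is parsed as a conjunction,
// so its totalHits can be at most min(hits("the"), hits("who")).
parser.setDefaultOperator(QueryParserBase.AND_OPERATOR);
Query conjunction = parser.parse("the who");

// Equivalent without changing the default operator:
Query explicit = parser.parse("+the +who");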
I am beginning with Elasticsearch and really like it; however, I am stuck on a quite simple scenario.
I am indexing the following structure of a Worker:
NAME SURENAME ID AGE SEX NAME_SURENAME BIRTH_DATE
NAME_SURENAME - not analyzed - this field is indexed for grouping purposes
NAME, SURENAME - analyzed
The task is simple: find 5 unique workers sorted by birth_date (unique means the same name and surename, even if they are of different ages and are actually different people).
I read about aggregation queries, and as I understand it, I can only get aggregations without documents. Unfortunately, I aggregate by name and surename, so I won't have other fields in the bucket results, such as at least the document ID field. But I also read that the TopHits aggregation returns documents, so I tried it - that is the second idea below.
I have two ideas:
1) Do not use aggregations: just search for 5 workers, filter duplicates in Java, then search again and filter duplicates in Java, until I reach 5 unique results.
2) Use aggregations. I even tried it as shown below, and it works on test data, but since this is my first time, please advise whether it works accidentally or is actually done correctly. Generally, I thought I could get 5 buckets, each with one TopHits document. I have no idea how the TopHits document is chosen, but it seems to work. Below is the code:
String searchString = "test";
BoolQueryBuilder query = boolQuery().minimumNumberShouldMatch(1)
        .should(matchQuery("name", searchString))
        .should(matchQuery("surename", searchString));
TermsBuilder terms = AggregationBuilders.terms("namesAgg").size(5);
terms.field("name_surename");
terms.order(Terms.Order.aggregation("birthAgg", false))
        .subAggregation(AggregationBuilders.max("birthAgg").field("birth_date"))
        .subAggregation(AggregationBuilders.topHits("topHit").setSize(1).addSort("birth_date", SortOrder.DESC));
SearchRequestBuilder searchRequestBuilder = client.prepareSearch("workers")
        .addAggregation(terms).setQuery(query).setSize(1)
        .addSort(SortBuilders.fieldSort("birth_date").order(SortOrder.DESC));
Terms aggregations = searchRequestBuilder.execute().actionGet().getAggregations().get("namesAgg");
List<Worker> results = new ArrayList<>();
for (Terms.Bucket bucket : aggregations.getBuckets()) {
    Optional<Aggregation> first = bucket.getAggregations().asList().stream()
            .filter(aggregation -> aggregation instanceof TopHits).findFirst();
    SearchHit searchHitFields = ((TopHits) first.get()).getHits().getHits()[0];
    Transformer<SearchHit, Worker> transformer = transformers.get(Worker.class);
    Worker transform = transformer.transform(searchHitFields);
    results.add(transform);
}
return results;
I am using Elasticsearch with Titan. How can I do pagination in ES with Titan?
I saw THIS and so was trying the following:
Iterable<Result<Vertex>> vertices = g.indexQuery("search", "v.testTitle:(mytext)")
        .addParameter(new Parameter("from", 0))
        .addParameter(new Parameter("size", 2)).vertices();
for (Result<Vertex> result : vertices) {
    Vertex tv = result.getElement();
    System.out.println(tv.getProperty("testTitle") + ": " + result.getScore());
}
The thing is, it returns all 4-5 records, not just 2 as requested by the size parameter.
Parameters are not yet supported; the method only exists for future implementations.
However, you can currently limit your result. The following code should work:
Iterable<Result<Vertex>> vertices = g.indexQuery("search", "v.testTitle:(mytext)")
        .limit(2).vertices();
for (Result<Vertex> result : vertices) {
    Vertex tv = result.getElement();
    System.out.println(tv.getProperty("testTitle") + ": " + result.getScore());
}
...but you can't specify an offset.
Cheers,
Daniel
I don't know anything about Titan, but for implementing pagination in Elasticsearch you can use the scroll API. It helps a lot and works like a database cursor; it also reduces CPU usage considerably.
Refer to http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-scroll.html
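For reference, a minimal sketch of the scroll pattern with the (pre-5.x) Elasticsearch Java TransportClient, assuming an already-connected Client instance; the index and field names are illustrative, and the page size of 2 mirrors the question above:

SearchResponse response = client.prepareSearch("search")
        .setQuery(QueryBuilders.matchQuery("testTitle", "mytext"))
        .setScroll(new TimeValue(60000))   // keep the scroll context alive for 60 seconds
        .setSize(2)                        // page size per round trip
        .execute().actionGet();

while (response.getHits().getHits().length > 0) {
    for (SearchHit hit : response.getHits().getHits()) {
        System.out.println(hit.getSourceAsString());
    }
    // fetch the next page using the scroll id returned by the previous call
    response = client.prepareSearchScroll(response.getScrollId())
            .setScroll(new TimeValue(60000))
            .execute().actionGet();
}

Note that scroll is meant for walking through a large result set, not for jumping to an arbitrary page; for shallow paging, plain from/size is usually sufficient.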
I was used to the traditional way of doing database searching, namely:
using wildcards for term searches
using WHERE clauses for specific data like addresses and names
But at other times I found these common methods produce code that is bloated, especially when it comes to complex searches.
Are there algorithms out there that you use for complex database searching? I tried to look for some but had a hard time doing so. I stumbled across binary search, but I can't find a use for it here :(
EDIT: Here's pseudocode for a search I was working on. It uses jQuery range sliders for minimum and maximum price searching:
query = 'select * from table'
if set minprice and not set maxprice
    if minprice = 'nomin'
        query += ' where price < maxprice'
    else
        query += ' where price < maxprice and price < minprice'
if not set minprice and set maxprice
    if maxprice = 'nomax'
        query += ' where price > minprice'
    else
        query += ' where price > minprice and price < maxprice'
if set maxprice and set minprice
    if maxprice = 'nomax'
        query += ' where price > minprice'
    else
        query += ' where price > minprice and price < maxprice'
This may not be the exact codebase you base your answers on; I'm looking for more elegant ways of doing database searching.
EDIT: By elegant I mean ways of rewriting the code to achieve faster queries in fewer lines of code.
Alright, I'm still not very clear on what you want, but I'll give it a shot...
If you're trying to speed up the query, you don't need to worry about "improved algorithms". Just make sure that any columns that you're searching on (price in your example) have an index on them, and the database will take care of searching efficiently. It's very good at it, I promise.
As for reducing the amount of code, again, I can't speak for every case, but your above pseudocode is bloated because you're handling the exact same case multiple times. My code for something like that would be more like this (pseudocode, no particular language):
if (set(minprice) and minprice != 'nomin')
    conditions[] = 'price > minprice'
if (set(maxprice) and maxprice != 'nomax')
    conditions[] = 'price < maxprice'

query = 'select * from table'
if (notempty(conditions))
    query += ' where ' + conditions.join(' and ')
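To make that concrete, here is a Java/JDBC sketch of the same condition-collecting idea, assuming an open java.sql.Connection named connection; the table name, column, and minPrice/maxPrice variables are illustrative. Binding values through a PreparedStatement keeps the WHERE clause and its parameters in sync:

// Collect the conditions and their bound values separately, then join them.
List<String> conditions = new ArrayList<>();
List<Object> params = new ArrayList<>();

if (minPrice != null) {                  // the 'nomin' case maps to a null here
    conditions.add("price > ?");
    params.add(minPrice);
}
if (maxPrice != null) {                  // the 'nomax' case maps to a null here
    conditions.add("price < ?");
    params.add(maxPrice);
}

String sql = "SELECT * FROM products";
if (!conditions.isEmpty()) {
    sql += " WHERE " + String.join(" AND ", conditions);
}

try (PreparedStatement stmt = connection.prepareStatement(sql)) {
    for (int i = 0; i < params.size(); i++) {
        stmt.setObject(i + 1, params.get(i));   // JDBC parameters are 1-based
    }
    try (ResultSet rs = stmt.executeQuery()) {
        // iterate over the rows here
    }
}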
Remember, the speed of a query is not just about the query itself; it also greatly depends on how the database is structured. Is this a standard relational layout, or a star schema, or something else? Are your keys indexed, and do you have secondary indexes? Are you expecting to bring back a lot of data, or just a couple of rows? Are you searching on columns where the database has to do a text search, or on numeric values? And on top of that, how is the database physically laid out - indexes and heavily hit tables on separate drives, and so forth? As the previous answers mentioned, a specific example would be more helpful in trying to solve this.
When interfacing with a database, you're far better off with a complex and ugly query than with an 'elegant' query which has you duplicating database search functionality inside your application. Each call to the database has a cost associated with it. If you write code to search a database within your application, it's virtually guaranteed to be more expensive.
Unless you are actually writing a database (tall order), let the database do the searching.
Try to focus on reorganizing your query-building process:
query = select + ' where ' + filter1 + filter2

select = 'select * from table'

filter1 = ''
if set minprice and minprice != 'nomin'
    filter1 = 'price > minprice'

and so on, until you have built the full query:

query = select
if any filter is set
    query += ' where '
    first = true
    if set filter1
        if not first
            query += ' and '
        query += filter1
        first = false

and so on...
You can also put your filters in an array; it makes the code more scalable.
The major problem with your code is that it unnecessarily enumerates every possible combination of set(minprice) and set(maxprice), while the two can be handled independently:
query = 'select * from table'
conditions = []  # array of strings representing conditions
if set(minprice):
    conditions.append("price > minprice")
if set(maxprice):
    conditions.append("price < maxprice")
if len(conditions) > 0:
    query += ' WHERE ' + " and ".join(conditions)
In general, it is beneficial to separate the generation of conditions (the if set(...) lines above) from the building of the actual query. This way you don't need a separate if to generate (or skip) an "AND" or "WHERE" before each generated condition; instead, you handle it in one place (the last two lines above), adding the infixes as necessary.