Passing parameters to a Couchbase view

I'm looking to search for a particular JSON document in a bucket and I don't know its document ID; all I know is the value of one of the sub-keys. I've looked through the API documentation but I'm still confused when it comes to my particular use case:
In mongo I can do a dynamic query like:
bucket.get({ "name" : "some-arbritrary-name-here" })
With couchbase I'm under the impression that you need to create an index (for example on the name property) and use startKey / endKey but this feels wrong - could you still end up with multiple documents being returned? Would be nice to be able to pass a parameter to the view that an exact match could be performed on. Also how would we handle multi-dimensional searches? i.e. name and category.
I'd like to do as much of the filtering as possible on the couchbase instance and ideally narrow it down to one record rather than having to filter when it comes back to the App Tier. Something like passing a dynamic value to the mapping function and only emitting documents that match.
I know you can use LINQ with Couchbase to filter, but if I've read the docs correctly this filtering is still done client-side. Still, if we could narrow down the returned dataset to a sensible subset, client-side filtering wouldn't be such a big deal.
Cheers

So you are correct on one point: you need to create a view (an index, indeed) to be able to query on the content of the JSON document.
So in your case you have to create a view with this kind of code:
function (doc, meta) {
  if (doc.type == "yourtype") { // just a good practice: give each document a type
    emit(doc.name);
  }
}
This will create an index - distributed on all the nodes of your cluster - that you can now use in your application. You can point to a specific value using the "key" parameter.
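For example, assuming a design document named dev_docs containing a view named by_name built from the map function above (both names are just placeholders here), you could query the view REST API (port 8092 by default) and get back only the documents whose name is an exact match:
http://localhost:8092/your-bucket/_design/dev_docs/_view/by_name?key=%22some-arbritrary-name-here%22
The key parameter matches the emitted key exactly, so you don't have to resort to startkey/endkey ranges for this kind of lookup.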

Related

Custom query with deepset.ai Haystack

I am exploring deepset Haystack and found it very interesting for multiple use cases like a chatbot, a search engine, document search, etc.
But I have not found any reference where I can create multiple indexes for different documents and search based on those indexes. I thought of using meta tags for a conditional search (on a particular area) by tagging the documents first and then using the params parameter of the query API, but that doesn't seem to work and throws an error (I used its vanilla docker-compose based setup).
You can indeed use multiple indices in the same document store if you want to support multiple use cases. The write_documents method of the document store has an index parameter so that you can store documents for your different use cases in different indices. In the same way, you can pass an index parameter to the query method.
As you expected, there is an alternative solution that uses the meta field of documents. However, the format needs to be slightly different. Your query needs to have the following format:
{"query": "What's the capital town?", "params": {"filters": {"name": "75_Algeria75.txt"}}}
and your documents need to have the following format:
{"text": "Algeria is...", "meta": {"name": "75_Algeria75.txt"}}
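As a minimal sketch, assuming the vanilla docker-compose setup exposes the Haystack REST API on localhost:8000 with a /query route (the port and route are assumptions here, adjust them to your deployment), the filtered query above could be sent like this:
curl -X POST "http://localhost:8000/query" \
     -H "Content-Type: application/json" \
     -d '{"query": "What'\''s the capital town?", "params": {"filters": {"name": "75_Algeria75.txt"}}}'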

Couchdb get the changed document with each change notification

I want to be notified with the inserted document on each insertion into CouchDB,
something like this:
http://localhost:5058/db-name/_changes/_view/inserted-document
And I'd like the response to be something like the following:
{
  "id": "0552065465",
  "name": "james",
  ...
}
Going back to the database to fetch the actual document for each notification can cause performance issues.
Can I define a view that returns the actual document with each change?
There are three possible ways to determine whether a document was just added:
1) You add a status field to your document with a specific status for new documents.
2) You check whether the revision starts with 1-, but this is not 100% accurate if you do replication.
3) In the changes response, you check whether the number of revisions of the document is equal to one. If so, it means it was just added (best solution IMO).
If you want to query the _changes endpoint and directly get the newly inserted documents, you can use approach #1 with a filter function that only returns documents with status="new".
Otherwise, you should go with approach #3 and filter the _changes responses locally, e.g. your application would receive all changes and only handle documents whose revisions array count is equal to 1.
And as you mentioned, you want to receive the document, not only the _id and the _rev. To do so, you can simply add the query parameter: include_docs=true
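As a minimal sketch of approach #1 (the design document name notifications and the filter name only_new are hypothetical), the filter function could look like this:
function (doc, req) {
  // only pass documents that are flagged as new
  return doc.status === "new";
}
and the corresponding request (against the default CouchDB port) would be:
http://localhost:5984/db-name/_changes?filter=notifications/only_new&include_docs=true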

Give an advantage to phrase matches when sorting in SOLR

Search query which I send to SOLR is:
?q=iphone 4s&sort=sold desc
By default the search works great, but the problem appears when I want to sort the results by some field, e.g. sold (the number of products sold).
SOLR finds all the results which match: (iphone 4s) or (iphone) or (4s)
So, when I apply a sort by the field 'sold', the first result is "iPhone 3GS...", which is a problem.
I need the results by phrase ("iphone 4s") first and then the rest of the results - all sorted by sold.
So, the questions are:
Is it possible to have a query like this, and how?
q=iphone 4s&sort={some algorithm for phrase results first} desc, sold desc
Or, can I achieve this by setting up a query analyzer, and how?
At the moment this is solved by sending 2 requests to SOLR: first with the phrase "iphone 4s" and, if this returns 0 results, I perform a second request without the phrase, just: iphone 4s.
If sorting by score, id, or a field is not sufficient, Lucene lets you implement a custom sorting mechanism by providing your own subclass of the FieldComparatorSource abstract base class.
Within that custom sort logic, you can implement whatever realizes your requirements.
Example Java code:
if (modelNum1.equals(modelNum2)) {
    // return based on the number of units sold
} else {
    // ALWAYS return a value such that the preferred model beats the others
}
DISCLAIMER: This may lead to maintenance problems as you will have to change the logic when a new phone model arrives.
Steps:
1) A Sort object accepts a FieldComparatorSource instance during instantiation (see the sketch below).
2) Extend FieldComparatorSource.
3) You have to load the required field information that participates in sorting via the FieldCache, inside setNextReader() of your comparator.
4) Override FieldComparatorSource.newComparator() to return your custom FieldComparator.
5) In the method FieldComparator.compare(slot1, slot2), you can include your custom logic by accessing the corresponding field information, via the loaded FieldCache, for the documents copied into those slots.
Incorporating Lucene code into Solr as a plug-in should not trouble you.
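For step 1, a minimal sketch of the wiring (Lucene 3.x-era API from org.apache.lucene.search; PreferredModelComparatorSource is a hypothetical subclass you would write following steps 2-5):
// sort first by the custom "prefer this exact model" comparator, then by units sold descending
Sort sort = new Sort(
    new SortField("product_name", new PreferredModelComparatorSource("iphone 4s")),
    new SortField("sold", SortField.LONG, true));
TopDocs hits = searcher.search(query, 100, sort);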
EDIT:
Note: you cannot use a space in that function; the term must be a single token without spaces.
As of Solr 3.1, sorting can also be done on arbitrary function queries
(as in FunctionQuery) that produce a single value per document.
So, I will use the function termfreq in the sort.
termfreq(field,term) returns the number of times the term appears in
the field for that document.
Search query will be
q=iphone 4s&sort=termfreq(product_name,"iphone 4s") desc, sold desc
Note: the termfreq function is available from Solr 4.0 onward.

Lucene filter with docIds

I'm trying to do the following: I want to create a set of candidates by querying each field separately and then adding the top k matches to this set. After I'm done with that, I need to run another query on this candidate set.
The way I implemented it right now is using a QueryWrapperFilter with a BooleanQuery that matches the unique id field of each candidate document. However, this means I have to call IndexSearcher.doc().get("docId") for each candidate document before I can add it to my BooleanQuery, which is the major bottleneck. I'm only loading the docId field via MapFieldSelector("docId").
I wanted to create my own Filter class, but I can't use the internal Lucene doc ids directly, because they are specified per segment. Any thoughts on how to approach this?
Instead of reading the stored docId, index the field (it probably already is indexed) and use the FieldCache to retrieve docIds much faster. Then, instead of using the docIds in a BooleanQuery, try using a TermsFilter or FieldCacheTermsFilter. The documentation of the latter describes the performance trade-offs.
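A minimal sketch of the FieldCacheTermsFilter approach (Lucene 3.x/4.x-era API from org.apache.lucene.search; the candidate ids are placeholders, and the "docId" field must be indexed as a single, untokenized term for the FieldCache lookup to work):
// ids of the candidate documents collected from the per-field queries
String[] candidateIds = { "id-1", "id-2", "id-3" };
// filter backed by the FieldCache; matches documents whose indexed "docId" is in the set
Filter candidateFilter = new FieldCacheTermsFilter("docId", candidateIds);
// run the second query restricted to the candidate set
TopDocs hits = searcher.search(secondQuery, candidateFilter, 100);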

CouchDB Views: created_at greater than a passed value

I'm trying to write a couchdb view that takes a created_at timestamp in a sortable format (2009/05/07 21:40:17 +0000) and returns all documents that have a greater created_at value.
I'm specifically using couch_foo but if I can figure out how to write the view I can create it in futon or in the couch_foo model instead of letting couch_foo do it for me.
I've searched all around and can't figure out the map/reduce to do this, if it's possible.
This is the kind of problem I ran into initially before I fully understood how views work.
The key to understanding this is that the view is only run once for each (revision of a) document. In other words, when you query a view, you don't run the function, you simply look up the results from when the function ran. As such, there is no way to pass any user-submitted parameters into a view.
How then to compare a value in a view with a user-submitted value? The secret is to emit that field as the key in the map function and rely on CouchDB ordering by the keys.
Your map function would be something like
"map" : "function(doc) { emit(doc.created_at, doc); }"
and you would query it like so:
http://localhost:5984/db/_design/ddoc/_view/view?startkey=%222009/05/07%2021:40:17%20%2B0000%22
I have taken the liberty of URI-encoding the quotes, spaces and the plus sign in the URL so that it should be usable as is.
You want to write a view that creates a key of the timestamp field in that format, then query it with the startkey parameter.
So the view would look something like:
"map" : "function(doc) { emit(doc.timestamp_field, doc) }"
And your URL would be something like:
http://mysever/database/_design/mydoc/_view/myview?startkey="2009/05/07 21:40:17 +0000"
The HTTP view API page on the Wiki has more info. You may also consider the User Mailing List.
Please mind that CouchDB compares the plain JSON values. If the timezone of the document stored in CouchDB is different from the timezone in your startkey, the query will likely not match as expected.
