Does Parse now support more than 1 geopoint per object? - parse-platform

It used to be the case, and the documentation still says: Each PFObject class may only have one key with a PFGeoPoint object.
But in my tests today, I created an object with two GeoPoint columns, was able to query on either GeoPoint, and was able to modify and save either one. Previously, this would produce an error like: only 1 ParseGeoPoint object can be stored in a class.
Is this really supported now?
Some additional info: I first have to create the two GeoPoint columns in the data browser. If they don't exist and my iPhone code tries to save an object with two GeoPoints, then I get the "only one GeoPoint field may exist in an object" error. But as long as the two columns exist, my client code appears to be able to use both.

As of July 2015, Parse still does not support more than one GeoPoint column on a class. They have, however, fixed the Data Browser to prevent users from creating two GeoPoint columns.

Got this response from Parse (in the Google Group forum):
Hmm, that sounds like a problem with the data browser's mechanism of altering the schema. Could you report a bug? I would not recommend using objects created in this way - the underlying data store can only index one geopoint field per object, so whichever field gets indexed second just will have the index fail and you won't be able to run queries against it.

The solution is to put the second GeoPoint (which you will not be able to search on) into a singleton array.
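As a sketch of what that workaround looks like on the wire (field names here are illustrative, and this shows the REST/JSON representation rather than any particular SDK): the primary point stays a real GeoPoint column, while the second point is wrapped in a one-element array, so its column type becomes Array and the one-GeoPoint-per-class limit is not triggered.

```python
# Hypothetical object layout: "location" is the indexed, geo-queryable
# GeoPoint; "altLocation" hides a GeoPoint inside a singleton array.
def build_place(name, primary, secondary):
    """primary/secondary are (lat, lng) tuples."""
    return {
        "name": name,
        # Real GeoPoint column -- the only one Parse will index for geo queries.
        "location": {"__type": "GeoPoint",
                     "latitude": primary[0], "longitude": primary[1]},
        # Second point in a singleton array: storable, but not geo-queryable.
        "altLocation": [{"__type": "GeoPoint",
                         "latitude": secondary[0], "longitude": secondary[1]}],
    }

place = build_place("HQ", (40.0, -74.0), (41.0, -73.0))
```

You can still read and write `altLocation[0]` normally; you just can't run `whereKey:nearGeoPoint:`-style queries against it.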

Related

Elasticsearch: Add field on the fly like driving distance by a user searching

Is there a way to dynamically append the driving distance from the CONSUMER to each result in ES?
I am trying to create an app that sorts PROVIDERS by driving distance.
One approach I can think of is to get all the PROVIDERS within a range, put them in an array, iterate to add the driving distance, and then sort the array. But this is not efficient.
Any suggestions?
Thank you!
You can use runtime fields to achieve what you want. A runtime field is a field that is evaluated at query time. Runtime fields enable you to:
- Add fields to existing documents without reindexing your data
- Start working with your data without understanding how it's structured
- Override the value returned from an indexed field at query time
- Define fields for a specific use without modifying the underlying schema
For more information you can check the Elasticsearch blog post here, and the runtime fields documentation here.
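A minimal sketch of what such a query body could look like, assuming an index with a `geo_point` field named `location` (both names are assumptions about your schema). The runtime field computes the straight-line distance to the consumer at query time; a true driving distance would still need an external routing service.

```python
# Build an Elasticsearch search body that defines a runtime field
# "distance_m" (arc distance in meters from the consumer's location)
# and sorts providers by it.
def distance_sort_query(consumer_lat, consumer_lon):
    return {
        "runtime_mappings": {
            "distance_m": {
                "type": "double",
                "script": {
                    "source": ("emit(doc['location']"
                               ".arcDistance(params.lat, params.lon))"),
                    "params": {"lat": consumer_lat, "lon": consumer_lon},
                },
            }
        },
        # Return the computed field alongside each hit.
        "fields": ["distance_m"],
        "sort": [{"distance_m": "asc"}],
    }

body = distance_sort_query(40.7128, -74.0060)
```

Note that for plain straight-line sorting, a `_geo_distance` sort would also work without runtime fields; runtime fields earn their keep when you want the value returned per hit or reused in further scripting.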

Deleting documents not belonging to an index

I have been evaluating Elasticsearch 5.1.1. My data upload happens via NEST. I used two different types and different index names while testing. Now that I have a better understanding of the API, I have settled on a type. I deleted all the indices and created a new one.
My documents have their own ID, and I have fluent code as follows:
config.InferMappingFor<SearchFriendlyIssue>(ib => ib.IdProperty(p => p.Id));
When I upload documents, the API comes back with "Updated". This is strange, since I just created a new index. What is worse, my new index only contains one document. What I expected is a Created response. The code to add data is as per the API documentation:
var searchObject = new SearchFriendlyIssue(issue);
var response = Client.Index(searchObject, idx => idx.Index(Index));
Console.WriteLine(response.Result.ToString());
I think I am missing something about how types and indices interact. How do I get rid of my unreachable documents? More specifically, how do I get them into my index so they can be deleted or dealt with?
Looks like my assumption that I had unreachable documents was wrong. Instead, the declaration for the ID property wasn't working, and I was overwriting the same document over and over again. My bad!
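The failure mode described above can be sketched without Elasticsearch at all: when the ID resolver returns the same value for every document, each index call is an upsert onto the same key, so the first call reports "created" and every later one "updated", leaving exactly one document behind.

```python
# Toy model of index-by-id semantics: a dict stands in for the index.
def index_doc(store, doc, id_getter):
    doc_id = id_getter(doc)
    result = "updated" if doc_id in store else "created"
    store[doc_id] = doc
    return result

store = {}
# A broken IdProperty mapping behaves like an id_getter that returns
# a constant (here: None) instead of each document's real Id.
broken_id = lambda doc: None
results = [index_doc(store, {"n": i}, broken_id) for i in range(3)]
# results == ["created", "updated", "updated"]; len(store) == 1
```

This matches the symptoms in the question: an "Updated" response on a freshly created index, and a single surviving document.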

ES custom dynamic mapping field name change

I have a use case which is a bit similar to the ES example of dynamic_template where I want certain strings to be analyzed and certain not.
My document fields don't have such a convention and the decision is made based on an external schema. So currently my flow is:
I grab the input document from the DB
I grab the appropriate schema (same database, currently using Logstash for import)
I adjust the name in the document accordingly (using Logstash's Ruby mutator):
if not analyzed I don't change the name
if analyzed I change it to ORIGINALNAME_analyzed
This handles the analyzed/not_analyzed problem thanks to the dynamic_template I set, but now the user doesn't know which fields are analyzed, so there's no easy way for him to write queries: he doesn't know the actual name of the field.
I wanted to use field name aliases but apparently ES doesn't support them. Are there any other mechanisms I'm missing I could use here like field rename after indexation or something else?
For example, this ancient thread mentions that field.sub.name can be queried as just name, but I'm guessing this changed when dots were disallowed in field names some time ago, since I cannot get it to work.
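For context, a dynamic_template along the lines described could look like the sketch below (the template names are made up, and the `text`/`keyword` types assume a modern ES version; on older versions the equivalent would be `string` with/without `not_analyzed`):

```python
# Mapping sketch: names ending in "_analyzed" become analyzed text;
# every other string field stays a non-analyzed keyword.
mapping = {
    "mappings": {
        "dynamic_templates": [
            {"analyzed_strings": {
                "match": "*_analyzed",
                "match_mapping_type": "string",
                "mapping": {"type": "text"},
            }},
            {"raw_strings": {
                "match_mapping_type": "string",
                "mapping": {"type": "keyword"},
            }},
        ]
    }
}
```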
Let the user only create queries with the original name. I believe you have some code that converts this user query to an Elasticsearch query. When converting, instead of using the field name provided by the user alone, use both field names: ORIGINALNAME as well as ORIGINALNAME_analyzed. If you are using a match query, convert it to multi_match. If you are using a term query, convert it to a bool should query. I guess you get where I am going with this.
Elasticsearch won't mind if a field does not exist. This can be a problem if there is already a field with _analyzed appended in its original name, but with some tricks that can be fixed too.
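The match-to-multi_match conversion above can be sketched as a small query-builder (the `_analyzed` suffix follows the naming convention from the question; everything else is an assumption about your translation layer):

```python
# Rewrite a user's match query on one logical field into a multi_match
# over both the raw field and its "_analyzed" twin.
def to_multi_field_query(field, text):
    return {
        "query": {
            "multi_match": {
                "query": text,
                # Elasticsearch silently ignores whichever of these
                # two fields doesn't exist in the mapping.
                "fields": [field, f"{field}_analyzed"],
            }
        }
    }

q = to_multi_field_query("title", "fast search")
```

The same trick applied to a term query would wrap two term clauses in a `bool` / `should`.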

Query is not understandable - using field Fulltext search [Tags] = "foo"

I have a problem that only happens rarely with FT search, but once it happens it stays. I use the following search term in the FT search box in Lotus Notes:
[Tags] = "foo"
In most applications this search term works fine, but for some applications it gives the error "query is not understandable".
It does not matter if I replace the value: e.g. [Tags] = "boo" produces the same result, and so does FIELD Tags = "boo". For the record, [Tag] = "foo" works fine, so it seems to be an issue with the field or field name.
The problem only happens in some applications. Once it starts happening, no views can be searched using that search query, and I get the error message every time I search.
It does not help to remove, compact and re-create the FT index.
I get the same error in xpages when using the same search query in a view data source.
I have seen this problem with other field names as well in other applications.
If I remove the FT index, the search query works.
Creating a new copy of the "broken" database does not resolve the problem.
I tried having only one document in the database and creating a new FT index. The document in the view does not have the field "Tags"; still not working. (There are other forms in the db with the field name "Tags".)
This is a real showstopper for me, as I have built some of my XPages on search values from specific fields.
In my own investigation of this problem, I think it has to do with some sort of bug in the FT index. There seems to be some data contained in documents or forms that causes the FT index to not work correctly.
I am looking for a solution to this problem, as I have not found a way to repair the index once it has become broken.
Update:
It does not help to follow this procedure
https://www-304.ibm.com/support/docview.wss?uid=swg21261002
Here is my debug info
[1078:0002-2250] IN FTGSearch
[1078:0002-2250] option = 0x400219
[1078:0002-2250] Query: ( FIELD Tags = "foo")
[1078:0002-2250] OUT FTGSearch error = F09
[1078:0002-2250] FTGSearch: found=0, returned=0, start=0, count=0, limit=0
It sounds like you need to fix the UNK table with a compact. Here is the listing of compact options; use a copy-style compact, not in-place.
http://www-01.ibm.com/support/docview.wss?uid=swg21084388
If the Tags field is sometimes numeric, I would advise looking at the database design. The UNK table is a table of all fields in the NSF. The first time a field name is used, it is stored in the UNK table with that data type, and full-text searching uses that data type and only that data type. If you have a field Tags on more than one form in a database, once numeric and once text, you're in for big trouble with full-text searches: the data type used in searches will depend on the data type of the field in the first document saved that had that field. Even if you delete all documents that have it as numeric, you won't change the UNK table without the compact. Sounds like that's what you have here. Ensure the database never stores Tags as numeric, delete or change all docs where it is stored as numeric, then compact.
Thank you all for answering. I learned a whole lot about UNK tables and FT index today.
The problem was that I had a numeric field called "Tags" in a form that I hadn't looked at and really didn't think that it would contain a field by that name.
After using the DDE search I found all instances of the Tags field and could easily locate the problem form. I renamed the field in the form, removed the FT index, used compact -c, and recreated the FT index. Now everything is working fine.
One other thing to notice is that I have several databases with the same design but only a few of them had the FT index problem; the reason is probably that some of these databases were created after the form with the faulty Tags field was created.
I am so happy to have solved this.
Lessons learned: if you plan to use a full-text index in your application, make sure you do not use the same field name in different forms with different field types.
From now on I will probably use shared fields more :-)
One more thing we discovered: you actually do not need NotesPeek to find out which field type is stored in the UNK table. You can use the "Fields" button in the search bar; if you select the field and the right-hand box displays "contains", you know the UNK table has a text field type set.

Passing parameters to a couchbase view

I'm looking to search for a particular JSON document in a bucket and I don't know its document ID; all I know is the value of one of the sub-keys. I've looked through the API documentation but am still confused when it comes to my particular use case:
In mongo I can do a dynamic query like:
bucket.get({ "name" : "some-arbritrary-name-here" })
With couchbase I'm under the impression that you need to create an index (for example on the name property) and use startKey / endKey but this feels wrong - could you still end up with multiple documents being returned? Would be nice to be able to pass a parameter to the view that an exact match could be performed on. Also how would we handle multi-dimensional searches? i.e. name and category.
I'd like to do as much of the filtering as possible on the couchbase instance and ideally narrow it down to one record rather than having to filter when it comes back to the App Tier. Something like passing a dynamic value to the mapping function and only emitting documents that match.
I know you can use LINQ with couchbase to filter but if I've read the docs correctly this filtering is still done client-side but at least if we could narrow down the returned dataset to a sensible subset, client-side filtering wouldn't be such a big deal.
Cheers
So you are correct on one point: you need to create a view (an index, indeed) to be able to query on the content of the JSON document.
So in your case you have to create a view with this kind of code:
function (doc, meta) {
  if (doc.type == "yourtype") { // just a good practice to type the doc
    emit(doc.name, null);
  }
}
This will create an index, distributed across all the nodes of your cluster, that you can now use in your application. You can point to a specific value using the "key" parameter.
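As a sketch of what that query looks like against the view REST API (the design-document and view names here are assumptions; port 8092 is the default view port): the `key` parameter must be JSON-encoded, so a string key needs its surrounding quotes. For a multi-dimensional search like name and category, emit a composite key, e.g. `emit([doc.name, doc.category], null)`, and pass a JSON array as the key.

```python
import json
from urllib.parse import urlencode

# Build the URL for an exact-match view query; "places"/"by_name"
# are hypothetical design-doc and view names.
def view_url(host, bucket, ddoc, view, key):
    params = urlencode({"key": json.dumps(key)})
    return (f"http://{host}:8092/{bucket}/_design/{ddoc}"
            f"/_view/{view}?{params}")

url = view_url("localhost", "default", "places", "by_name",
               "some-arbritrary-name-here")
```

Because the view emits one row per matching document, an exact `key` match returns only the documents whose name equals the value, with no client-side filtering needed.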