Why do I sometimes get embedded attributes and sometimes not? - parse-platform

I'm using parse-platform with React-Redux and thunk.
If I make a query for objects that have pointers as attributes, sometimes I get the embedded attributes of the pointer and sometimes I don't. Is this normal behaviour? It means making many more queries for the pointer attributes, because you can't be confident they will be in the original results.

As you correctly state, sometimes Parse fetches the objects behind an object's pointers and sometimes it doesn't.
To ensure that a query brings back the pointed-to objects with all their attributes, use the include method, e.g. query.include('pointerAttribute').
To ensure that only certain attributes are fetched and the rest are not, use the select method, e.g. query.select('attribute1toselect', 'attribute2toselect').
Using select is much better when you are not going to use all the attributes, as the response payload will be smaller and the frontend will have less data to deal with.
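For reference, a minimal sketch with the Parse JS SDK (assuming Parse is already initialized; the 'Post' class and its 'title', 'author' and 'name' attributes are placeholder names, not anything from your schema):

async function loadPosts() {
  const query = new Parse.Query('Post');
  // Ensure the object behind the 'author' pointer comes back fully fetched.
  query.include('author');
  // Optionally limit the payload to only the attributes you actually use.
  query.select('title', 'author');
  const posts = await query.find();
  posts.forEach((post) => {
    // Thanks to include(), the pointer is a complete object, not an unfetched stub.
    console.log(post.get('title'), post.get('author').get('name'));
  });
  return posts;
}

Inside a thunk you would typically await the find() and then dispatch the results.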

Related

Elasticsearch nest how to index polymorphically

OK, basically what I'd LIKE to do is have a "Searchable" interface that I implement on my entities, and have my repositories automatically call Index on save and handle the update/delete accordingly. This all currently works. Ultimately I'd like to search against all these indices and be able to give some sort of indicator of the type itself.
When I try to query them all back out... I use code that looks like this:
eclient.Search<Searchable>(s => s.AllIndices().Query(q => q.QueryString(d => d.Query(query))))
I don't get anything back from this unless I explicitly specify the type I'd like to return.
Any pointers would be hugely appreciated. At this point my object model is changeable if interface/baseclass makes a difference etc.
Actually, I was able to determine that my problem was really with what I was doing with the results. JavaScriptSerializer didn't know how to properly serialize the objects, whereas Json.NET did a much better job.
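For reference, a rough sketch of that fix with Newtonsoft.Json (Json.NET), reusing the eclient and query names from the snippet above; serializing response.Documents is an assumption about what was being serialized:

using Newtonsoft.Json;

var response = eclient.Search<Searchable>(s => s
    .AllIndices()
    .Query(q => q.QueryString(d => d.Query(query))));

// Json.NET copes with the polymorphic documents far better than JavaScriptSerializer did.
string json = JsonConvert.SerializeObject(response.Documents, Formatting.Indented);
Console.WriteLine(json);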

How to check whether an instance of an ActiveRecord model is up to date?

For testing reasons, I want to check that one of my methods doesn't update a specific entry in my database. Is there a simple way to ask an instance of an ActiveRecord model whether it's in sync with the database? For instance, if we had a method foobar? that could do this:
old_post = Post.find(1)
updated_post = Post.find(1)
updated_post.update_attributes(name: "this is a new name not like the old name")
old_post.foobar? #should return true, as its attributes are no longer up to date
updated_post.foobar? #should return false, as its attributes match the database directly
So is there a method that acts like foobar, or something like it? Thanks in advance.
I think your problem lies not so much in finding a method that tells you whether an attribute has been updated, but in the relationship between the different objects that are instantiated. First, it is important to understand that old_post and updated_post are unrelated Ruby objects. They know how to save their own state to the database, but they do not know about each other.
Therefore your first requirement for foobar? cannot be fulfilled, as old_post will think it is up to date as long as none of its own attributes has been changed. In contrast, the changed? method will roughly answer in the way you are trying to achieve for updated_post. However, it does so because it thinks nothing has happened since it was last saved; this is not verified against the database on each call of changed?, as that would waste a database call in 99.9% of all cases.
This means it is all too easy to create anomalies between the objects, as there is no direct connection between the two (except the implicit connection that they once represented the same database row). If you change an attribute on one object (using e.g. the title= setter), it will change the value on that object and take note of the change in its changed array. Once you save the object, it will write its changed attributes to the database (via an individually constructed UPDATE statement).
Another object that is already instantiated (like old_post in your example) will not know about this change and might overwrite other attributes if you are not careful (or even the same ones if they are changed again). Depending on your database adapter, you may try the lock! method, which synchronizes the object with the database before allowing any modifications. This does not happen automatically because, in most controller methods, updates do not conflict nearly often enough to merit the synchronization; it would be a no-op in most cases.
That said, Rails cannot save you from thinking about your transaction semantics if you want to guarantee specific ACID properties for your controller methods.
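To illustrate the above, a rough sketch using standard ActiveRecord methods (changed? and reload are built in; the stale? helper is an ad-hoc example, not part of Rails):

old_post     = Post.find(1)
updated_post = Post.find(1)
updated_post.update_attributes(name: "this is a new name not like the old name")

old_post.changed?      # => false -- no unsaved changes of its own, it is merely stale
updated_post.changed?  # => false -- its changes have already been persisted

# One explicit way to detect staleness: compare against a fresh copy from the database.
def stale?(record)
  record.attributes != record.class.find(record.id).attributes
end

stale?(old_post)      # => true
stale?(updated_post)  # => false

# Or simply re-synchronize the instance with the database:
old_post.reload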

Which is the most efficient way to access the value of a control?

Of the two choices I have to access the value of a control, which is the most efficient?
getComponent("ControlName").getValue();
or
dataSource.getItemValue("FieldName");
I find that on occasion the getComponent does not seem to return the current value, but accessing the dataSource seems to be more reliable. So does it make much difference from a performance perspective which one is used?
The dataSource.getValue seems to work everywhere that I have tried it. However, when working with rowData I still seem to need to do a rowData.getColumnValue("Something"). rowData.getValue("Something") fails.
Neither. The fastest syntax is dataSource.getValue("FieldName"). The getItemValue method is only reliable on the document data source. The getValue method, by contrast, is also available on view entries accessed via a view data source (although in that context you would pass it the programmatic name of a view column, which is not necessarily the same as a field name) and on any custom data sources that you develop or install (e.g. third-party extension libraries). Furthermore, it does automatic type conversion that you'd have to do yourself if you used getItemValue instead.
Even on very simple pages, dataSource.getValue("FieldName") is 5 times as fast as getComponent("id").getValue(), because, as Fredrik mentions, the latter first has to find the component and then ask it for its value... which, behind the scenes, just asks the data source anyway. So it will always be faster to ask the data source yourself.
NOTE: the corresponding write method is dataSource.setValue("FieldName", "NewValue"), not dataSource.replaceItemValue("FieldName", "NewValue"). Both will work, but setValue also does the same type conversion that getValue does, so you can pass it data that doesn't strictly conform to the old Domino Java API and it will usually figure out what the value needs to be converted to in order to be "safe" for Domino to store.
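For illustration, a small server-side JavaScript sketch; "document1" is assumed to be the name of your document data source, and the field and component names are placeholders:

// Preferred: read and write straight from the data source.
var value = document1.getValue("FieldName");
document1.setValue("FieldName", "NewValue");

// Works, but slower: locate the component first, then ask it for its value,
// which internally goes back to the data source anyway.
var sameValue = getComponent("inputText1").getValue();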
I would say that the most efficient way is to get the value directly from the datasource.
Because if you use getComponent("ControlName").getValue(); you do a get on the component first and then a getValue from that. So a single get from the data source is more efficient, if you ask me.

Join Two Objects for Custom Serialization?

In C# on .NET Framework 4, I have a List<ParentObj> and a List<ChildObj>. They can be joined on the JoinId property. A ParentObj will sometimes have 2 ChildObj matches, sometimes 10.
I would like to take each Parent and all Children and serialize to a single XML entity. I am having a hard time figuring out where to start, because I also need to serialize the objects in a custom way. Can I use Linq-to-XML in this case to get each object written correctly? XmlSerializer? Not sure.
Thanks.
Can I use Linq-to-XML in this case to get each object written correctly?
Yes. This is exactly what you need. A basic example:
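A minimal sketch, assuming ParentObj and ChildObj each expose JoinId plus a Name property (the Name properties and the XML element names are made up for illustration):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class ParentObj { public int JoinId { get; set; } public string Name { get; set; } }
class ChildObj  { public int JoinId { get; set; } public string Name { get; set; } }

static class Demo
{
    static void Main()
    {
        var parents  = new List<ParentObj> { new ParentObj { JoinId = 1, Name = "P1" } };
        var children = new List<ChildObj>
        {
            new ChildObj { JoinId = 1, Name = "C1" },
            new ChildObj { JoinId = 1, Name = "C2" }
        };

        // Group-join each parent to its children and shape the XML by hand,
        // which gives full control over the custom format.
        var doc = new XDocument(
            new XElement("Parents",
                from p in parents
                join c in children on p.JoinId equals c.JoinId into kids
                select new XElement("Parent",
                    new XAttribute("JoinId", p.JoinId),
                    new XAttribute("Name", p.Name),
                    kids.Select(c => new XElement("Child",
                        new XAttribute("Name", c.Name))))));

        Console.WriteLine(doc);
    }
}

Each Parent element ends up with its matching Child elements nested inside it, and you control every element and attribute name yourself.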
XmlSerializer?
This will work too, but this approach is older and I think it is less appropriate and more complicated in this case.

Displaying computed data with external dependencies

I'm building a report that needs to include an 'estimate' column, which is based on data that's not available in the dataset.
Ideally I'd like to be able to define a Java interface
public int getEstimate(int foo_id, int bar_id, int quantity);
where foo_id, bar_id and quantity are available in the row I want the estimate presented.
There will be multiple strategies for producing the estimate so it would be good to use an interface to allow swapping them when needed.
Looking at the BIRT docs, I think it's possible I ought to be using the event handler mechanisms, but those seem only to allow specifying a class to use, whereas I'd like to somehow inject a configured estimator.
A non-obfuscated example might be to say that I have a dataset which includes an IP address column, and I'd like to be able to use some GeoIP service to resolve the country from the IP address. In that case I'd have an interface public String getCountryName(String address) and the actual implementations may use MaxMind, a local cache or some other system.
How would I go about doing this?
Or.. would I be better off by writing a scripted data source that can integrate the computed data before delivering it to BIRT?
Or.. some sort of scripted data source that is then used to create a join data set?
I think a Scripted Data Source would work fine, but a Java-based event handler would be more straightforward. You can implement it as a simple POJO and get access to any and all of the complex objects and tools that will allow you to calculate your estimate. The simplest solution of all may simply be to add a calculated field to the data set.
When creating the calculated field, you can get pretty complex in terms of the scripting logic you can leverage to produce the resulting value. The nicest thing about this route is that all the other column values in the row (which I assume you need to calculate the estimate) are available via the Expression editor. You can also pull in complex objects (POJOs) to help with your calculations by using the "Packages" object (e.g. var red = new Packages.redwood.HelloWorld()).
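As an illustration, a sketch of such a calculated-field expression; the class name com.example.AverageCostEstimator is hypothetical and stands in for whichever implementation of your estimator interface is on the report's classpath:

// BIRT calculated-field expression (JavaScript). The other columns of the
// current row are available through row["..."].
var estimator = new Packages.com.example.AverageCostEstimator();
estimator.getEstimate(row["foo_id"], row["bar_id"], row["quantity"]);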
If you want to create the event handler class, here is what I would do. I would create a text object, bind its onCreate event to your POJO (by extending TextItemEventAdapter) and override the "onCreate" method. There you can do any work you want and, at the end, simply call text.setText(theEstimateResult); to make the estimate itself visible. As for accessing data values to do your calculations, you can get to those in the POJO too. I assume the estimate will be part of a larger table of values. You can access any specific row value via the reportContext.
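If you go the event-handler route, a rough sketch is below; TextItemEventAdapter, onCreate and setText come from the BIRT scripting API, while the nested Estimator interface mirrors the one in the question, and reading the row values from global variables is purely an assumption for illustration:

import org.eclipse.birt.report.engine.api.script.IReportContext;
import org.eclipse.birt.report.engine.api.script.eventadapter.TextItemEventAdapter;
import org.eclipse.birt.report.engine.api.script.instance.ITextItemInstance;

public class EstimateTextHandler extends TextItemEventAdapter {

    // The estimator interface from the question, nested here to keep the sketch self-contained.
    public interface Estimator {
        int getEstimate(int fooId, int barId, int quantity);
    }

    // Hypothetical placeholder strategy; swap in whichever implementation you have configured.
    private final Estimator estimator = (fooId, barId, quantity) -> fooId + barId * quantity;

    @Override
    public void onCreate(ITextItemInstance text, IReportContext reportContext) {
        // Assumption: the row values were stashed as global variables earlier in the
        // report (for example from a data item's onCreate script).
        int fooId    = (Integer) reportContext.getGlobalVariable("foo_id");
        int barId    = (Integer) reportContext.getGlobalVariable("bar_id");
        int quantity = (Integer) reportContext.getGlobalVariable("quantity");

        text.setText(String.valueOf(estimator.getEstimate(fooId, barId, quantity)));
    }
}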
Those are the two ideas I would give a try first. The computed column is the fastest to implement and the least likely to throw you a curve during deployment. Let me know which way you choose and we can hash it out further if needed.
