backbone.js: Retrieve a smaller version of a model when building a collection - performance

I'm trying to build an API to create a collection in Backbone. My model is called log and has these (shortened) properties (the format returned by getLog/<id>):
{
    'id': string,
    'duration': float,
    'distance': float,
    'startDate': string,
    'endDate': string
}
I need to create a collection because I have many logs and I want to display them in a list. The API for creating the collection (getAllLogs) takes 30 seconds to run, which is too slow. It returns the same format as the getLog/<id> API, but as an array with one element for each log in the database.
To speed things up, I rebuilt the API several times and optimized it as far as I could, but it still takes 30 seconds, which is too slow.
My question is whether it is possible to have a collection filled with model instances that contain not ALL the information, just the part needed to display the list. This would speed up loading the collection and displaying the list, while in the background I could keep loading the other properties, or load them only for the elements I really need.
In my case, the model would load only with this information:
{
    'id': string,
    'distance': float
}
and all other properties could be loaded later.
How can I do it? And is it a good idea anyway?
Thanks.

One way to do this is to use map to get the shortened model. Something like this will convert a Backbone.Collection "collection" with all properties to one with only "id" and "distance":
var shortCollection = new Backbone.Collection(collection.toJSON().map(function(x) {
    return { id: x.id, distance: x.distance };
}));
Here's a Fiddle illustration.
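If the bottleneck is the server response itself, another option is to point the collection at a lighter endpoint and lazily fetch the rest. The sketch below assumes a hypothetical /getAllLogsShort endpoint that returns only id and distance, alongside the existing getLog/<id>; it is an illustration, not the asker's actual API:

var Log = Backbone.Model.extend({
    urlRoot: '/getLog' // full details: GET /getLog/<id>
});

var Logs = Backbone.Collection.extend({
    model: Log,
    url: '/getAllLogsShort' // hypothetical lightweight list: only id + distance
});

var logs = new Logs();
logs.fetch(); // fast initial load, enough to render the list

// Later, e.g. when a list item is opened, pull in the remaining
// properties; fetch() merges them into the model already on screen.
var log = logs.get(someId); // someId comes from the clicked list item
log.fetch();

This keeps the list responsive while the heavier per-log data is loaded on demand.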

Related

readFragment to return all objects of a type

I'm using Apollo Client to request a very structured dataset from my server. Something like:
- Show
    id
    title
    ...
    - Seasons
        number
        - Episodes
            id
            number
            airdate
Thanks to normalization, my episodes are stored individually, but I cannot query them. For example, I would like to query all the episodes and then sort them by date to display what is coming next.
The only way I see is to either 'reduce' my show list to an array of episodes and then do the filtering, or to make a new query to the server.
But it would be so much faster if I could get a list of all Episodes from the cache.
Unfortunately, with readFragment you can only query one object by its id.
Question:
Is there a way to query the cache for all object of a defined type?
This answer is late, but it may help someone else: Apollo currently does not support this. Here is the issue on GitHub, which also includes a workaround:
https://github.com/apollographql/apollo-client/issues/4724#issuecomment-487373566
Here is the workaround, copied from @superandrew213:
const serializedState = client.cache.extract()

const typeNameItems = Object.values(serializedState)
    .filter(item => item.__typename === 'TypeName')
    .map(item => client.readFragment({
        fragmentName: 'FragmentName',
        fragment: Fragment,
        id: item.id,
    }))
Please note that this method is slow, especially if you have a large amount of normalized data.
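For reference, Fragment and 'FragmentName' in the snippet are placeholders for a fragment you define yourself. A minimal sketch of what that could look like for the Episode type from the question (the names here are illustrative):

import gql from 'graphql-tag'

// Hypothetical fragment listing the Episode fields we want back from the cache
const EpisodeFragment = gql`
  fragment EpisodeFields on Episode {
    id
    number
    airdate
  }
`

// Plugged into the workaround above:
//   .filter(item => item.__typename === 'Episode')
//   .map(item => client.readFragment({
//       fragmentName: 'EpisodeFields',
//       fragment: EpisodeFragment,
//       id: item.id,
//   }))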

How to correctly render referenced entities in a list when I have objects instead of numeric ids?

Right now, in order for the list to render properly, I need to pass in this kind of data:
row = {
    id: value,
    name: value,
    height: value,
    categories: [1, 2, 3, 4]
}
How can I adapt the code so that a list works with this kind of data?
row = {
    id: value,
    name: value,
    height: value,
    categories: [{id: "1"}, {id: "2"}, {id: "3"}, {id: "4"}]
}
When I try to do that, it seems that JSON.stringify is applied to the objects, so it tries to find a category with id [object Object].
I would like to avoid a per-case conversion of the data, as I do now.
It seems that I cannot do anything in my restClient, since the stringify has already been applied.
I have the same issue when I fetch just one data row, e.g. in Edit or Create: the categories ReferenceArrayInput is not populated when categories contains objects.
Have you tried using format?
https://marmelab.com/admin-on-rest/Inputs.html#transforming-input-value-tofrom-record
It might help transform your input value: format() converts the value coming from the record into the value the input expects, and parse() converts the input value back into the format your API expects.
If this does not work, then you will probably have to create a custom component based on ReferenceArrayInput.
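As a sketch of that idea (assuming ReferenceArrayInput forwards format/parse to the underlying field, and that your API expects the objects back on save), the two conversions could look like this:

import React from 'react';
import { ReferenceArrayInput, SelectArrayInput } from 'admin-on-rest';

// record value -> input value: [{id: "1"}, {id: "2"}] -> ["1", "2"]
const formatCategories = value => (value ? value.map(v => v.id) : []);

// input value -> record value: ["1", "2"] -> [{id: "1"}, {id: "2"}]
const parseCategories = value => (value ? value.map(id => ({ id })) : []);

export const CategoriesInput = props => (
    <ReferenceArrayInput
        source="categories"
        reference="categories"
        format={formatCategories}
        parse={parseCategories}
        {...props}
    >
        <SelectArrayInput optionText="name" />
    </ReferenceArrayInput>
);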

Within the MapReduce implementation, are reduce functions indexed similarly to map functions?

If I have a couple docs in Couch that look like this:
{
    "_id": "be890e3ee1457e920f12722c44001b0e",    // Or whatever auto ID
    "_rev": "7-74d1787aa3ca6d2526c4436577da660f", // Or whatever auto rev
    "type_": "count",
    "value": -1,
    "time": 1485759832925 // This is an Epoch time, the result of this JavaScript: var x = (new Date()).getTime(), that I calculate in the console just before saving the doc
}
And then I create a map function to retrieve these docs like so (that I run directly after creating a few docs):
function(doc) {
    if (doc.type_) {
        if (doc.time) {
            var datetime = (new Date()).getTime();
            var docTime = doc.time;
            var docAge = datetime - docTime;
            // Only emit docs younger than 1 minute
            if (docAge / 1000 <= 60) {
                emit(doc.time, docAge);
            }
        }
    }
}
I found that once the view is calculated, the docAge never changes and the docs are always emitted despite being 'too old'.
If you open a doc and re-save it, the view will NOT emit that doc (because it registers as a CouchDB update and by then the time value is too old), but other docs will not have been recalculated (i.e. the docAge for those docs is still the same).
From this I can see that views are incrementally updated to reflect changed docs, and, as I understand it, they are cached.
Question:
Where are these cached views stored?
Are group and reduce outputs recalculated from scratch every time the map function incrementally updates?
Your views are not being "cached" per se. The idea behind CouchDB views is that they are deterministic, and thus should not be influenced by anything beyond the document in question.
Using new Date() in your view means that you are bringing in an external resource (the clock) which means your view index will be computed in a way you aren't intending based on your question.
Your map function must deal in absolutes, so it should output the timestamp regardless of when your view index is rebuilt. From your application, you'll pass the time you want to query as a parameter to the view query.
For example, consider this view function:
function (doc) {
    if (doc.type_ && doc.time) {
        emit(doc.time);
    }
}
It will output the time for all your documents. Then, you will query the view passing in the expected timeframe.
?start_key=<timestamp from 1 minute ago>
Then you will get the documents whose timestamp falls in the last minute. You can include end_key to specify an upper limit.
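For example (a sketch, assuming the map function above is saved in a design document named logs under a view named by_time; the database name mydb is also made up), the application-side query could look like this:

// Compute the lower bound on the client and pass it as start_key.
var oneMinuteAgo = Date.now() - 60 * 1000;
var url = 'http://localhost:5984/mydb/_design/logs/_view/by_time' +
          '?start_key=' + oneMinuteAgo + '&include_docs=true';

fetch(url)
    .then(function (res) { return res.json(); })
    .then(function (body) {
        // body.rows contains only the docs whose emitted timestamp
        // falls within the last minute
        console.log(body.rows);
    });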
There's a bit of a mental hurdle to overcome with how MapReduce views in CouchDB are designed to work, so I would highly recommend their Guide to Views to get started. (In fact, their newest documentation is quite good, and I would recommend reading through all of it.)

Getting the objects with similar secondary index in Riak?

Is there a way to get all the objects, in key/value format, that share the same secondary index value? I know we can get the list of keys for one secondary index (bucket/{{bucketName}}/index/{{index_name}}/{{index_val}}), but my requirements are such that I need the objects themselves too. I don't want to perform a separate query for each key to get the object details if there is a way around it.
I am completely new to Riak and I am totally a front-end guy, so please bear with me if something I ask is of novice level.
In Riak, the better approach is sometimes to do separate lookups for each key. Coming from other databases this seems strange, and likely inefficient; however, you may find that an index query plus a bunch of single object gets is faster than a map/reduce that fetches all the objects in a single go.
Try both approaches and see which turns out fastest for your dataset. Variables that affect this include: the size of the data being queried, the size of each document, the power of your cluster, and the load the cluster is under.
Python code demonstrating the index and separate gets (if the data you're getting is large, this method can be made memory-efficient on the client, as you don't need to store all the objects in memory):
query = riak_client.index("bucket_name", 'myindex', 1)
query.map("""
    function(v, kd, args) {
        return [v.key];
    }"""
)
results = query.run()

bucket = riak_client.bucket("bucket_name")
for key in results:
    obj = bucket.get(key)
    # .. do something with the object
Python code demonstrating a map/reduce for all objects (returns a list of {key:document} objects):
query = riak_client.index("bucket_name", 'myindex', 1)
query.map("""
    function(v, kd, args) {
        var obj = Riak.mapValuesJson(v)[0];
        return [ {
            'key': v.key,
            'data': obj,
        } ];
    }"""
)
results = query.run()

How to set the same metadata across collections?

I am trying to set a product category on different collections, but only the last collection defined in docpad.coffee actually sets it when I try it like this:
firstCollection: ->
    @getCollection("html").findAllLive().on "add", (model) ->
        model.setMeta({category: 'first'})
secondCollection: ->
    @getCollection("html").findAllLive().on "add", (model) ->
        model.setMeta({category: 'second'})
document.category will be 'second' for all documents of each collection.
How do I set the same metadata individually per doc in a collection?
What problem are you trying to solve? Because your approach is not going to work. If you share what you're trying to do, we may be able to suggest an alternative approach.
Your current approach won't work because you are setting a metadata property named "category" that is a string. That metadata property lives on the documents in the collection and not on the collection itself.
Both collections are pointing at the same set of documents. Each individual document can only have a single value for that property. It can't be both 'first' and 'second'. The last one to set it wins, and in this case, the event that sets it to 'second' is happening last and so all of the documents have 'second' as the value for that metadata property.
Update: I found a better way to do this: model.setMetaDefaults({foo:'bar'})
For example, to create a blog collection with a default cssClass of post:
collections: {
    blog: function() {
        return this.getCollection("documents")
            .findAllLive({relativeOutDirPath: 'blog'}, [{filename: -1}])
            .on("add", function (model) {
                model.setMetaDefaults({'cssClass': 'post'})
            });
    }
},
This would go in your docpad.coffee file or, in my case, docpad.js.
See a working example with full context at https://github.com/nfriedly/nfriedly.com/blob/master/docpad.js#L72 (the collection is called "techblog" and starts around line 72).
