How are d3.map() values reset back to default? - d3.js

I am using d3.map() in an update pattern to map some values.
My code looks like this:
selectedNeighborhood.map(function(d) { rateById.set(d.endneighborhood, d.rides); });
My issue is that when I make a new selection and run the update again, instead of replacing the existing map with a new set of values, the map is expanded. I would like my map to reset back to its default state every time I run my update function. How do I go about this?
One working (but not clean) method is to set the map object equal to {}, then redefine the map altogether under the same variable name.

Resetting rateById to a new, blank map is not entirely unclean, but it could cause bugs if some object or function out there retains a reference to the map in a separate variable; that existing reference would not update to point to the newly created map.
Instead, you want to clear the map "in place" (i.e. mutate it, so that the var rateById continues to point to the same d3.map). You can do so by looping over its entries and removing them one by one:
rateById.forEach(function(key) { rateById.remove(key); });
As a side note: it's not a big deal, but using Array's map() purely for looping, as in selectedNeighborhood.map(...), instantiates and returns a new array of undefineds. If selectedNeighborhood were a giant array, this would be wasteful in terms of memory and CPU. Using selectedNeighborhood.forEach(...) achieves the same result without creating the new array, so it's more appropriate.
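The difference between reassigning and clearing in place can be sketched with a built-in ES2015 Map standing in for d3.map (the APIs differ slightly — d3.map uses remove() rather than delete() — but the reference semantics are the same):

```javascript
// Sketch: why clearing in place differs from reassignment.
// A built-in Map stands in for d3.map here.
const rateById = new Map([["a", 1], ["b", 2]]);

// Some other piece of code keeps its own reference to the same map:
const legendData = rateById;

// Reassigning (rateById = new Map()) would leave legendData pointing
// at the old, stale map. Clearing in place keeps both in sync:
rateById.forEach((value, key) => rateById.delete(key));

console.log(rateById.size);   // 0
console.log(legendData.size); // 0 — same object, still in sync
```

Both variables still point at the same (now empty) map, which is exactly what the in-place loop over entries achieves for d3.map.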

Related

How to add multiple nested object keys in Dexie?

I'm in a loop where I add several new keys (about 1 to 3) to an indexeddb row.
The dexie table looks like:
{
  event_id: <event_id>,
  <event data>,
  groups: {
    <group_id>: { group_name: <name>, <group data> }
  }
}
I add the keys using Dexie's modify() callback, in a loop:
newGroupNr++
db.event.where('event_id').equals(event_id).modify(x => x.groups[newGroupNr]=objData)
objData is a simple object containing some group attributes.
However, when I add two or three groups this way, only one group is actually written to the database. I've tried wrapping the calls in a transaction(), but no luck.
I have the feeling that the modify() calls overlap each other, since they run asynchronously. I'm not sure whether this is true, nor how to deal with this scenario.
Dexie modify():
https://dexie.org/docs/Collection/Collection.modify()
Related:
Dexie : How to add to array in nested object
EDIT: I found the problem, and it's not related to Dexie. The arrow function passed to modify() closes over the variable newGroupNr itself, not over its value at the moment the callback was created. Because modify() runs asynchronously, the next loop iteration incremented newGroupNr before the previous callback had used it, so two iterations effectively wrote the same key. Capturing a per-iteration copy fixed it:
newGroupNr++
let newGroupNrLocal = newGroupNr
db.event.where('event_id').equals(event_id).modify(x => x.groups[newGroupNrLocal]=objData)
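The closure-capture issue is independent of Dexie and can be reproduced with plain deferred callbacks (variable names here mirror the question but the scenario is a simplified sketch):

```javascript
// Sketch: callbacks are created inside the loop but run later,
// after the loop has finished mutating groupNr.
let groupNr = 0;
const brokenCallbacks = [];
const fixedCallbacks = [];

for (let i = 0; i < 2; i++) {
  groupNr++;
  brokenCallbacks.push(() => groupNr);  // closes over the variable itself
  const local = groupNr;                // per-iteration copy (like newGroupNrLocal)
  fixedCallbacks.push(() => local);     // closes over the frozen copy
}

// "Later", when the deferred callbacks finally run:
console.log(brokenCallbacks.map(f => f())); // [2, 2] — same key twice
console.log(fixedCallbacks.map(f => f()));  // [1, 2] — distinct keys, as intended
```

This is why the local copy fixes the "only one group written" symptom: each modify() callback now carries its own key instead of reading the loop counter's final value.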
There's a bug in Safari that hits Dexie's modify() method in Dexie versions below 3. If that's your case, upgrade Dexie to the latest version. If it's not that, try debugging and nailing down when the modify callbacks actually run. A transaction won't help, as all IndexedDB operations go through transactions anyway, and the modifications you make should by no means overwrite each other.

Multiple parallel Increments on Parse.Object

Is it acceptable to perform multiple increment operations on different fields of the same object on Parse Server ?
e.g., in Cloud Code:
node.increment('totalExpense', cost);
node.increment('totalLabourCost', cost);
node.increment('totalHours', hours);
return node.save(null,{useMasterKey: true});
It seems like MongoDB supports it, based on this answer, but does Parse?
Yes. One thing you can't do is both add and remove something from the same array within the same save; you can only do one of those operations. But incrementing separate keys shouldn't be a problem. Incrementing a single key multiple times might do something weird, but I haven't tried it.
FYI, you can also use the .increment method on a key of a shell object (one with only an id set). I.e., this works:
var node = new Parse.Object("Node");
node.id = request.params.nodeId;
node.increment("myKey", value);
return node.save(null, {useMasterKey:true});
Even though we didn't fetch the data, we don't need to know the previous value in order to increment it on the database. Note that since you haven't fetched the object, you can't access any of its other fields here.
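Conceptually, the increments on separate keys end up as independent fields of a single atomic update on the database side. The shape below is an assumption for illustration, not Parse Server's exact internals:

```javascript
// Rough shape of the MongoDB-style update that a save() with three
// increments would translate to (illustrative assumption only):
const cost = 12.5;
const hours = 3;

const update = {
  $inc: {
    totalExpense: cost,
    totalLabourCost: cost,
    totalHours: hours,
  },
};

// Each field under $inc is incremented atomically and independently,
// which is why combining several increments in one save() is safe.
console.log(Object.keys(update.$inc)); // ["totalExpense", "totalLabourCost", "totalHours"]
```

Because the server applies the increment itself, no prior fetch of the current values is needed.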

Remove element in json array as new data added?

I have a line graph that is updated every 5 seconds as new data is pulled from a MySQL database.
https://gist.github.com/Majella/5fc4cd5f41a6ddf2df23
How do I remove the first/oldest element from the array each time the data is called, to stop the line/path from being compressed?
I've tried adding data.shift() in the update function just after the data is called, but it only works for the first call.
I don't know the details of what lives behind getdata.php, but I assume it returns progressively more data points each time, so removing only the first one still leaves you with a larger data set than you want. So you have a couple of choices:
Change the server-side of getdata.php to return only the latest x data points (or maybe add a querystring parameter for how many points/minutes/whatever to retrieve)
Change the client-side in updateData to check the length of the array and .slice off the elements starting at lengthYouWant minus lengthYouReceived (assuming the data is already sorted correctly)
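The client-side option can be sketched like this (the function and parameter names are hypothetical, and the data is assumed to be sorted oldest-to-newest):

```javascript
// Keep only the newest maxPoints entries of a chronologically
// sorted series; shorter arrays are returned unchanged.
function trimToWindow(data, maxPoints) {
  return data.slice(Math.max(0, data.length - maxPoints));
}

const data = [1, 2, 3, 4, 5, 6, 7];
console.log(trimToWindow(data, 5));  // [3, 4, 5, 6, 7]
console.log(trimToWindow(data, 10)); // [1, 2, 3, 4, 5, 6, 7] — already within the window
```

Calling this in updateData on each poll keeps the chart showing a fixed-size sliding window regardless of how many points the server returns.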

Ember.JS: Observing @each, but just iterating over new/changed items

I'm currently observing some Ember arrays like so:
observe_array: function() {
this.get("my_array").forEach(function(e,i) {
// do something
});
}.observes("my_array.@each");
Most times, when my_array is updated, multiple elements are added at once.
However, the observer fires one by one as each element is added, which becomes extremely inefficient. Is there any way to do this more efficiently? Essentially, I need to maintain a mutated array based on "my_array".
For reference, realistic sizes of my_array will be between 600-1200 elements. The "do something" block involves some operations that take a little more time - creating Date objects from strings and converting each element to json representation.
Instead of an observer, I also tried a property with the cacheable() method/flag, but that didn't seem to speed things up very much.
Assuming (via the comments) that your array is an ember-data populated one, you should try observing the array.isUpdating property; I had success with this one.
The only drawback is that it is only set when using .findAll() (i.e. Model.find()).

Deciding whether or not to run a function, which way is better?

I have some data being loaded from a server, but there's no guarantee that I'll have it all when the UI starts to display it to the user. Every frame there's a tick function. When new data is received a flag is set so I know that it's time to load it into my data structure. Which of the following ways is a more sane way to decide when to actually run the function?
AddNewStuffToList()
{
// Clear the list and reload it with new data
}
Foo_Tick()
{
if (updated)
AddNewStuffToList();
// Rest of tick function
}
Versus:
AddNewStuffToList()
{
if (updated)
{
// Clear the list and reload it with new data
}
}
Foo_Tick()
{
AddNewStuffToList();
// Rest of tick function
}
I've omitted a lot of the irrelevant details for the sake of the example.
IMHO, the first one. This version separates:
when to update the data (Foo_Tick)
FROM
how to load the data (AddNewStuffToList()).
The second option just mixes everything together.
You should probably not run the function until it is updated; that way, the function can be used for more purposes.
Say you have two callers that both put data into the list. With the check inside the function, you can only react to a single flag having come in. If instead you check in the calling code, you can have as many input sources as you want without having to change the function itself.
Functions should be precise about what they do, and should avoid depending on state created by another function unless it is passed in.
In the first version, the simple variable check on "updated" is performed each time, and AddNewStuffToList is called only if it is true.
With the second version, you call AddNewStuffToList, followed by the check on "updated", every time.
In this particular instance, given that function calls are generally expensive compared to a variable check, I personally prefer the first version.
However, there are situations when a check inside the function would be better.
e.g.
doSomething(Pointer *p){
    p->doSomethingElse();
}
FooTick(){
    Pointer *p = getPointer(); // hypothetical lookup that may return null
    // do stuff ...
    // let's do something
    if (p){
        doSomething(p);
    }
}
This is clumsy, because every time you call doSomething you should really check that you're not passing in a bad pointer. What if this is forgotten? We could get an access violation. In this case, the following is better: the check is written in one place, and there is no extra overhead, because we always want to ensure we're not passing in a bad pointer.
doSomething(Pointer *p){
    if (p){
        p->doSomethingElse();
    }
}
So in general, it depends on the situation. There are no right and wrong answers, just pros and cons here.
