Redux/React state normalization - why maintain a separate array of IDs?

Following the tutorial by Dan Abramov here: https://egghead.io/lessons/javascript-redux-normalizing-the-state-shape
He doesn't seem to explain the benefit of maintaining an extra reducer with an array of todo IDs (allIds). Would it not be easier to have just the one byId reducer and use Object.keys or Object.values to iterate over it?

The sample Todo app shows a list of todos in the order in which they were created. With only a byId Object, it's not possible to recover that ordered list in a way that is guaranteed to work across browsers using Object.keys.
Historically, JS Object property order was unspecified, so the output of Object.keys() had no guaranteed relationship to the order in which the keys were added. Arrays, on the other hand, have an explicit order. Even in modern engines that do define an iteration order, integer-like keys (such as todo IDs) are always visited in ascending numeric order regardless of insertion order. The allIds array lets the app render the todos in the order in which they were added.
Theoretically you could use a Map, as the keys in a Map are ordered by insertion. However, there's no way to re-order the entries of a Map in place. With an array you can re-order the IDs without needing to touch the todo objects themselves.
In other words, the array data structure is better suited to storing ordered lists than either Object or Map.
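As a concrete illustration, here is a minimal sketch of the normalized shape the tutorial builds, with a selector that reassembles the ordered list (the todo values are made up):

const state = {
  byId: {
    2: { id: 2, text: 'first todo', completed: false },
    1: { id: 1, text: 'second todo', completed: false }
  },
  allIds: [2, 1] // creation order, even though 2 > 1
};

// Rebuild the ordered list from the two slices:
const getAllTodos = (state) => state.allIds.map((id) => state.byId[id]);

console.log(getAllTodos(state).map((t) => t.text)); // ['first todo', 'second todo']
// Object.keys would lose that order: integer-like keys come back numerically sorted.
console.log(Object.keys(state.byId)); // ['1', '2']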

Related

Vue is changing collection's order

I have a collection which I am sorting using ->sortByDesc('created_at'). When I dd() it before returning it to my view, the order has changed as expected.
However, as soon as I pass it to my vue component, it changes the order back.
Why is this happening? Is there a way of solving this?
I keep forgetting about this all the fonking time, but it's usually because the collection-sorting methods preserve the original array keys. Quoting the docs (https://laravel.com/docs/5.6/collections#method-sortby):
The sortBy method sorts the collection by the given key. The sorted collection keeps the original array keys, so in this example we'll use the values method to reset the keys to consecutively numbered indexes:
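The mechanics are easy to reproduce on the JavaScript side. Because the sorted collection keeps its original keys, it serializes to a JSON object rather than an array, and JavaScript iterates integer-like object keys in ascending numeric order, which undoes the sort. A minimal sketch (the values are made up):

// What a sorted-but-still-keyed collection serializes to:
const fromLaravel = JSON.parse('{"2":"newest","0":"oldest","1":"middle"}');
console.log(Object.values(fromLaravel)); // ['oldest', 'middle', 'newest'] - sort is gone

// After calling ->values() on the PHP side, the payload is a real JSON array:
const withValues = JSON.parse('["newest","middle","oldest"]');
console.log(withValues); // ['newest', 'middle', 'oldest'] - order survives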

couchdb - retrieve unique documents for a view that emits non-unique two array keys

I have a map function in a CouchDB view that emits non-unique two-element array keys for documents of type message.
The first position in the array key is a user_id; the second represents whether or not the user has read the message.
This works nicely in that I can set include_docs=true and retrieve the actual documents. However, the results then contain duplicate documents. I need to be able to write a view that can be queried to return unique messages that have been read by a given user. Additionally, I need to be able to efficiently paginate the result set.
Notice that [66, true] is emitted twice for doc id 26a9a271de3aac494d37b17334aaf7f3. As far as I can tell, with the keys in my map function, I cannot reduce in such a way that unique documents will be returned.
The next idea I had was to also emit doc._id in the map function and reduce with group_level=exact.
Now I am able to get unique document IDs, but I cannot get the documents themselves without doing a second query. And even with a second query, paginating like this would require a lot of complexity (at least I think so).
The last idea I came up with was to emit the entire document, rather than doc._id, in the third position of the array key; then I could access the entire document and probably paginate. This seems really brutish.
So my question is:
Is #3 above a terrible idea? Is there something I'm missing? Is there a better approach?
Thanks in advance.
See @WickedGrey's comment on the question. The solution is to ensure that I never emit the same key twice for one document. I do this in the map function by keeping track of the keys as I emit them, and skipping the emit if the key has already been emitted.
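A minimal sketch of such a map function, assuming each message document carries an array of recipient entries with user_id and read fields (those names are assumptions; adapt them to your schema):

function (doc) {
  if (doc.type !== 'message') return;
  var seen = [];                                   // keys already emitted for this doc
  (doc.recipients || []).forEach(function (r) {
    var tag = JSON.stringify([r.user_id, r.read]); // stringify so array keys compare by value
    if (seen.indexOf(tag) === -1) {                // skip the emit if the key already exists
      seen.push(tag);
      emit([r.user_id, r.read], null);             // still queryable with include_docs=true
    }
  });
}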

Efficient data structure to get an ID

I need an efficient data structure to generate IDs. It should be possible to release an ID back to the structure via a method; after an ID has been released, it can be generated again. The data structure must always retrieve the lowest unused ID.
What efficient data structure can be used for this?
Can't you just increment an integer and return that, with appropriate concurrency control? When someone releases an integer, store it in a separate sorted data structure. If the list of returned integers is empty, generating an ID is as simple as read, increment, write, return. If the list of returned integers is not empty, just read, remove, and return the first (lowest) integer from it.
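A sketch of that idea in TypeScript, using a binary min-heap for the released IDs so the lowest one always comes back first (single-threaded, so the concurrency control mentioned above is omitted; the class and method names are made up):

class IdAllocator {
  private next = 0;                 // next never-used ID
  private released: number[] = [];  // min-heap of returned IDs (always < next)

  acquire(): number {
    return this.released.length > 0 ? this.popMin() : this.next++;
  }

  release(id: number): void {
    // Push onto the heap and bubble up. (No double-release validation in this sketch.)
    this.released.push(id);
    let i = this.released.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.released[parent] <= this.released[i]) break;
      [this.released[parent], this.released[i]] = [this.released[i], this.released[parent]];
      i = parent;
    }
  }

  private popMin(): number {
    const min = this.released[0];
    const last = this.released.pop()!;
    if (this.released.length > 0) {
      this.released[0] = last;
      let i = 0;                    // sift the new root down
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let s = i;
        if (l < this.released.length && this.released[l] < this.released[s]) s = l;
        if (r < this.released.length && this.released[r] < this.released[s]) s = r;
        if (s === i) break;
        [this.released[i], this.released[s]] = [this.released[s], this.released[i]];
        i = s;
      }
    }
    return min;
  }
}

// Usage: acquire() -> 0, 1, 2; release(0); acquire() -> 0 again (lowest unused first).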

Combining Variable Numbers of Lists w/ LINQ

I have a list (List<Card>) of objects.
Each of those objects contains a list (List<string>) of strings describing it.
I need to create a dropdown containing all of the distinct strings used to describe the objects (Cards). To do this, I need a list of the distinct strings used.
Any idea how/if this can be done with LINQ?
You can use the SelectMany extension method/operator to flatten a collection into the individual elements.
listOfObjects.SelectMany(x => x.DescriptionStrings).Distinct()
This will select all the strings out of the collection of description strings for each object in your list of objects.
LINQ has a Distinct function.
Assuming "_cards" exists as instance variable of List and Card.Descriptions returns the descriptions and "cardsComboBox" (in WinForms):
cardsComboBox.AutoCompleteSource = _cards.SelectMany(c => c.Descriptions).Distinct();
Bear in mind that this will be the list of card descriptions at the time of binding, however. If you want it to stay synchronised when _cards is updated, you'll have to do some more fancy footwork or look at a reactive binding source. (We use Bindable.Linq.)

NSDictionary, NSArray, NSSet and efficiency

I've got a text file, with about 200,000 lines. Each line represents an object with multiple properties. I only search through one of the properties (the unique ID) of the objects. If the unique ID I'm looking for is the same as the current object's unique ID, I'm gonna read the rest of the object's values.
Right now, each time I search for an object, I just read the whole text file line by line, create an object for each line and see if it's the object I'm looking for - which is basically the most inefficient way to do the search. I would like to read all those objects into memory, so I can later search through them more efficiently.
The question is, what's the most efficient way to perform such a search? Is a 200,000-entries NSArray a good way to do this (I doubt it)? How about an NSSet? With an NSSet, is it possible to only search for one property of the objects?
Thanks for any help!
-- Ry
@yngvedh is correct in that NSDictionary has O(1) lookup time (as is expected of a map structure). However, after doing some testing, you can see that NSSet also has O(1) lookup time. Here's the basic test I used to verify that: http://pastie.org/933070
Basically, I create 1,000,000 strings, then time how long it takes me to retrieve 100,000 random ones from both the dictionary and the set. When I run this a few times, the set actually appears to be faster...
dict lookup: 0.174897
set lookup: 0.166058
---------------------
dict lookup: 0.171486
set lookup: 0.165325
---------------------
dict lookup: 0.170934
set lookup: 0.164638
---------------------
dict lookup: 0.172619
set lookup: 0.172966
In your particular case, I'm not sure either of these will be what you want. You say that you want all of these objects in memory, but do you really need them all, or do you just need a few of them? If it's the latter, then I would probably read through the file once and build a mapping from object ID to file offset (i.e., remember where each object's line starts in the file). Then you could look up which objects you want, use the file offset to jump to the right spot in the file, parse that line, and move on. This is a job for NSFileHandle.
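Here is a language-agnostic sketch of that offset-index idea, written in TypeScript/Node rather than Objective-C (the tab-separated, ID-first line format is an assumption):

import { openSync, readSync, closeSync, readFileSync } from 'node:fs';

// Pass 1: scan the file once, remembering where each object's line starts.
function buildIndex(path: string): Map<string, [number, number]> {
  const data = readFileSync(path);                  // full read, but only once
  const index = new Map<string, [number, number]>();
  let start = 0;
  while (start < data.length) {
    let end = data.indexOf(0x0a, start);            // 0x0a = '\n'
    if (end === -1) end = data.length;
    const id = data.toString('utf8', start, end).split('\t')[0];
    index.set(id, [start, end - start]);            // id -> [offset, length]
    start = end + 1;
  }
  return index;
}

// Later lookups seek straight to the right line instead of rescanning the file.
function lookup(path: string, index: Map<string, [number, number]>, id: string): string | undefined {
  const entry = index.get(id);
  if (!entry) return undefined;
  const [offset, length] = entry;
  const buf = Buffer.alloc(length);
  const fd = openSync(path, 'r');
  readSync(fd, buf, 0, length, offset);
  closeSync(fd);
  return buf.toString('utf8');                      // parse the object's properties from this line
}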
Use NSDictionary to map from IDs to objects. That is: use the ID as the key and the object as the value. NSDictionary is the only one of these collection classes that supports efficient key lookup (or key lookup at all).
Dictionaries are a different kind of collection from the other collection classes. They are associative collections (mapping IDs to objects, in your case), whereas the others are simply containers for multiple objects. NSSet holds unordered unique objects and NSArray holds ordered objects (and may hold duplicates).
UPDATE:
To avoid reallocations as you read the entries, use NSMutableDictionary's dictionaryWithCapacity: method. If you know the (approximate) number of entries before reading them, you can use it to preallocate a big enough dictionary.
200,000 objects sounds like you might run into memory constraints, depending on the size of the objects and your target environment. One other thing you may want to consider is converting the data into an SQLite database and then indexing the columns you do lookups on. This provides a good compromise between efficiency and resource consumption, as you would not have to load the full set into memory.
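A sketch of the SQLite route, again in TypeScript/Node using the better-sqlite3 package (the package choice, table name, and columns are assumptions):

import Database from 'better-sqlite3';

const db = new Database('objects.db');
db.exec(`
  CREATE TABLE IF NOT EXISTS objects (
    unique_id TEXT PRIMARY KEY,  -- PRIMARY KEY creates the lookup index
    payload   TEXT NOT NULL      -- the rest of the line, parsed only when needed
  )
`);

// One-time import, e.g. one insert.run(id, restOfLine) per line of the text file:
const insert = db.prepare('INSERT OR REPLACE INTO objects VALUES (?, ?)');

// Indexed lookups without holding the full 200,000-entry set in memory:
const byId = db.prepare('SELECT payload FROM objects WHERE unique_id = ?');
const row = byId.get('some-id') as { payload: string } | undefined;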
