I am binding database table data to a TreeView.
The documentation mentions that the node count property is a signed 2-byte integer.
So if the number of nodes exceeds this range, the node count becomes negative.
Is there any workaround for this?
Yes, this is a documented bug. Fortunately, no one ever encounters it in the real world because it's completely nonsensical for a single TreeView control to ever need to display more than 32,767 nodes.
As mentioned in the linked knowledge base article, the best workaround is to maintain fewer nodes in your TreeView control. Consider splitting the data up between multiple TreeViews, or using a different control that is better suited to such incredibly large quantities of data.
If you absolutely must use a TreeView, Microsoft recommends that you keep the following in mind:
Performance will become extremely slow as you add more and more nodes.
Do not add more than 65,535 nodes. (That's the limit imposed by the native control, which uses an unsigned 16-bit integer to store the node count.)
Use the SendMessage API function to obtain the true node count. Alternatively, you can use a module- or public-level variable to keep track of how many nodes are in the TreeView: each time you add or remove a node, increment or decrement the variable by one. This is necessary whenever you need to determine the number of nodes, because the Count property of the Nodes collection will not return the correct value.
Don't rely on the Index property of a node object. For example, the Index property is 32767 for node 32767 but is -32768 for node 32768.
You can still refer to a node by using its Key or by passing a number to the Nodes collection.
For example:
TreeView1.Nodes(40000) refers to node 40000.
I have just started learning algorithms and data structures and I came across an interesting problem.
I need some help in solving it.
There is a data set given to me. Each entry in the data set is a character with a number associated with it. I have to evaluate the sum of the largest numbers associated with each of the characters present. The list is not sorted by character; however, all entries for a given character appear as one contiguous group, with no further instances of that character elsewhere in the data set.
Moreover, the largest number associated with each character always appears at the last position of that character's group. We know the length of the entire data set, and we can retrieve an entry by specifying its line number.
For example:
C-7
C-9
C-12
D-1
D-8
A-3
M-67
M-78
M-90
M-91
M-92
K-4
K-7
K-10
L-13
length=15
get(3) = D-1 (returns an object with character D and value 1)
The answer for the above should be 13+10+92+3+8+12 as they are the highest numbers associated with L,K,M,A,D,C respectively.
The simplest solution is, of course, to go through all of the elements, but what is the most efficient algorithm (one that reads fewer entries than the length of the data set)?
You'll have to go through them one by one, since you can't be certain what the key is.
For the sake of easy manipulation, I would loop over the data set and check whether the key at index i is equal to the key at index i+1; if it's not, you have found a local maximum.
Then store that value in a hash or dictionary if there's no existing key/value pair for that key yet; if there is, check whether the existing value is less than the current value, and overwrite it if so.
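For illustration, here is a minimal Python sketch of that scan. The get(i) accessor returning a (character, value) pair is an assumption based on the example above:

def sum_of_group_maxima(get, length):
    # Entries for each character are contiguous, and the character's
    # maximum sits at the end of its group, so the entry just before a
    # character change is that character's maximum.
    if length == 0:
        return 0
    best = {}  # character -> largest value seen
    prev_char, prev_value = get(0)
    for i in range(1, length):
        char, value = get(i)
        if char != prev_char:  # group ended at i-1: local maximum found
            if prev_char not in best or best[prev_char] < prev_value:
                best[prev_char] = prev_value
        prev_char, prev_value = char, value
    # The final entry always closes a group.
    if prev_char not in best or best[prev_char] < prev_value:
        best[prev_char] = prev_value
    return sum(best.values())

On the example data this returns 138, i.e. 13 + 10 + 92 + 3 + 8 + 12.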
In theory, you could use statistics to optimistically skip some entries: say you read A 1, skip 5 entries, and read A 10. Good, everything you skipped belonged to A. But if you skip 5 more and read B 3, you need to go back and also read what is in between.
In practice, though, this won't work. Not on text.
That's because I/O happens in blocks. Data is stored in chunks of usually around 8 KB, so that is the minimum read size (even if your programming language provides reads of other sizes, they will eventually be translated into reading blocks and buffering them).
And how do you find the next line? You read until you find a \n...
So you don't save anything on this kind of data. It would be different if you had much larger records (several KB each, like files) and an index, but building that index would require reading everything at least once.
So as presented, the fastest approach would likely be to linearly scan the entire data once.
I am not very experienced with neo4j and need to search for all paths from a selection A of nodes to a selection B of nodes.
There are around 600 nodes in the DB, with a few relationships per node.
Node properties:
riskId
de_DE_description
en_GB_description
en_US_description
impact
Selection:
Selection A is determined by a property match (property: 'riskId')
Selection B is a known constant list of nodes (label: 'Core')
The following query returns the result I want, but it seems a bit slow to me:
match p=(node)-[*]->(:Core)
where node.riskId IN ["R47","R48","R49","R50","R51","R14","R3"]
RETURN extract (n IN nodes(p)| [n.riskId, n.impact, n.en_GB_description] )
as `risks`, length(p)
This query results in 7 rows with between 1 and 4 nodes per row, so not much.
I get around 270ms or more response time in my local environment.
I have not created any indexes or made any other performance-tuning attempts.
Any hints on how I can craft the query in a more intelligent way, or any performance-tuning tricks I can apply?
Thank you very much,
Manuel
If there is not yet a single label that is shared by all the nodes that have the riskId property, you should add such a label (say, :Risk) to all those nodes. For example:
MATCH (n)
WHERE EXISTS(n.riskId)
SET n:Risk;
A node can have multiple labels. This alone can make your query faster, as long as you specify that node label in your query, since it would restrict scanning to only Risk nodes instead of all nodes.
However, you can do much better by first creating an index, like this:
CREATE INDEX ON :Risk(riskId);
After that, this slightly altered version of your query should be much faster, as it would use the index to quickly get the desired Risk nodes instead of scanning:
MATCH p=(node:Risk)-[*]->(:Core)
WHERE node.riskId IN ["R47","R48","R49","R50","R51","R14","R3"]
RETURN
EXTRACT(n IN nodes(p)| [n.riskId, n.impact, n.en_GB_description]) AS risks,
LENGTH(p);
I am looking for a queue algorithm that fulfills the following properties:
Processes communicate using only a shared dictionary (key-value-store)
Does not use any atomic operations other than load and store (no CAS, for example)
Supports multiple producers
Supports a single consumer
Producers can die at any time and queue must remain operational
The consumer can also die at any time and be restarted later, but there will never be more than one consumer-process running at a time
This is meant as a general question about a suitable algorithm, since I'd like to use it in a couple of different scenarios. But to help visualize the requirements, here is an example use-case:
I have a website with two pages: producer.html and consumer.html
producer.html can be opened in multiple tabs simultaneously
Each producer.html adds events to the queue
One copy of consumer.html is open and consumes these events (to aggregate and stream them to a webserver, for example)
If the multiple producer-tabs are opened by the user rather than the page, these tabs do not have references to each other available, so the usual communication methods (postMessage or calling directly into the other tab's JS code) are out. One of the ways they can still communicate with each other is via LocalStorage as suggested here: Javascript; communication between tabs/windows with same origin. But LocalStorage is not "thread-safe" as detailed here.
Note: There may be other ways to implement cross-tab communication in the browser (Flash, ...), but these are NOT the aim of this question as they won't translate to my other use-cases. This is really just an example use-case for the general queue algorithm that I am trying to find.
A couple more parameters:
The number of producers will never be very large (10s or 100s maybe), so the scaling of the number of reads and writes needed with respect to the number of producers is not really a concern.
I don't know beforehand how many producers I might have, and there is no immediately obvious way to assign a number or index to them. (Many mutex algorithms (Lamport's Bakery, Eisenberg & McGuire, Szymański's, ...) maintain an array of state for each process, which wouldn't necessarily be a natural approach here, although I do not want to exclude these approaches ex ante if they can be implemented using the shared dictionary in some way...)
The algorithm should be 100% reliable. So, I'd like to avoid things like the delay in Lamport's first Fast Mutex algorithm (page 2 in the PDF) since I don't have any kind of real-time guarantees.
It would be very helpful if the queue was FIFO, but it's not strictly required.
The algorithm should not be encumbered by any patents, etc.
Update:
The Two-Lock Concurrent Queue Algorithm by Michael and Scott looks like it could work, but I would need two things to implement it:
A locking mechanism using the shared dictionary that can survive the crash of a lock-holder
A reliable way to allocate a new node (if I move the allocation into the locked section, I could just generate new random keys until I find one that's not in use yet, but there might be a better way?)
Update 2:
It seems I wasn't being specific enough about the dictionary:
It's really nothing more than a trivial key-value-store. It provides the functions get(key) to read the value of a key, put(key, value) to change the value of a key, and delete(key) to remove a key. In some of my use-cases, I can also iterate over keys, but if possible, I'd like to avoid it for generality. Keys are arbitrary and the producers and consumers can create or calculate them as needed. The dictionary does not provide any facilities for automatically generating unique keys.
Examples are HTML LocalStorage, Google AppEngine's Datastore, a Java Map, a Python dictionary, or even a file-system with only a single directory (where the keys would be the file-names and the values the content of the files).
After quite a bit of further reading and sleeping on things for a night, I came up with one way that should be able to accomplish what I need, but it might not be the most elegant:
The paper Wait-Free Algorithms for Fast, Long-Lived Renaming by Moir and Anderson generalizes Lamport's Fast Mutex Algorithm #2 (page 6 here) into the following building block (Figure 2):
When n processes enter this section of code, at most one of them will stop, at most n-1 will move right and at most n-1 will move down.
In Lamport's algorithm, stopping means the process acquired the lock, whereas moving right or down simply sends the process back to the beginning of this section of code. To release the lock, a process simply sets Y back to false. (Not quite correct, actually... See "Update" below...)
The big problem with this is that if any of the processes ever die while holding the lock (i.e. before releasing it), the block will simply stay locked forever.
Another problem is that every process needs to be assigned a unique process ID p.
The locked-forever problem can be fixed by borrowing an idea from Moir and Anderson, namely to send processes that end up moving right or down into a different building block rather than back to this one, leading to a structure like this (Figure 3 in the paper):
Except that in this case, I won't be using this grid to assign process IDs as M&A did (although I could probably solve the problem of the unique values for p with it). Instead, every box in the grid will correspond to a very simple queue. If a process stops on a box, it has acquired the tail-lock for the corresponding queue (e.g. as per the algorithm by Michael and Scott) and proceeds to enqueue a new element into that queue. Upon completion, it sets the Y value of the box back to false to allow other processes to use this queue. This way, if there is high contention or if processes die before releasing locks, new queues are created dynamically as needed.
The consumer-process doesn't need to worry about locking the heads of the queues when dequeuing elements, since it's the only process to ever do so. So, it simply traverses the tree of boxes to find all queues and trivially helps itself to their contained elements. One thing to note is that while each individual queue will be FIFO, there is no synchronization between the queues, so the combined queue will not necessarily be FIFO.
If we now change the boolean Y to a time-stamp (or null/0 to indicate false), the consumer can also expire locks after some safe timeout to re-activate dead queues.
A note about implementation using the dictionary:
The shared variables X and Y can be entries in the dictionaries with key-names X_123 and Y_123, where 123 is the number of the box.
p can simply be any unique random string and will be stored as the value of key X_123.
The boolean or time-stamp is also simply stored as the value of key Y_123. The producer-processes interpret a missing entry for Y_123 as false or null/0.
The box-numbers 123 need to be calculated from the move-pattern. One way to do this would be to start with 1 in the top-left corner. If the process stops in that box, we're done. If not, the current number (starting with 1) is shifted left by 1 (i.e. multiplied by 2) and, if the process moved down, also incremented by 1. Smaller (and fewer) numbers can be calculated with a different numbering scheme (I still need to work it out), but this one should work.
The queues then consist of one entry with key H_123 that holds the index of the current head of the queue in its value and one entry with key T_123 that holds the index of the tail. Both default to 0 if they don't exist.
To enqueue an item into queue 123, the tail index is read from T_123 (let's say it yields 48) and an entry with key Q_123_48 is put into the dictionary with its value containing the enqueued item. After, T_123 is incremented by 1.
After the item is enqueued, the Y_123 entry is set back to false or null/0 (not deleted!)
To dequeue an item, the head index is read from H_123 (let's say it yields 39) and compared to the tail index T_123. If it is smaller, an item is available at Q_123_39, which is then read and deleted from the dictionary. After, H_123 is incremented by 1.
To traverse the box-tree, the consumer starts with the box in the top left corner. For each box (e.g. 123), if a key Y_123 exists in the dictionary (even if it contains values null/0 or false), the consumer dequeues items from the corresponding queue, and then recursively moves right and down to the adjacent boxes. If no key Y_123 exists, this box hasn't been used by any processes yet and doesn't need to be considered (and neither do the boxes below or to its right).
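To make this concrete, here is a minimal Python sketch of these operations against a toy in-memory store. The Store class, the uuid-based process ID, and all function names are my own illustrative assumptions; only the key layout (X_123, Y_123, H_123, T_123, Q_123_48) follows the description above.

import time
import uuid

PROCESS_ID = uuid.uuid4().hex  # plays the role of the unique value p (assumes random IDs are unique enough)

class Store:
    # Stand-in for the shared dictionary: get/put/delete only.
    def __init__(self):
        self._d = {}
    def get(self, key):
        return self._d.get(key)
    def put(self, key, value):
        self._d[key] = value
    def delete(self, key):
        self._d.pop(key, None)

def try_box(store, box):
    # One building block: 'stop' means the tail-lock for this box's queue
    # was acquired; 'right'/'down' send the process to the adjacent box.
    store.put(f"X_{box}", PROCESS_ID)
    if store.get(f"Y_{box}"):           # Y holds a timestamp; truthy means taken
        return "right"
    store.put(f"Y_{box}", time.time())  # claim the box
    if store.get(f"X_{box}") != PROCESS_ID:
        return "down"                   # lost the race
    return "stop"

def enqueue(store, box, item):
    # Producer side; requires that try_box(store, box) returned 'stop'.
    tail = store.get(f"T_{box}") or 0
    store.put(f"Q_{box}_{tail}", item)
    store.put(f"T_{box}", tail + 1)
    store.put(f"Y_{box}", 0)            # release: set back to null/0, not deleted

def dequeue(store, box):
    # Single-consumer side; no head-lock needed. Returns None when empty.
    head = store.get(f"H_{box}") or 0
    tail = store.get(f"T_{box}") or 0
    if head >= tail:
        return None
    item = store.get(f"Q_{box}_{head}")
    store.delete(f"Q_{box}_{head}")
    store.put(f"H_{box}", head + 1)
    return item

A producer keeps calling try_box, moving through the grid as directed, until it gets 'stop' for some box, then enqueues there; the consumer traverses the box-tree as described above, dequeuing from every box whose Y key exists.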
I haven't actually implemented this yet, but I'll do that next. I just wanted to post this already to see if it could inspire other approaches or if anyone can see anything wrong with this idea.
Update:
I just noticed one complication: if two processes try to acquire the lock for a queue simultaneously, it is possible that both will fail and move on to the next block. This will leave that queue locked forever, as no one will be left to set Y back to false or null/0.
This is the reason why the "Long-Lived Renaming" algorithm by M&A as well as Lamport's algorithm #2 use an array of Y-values in which every process has its own entry that it resets also if it moves on to another block. Y is then only considered false if all entries are false.
Since I don't know beforehand how many processes I will have, I could implement this only if the dictionary had some way of enumerating keys (the keys would then be Y_123_456, where 456 is the value of p for each process).
But, with rare contention and the above described timeout-mechanism for reactivating dead queues, the issue might lead to only a little bit of memory inefficiency, rather than a major problem.
Update 2:
A better way to label the boxes would be this pattern:
If we call the total number of moves n (counting the move into the top left box also, i.e. n ≥ 1) and the number of moves to the right r, then the box-number can be calculated using
box = (n × (n - 1))/2 + r
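As a quick sanity check, here is that numbering as a hypothetical Python helper (the function name is mine):

def box_number(n, r):
    # n = total number of moves (>= 1, counting the move into the
    # top-left box), r = number of those moves that went right.
    return n * (n - 1) // 2 + r

# The top-left box is box_number(1, 0) == 0; moving right from it gives
# box_number(2, 1) == 2, and moving down gives box_number(2, 0) == 1.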
Just use an RDBMS. It's pretty simple in MS SQL; for PostgreSQL you'd have to use the RETURNING keyword, and for MySQL you'd probably have to use triggers.
CREATE TABLE Q ([Key] BIGINT IDENTITY(1,1) PRIMARY KEY, [Message] NVARCHAR(4000))
INSERT INTO Q ([Message]) OUTPUT inserted.* VALUES (@message)
DELETE TOP (1) FROM Q WITH (READPAST) OUTPUT deleted.*
If you were really hoping for an algorithmic solution, just use a ring buffer.
const int MAX_Q_SIZE = 20000000;
static string[] Q = new string[MAX_Q_SIZE];
static long ProducerID = 0;
static long ConsumerID = 0;

public static long Produce(string message) {
    // Atomically claim the next slot; each producer gets a unique key.
    long key = Interlocked.Increment(ref ProducerID);
    int idx = (int)(key % MAX_Q_SIZE);
    Q[idx] = message;
    return key;
}

public static string Consume() {
    // Single consumer: claim the next unread slot. Note this assumes the
    // consumer never overtakes the producers and that the ring never wraps
    // around unconsumed entries; there is no empty-queue check here.
    long key = Interlocked.Increment(ref ConsumerID);
    int idx = (int)(key % MAX_Q_SIZE);
    string message = Q[idx];
    return message;
}
I am using the neo4j-core gem (the Neo4j::Node API). It is the only MRI-compatible Ruby binding of Neo4j that I could find, and hence is valuable, but its documentation is crap (it has missing links, lots of typographical errors, and is difficult to comprehend). In the Label and Index Support section of the first link, it says:
Create a node with an [sic] label person and one property
Neo4j::Node.create({name: 'kalle'}, :person)
Add index on a label
person = Label.create(:person)
person.create_index(:name)
drop index
person.drop_index(:name)
(I believe the second code line there is a typo for the following)
person = Neo4j::Label.create(:person)
What is a label, is it the name of a database table, or is it an attribute peculiar to a node?
If it is the latter, I don't understand why (according to the API in the second link) the methods Neo4j::Node.create and Neo4j::Node#add_label can take multiple arguments for the label. What does it mean to have multiple labels on a node?
Furthermore, if I repeat the create command with the same label argument, it creates a different node object each time. What does it mean to have multiple nodes with the same label? Isn't a label something to identify a node?
What is an index? How are labels and indices different?
Labels are a way of grouping nodes. You can give the label to many nodes or just one node. Think of it as a collection of nodes that are grouped together. They allow you to assign indexes and other constraints.
An index allows quick lookup of nodes or edges without having to traverse the entire graph to find them. Think of it as a table of direct pointers to the particular nodes/edges indexed.
As I read what you pasted from the docs (and without, admittedly, knowing the slightest thing about neo4j):
It's a graph database, where every piece of data is a node with a certain number of properties.
Each node can have a label (or more, presumably?). Think of it as a type -- or perhaps more appropriately, in Ruby parlance, a Module.
It's a database, so nodes can be part of an index for quicker access. So can subsets of nodes, and therefore nodes with a certain label.
Put another way: Think of the label as the table in a DB. Nodes as DB rows, which can belong to one or more labels/tables, or no label/table at all for that matter. And indexes as DB indexes on sets of rows.
I understand how trees are typically used to modify persistent data structures (create a new node and replace all of its ancestors).
But what if I have a tree of tens of thousands of nodes and I need to modify thousands of them? I don't want to go through and create thousands of new roots; I only need the one new root that results from modifying everything at once.
For example:
Let's take a persistent binary tree. In the single-node update case, it searches until it finds the node, creates a new one with the modifications and the old children, and then creates new ancestors all the way up to the root.
In the bulk update case could we do:
Instead of updating just a single node, you're going to update 1,000 nodes in one pass.
At the root node, the current list is the full list. You then split that list between those that match the left node and those that match the right. If none match one of the children, don't descend to it. You then descend to the left node (assuming there were matches), split its search list between its children, and continue. When you have a single node and a match, you update it and go back up, replacing and updating ancestors and other branches as appropriate.
This would result in only one new root even though it modified any number of nodes.
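Here is a sketch of that single-pass bulk update on a persistent binary search tree, in Python (all names are illustrative, not from any particular library):

class Node:
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value = key, value
        self.left, self.right = left, right

def bulk_update(root, updates):
    # updates: dict mapping key -> new value. Splits the updates between
    # the subtrees at each step and rebuilds only the paths that changed,
    # returning a single new root while sharing untouched subtrees.
    if root is None or not updates:
        return root
    new_value = updates.get(root.key, root.value)
    left_updates = {k: v for k, v in updates.items() if k < root.key}
    right_updates = {k: v for k, v in updates.items() if k > root.key}
    new_left = bulk_update(root.left, left_updates)
    new_right = bulk_update(root.right, right_updates)
    if (new_value == root.value and new_left is root.left
            and new_right is root.right):
        return root  # nothing changed here or below: reuse the old node
    return Node(root.key, new_value, new_left, new_right)

The final identity check lets unchanged subtrees be reused instead of reallocated.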
These kinds of "mass modification" operations are sometimes called bulk updates. Of course, the details will vary depending on exactly what kind of data structure you are working with and what kind of modifications you are trying to perform.
Typical kinds of operations might include "delete all values satisfying some condition" or "increment the values associated with all the keys in this list". Frequently, these operations can be performed in a single walk over the entire structure, taking O(n) time.
You seem to be concerned about the memory allocation involved in creating "thousands of new roots". Typical allocation for performing the operations one at a time would be O(k log n), where k is the number of nodes being modified. Typical allocation for performing the single walk over the entire structure would be O(n). Which is better depends on k and n.
In some cases, you can decrease the amount of allocation--at the cost of more complicated code--by paying special attention to when changes occur. For example, if you have a recursive algorithm that returns a tree, you might modify the algorithm to return a tree together with a boolean indicating whether anything has changed. The algorithm could then check those booleans before allocating a new node to see whether the old node can safely be reused. However, people don't usually bother with this extra check unless and until they have evidence that the extra memory allocation is actually a problem.
A particular implementation of what you're looking for can be found in Clojure's (and ClojureScript's) transients.
In short, given a fully-immutable, persistent data structure, a transient version of it will make changes using destructive (allocation-efficient) mutation, which you can flip back into a proper persistent data structure again when you're done with your performance-sensitive operations. It is only at the transition back to a persistent data structure that new roots are created (for example), thus amortizing the attendant cost over the number of logical operations you performed on the structure while it was in its transient form.
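This is not Clojure's actual API, but as a toy Python analogy of the transient pattern (ignoring structural sharing, with a tuple standing in for the persistent structure and a list for its transient form):

def transient(pvec):
    return list(pvec)   # mutable working copy

def persistent(tvec):
    return tuple(tvec)  # freeze back into an immutable value

pv = (1, 2, 3)
tv = transient(pv)
for i in range(len(tv)):
    tv[i] *= 10         # a whole batch of in-place edits, no copy per step
pv2 = persistent(tv)    # the single new "root" is created only here
assert pv == (1, 2, 3) and pv2 == (10, 20, 30)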