Tree class in MATLAB
I am implementing a tree data structure in MATLAB. Adding new child nodes and assigning or updating the data associated with existing nodes are the typical operations I expect to perform; each node holds the same type of data, and removing nodes is not necessary for me. So far, I've decided on a class implementation inheriting from the handle class, so that I can pass references to nodes around to functions that modify the tree.
Edit: December 2nd
First of all, thanks for all the suggestions in the comments and answers so far. They have already helped me to improve my tree class.
Someone suggested trying digraph, introduced in R2015b. I have yet to explore this, but since it does not act as a reference parameter the way a class inheriting from handle does, I am a bit sceptical about how it will work in my application. It is also not yet clear to me how easy it will be to work with custom data for nodes and edges.
Edit: (Dec 3rd) Further information on the main application: MCTS
Initially, I assumed the details of the main application would only be of marginal interest, but after reading the comments and the answer by @FirefoxMetzger, I realise that they have important implications.
I am implementing a type of Monte Carlo tree search algorithm. A search tree is explored and expanded in an iterative manner. Wikipedia offers a nice graphical overview of the process:
In my application I perform a large number of search iterations. On every iteration, I traverse the current tree starting from the root until a leaf node, then expand the tree by adding new nodes, and repeat. As the method is based on random sampling, at the start of an iteration I do not know which leaf node it will finish at; this is determined jointly by the data of the nodes currently in the tree and the outcomes of random samples. Whichever nodes I visit during a single iteration have their data updated.
Example: I am at node n which has a few children. I need to access data in each of the children and draw a random sample that determines which of the children I move to next in the search. This is repeated until a leaf node is reached. Practically I am doing this by calling a search function on the root that will decide which child to expand next, call search on that node recursively, and so on, finally returning a value once a leaf node is reached. This value is used while returning from the recursive functions to update the data of the nodes visited during the search iteration.
The tree may be quite unbalanced such that some branches are very long chains of nodes, while others terminate quickly after the root level and are not expanded further.
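To make the access pattern concrete, here is a minimal sketch (in Python, for illustration only) of one such search iteration; the Node layout, the uniform-random selection rule, and the random-reward "rollout" are placeholder assumptions, not the actual MCTS policy:

```python
import random

class Node:
    """Minimal illustrative node: children plus some per-node statistics."""
    def __init__(self):
        self.children = []
        self.visits = 0
        self.value = 0.0

def search(node):
    """One search iteration: recurse from the root to a leaf, expand it,
    and update the data of every visited node on the way back."""
    node.visits += 1
    if not node.children:
        # leaf reached: expand it and score it
        node.children.append(Node())
        reward = random.random()  # stands in for a rollout / simulation
    else:
        # placeholder selection rule; the real one samples based on child data
        child = random.choice(node.children)
        reward = search(child)
    node.value += reward  # update visited nodes while unwinding the recursion
    return reward

root = Node()
for _ in range(100):
    search(root)
```

The key point is that every iteration touches only one root-to-leaf path, then updates exactly those nodes while the recursion unwinds.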
Current implementation
Below is my current implementation, with a few of the member functions for adding nodes, querying the depth or number of nodes in the tree, and so on.
classdef stree < handle
% A class for a tree object that acts like a reference
% parameter.
% The tree can be traversed in both directions by using the parent
% and children information.
% New nodes can be added to the tree. The object will automatically
% keep track of the number of nodes in the tree and increment the
% storage space as necessary.
properties (SetAccess = private)
% Hold the data at each node
Node = { [] };
% Index of the parent node. The root of the tree has a parent index
% equal to 0.
Parent = 0;
num_nodes = 0;
size_increment = 1;
maxSize = 1;
end
methods
function [obj, root_ID] = stree(data, init_siz)
% New object with only root content, with specified initial
% size
obj.Node = repmat({ data },init_siz,1);
obj.Parent = zeros(init_siz,1);
root_ID = 1;
obj.num_nodes = 1;
obj.size_increment = init_siz;
obj.maxSize = numel(obj.Parent);
end
function ID = addnode(obj, parent, data)
% Add child node to specified parent
if obj.num_nodes < obj.maxSize
% still have room for data
idx = obj.num_nodes + 1;
obj.Node{idx} = data;
obj.Parent(idx) = parent;
obj.num_nodes = idx;
else
% all preallocated elements are in use, reserve more memory
obj.Node = [
obj.Node
repmat({data},obj.size_increment,1)
];
obj.Parent = [
obj.Parent
parent
zeros(obj.size_increment-1,1)];
obj.num_nodes = obj.num_nodes + 1;
obj.maxSize = numel(obj.Parent);
end
ID = obj.num_nodes;
end
function content = get(obj, ID)
%% GET Return the contents of the given node IDs.
content = [obj.Node{ID}];
end
function obj = set(obj, ID, content)
%% SET Set the content of given node ID and return the modified tree.
obj.Node{ID} = content;
end
function IDs = getchildren(obj, ID)
% GETCHILDREN Return the list of ID of the children of the given node ID.
% The list is returned as a row vector.
IDs = find( obj.Parent(1:obj.num_nodes) == ID );
IDs = IDs';
end
function n = nnodes(obj)
% NNODES Return the number of nodes in the tree.
% Equal to the root plus all nodes that have a parent.
n = 1 + sum(obj.Parent(1:obj.num_nodes) ~= 0);
assert( obj.num_nodes == n);
end
function flag = isleaf(obj, ID)
% ISLEAF Return true if given ID matches a leaf node.
% A leaf node is a node that has no children.
flag = ~any( obj.Parent(1:obj.num_nodes) == ID );
end
function depth = depth(obj,ID)
% DEPTH return depth of tree under ID. If ID is not given, use
% root.
if nargin == 1
ID = 0;
end
if obj.isleaf(ID)
depth = 0;
else
children = obj.getchildren(ID);
NC = numel(children);
d = 0; % Depth from here on out
for k = 1:NC
d = max(d, obj.depth(children(k)));
end
depth = 1 + d;
end
end
end
end
However, performance at times is slow, with operations on the tree taking up most of my computation time. What specific ways would there be to make the implementation more efficient? It would even be possible to change the implementation to something else than the handle inheritance type if there are performance gains.
Profiling results with current implementation
As adding new nodes to the tree is the most typical operation (along with updating the data of a node), I did some profiling on that.
I ran the profiler on the following benchmarking code with Nd=6, Ns=10.
function T = benchmark(Nd, Ns)
% Tree benchmark. Nd: tree depth, Ns: number of nodes per layer
% Initialize tree
T = stree(rand, 10000);
add_layers(1, Nd);
function add_layers(node_id, num_layers)
if num_layers == 0
return;
end
child_id = zeros(Ns,1);
for s = 1:Ns
% add child to current node
child_id(s) = T.addnode(node_id, rand);
% recursively increase depth under child_id(s)
add_layers(child_id(s), num_layers-1);
end
end
end
Results from the profiler:
R2015b performance
It has been discovered that R2015b improves the performance of MATLAB's OOP features. I redid the above benchmark and indeed observed an improvement in performance:
So this is already good news, although further improvements are of course welcome ;)
Reserving memory differently
It was also suggested in the comments to use
obj.Node = [obj.Node; data; cell(obj.size_increment - 1,1)];
to reserve more memory rather than the current approach with repmat. This improved performance slightly. I should note that my benchmark code is for dummy data, and since the actual data is more complicated this is likely to help. Thanks! Profiler results below:
Questions on even further increasing performance
Perhaps there is an alternative way to maintain memory for the tree that is more efficient? Sadly, I typically don't know ahead of time how many nodes there will be in the tree.
Adding new nodes and modifying the data of existing nodes are the most typical operations I do on the tree. As of now, they actually take up most of the processing time of my main application. Any improvements on these functions would be most welcome.
Just as a final note: I would ideally like to keep the implementation in pure MATLAB, but options such as MEX or some of the integrated Java functionality may be acceptable.
TL;DR: You deep-copy the entire stored data on each insertion; initialise the Parent and Node arrays bigger than what you expect to need.
Your data does have a tree structure; however, you are not utilising this in your implementation. Instead, the implemented code is a computationally hungry version of a lookup table (actually two tables) that stores the data and the relational data for the tree.
The reasons I am saying this are the following:
To insert, you call stree.addnode(parent, data), which stores all data in the tree object stree's fields Node = {} and Parent = []
you seem to know beforehand which element in your tree you want to access, as the search code is not given (if you use stree.getchildren(ID) for it, I have some bad news)
once you have processed a node, you trace it back using find(), which is a linear search
By no means does that mean the implementation is a poor fit for the data; it may even be the best, depending on what you are doing. However, it does explain your memory allocation issues and gives hints on how to resolve them.
Keep Data as lookup table
One of the ways to store the data is to keep the underlying lookup table. I would only do this if you know the ID of the first element you want to modify without searching for it. This case allows you to make your structure more efficient in two steps.
First, initialise your arrays bigger than what you expect to need to store the data. If the lookup table's capacity is exceeded, a new one is initialised that is X fields larger, and a deep copy of the old data is made. If you need to expand capacity once or twice during all insertions it might not be an issue, but in your case a deep copy is made on every insertion!
Second, I would change the internal structure and merge the two tables Node and Parent. The reason for this is that back-propagation in your code takes O(depth_from_root * n), where n is the number of nodes in your table. This is because find() will iterate over the entire table for each parent.
Instead you can implement something similar to
table = cell(n,1) % n bigger then expected value
end_pointer = 1 % simple pointer to the first free value
function insert(data,parent_ID)
if end_pointer <= numel(table)
content.data = data;
content.parent = parent_ID;
table{end_pointer} = content;
end_pointer = end_pointer + 1;
else
% need more space, make sure its enough this time
table = [table; cell(end_pointer,1)];
insert(data,parent_ID);
end
end
function content = get_value(ID)
content = table{ID};
end
This instantly gives you access to the parent's ID without needing to find() it first, saving n iterations per step, so the effort becomes O(depth). If you do not know your initial node, then you have to find() it, which costs O(n).
Note that this structure has no need for isleaf(), depth(), nnodes() or getchildren(). If you still need those, I need more insight into what you want to do with your data, as this strongly influences a proper structure.
Tree Structure
This structure makes sense if you never know the first node's ID and thus always have to search for it.
The benefit is that the search for an arbitrary node works in O(depth), so searching is O(depth) instead of O(n), and back-propagation is O(depth^2) instead of O(depth + n). Note that depth can be anything from log(n) for a perfectly balanced tree (which may be possible depending on your data) to n for a degenerate tree that is just a linked list.
However, to suggest something proper I would require more insight, as every tree structure has its own niche. From what I can see so far, I'd suggest an unbalanced tree that is 'sorted' by the simple order given by a node's wanted parent. This may be further optimised depending on:
is it possible to define a total order on your data
how do you treat duplicate values (the same data appearing twice)
what scale is your data (thousands, millions, ...)
is a lookup/search always paired with back-propagation
how long are the chains of 'parent-child' in your data (or how balanced and deep the tree will be using this simple order)
is there always just one parent, or is the same element inserted twice with different parents
I'll happily provide example code for above tree, just leave me a comment.
EDIT:
In your case an unbalanced tree (constructed in parallel with the MCTS) seems to be the best option. The code below assumes that the data is split into state and score, and further that a state is unique. If it isn't, this will still work, but there is a possible optimisation to increase MCTS performance.
classdef node < handle
% A node for a tree in a MCTS
properties
state = {}; %some state of the search space that identifies the node
score = 0;
childs = cell(50,1);
num_childs = 0;
end
methods
function obj = node(state)
% for a new node simulate a score using MC
obj.score = simulate_from(state); % TODO implement simulation state -> finish
obj.state = state;
end
function value = update(obj)
% update this node using MC recursively
if obj.num_childs == numel(obj.childs)
% there are too many childs, we have to expand the table
obj.childs = [obj.childs; cell(obj.num_childs,1)];
end
if obj.do_exploration() || obj.num_childs == 0
% explore a potential state
state_to_explore = obj.explore();
%check if state has already been visited
terminate = false;
idx = 1;
while idx <= obj.num_childs && ~terminate
if obj.childs{idx}.state_equals(state_to_explore)
terminate = true;
end
idx = idx + 1;
end
%perform the corresponding action based on the search
if idx > obj.num_childs
% state has never been visited
% this action terminates the update recursion
% and creates a new leaf
obj.num_childs = obj.num_childs + 1;
obj.childs{obj.num_childs} = node(state_to_explore);
value = obj.childs{obj.num_childs}.calculate_value();
obj.update_score(value);
else
% state has been visited at least once
value = obj.childs{idx}.update();
obj.update_score(value);
end
else
% exploit what we know already
best_idx = 1;
for idx = 1:obj.num_childs
if obj.childs{idx}.score > obj.childs{best_idx}.score
best_idx = idx;
end
end
value = obj.childs{best_idx}.update();
obj.update_score(value);
end
value = obj.calculate_value();
end
function state = explore(obj)
%select a next state to explore, that may or may not be visited
%TODO
end
function bool = do_exploration(obj)
% decide if this node should be explored or exploited
%TODO
end
function bool = state_equals(obj, test_state)
% returns true if the nodes state is equal to test_state
%TODO
end
function update_score(obj, value)
% updates the score based on some value
%TODO
end
function value = calculate_value(obj)
% returns the value of this node to update previous nodes
%TODO
end
end
end
A few comments on the code:
depending on the setup, obj.calculate_value() might not be needed, e.g. if it is some value that can be computed from the children's scores alone
if a state can have multiple parents, it makes sense to reuse the node object and cover this in the structure
as each node knows all its children a subtree can be easily generated using node as root node
searching the tree (without any update) is a simple recursive greedy search
depending on the branching factor of your search, it might be worth visiting each possible child once (upon initialisation of the node) and later doing randsample(obj.childs,1) for exploration, as this avoids copying/reallocating the child array
the parent property is encoded as the tree is updated recursively, passing value to the parent upon finishing the update for a node
The only time I reallocate memory is when a single node has more than 50 childs, and I only do the reallocation for that individual node
This should run a lot faster, as it just worries about whatever part of the tree is chosen and does not touch anything else.
I know that this might sound stupid... but how about keeping the number of free nodes instead of the total number of nodes? This would only require comparison against a constant (zero), which is a single property access.
One other voodoo improvement would be moving .maxSize near .num_nodes, and placing both before the .Node cell. This way their position in memory won't change relative to the beginning of the object as the .Node property grows (the voodoo here being me guessing the internal implementation of objects in MATLAB).
Later Edit: When I profiled with the .Node property moved to the end of the property list, the bulk of the execution time was consumed by extending the .Node property, as expected (5.45 seconds, compared to 1.25 seconds for the comparison you mentioned).
You can try to allocate a number of elements that is proportional to the number of elements you have actually filled; this is the standard implementation of std::vector in C++:
obj.Node = [obj.Node; data; cell(q * obj.num_nodes,1)];
I don't remember exactly, but in MSVC q is 1, while it is 0.75 for GCC.
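The effect of proportional growth is easy to demonstrate by counting how many elements get copied during growth; `count_copies` and the growth factors below are illustrative, not taken from any particular compiler:

```python
def count_copies(n_inserts, grow):
    """Count how many elements are copied while appending n_inserts items
    into a preallocated buffer that grows by grow(capacity) when full."""
    capacity, used, copies = 1, 0, 0
    for _ in range(n_inserts):
        if used == capacity:
            copies += used                 # the deep copy of everything held so far
            capacity += grow(capacity)
        used += 1
    return copies

n = 10_000
fixed = count_copies(n, lambda cap: 1)        # fixed increment, like size_increment
geometric = count_copies(n, lambda cap: cap)  # double each time (q = 1)
```

With a fixed increment the total copying is O(n^2) (about 50 million element copies for 10,000 inserts here), while proportional growth keeps it O(n) (about 16 thousand).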
This is a solution using Java. I don't like it very much, but it does the job. I implemented the example you extracted from Wikipedia.
import javax.swing.tree.DefaultMutableTreeNode
% Let's create our example tree
top = DefaultMutableTreeNode([11,21])
n1 = DefaultMutableTreeNode([7,10])
top.add(n1)
n2 = DefaultMutableTreeNode([2,4])
n1.add(n2)
n2 = DefaultMutableTreeNode([5,6])
n1.add(n2)
n3 = DefaultMutableTreeNode([2,3])
n2.add(n3)
n3 = DefaultMutableTreeNode([3,3])
n2.add(n3)
n1 = DefaultMutableTreeNode([4,8])
top.add(n1)
n2 = DefaultMutableTreeNode([1,2])
n1.add(n2)
n2 = DefaultMutableTreeNode([2,3])
n1.add(n2)
n2 = DefaultMutableTreeNode([2,3])
n1.add(n2)
n1 = DefaultMutableTreeNode([0,3])
top.add(n1)
% Element to look for, your implementation will be recursive
searching = [0 1 1];
idx = 1;
node(idx) = top;
for item = searching,
% Java transposes the matrices, remember to transpose back when you are reading
node(idx).getUserObject()'
node(idx+1) = node(idx).getChildAt(item);
idx = idx + 1;
end
node(idx).getUserObject()'
% We made a new test...
newdata = [0, 1]
newnode = DefaultMutableTreeNode(newdata)
% ...so we expand our tree at the last node we searched
node(idx).add(newnode)
% The change has to be propagated (this is where your recursion returns)
for it=length(node):-1:1,
itnode=node(it);
val = itnode.getUserObject()'
newitemdata = val + newdata
itnode.setUserObject(newitemdata)
end
% Let's see if the new values are correct
searching = [0 1 1 0];
idx = 1;
node(idx) = top;
for item = searching,
node(idx).getUserObject()'
node(idx+1) = node(idx).getChildAt(item);
idx = idx + 1;
end
node(idx).getUserObject()'
You are given three sorted arrays (in ascending order); you are required to find a triplet (one element from each array) such that the distance is minimal.
Distance is defined like this :
If a[i], b[j] and c[k] are three elements then
distance = max{abs(a[i]-b[j]),abs(a[i]-c[k]),abs(b[j]-c[k])}
Please give a solution with O(n) time complexity.
Linear time algorithm:
double MinimalDistance(double[] A, double[] B, double[] C)
{
int i,j,k = 0;
double min_value = infinity;
double current_val;
int opt_indexes[3] = {0, 0, 0);
while(i < A.size || j < B.size || k < C.size)
{
current_val = calculate_distance(A[i],B[j],C[k]);
if(current_val < min_value)
{
min_value = current_val;
opt_indexes[1] = i;
opt_indexes[2] = j;
opt_indexes[3] = k;
}
if(A[i] < B[j] && A[i] < C[k] && i < A.size)
i++;
else if (B[j] < C[k] && j < B.size)
j++;
else
k++;
}
return min_value;
}
In each step you check the current distance, then increment the index of the array currently pointing to the minimal value. Each array is iterated through at most once, which means the running time is O(A.size + B.size + C.size).
If you want the optimal indexes instead of the minimal value, you can return opt_indexes instead of min_value.
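For reference, a sketch of the same three-pointer scan in Python (the function name and returning both the distance and the indexes are my own choices):

```python
def minimal_distance(a, b, c):
    """Three-pointer scan over the sorted arrays. The distance can only
    shrink by advancing the pointer at the current minimum, so each array
    is walked at most once: O(len(a) + len(b) + len(c))."""
    i = j = k = 0
    best = float("inf")
    best_idx = (0, 0, 0)
    while i < len(a) and j < len(b) and k < len(c):
        lo = min(a[i], b[j], c[k])
        hi = max(a[i], b[j], c[k])
        if hi - lo < best:
            best, best_idx = hi - lo, (i, j, k)
        # advance whichever pointer currently holds the minimum value
        if a[i] == lo:
            i += 1
        elif b[j] == lo:
            j += 1
        else:
            k += 1
    return best, best_idx
```

For example, `minimal_distance([1, 4, 10], [2, 15, 20], [10, 12])` finds the triplet (10, 15, 10) with distance 5.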
Suppose we had just one sorted array; then the 3 consecutive elements with the smallest possible distance would be the desired solution. Now that we have three arrays, just merge them all into one big sorted array ABC (this can be done in O(n) by the merge operation from merge-sort), keeping a flag with each element to record which original array it belongs to. Now you have to find three 'consecutive' elements in an array like this:
a1,a2,b1,b2,b3,c1,b4,c2,c3,c4,b5,b6,a3,a4,a5,....
and here consecutive means they belong to the 3 different groups in consecutive order, e.g. a2,b3,c1 or c4,b6,a3.
Now, finding these three elements is not hard. Clearly, in each triple the smallest and the greatest element should come from the last element of the first group and the first element of the last group. E.g. for the groups [c2,c3,c4],[b5,b6],[a3,a4,a5], we don't need to check a4,a5,c2,c3; it is clear that the possible solution in this case is among c4,[b5,b6],a5. We also don't need to compare c4 or a5 with b5,b6, as the distance is determined by a5-c4 (in this group). So we can start from the left, keep the last visited value of each group, and update the best possible solution in each iteration.
Example (first I should say that I didn't write the code, because I think that is the OP's task, not mine):
Suppose we have this sequences after sorted array:
a1,a2,b1,b2,b3,c1,b4,c2,c3,c4,b5,b6,a3,a4,a5,....
let iterate step by step:
We just need to keep track of the last item seen from each array: a keeps track of the current best a_i, b of b_i, and c of c_i. Suppose at first a=b=c=-1;
in the first step a will be a1; in the next steps:
a=a2,b=-1,c=-1
a=a2,b=b1,c=-1
a=a2,b=b2,c=-1
a=a2,b=b3,c=-1,
a=a2,b=b3,c=c1,
At this point we save the current pointers (a2,b3,c1) as the best value for the difference so far.
In the next step:
a=a2,c=c1,b=b4
Now we compare the difference b4-a2 with the previously best option; if it is better, we save these pointers as the solution so far, and proceed:
a=a2,b=b4,c=c2 (again compare and if needed update the best solution),
a=a2,b=b4,c=c3 (again ....)
a=a2,b=b4,c=c4 (again ....)
a=a2, b=b5,c=c4, ....
OK, if it is not clear from the text: after the merge we have (I'll suppose all arrays have at least one element):
solution = infinite;
a=b=c=-1,
bestA=bestB=bestC=1;
for (int i=0;i<ABC.Length;i++)
{
if(ABC[i].type == "a") // type is a flag determines
// who is the owner of this element
{
a=ABC[i].Value;
if (b!=-1&&c!=-1)
{
if (max(|a-b|,|b-c|,|a-c|) < solution)
{
solution = max(|a-b|,|b-c|,|a-c|);
bestA= a,bestB = b,bestC = c;
}
}
}
// and two more if for type "b" and "c"
}
Surely there are more elegant algorithms than this, but I saw you had a problem with your link, so I guess this straightforward way of looking at the problem makes it easier; afterwards you can understand your own link.
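The merge-and-sweep idea above can be sketched in Python, using `heapq.merge` for the O(n) three-way merge and a last-seen-per-array triple for the sweep (the names are mine):

```python
import heapq

def minimal_distance_merge(a, b, c):
    """Merge the labelled sorted arrays, then sweep once, keeping the last
    value seen from each array. Whenever all three arrays have been seen,
    the candidate distance is max(last) - min(last)."""
    labelled = heapq.merge(((v, 0) for v in a),
                           ((v, 1) for v in b),
                           ((v, 2) for v in c))
    last = [None, None, None]              # last value seen per source array
    best = float("inf")
    for value, owner in labelled:
        last[owner] = value
        if None not in last:
            best = min(best, max(last) - min(last))
    return best
```

On the same example as before, `minimal_distance_merge([1, 4, 10], [2, 15, 20], [10, 12])` also gives 5, matching the three-pointer scan.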
When drawing at random from a set of values in succession, where a drawn value is allowed to be drawn again, a given value has (of course) a small chance of being drawn twice (or more) in immediate succession. That causes an issue for the purposes of a given application, and we would like to eliminate this chance. Any algorithmic ideas on how to do so (simple/efficient)?
Ideally we would like to set a threshold say as a percentage of the size of the data set:
Say the size of the set of values is N=100 and the threshold is T=10%; then if a given value is drawn in the current draw, it is guaranteed not to show up again in the next N*T=10 draws.
Obviously this restriction introduces bias into the random selection. We don't mind that a proposed algorithm introduces further bias into the randomness of the selection; what really matters for this application is that the selection is just random enough to appear so to a human observer.
As an implementation detail, the values are stored as database records, so database table flags/values can be used, or maybe external memory structures. Answers about the abstract case are welcome too.
Edit:
I just hit this other SO question here, which has good overlap with my own. Going through the good points there.
Here's an implementation that does the whole process in O(1) (for a single element) without any bias:
The idea is to treat the last K elements of the array A (which contains all the values) like a queue. We draw a value from the first N-K values of A, which is the random value, and swap it with the element at position N-Pointer, where Pointer represents the head of the queue and resets after it has cycled through K elements.
To eliminate any bias in the first K draws, the random value is drawn between 1 and N-Pointer instead of N-K, so this virtual queue grows in size at each draw until it reaches size K (e.g. after 3 draws the possible values appear in A between indexes 1 and N-3, and the suspended values occupy indexes N-2 to N).
All operations are O(1) for drawing a single element, and there is no bias throughout the entire process.
void DrawNumbers(val[] A, int K)
{
N = A.size;
random Rnd = new random;
int Drawn_Index;
int Count_To_K = 1;
int Pointer = K;
while (stop_drawing_condition)
{
if (Count_To_K <= K)
{
Drawn_Index = Rnd.NextInteger(1, N-Pointer);
Count_To_K++;
}
else
{
Drawn_Index = Rnd.NextInteger(1, N-K)
}
Print("drawn value is: " + A[Drawn_Index])
Swap(A[Drawn_Index], A[N-Pointer])
Pointer--;
if (Pointer < 1) Pointer = K;
}
}
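A Python sketch of the same scheme; note it simplifies the warm-up phase by always drawing from the first N-K slots, so the first K draws sample a slightly smaller pool than in the pseudocode above:

```python
import random

def draw_stream(values, k, n_draws, rng=random):
    """Yield n_draws values from `values`; a drawn value cannot reappear
    within the next k draws. The last k slots of the working array hold the
    k most recent draws; each draw is taken from the first n-k slots and
    swapped into the oldest of those slots, releasing that slot's occupant
    back into the drawable region."""
    a = list(values)
    n = len(a)
    assert 0 < k < n
    pointer = 0                            # oldest queue slot offset, cycles 0..k-1
    for _ in range(n_draws):
        idx = rng.randrange(n - k)         # draw from the drawable region
        slot = n - 1 - pointer             # queue slot to recycle this step
        yield a[idx]
        a[idx], a[slot] = a[slot], a[idx]  # suspend drawn value, release old one
        pointer = (pointer + 1) % k
```

A drawn value sits in its queue slot until the pointer cycles back to it k draws later, so the gap between two appearances of the same value is always greater than k.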
My previous suggestion, using a list and an actual queue, depends on the remove method of the list, which I believe can be at best O(log N) (using an array to implement a self-balancing binary tree), as the list has to have direct access to indexes.
void DrawNumbers(list N, int K)
{
queue Suspended_Values = new queue;
random Rnd = new random;
int Drawn_Index;
while (stop_drawing_condition)
{
if (Suspended_Values.count == K)
N.add(Suspended_Values.Dequeue());
Drawn_Index = Rnd.NextInteger(1, N.size) // random integer between 1 and the number of values in N
Print("drawn value is: " + N[Drawn_Index]);
Suspended_Values.Enqueue(N[Drawn_Index]);
N.Remove(Drawn_Index);
}
}
I assume you have an array, A, that contains the items you want to draw. At each time period you randomly select an item from A.
You want to prevent any given item, i, from being drawn again within some k iterations.
Let's say that your threshold is 10% of A.
So create a queue, call it drawn, that can hold threshold items. Also create a hash table that contains the drawn items. Call the hash table hash.
Then:
do
{
i = Get random item from A
if (i in hash)
{
// we have drawn this item recently. Don't draw it.
continue;
}
draw(i);
if (drawn.count == k)
{
// remove oldest item from queue
temp = drawn.dequeue();
// and from the hash table
hash.remove(temp);
}
// add new item to queue and hash table
drawn.enqueue(i);
hash.add(i);
} while (forever);
The hash table exists solely to increase lookup speed. You could do without the hash table if you're willing to do a sequential search of the queue to determine if an item has been drawn recently.
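The loop above can be sketched in Python, with a `set` playing the role of the hash table and a `deque` as the queue (the function name and fixed draw count are my own framing):

```python
import random
from collections import deque

def draw_with_cooldown(items, k, n_draws, rng=random):
    """Rejection version of the loop above: redraw while the candidate is in
    the recently-drawn window. The deque keeps draw order; the set gives O(1)
    membership tests (the role the hash table plays above)."""
    recent = deque()                       # the k most recently drawn items
    recent_set = set()
    out = []
    for _ in range(n_draws):
        while True:
            i = rng.choice(items)
            if i not in recent_set:
                break
        out.append(i)
        if len(recent) == k:               # forget the oldest item
            recent_set.discard(recent.popleft())
        recent.append(i)
        recent_set.add(i)
    return out
```

Unlike the swap-based schemes, this one redraws on a collision, so its per-draw cost grows as the threshold approaches the pool size.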
Say you have n items in your list, and you don't want any of the k last items to be selected.
Select at random from an array of size n-k, and use a queue of size k to stick the items you don't want to draw (adding to the front and removing from the back).
All operations are O(1).
---- clarification ----
Give n items, and a goal of not redrawing any of the last k draws, create an array and queue as follows.
Create an array A of size n-k, and put n-k of your items in it (chosen at random, or seeded however you like).
Create a queue (linked list) Q and populate it with the remaining k items, again in random order or whatever order you like.
Now, each time you want to select an item at random:
Choose a random index from your array, call this i.
Give A[i] to whomever is asking for it, and add it to the front of Q.
Remove the element from the back of Q, and store it in A[i].
Everything is O(1) after the array and linked list are created, which is a one-time O(n) operation.
Now, you might wonder, what do we do if we want to change n (i.e. add or remove an element).
Each time we add an element, we either want to grow the size of A or of Q, depending on our logic for deciding what k is (i.e. fixed value, fixed fraction of n, whatever...).
If Q increases, the result is trivial: we just append the new element to Q. In this case I'd probably append it to the end of Q so that it gets into play ASAP. You could also put it in A, kicking some element out of A and appending that element to the end of Q.
If A increases, you can use a standard technique for increasing arrays in amortized constant time. E.g., each time A fills up, we double it in size, and keep track of the number of cells of A that are live. (look up 'Dynamic Arrays' in Wikipedia if this is unfamiliar).
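The three steps above can be sketched in Python for fixed n and k (ignoring the resizing discussion; the factory-function shape is my own choice):

```python
import random
from collections import deque

def make_drawer(items, k, rng=random):
    """Array-plus-queue scheme described above: A holds the n-k drawable
    items, Q the k most recently drawn. Each draw hands out A[i], pushes it
    onto the front of Q, and recycles Q's back element into A[i]; every
    step is O(1)."""
    pool = list(items)
    rng.shuffle(pool)
    a = pool[:len(pool) - k]               # drawable items
    q = deque(pool[len(pool) - k:])        # suspended items, front = newest

    def draw():
        i = rng.randrange(len(a))
        drawn = a[i]
        q.appendleft(drawn)                # suspend the drawn item
        a[i] = q.pop()                     # oldest suspended item re-enters play
        return drawn

    return draw
```

Each drawn item must travel the full length of Q before it can be drawn again, which gives exactly the k-draw cooling-off guarantee.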
Set-based approach:
If the threshold is low (say below 40%), the suggested approach is:
Have a set and a queue of the last N*T generated values.
When generating a value, keep regenerating it until it's not contained in the set.
When pushing to the queue, pop the oldest value and remove it from the set.
Pseudo-code:
generateNextValue:
// once we're generated more than N*T elements,
// we need to start removing old elements
if queue.size >= N*T
element = queue.pop
set.remove(element)
// keep trying to generate random values until it's not contained in the set
do
value = getRandomValue()
while set.contains(value)
set.add(value)
queue.push(value)
return value
If the threshold is high, you can just turn the above on its head:
Have the set represent all values not in the last N*T generated values.
Invert all set operations (replace all set adds with removes and vice versa and replace the contains with !contains).
Pseudo-code:
generateNextValue:
if queue.size >= N*T
element = queue.pop
set.add(element)
// we can now just get a random value from the set, as it contains all candidates,
// rather than generating random values until we find one that works
value = getRandomValueFromSet()
//do
// value = getRandomValue()
//while !set.contains(value)
set.remove(value)
queue.push(value)
return value
Shuffle-based approach: (somewhat more complicated than the above)
If the threshold is high, the above may take long, as it could keep generating values that already exist.
In this case, some shuffle-based approach may be a better idea.
Shuffle the data.
Repeatedly process the first element.
When doing so, remove it and insert it back at a random position in the range [N*T, N].
Example:
Let's say N*T = 5 and all possible values are [1,2,3,4,5,6,7,8,9,10].
Then we first shuffle, giving us, let's say, [4,3,8,9,2,6,7,1,10,5].
Then we remove 4 and insert it back in some index in the range [5,10] (say at index 5).
Then we have [3,8,9,2,4,6,7,1,10,5].
And continue removing the next element and insert it back, as required.
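The shuffle-and-reinsert process can be sketched with a plain Python list, which costs O(n) per draw (the function name and parameters are mine):

```python
import random

def shuffled_cooldown_stream(values, t_frac, n_draws, rng=random):
    """Shuffle once, then repeatedly pop the first element and reinsert it
    at a random position in [N*T, N], so it cannot resurface for at least
    N*T draws. Plain-list insert/pop are O(n) per draw."""
    pool = list(values)
    rng.shuffle(pool)
    n = len(pool)
    window = int(n * t_frac)               # N*T: the cooling-off distance
    assert 0 < window < n
    out = []
    for _ in range(n_draws):
        value = pool.pop(0)                # next drawn value
        pool.insert(rng.randrange(window, n), value)
        out.append(value)
    return out
```

Because the reinserted element lands at index N*T or later and moves forward by at most one position per draw, it cannot be drawn again within the window.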
Implementation:
An array is fine if we don't care about efficiency a whole lot; getting one element will cost O(n) time.
To make this efficient we need to use an ordered data structure that supports efficient random position inserts and first position removals. The first thing that comes to mind is a (self-balancing) binary search tree, ordered by index.
We won't be storing the actual index, the index will be implicitly defined by the structure of the tree.
At each node we will have a count of children (+ 1 for itself) (which needs to be updated on insert / remove).
An insert can be done as follows: (ignoring the self-balancing part for the moment)
// calling function
insert(node, value):
    insert(node, N*T, value)

insert(node, offset, value):
    // node.left / node.right can be defined as 0 if the child doesn't exist
    leftCount = node.left.count - offset
    rightCount = node.right.count
    // Since we're here, it means we're inserting in this subtree,
    // thus update the count
    node.count++
    // Nodes to the left are within N*T, so simply go right
    // leftCount is the difference between N*T and the number of nodes on the left,
    // so this needs to be the new offset (and +1 for the current node)
    if leftCount < 0
        insert(node.right, -leftCount+1, value)
    else
        // generate a random number,
        // on [0, leftCount), insert to the left
        // on [leftCount, leftCount], insert at the current node
        // on (leftCount, leftCount + rightCount], insert to the right
        sum = leftCount + rightCount + 1
        random = getRandomNumberInRange(0, sum)
        if random < leftCount
            insert(node.left, offset, value)
        else if random == leftCount
            // we don't actually want to update the count here
            node.count--
            newNode = new Node(value)
            newNode.count = node.count + 1
            // TODO: swap node and newNode's data so that node's parent will now point to newNode
            newNode.right = node
            newNode.left = null
        else
            insert(node.right, -leftCount+1, value)
To visualize inserting at the current node:
If we have something like:
    4
   /
  1
 / \
2   3
And we want to insert 5 where 1 is now, it will do this:
    4
   /
  5
   \
    1
   / \
  2   3
Note that when a red-black tree, for example, performs operations to keep itself balanced, none of these involve comparisons, so it doesn't need to know the order (i.e. index) of any already-inserted elements. But it will have to update the counts appropriately.
The overall efficiency will be O(log n) to get one element.
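To illustrate how the implicit index works in such a counted tree, here is a hedged Python sketch of retrieval by position (a plain order-statistic lookup; insertion and rebalancing are omitted, and the names are mine, not from the answer above):

```python
class Node:
    """Node of an order-statistic tree: a node's index is implicit,
    derived from subtree counts rather than stored explicitly."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
        self.count = 1  # size of this node's subtree, including itself

def kth(node, k):
    """Return the value at 0-based position k within node's subtree."""
    left = node.left.count if node.left else 0
    if k < left:
        return kth(node.left, k)
    if k == left:
        return node.value
    return kth(node.right, k - left - 1)
```

Insertion at a random position works the same way in reverse: descend left or right by comparing the target index against the left subtree's count, incrementing counts along the path.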
I'd put all "values" into a "list" of size N, then shuffle the list and retrieve values from the top of the list. Then you "insert" the retrieved value at a random position with any index >= N*T.
Unfortunately I'm not truly a math-guy :( So I simply tried it (in VB, so please take it as pseudocode ;) )
Public Class BiasedRandom
    Private prng As New Random
    Private offset As Integer
    Private l As New List(Of Integer)

    Public Sub New(ByVal size As Integer, ByVal threshold As Double)
        If threshold <= 0 OrElse threshold >= 1 OrElse size < 1 Then Throw New System.ArgumentException("Check your params!")
        offset = size * threshold
        ' initial fill
        For i = 0 To size - 1
            l.Add(i)
        Next
        ' shuffle "Algorithm P"
        For i = size - 1 To 1 Step -1
            Dim j = prng.Next(0, i + 1)
            Dim tmp = l(i)
            l(i) = l(j)
            l(j) = tmp
        Next
    End Sub

    Public Function NextValue() As Integer
        Dim tmp = l(0)
        l.RemoveAt(0)
        l.Insert(prng.Next(offset, l.Count + 1), tmp)
        Return tmp
    End Function
End Class
Then a simple check:
Public Class Form1
    Dim z As Integer = 10
    Dim k As BiasedRandom

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        k = New BiasedRandom(z, 0.5)
    End Sub

    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
        Dim j(z - 1) As Integer
        For i = 1 To 10 * 1000 * 1000
            j(k.NextValue) += 1
        Next
        Stop
    End Sub
End Class
And when I check out the distribution, it looks okay enough to the untrained eye ;)
EDIT:
After thinking about RonTeller's argument, I have to admit that he is right. I don't think there is a performance-friendly way to achieve the desired behaviour while maintaining a good (no more biased than required) random order.
I came to the following idea:
Given a list (array whatever) like this:
0123456789 ' not shuffled to make clear what I mean
We return the first element which is 0. This one must not come up again for 4 (as an example) more draws but we also want to avoid a strong bias. Why not simply put it to the end of the list and then shuffle the "tail" of the list, i.e. the last 6 elements?
1234695807
We now return the 1 and repeat the above steps.
2340519786
And so on and so on. Since removing and inserting is kind of unnecessary work, one could use a simple array and a "pointer" to the actual element. I have changed the code from above to give a sample. It's slower than the first one, but should avoid the mentioned bias.
Public Function NextValue() As Integer
    Static current As Integer = 0
    ' only shuffling a part of the list
    For i = current + l.Count - 1 To current + 1 + offset Step -1
        Dim j = prng.Next(current + offset, i + 1)
        Dim tmp = l(i Mod l.Count)
        l(i Mod l.Count) = l(j Mod l.Count)
        l(j Mod l.Count) = tmp
    Next
    current += 1
    Return l((current - 1) Mod l.Count)
End Function
EDIT 2:
Finally (hopefully), I think the solution is quite simple. The below code assumes that there is an array of N elements called TheArray which contains the elements in random order (could be rewritten to work with sorted array). The value DelaySize determines how long a value should be suspended after it has been drawn.
Public Function NextValue() As Integer
    Static current As Integer = 0
    Dim SelectIndex As Integer = prng.Next(0, TheArray.Count - DelaySize)
    Dim ReturnValue = TheArray(SelectIndex)
    TheArray(SelectIndex) = TheArray(TheArray.Count - 1 - current Mod DelaySize)
    TheArray(TheArray.Count - 1 - current Mod DelaySize) = ReturnValue
    current += 1
    Return ReturnValue
End Function
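To make the swap-into-the-tail mechanics concrete, here is a rough Python port of this last version (the generator form and the names `biased_draws` and `delay` are mine, not from the original code):

```python
import random

def biased_draws(values, delay):
    """Yield values so that a drawn value is parked in one of the last
    `delay` slots of the array and cannot be drawn again until it is
    swapped back out into the eligible zone."""
    arr = list(values)
    random.shuffle(arr)
    n = len(arr)
    current = 0
    while True:
        i = random.randrange(n - delay)        # pick from the eligible zone
        slot = n - 1 - current % delay         # next slot of the delay buffer
        arr[i], arr[slot] = arr[slot], arr[i]  # park the drawn value in the tail
        yield arr[slot]
        current += 1
```

Because the pick is always taken from indices below `n - delay` and the drawn value sits in the tail for `delay` draws, any `delay + 1` consecutive draws are guaranteed distinct.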
Originally, I had basically written an essay with a question at the end, so I'm going to cram it down to this: which is better (being really nit-picky here)?
A)
int min = someArray[0][0];
for (int i = 0; i < someArray.length; i++)
    for (int j = 0; j < someArray[i].length; j++)
        min = Math.min(min, someArray[i][j]);
-or-
B)
int min = Integer.MAX_VALUE;
for (int i = 0; i < someArray.length; i++)
    for (int j = 0; j < someArray[i].length; j++)
        min = Math.min(min, someArray[i][j]);
I reckon B is faster, saving an instruction or two by initializing min to a constant value instead of using the indexer. It also feels less redundant - no comparing someArray[0][0] to itself...
As an algorithm, which is better/more valid?
EDIT: Assume that the array is not null and not empty.
EDIT2: Fixed a couple of careless errors.
Both of these algorithms are correct (assuming, of course, the array is nonempty). I think that version A works more generally, since for some types (strings, in particular) there may not be a well-defined maximum value.
The reason that these algorithms are equivalent has to do with a cool mathematical object called a semilattice. To motivate semilattices, here are a few cool properties of max that happen to hold true:
max is idempotent, so applying it to the same value twice gives back that original value: max(x, x) = x
max is commutative, so it doesn't matter what order you apply it to its arguments: max(x, y) = max(y, x)
max is associative, so when taking the maximum of three or more values it doesn't matter how you group the elements: max(max(x, y), z) = max(x, max(y, z))
These laws also hold for the minimum as well, as well as many other structures. For example, if you have a tree structure, the "least upper bound" operator also satisfies these constraints. Similarly, if you have a collection of sets and set union or intersection, you'd find that these constraints hold as well.
If you have a set of elements (for example, integers, strings, etc.) and some binary operator defined over them with the above three properties (idempotency, commutativity, and associativity), then you have found a structure called a semilattice. The binary operator is then called a meet operator (or sometimes a join operator depending on the context).
The reason that semilattices are useful is that if you have a (finite) collection of elements drawn from a semilattice and want to compute their meet, you can do so by using a loop like this:
Element e = data[0];
for (i in data[1 .. n])
    e = meet(e, data[i])
The reason that this works is that, because the meet operator is commutative and associative, we can apply the meet across the elements in any order that we want. Applying it one element at a time as we walk across the elements of the array in order thus produces the same value as if we had shuffled the array elements first, or iterated in reverse order, etc. In your case, the meet operator was "max" or "min," and since they satisfy the laws for meet operators described above, the above code will correctly compute the max or min.
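As a quick illustration of this order-independence (using set intersection as the meet, which also satisfies all three laws):

```python
from functools import reduce

# Set intersection is idempotent, commutative, and associative,
# so folding it across a collection in any order gives the same meet.
sets = [{1, 2, 3, 4}, {2, 3, 4, 5}, {2, 3, 6}]
forward = reduce(lambda a, b: a & b, sets)
backward = reduce(lambda a, b: a & b, list(reversed(sets)))
assert forward == backward == {2, 3}
```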
To address your initial question, we need a bit more terminology. You were curious about whether or not it was better or safer to initialize your initial guess of the minimum value to be the maximum possible integer. The reason this works is that we have the cool property that
min(int.MAX_VALUE, x) = min(x, int.MAX_VALUE) = x
In other words, if you compute the meet of int.MAX_VALUE and any other value, you get the second value back. In mathematical terms, this is because int.MAX_VALUE is the top element of the meet semilattice. More formally, a top element for a meet semilattice is an element (denoted ⊤) satisfying
meet(⊤, x) = meet(x, ⊤) = x
If you use max instead of min, then the top element would be int.MIN_VALUE, since
max(int.MIN_VALUE, x) = max(x, int.MIN_VALUE) = x
Because applying the meet operator to ⊤ and any other element produces that other element, if you have a meet semilattice with a well-defined top element, you can rewrite the above code to compute the meet of all the elements as
Element e = Element.TOP;
for (i in data[0 .. n])
    e = meet(e, data[i])
This works because after the first iteration, e is set to meet(e, data[0]) = meet(Element.TOP, data[0]) = data[0] and the iteration proceeds as usual. Consequently, in your original question, it doesn't matter which of the two loops you use; as long as there is at least one element defined, they produce the same value.
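In Python terms, for example, `math.inf` plays the role of the top element when the meet is `min` over floats (a small illustrative sketch, not part of the original question):

```python
from functools import reduce
import math

values = [3.5, -1.25, 7.0]
# math.inf is the top element for min: meet(top, x) = x,
# so it is a safe initial accumulator for the fold.
with_top = reduce(min, values, math.inf)
from_first = reduce(min, values)  # version A: start from the first element
assert with_top == from_first == -1.25
```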
That said, not all semilattices have a top element. Consider, for example, the set of all strings where the meet operator is defined as
meet(x, y) = x    if x lexicographically precedes y
           = y    otherwise
For example, meet("a", "ab") = "a", meet("dog", "cat") = "cat", etc. In this case, there is no string s that satisfies the property meet(s, x) = meet(x, s) = x, and so the semilattice has no top element. In that case, you cannot possibly use the second version of the code, because there is no top element that you can initialize the initial value to.
However, there is a very cute technique you can use to fake this, which actually does end up getting used a bit in practice. Given a semilattice with no top element, you can create a new semilattice that does have a top element by introducing a new element ⊤ and arbitrarily defining that meet(⊤, x) = meet(x, ⊤) = x. In other words, this element is specially crafted to be a top element and has no significance otherwise.
In code, you can introduce an element like this implicitly by writing
bool found = false;
Element e;
for (i in data[0 .. n]) {
    if (!found) {
        found = true;
        e = i;
    } else {
        e = meet(e, i);
    }
}
This code works by having an external boolean found keep track of whether or not we have seen the first element yet. If we haven't, then we pretend that the element e is this new top element. Computing the meet of this top element and the array element produces the array element, and so we can just set the element e to be equal to that array element.
Hope this helps! Sorry if this is too theoretical... I just happen to like math. :-)
B is better here: if someArray happened to be empty, A would throw a runtime error. But A and B both could have an issue: if someArray is null (and this wasn't checked in previous lines of code), both will throw exceptions.
From a practical standpoint, I like option A marginally better because if the data type being dealt with changes in the future, changing the initial value is one less thing that needs to be updated (and therefore, one less thing that can go wrong).
From an algorithmic purity standpoint, I have no idea if one is better than the other.
By the way, option A should have its initialization like so:
int min = someArray[0][0];
If I have a size N array of objects, and I have an array of unique numbers in the range 1...N, is there any algorithm to rearrange the object array in-place in the order specified by the list of numbers, and yet do this in O(N) time?
Context: I am doing a quick-sort-ish algorithm on objects that are fairly large in size, so it would be faster to do the swaps on indices than on the objects themselves, and only move the objects in one final pass. I'd just like to know if I could do this last pass without allocating memory for a separate array.
Edit: I am not asking how to do a sort in O(N) time, but rather how to do the post-sort rearranging in O(N) time with O(1) space. Sorry for not making this clear.
I think this should do:
static <T> void arrange(T[] data, int[] p) {
    boolean[] done = new boolean[p.length];
    for (int i = 0; i < p.length; i++) {
        if (!done[i]) {
            T t = data[i];
            for (int j = i;;) {
                done[j] = true;
                if (p[j] != i) {
                    data[j] = data[p[j]];
                    j = p[j];
                } else {
                    data[j] = t;
                    break;
                }
            }
        }
    }
}
Note: This is Java. If you do this in a language without garbage collection, be sure to delete done.
If you care about space, you can use a BitSet for done. I assume you can afford an additional bit per element because you seem willing to work with a permutation array, which is several times that size.
This algorithm copies instances of T n + k times, where k is the number of cycles in the permutation. You can reduce this to the optimal number of copies by skipping those i where p[i] = i.
The approach is to follow the "permutation cycles" of the permutation, rather than indexing the array left-to-right. But since you do have to begin somewhere, every time a new permutation cycle is needed, the search for unpermuted elements is left-to-right:
// Pseudo-code
N : integer, N > 0  // N is the number of elements
swaps : integer [0..N]
data[N] : array of object
permute[N] : array of integer [-1..N] denoting permutation (used element is -1)
next_scan_start : integer;

swaps = 0;
next_scan_start = 0;
while (swaps < N)
{
    // Search for the next index that is not-yet-permuted.
    for (idx_cycle_search = next_scan_start;
         idx_cycle_search < N;
         ++idx_cycle_search)
        if (permute[idx_cycle_search] >= 0)
            break;

    next_scan_start = idx_cycle_search + 1;

    // This is a provable invariant. In short, number of non-negative
    // elements in permute[] equals (N - swaps)
    assert( idx_cycle_search < N );

    // Completely permute one permutation cycle, 'following the
    // permutation cycle's trail'. This is O(N)
    while (permute[idx_cycle_search] >= 0)
    {
        swap( data[idx_cycle_search], data[permute[idx_cycle_search]] )
        swaps++;
        old_idx = idx_cycle_search;
        idx_cycle_search = permute[idx_cycle_search];
        permute[old_idx] = -1;
        // Also '= -idx_cycle_search - 1' could be used rather than '-1'
        // and would allow reversal of these changes to the permute[] array
    }
}
Do you mean that you have an array of objects O[1..N] and then you have an array P[1..N] that contains a permutation of numbers 1..N and in the end you want to get an array O1 of objects such that O1[k] = O[P[k]] for all k=1..N ?
As an example, if your objects are letters A,B,C...,Y,Z and your array P is [26,25,24,..,2,1] is your desired output Z,Y,...C,B,A ?
If yes, I believe you can do it in linear time using only O(1) additional memory. Reversing elements of an array is a special case of this scenario. In general, I think you would need to consider decomposition of your permutation P into cycles and then use it to move around the elements of your original array O[].
If that's what you are looking for, I can elaborate more.
EDIT: Others already presented excellent solutions while I was sleeping, so no need to repeat it here. ^_^
EDIT: My O(1) additional space is indeed not entirely correct. I was thinking only about "data" elements, but in fact you also need to store one bit per permutation element, so if we are precise, we need O(log n) extra bits for that. But most of the time using a sign bit (as suggested by J.F. Sebastian) is fine, so in practice we may not need anything more than we already have.
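The sign-bit trick can be sketched as follows (a Python illustration; the index shift by one is my own device so that position 0 also has a usable sign, and note that Python integers are not fixed-width, so this only mimics the C idiom):

```python
def rearrange_signbit(objects, permutation):
    """In-place version of  result = [objects[p] for p in permutation],
    marking visited cycle positions by negating the (shifted) index
    instead of keeping a separate boolean array."""
    p = [x + 1 for x in permutation]  # shift to 1..n so 0 can carry a sign
    for i in range(len(p)):
        if p[i] < 0:                  # already handled in an earlier cycle
            continue
        first, j = objects[i], i
        while True:
            nxt = p[j] - 1
            p[j] = -p[j]              # mark position j as visited
            if nxt == i:              # cycle closed
                objects[j] = first
                break
            objects[j], j = objects[nxt], nxt
```

If the permutation is needed afterwards, it can be restored with `p[i] = -p[i] - 1` over the negated entries.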
If you didn't mind allocating memory for an extra hash of indexes, you could keep a mapping of original location to current location to get a time complexity of near O(n). Here's an example in Ruby, since it's readable and pseudocode-ish. (This could be shorter or more idiomatically Ruby-ish, but I've written it out for clarity.)
#!/usr/bin/ruby

objects = ['d', 'e', 'a', 'c', 'b']
order = [2, 4, 3, 0, 1]
cur_locations = {}

order.each_with_index do |orig_location, ordinality|
  # Find the current location of the item.
  cur_location = orig_location
  while not cur_locations[cur_location].nil? do
    cur_location = cur_locations[cur_location]
  end

  # Swap the items and keep track of whatever we swapped forward.
  objects[ordinality], objects[cur_location] = objects[cur_location], objects[ordinality]
  cur_locations[ordinality] = orig_location
end

puts objects.join(' ')
That obviously does involve some extra memory for the hash, but since it's just for indexes and not your "fairly large" objects, hopefully that's acceptable. Since hash lookups are O(1), even though there is a slight bump to the complexity due to the case where an item has been swapped forward more than once and you have to rewrite cur_location multiple times, the algorithm as a whole should be reasonably close to O(n).
If you wanted you could build a full hash of original to current positions ahead of time, or keep a reverse hash of current to original, and modify the algorithm a bit to get it down to strictly O(n). It'd be a little more complicated and take a little more space, so this is the version I wrote out, but the modifications shouldn't be difficult.
EDIT: Actually, I'm fairly certain the time complexity is just O(n), since each ordinality can have at most one hop associated, and thus the maximum number of lookups is limited to n.
#!/usr/bin/env python
def rearrange(objects, permutation):
    """Rearrange `objects` inplace according to `permutation`.

    ``result = [objects[p] for p in permutation]``
    """
    seen = [False] * len(permutation)
    for i, already_seen in enumerate(seen):
        if not already_seen:  # start permutation cycle
            first_obj, j = objects[i], i
            while True:
                seen[j] = True
                p = permutation[j]
                if p == i:  # end permutation cycle
                    objects[j] = first_obj  # [old] p -> j
                    break
                objects[j], j = objects[p], p  # p -> j
The algorithm (as I've noticed after I wrote it) is the same as the one from #meriton's answer in Java.
Here's a test function for the code:
def test():
    import itertools
    N = 9
    for perm in itertools.permutations(range(N)):
        L = range(N)
        LL = L[:]
        rearrange(L, perm)
        assert L == [LL[i] for i in perm] == list(perm), (L, list(perm), LL)

    # test whether assertions are enabled
    try:
        assert 0
    except AssertionError:
        pass
    else:
        raise RuntimeError("assertions must be enabled for the test")

if __name__ == "__main__":
    test()
There's a histogram sort, though the running time is given as a bit higher than O(N): O(N log log N).
I can do it given O(N) scratch space -- copy to new array and copy back.
EDIT: I am aware of the existence of an algorithm that will proceed through. The idea is to perform the swaps on the array of integers 1..N while at the same time mirroring the swaps on your array of large objects. I just cannot find the algorithm right now.
The problem is one of applying a permutation in place with minimal O(1) extra storage: "in-situ permutation".
It is solvable, but an algorithm is not obvious beforehand.
It is described briefly as an exercise in Knuth, and for work I had to decipher it and figure out how it worked. Look at 5.2 #13.
For some more modern work on this problem, with pseudocode:
http://www.fernuni-hagen.de/imperia/md/content/fakultaetfuermathematikundinformatik/forschung/berichte/bericht_273.pdf
I ended up writing a different algorithm for this, which first generates a list of swaps to apply an order and then runs through the swaps to apply it. The advantage is that if you're applying the ordering to multiple lists, you can reuse the swap list, since the swap algorithm is extremely simple.
void make_swaps(vector<int> order, vector<pair<int,int>> &swaps)
{
    // order[0] is the index in the old list of the new list's first value.
    // Invert the mapping: inverse[0] is the index in the new list of the
    // old list's first value.
    vector<int> inverse(order.size());
    for(int i = 0; i < order.size(); ++i)
        inverse[order[i]] = i;

    swaps.resize(0);
    for(int idx1 = 0; idx1 < order.size(); ++idx1)
    {
        // Swap list[idx] with list[order[idx]], and record this swap.
        int idx2 = order[idx1];
        if(idx1 == idx2)
            continue;
        swaps.push_back(make_pair(idx1, idx2));

        // list[idx1] is now in the correct place, but whoever wanted the value we moved out
        // of idx2 now needs to look in its new position.
        int idx1_dep = inverse[idx1];
        order[idx1_dep] = idx2;
        inverse[idx2] = idx1_dep;
    }
}

template<typename T>
void run_swaps(T data, const vector<pair<int,int>> &swaps)
{
    for(const auto &s: swaps)
    {
        int src = s.first;
        int dst = s.second;
        swap(data[src], data[dst]);
    }
}

void test()
{
    vector<int> order = { 2, 3, 1, 4, 0 };
    vector<pair<int,int>> swaps;
    make_swaps(order, swaps);

    vector<string> data = { "a", "b", "c", "d", "e" };
    run_swaps(data, swaps);
}