I want to implement a Random Boolean Network (RBN) on an FPGA. The problem statement is given below.
A Random Boolean Network consists of N randomly connected nodes, each of which has a binary state: on or off (1 or 0). In NK networks, every node receives exactly K inputs chosen randomly from other nodes in the network, and in most cases self-connection is allowed (Figure 1 shows an example of the topology of an RBN).
The state of each node in the network at time t+1 is determined by the states of its inputs at time t through a randomly generated Boolean function, which can be represented with a look-up table for each node (see Table 1). Typically the Boolean functions do not change throughout the lifetime of the network. In an RBN the Boolean function for each node maps each of the 2^K possible input combinations to an output state of 0 or 1.
The network is given a random initial state by assigning each node a value of 0 or 1. The value of each node at the next time step is determined using the state of its inputs and each node's Boolean function. All nodes are updated at the same time (synchronously). The new state of the network is generated and then used to determine the next state and so on. An example of the network activity is displayed in Figure .
I really want to implement this in Verilog. Can anyone give me some details about RBNs?
After reading the whole statement about the Boolean network, I can divide my problem into three steps:
First, implement the Boolean network with a given node function.
Second, implement a random function block.
Third, integrate the two. By the way, should the design generate a different network every time upon reset, or at compile time?
A random Boolean function f(d_0, d_1, ..., d_(k-1)) can be represented as a 2^k-to-1 multiplexer with 2^k random data inputs and k "select" inputs chosen from the N outputs of the other multiplexers. For the timing requirements, place latches on the lines. I have drawn the basic block for your network:
You are going to need N blocks like this with interconnected Q's. You will have to implement the combination generator "N choose K", based on an input representing the choice number, and connect the top inputs to some kind of random-number generator (compile-time or not, your choice).
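Before writing any Verilog, it can help to validate the update rule in software. Below is a minimal behavioral sketch in Python (all names are my own); each node's LUT plays the role of the 2^k-to-1 multiplexer described above:

import random

def make_rbn(n, k, seed=0):
    """Build a random N-K network: each node gets K randomly chosen
    input nodes (repeats and self-connections allowed, as in the statement)
    and a random 2^K-entry truth table (LUT)."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    luts = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, luts

def step(state, inputs, luts):
    """Synchronous update: every node reads its inputs from the current
    state and looks its next value up in its LUT."""
    next_state = []
    for ins, lut in zip(inputs, luts):
        index = 0
        for src in ins:                      # pack the K input bits into a LUT index
            index = (index << 1) | state[src]
        next_state.append(lut[index])
    return next_state

n, k = 8, 2
inputs, luts = make_rbn(n, k)
state = [random.randint(0, 1) for _ in range(n)]   # random initial state
for t in range(10):
    print(t, state)
    state = step(state, inputs, luts)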
I need an algorithm to transform game rules (pen-and-paper role-playing) into probabilities; specifically, conditional constructs built from if-then-else, with conditions made of Boolean operators (not, and, or), relational operators (==, >=, <=, <, >), dice rolls, and Boolean values.
Example:
var a = diceRoll(d8,d10,d12) // a shaker full of dice:
                             // an 8-sided, a 10-sided and a 12-sided die,
                             // values added together
var w = true
var result = (
    if (a>=20) then 10.3994
    else if (a>=14 and w) then 8.23
    else if (a>=8 and diceRoll(d6)>3) then 5.22
    else 0
)
should be transformed programmatically into a formula for the expected average result, like:
var result = diceProbabilityGreaterThan(a,20)*10.3994
+(diceProbabilityGreaterThan(a,14)-diceProbabilityGreaterThan(a,20))*8.23
+ ..
I know how to map a single relational operator on a single diceRoll to a probability (diceProbabilityGreaterThan), and I know how I could transform this specific simple example by hand, but I have trouble finding a general transformation scheme for any given rule. The hard part of this problem, to me, is the dependent probabilities (like a>=20 ... a>=14).
More background:
I know that I could use a Monte Carlo method, but I tried it and it's too slow for my use case.
The rules are already data structures, so there is no parsing required.
The dice may be exploding, meaning a 6-sided die falling on 6 will be rolled again and added up, so the maximum shaker result is not bounded by a finite number.
The rules contain no loop control structures like while or for; they just form a possibly nested if-then-else tree.
The Boolean and number values in the conditions are constants.
The solution can be limited to just one dependent probability variable (like a in the example), but I'm also interested in whether a general solution exists for any number of dependent variables.
This question is a clone of https://math.stackexchange.com/questions/842458/map-if-then-else-to-probability because it was marked off-topic there.
What you want is to calculate the expected value of the function. This can be done recursively.
I assume you have the rules in a tree-like data structure. Then the initial call would just be root.CalculateExpectedValue().
There are three kinds of nodes:
Leaf nodes (that specify an actual value). CalculateExpectedValue() should return this very value for leaf nodes.
Variable definitions. These nodes have one child and return child.CalculateExpectedValue(). However, they have to introduce a variable declaration along with its probability mass function. The probability mass functions of all active variables must be passed as a parameter to CalculateExpectedValue(). More information on the probability mass function below.
Decisions. These nodes have two children. The probability of both cases can be calculated, given the probability mass functions of active variables. Then these nodes should return p * trueChild.CalculateExpectedValue() + (1 - p) * falseChild.CalculateExpectedValue(). Furthermore, they have to adjust the probability mass function of involved variables.
A probability mass function for a variable defines how likely it is for this variable to take a certain value. For a simple six-sided die, this would be 1 -> 1/6, 2 -> 1/6, 3 -> 1/6, and so on. It is probably easiest to store this function as a dictionary or map.
For the diceRoll function with more than one die, we have to be able to add two probability mass functions (e.g. the PMF for d8 + the PMF for d10, and later add d12). In order to do so, we create a new empty PMF. For each pair of elements from the two input distributions, we calculate the resulting sum (element1.value + element2.value) and its probability (element1.probability * element2.probability), and accumulate that probability in the new PMF.
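A minimal sketch of this step in Python, representing a PMF as a dict from value to probability (the representation and names are my own):

from collections import defaultdict

def pmf_add(pmf_a, pmf_b):
    """PMF of the sum of two independent rolls: convolve the two PMFs."""
    result = defaultdict(float)
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            result[va + vb] += pa * pb       # accumulate: several pairs give the same sum
    return dict(result)

d8 = {v: 1 / 8 for v in range(1, 9)}
d10 = {v: 1 / 10 for v in range(1, 11)}
d12 = {v: 1 / 12 for v in range(1, 13)}
shaker = pmf_add(pmf_add(d8, d10), d12)      # PMF of diceRoll(d8, d10, d12)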
Now we can create and modify PMFs for variable declaration nodes. We still need the behavior of decision nodes.
The first thing is to calculate the probability of a decision. That's rather easy: pick the PMF of the corresponding variable, iterate over all entries, and sum the probabilities of the entries for which the condition holds.
For the true child, we have to modify the PMF so that all entries where the condition is false are removed. For the false child, we have to remove the other entries. Afterwards we have to re-normalize the PMF (i.e., divide by the sum of the remaining probabilities). Be sure to create new PMFs; you don't want these modifications to interfere with other parts of the tree.
You could also propagate the cumulative probability to the leaf nodes. However, this is not necessary to calculate the expected value.
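Putting the pieces together, here is a compact Python sketch of the recursion. The tuple-based tree encoding and all names are my own, and conditions are assumed to refer to a single declared variable (a fresh roll such as diceRoll(d6)>3 would be folded in as an independent probability factor):

def condition_split(pmf, cond):
    """P(cond) plus copies of the PMF conditioned on cond being true/false."""
    p = sum(prob for v, prob in pmf.items() if cond(v))
    if p == 0.0 or p == 1.0:                 # degenerate case: the zero-weight side is unused
        return p, dict(pmf), dict(pmf)
    true_pmf = {v: prob / p for v, prob in pmf.items() if cond(v)}
    false_pmf = {v: prob / (1 - p) for v, prob in pmf.items() if not cond(v)}
    return p, true_pmf, false_pmf

def expected_value(node, pmfs):
    """Nodes are ('leaf', value), ('var', name, pmf, child),
    or ('if', name, cond, true_child, false_child)."""
    if node[0] == 'leaf':
        return node[1]
    if node[0] == 'var':
        _, name, pmf, child = node
        return expected_value(child, {**pmfs, name: pmf})
    _, name, cond, true_child, false_child = node
    p, t_pmf, f_pmf = condition_split(pmfs[name], cond)
    return (p * expected_value(true_child, {**pmfs, name: t_pmf})
            + (1 - p) * expected_value(false_child, {**pmfs, name: f_pmf}))

d6 = {v: 1 / 6 for v in range(1, 7)}
tree = ('var', 'a', d6,
        ('if', 'a', lambda v: v >= 5, ('leaf', 10.0), ('leaf', 0.0)))
print(expected_value(tree, {}))              # (2/6) * 10 = 3.33...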
This question is an enhancement to the previous SO question.
Distance Calculation for massive number of devices/nodes
I have N mobile devices/nodes (say 100K) and I periodically obtain their location (latitude, longitude) values.
Some of the devices are "logically connected" to roughly M other devices (say 10 on average). My program periodically compares the distance between each device and its logically connected devices and determines if the distance is within a threshold (say 100 meters).
Furthermore, the number of logical connection types K can also be more than one (say 5 on average).
For example, A can be connected to B and C for a "parents" logic; A can also be connected to C, D, E, F for a "work" logic.
I need a robust algorithm to calculate these distances to the logically connected devices.
The complexity of the brute-force approach would be N*M*K (Θ(n^3) in terms of order).
The program does this every 3 seconds (all devices are mobile); thus, for instance, 100K * 10 * 5 = 5M calculations every 3 seconds, which is not good.
Any good/classical algorithms for this operation?
I decided to rewrite my answer after a bit more thought.
The complexity of your problem is not O(N^3) in the worst case, it is actually only O(N^2) in the worst case. It's also not O(N*M*K) but rather O(N*(M+K)), where O(M+K) is O(N). However, the real complexity of your problem is O(E) where E is the total number of logical connections (number of work connections + number of parent connections). Unless you want to approximate, your solution cannot be better than O(E). Your averages suggest that you likely have on the order of 5 million connections, which is on the order of O(N log N).
Your example uses two sets of logical connections. So you would simply cycle through each set and check whether the distance between the devices of each logical connection is within the threshold.
That being said, the example you gave and your assumed time complexity suggest you are interested in more than just whether the individual connections are within the threshold, but rather whether whole sets of connections are within the threshold. Specifically, in your example it would return True if the parents logic (A,B), (A,C) and the work logic (A,C), (A,D), (A,E), (A,F) are all True. In that case your best data structure would be a dictionary of dictionaries that looks like the following in Python (it includes the optimization below):
"parentsLogic[A][B] = (last position A, last position B, was within threshold)".
If it's common that the positions don't change much, you may obtain some run-time improvement by storing the previous positions along with whether they were within the threshold or not (a Boolean). The benefit is that you can simply return the previous result if the two positions haven't changed, and update them if they have.
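A sketch of that cache in Python, assuming a haversine helper for the distance (all names are mine):

import math

def haversine_m(p1, p2):
    """Great-circle distance in meters between two (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = p1, p2
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

THRESHOLD_M = 100.0
cache = {}   # cache[(a, b)] = (last position of a, last position of b, was within threshold)

def within_threshold(a, b, pos_a, pos_b):
    """Recompute the distance only if one of the endpoints has moved."""
    entry = cache.get((a, b))
    if entry is not None and entry[0] == pos_a and entry[1] == pos_b:
        return entry[2]                      # both positions unchanged: reuse the old answer
    result = haversine_m(pos_a, pos_b) <= THRESHOLD_M
    cache[(a, b)] = (pos_a, pos_b, result)
    return result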
You can use a brute-force algorithm, sort the results, and then use the top groups.
One thing you can do in addition to what was suggested in the answers to the previous question is to store a list of the nearby connected devices for every device and update it only for those devices that have moved by a significant distance since last update (and for the devices connected to those that have moved).
For example, if the threshold is 100 m, store a list of the connected devices within 200 m of every device, and update it for every device that has moved more than by 50 m since last update.
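A rough sketch of that bookkeeping in Python (dist can be any distance function, e.g. the haversine helper above; the 200 m / 50 m figures are from the example):

THRESHOLD_M = 100.0   # the actual proximity threshold
MARGIN_M = 200.0      # neighbor lists cover twice the threshold
REFRESH_M = 50.0      # rebuild a device's list once it has moved this far

last_anchor = {}      # device -> position at which its neighbor list was built
near = {}             # device -> connected devices within MARGIN_M of that position

def maybe_refresh(dev, pos, connected, positions, dist):
    """Rebuild dev's neighbor list only if dev has moved more than REFRESH_M
    since the list was built; per the answer, the lists of devices connected
    to a mover need the same treatment."""
    if dev in last_anchor and dist(last_anchor[dev], pos) <= REFRESH_M:
        return                               # the old list is still guaranteed valid
    last_anchor[dev] = pos
    near[dev] = [c for c in connected[dev] if dist(pos, positions[c]) <= MARGIN_M]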
Let G be a DAG with n vertices and m edges, given by an adjacency matrix. I need to calculate its transitive closure in the form of a matrix as well. We have a computer on which each word is b bits, and I need to find an algorithm that calculates the transitive closure in O(n^2 + nm/b) time.
I'm not really sure I understand what "bits" means here or how I can use them.
Adding the algorithm for finding the transitive closure of a DAG:
TransitiveForDAG(Graph G)
    int T[1...n, 1...n] = {0, ..., 0}
    List L <- TopologicalSort(G)
    for each v in reverse(L)
        T[v,v] <- 1
        for each u in Adj[v]
            for j <- 1, ..., n do
                T[v,j] <- T[v,j] or T[u,j]
You say you don't know what bits mean, so let's start with that.
A bit is the smallest unit of digital information - a 0 or a 1.
A word is the unit of data processed by a computer at once. Processors don't take and process individual bits, but small chunks of them. Most of today's computer architectures use words of 32 or 64 bits.
Now, how to work with words of binary data? In most programming languages, you'll use a numeric data type to store the data. To manipulate it, most languages provide bitwise operators - bitwise OR (|) is the one needed here.
So, how to make your algorithm faster? Look at the T matrix. It can only have values of 0 or 1 - a single bit is enough to store that information. You're processing fields of the matrix one by one; every time you execute the last line of your algorithm, you only use one bit from the v-th row and one from the u-th row.
As said before, the processor has to read a whole word to read and process each of those bits. That's inefficient; mostly you wouldn't care about such a detail, but here it's in a critical place - the last line is in the innermost cycle and will be executed many, many times.
Now, how to be more efficient? Change the way you store the data - use a type with the length of a word as the data type for your matrix. Store b values from your original matrix in every value of the new one - this is called packing. Because of the way your algorithm works, you'll want to pack them by rows - the first value in the i-th row will contain the first b values from the i-th row of the original matrix.
Apart from that, you only have to change the innermost cycle of the algorithm - it will iterate over the words instead of individual fields, and inside you'll process whole words at once using bitwise OR:
T[v,j] <- T[v,j] | T[u,j]
The innermost cycle is what generates the time complexity of the algorithm. You've now managed to iterate b times fewer, so the complexity becomes O(n^2 + nm/b).
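For illustration, here is the packed algorithm in Python, where an arbitrary-precision integer stands in for a row of n/b machine words (all names are mine):

def topological_sort(n, adj):
    """DFS-based topological sort; adj maps each vertex to its successor list."""
    seen, order = [False] * n, []
    def dfs(v):
        seen[v] = True
        for u in adj[v]:
            if not seen[u]:
                dfs(u)
        order.append(v)                      # postorder: v after all its successors
    for v in range(n):
        if not seen[v]:
            dfs(v)
    return order[::-1]

def transitive_closure(n, adj):
    """O(n^2 + nm/b): row v of T is packed into one integer, bit j set
    iff v reaches j, so the inner loop becomes a single whole-row OR."""
    rows = [0] * n
    for v in reversed(topological_sort(n, adj)):
        rows[v] |= 1 << v                    # v reaches itself
        for u in adj[v]:
            rows[v] |= rows[u]               # T[v,*] <- T[v,*] or T[u,*]
    return rows

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
for v, row in enumerate(transitive_closure(4, adj)):
    print(v, format(row, '04b'))             # bit j set <=> v reaches j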
For a simple graph, each entry in the adjacency matrix is either 0 or 1 and can be represented by one bit. This allows a compact representation of the matrix by packing b entries into each computer word. The challenge then is to implement matrix multiplication (to compute the closure) using the correct bit manipulation operators.
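A sketch of one such multiplication with rows packed into integers (the packing convention is my own); squaring (A or I) about log2(n) times then yields the closure:

def bool_mat_mult(a, b, n):
    """Boolean matrix product over packed rows: out[i] has bit j set iff
    some k has bit k set in a[i] and bit j set in b[k]. ORing whole rows
    of b replaces the innermost loop over j."""
    out = [0] * n
    for i in range(n):
        row, k, acc = a[i], 0, 0
        while row:
            if row & 1:
                acc |= b[k]
            row >>= 1
            k += 1
        out[i] = acc
    return out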
I'm trying to create an algorithm to solve the following problem:
Input is an unsorted list of sets containing pairs (key, value) of ints. The first of each pair is positive and unique within the set.
I want to find an algorithm to split the input sets so the sets can be ordered such that for each key the value is nondecreasing in the set order.
There is a trivial solution, which is to split the sets into individual values and sort them; I'd like something more efficient in terms of the number of sets which are split.
Are there any similar problems you have encountered and/or techniques you can suggest?
Does the optimal (minimum number of splits) solution sound like it is possible in polynomial time?
Edit: In the example, the "<=" operator indicates a constraint on the sets as a whole, whereby for each key (100, 101, 102) the corresponding values are equal to or greater than the values in previous sets (or the key is omitted from the set). I.e., extracting the values for each key using the order from the output sets gives:
Key 100 {0, 1}
Key 101 {2, 3}
Key 102 {10, 15}
A*
I propose using A* to find an optimal solution. Build the order of split sets incrementally from left to right, minimizing the number of sets required to achieve this.
A* visits states based on some heuristic estimate of the total cost. I propose that a state is described by the totality of all the pairs already included in the order as we have it so far. If all values for every key are different, then you can represent this information rather concisely by simply storing the last value for each key. Otherwise you'll have to somehow take care of equal values, so you know which ones were already included and which ones were not. For every state you maintain some representation of the best order leading to it, but that may get updated along the way while the state remains the same.
The heuristic should be an estimate of the total cost of the path from the beginning through the current state to the goal. It may be too low, but must never be too high. In our case, the heuristic should count the number of (possibly split) sets included in the order so far, and add to that the number of (unsplit) sets still waiting for insertion. As the remaining sets may need splitting, this might be too low, but as you can never have fewer sets than those still waiting for insertion, it is a suitable heuristic.
Now you have some priority queue of states, ordered by the value of this heuristic. You extract minimal items from it, and know that the moment you extract a state from the queue, the cost up to that state cannot decrease any more, so the path up to that state is optimal. Now you examine which other states can be reached from this one: which pairs can come next in the order of split sets? For each remaining set which has pairs that are ready to be included, you create a new subsequent state, taking all the pairs from the set which are ready. The cost so far increases by one. If you manage to take a whole set, without splitting, then the estimate for the remaining cost decreases by one.
For this new state, you check whether it is already present in your priority queue. If it is, and its previous cost was higher than the one just computed, then you update its cost and the optimal path leading to it. Make sure the priority key changes its position accordingly ("decrease key"). If the state wasn't present in the queue before, then add it to the queue.
Dijkstra
Come to think of it, this is the same as running Dijkstra's algorithm with the number of splits as the cost. And as each edge has either cost zero or cost one, you can implement it even more easily, without any priority queue at all. Instead, you can use two sets, called S₀ and S₁, where all elements in S₀ require the same number of splits, and all elements in S₁ require one more split. Roughly sketched in pseudocode:
S₀ = ∅ (empty set)
S₁ = ∅
add initial state (no pairs added yet, all sets remain to be added) to S₀
while True
    while S₀ ≠ ∅
        x = take and remove any element from S₀
        if x is the target state (all pairs included in the order) then
            return the path information associated with it
        for (r: those sets which remain to be added in state x)
            if we can take r as a whole then
                let y be the state obtained by taking r as the next set in the order
                if y is in S₁, remove it
                add y to S₀
            else if we can add only some elements from r then
                let y be the state obtained by taking as many elements from r as possible
                if y is not in S₀, add it to S₁
    S₀ = S₁
    S₁ = ∅
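For reference, the same zero/one edge-cost idea is often implemented with a double-ended queue instead of the two sets S₀ and S₁. A generic sketch in Python, with the state and its neighbors left abstract:

from collections import deque

def zero_one_bfs(start, is_goal, neighbors):
    """Shortest path where every edge costs 0 or 1. neighbors(state) yields
    (next_state, cost) pairs with cost in {0, 1}. Cost-0 moves go to the
    front of the deque and cost-1 moves to the back, so states are expanded
    in nondecreasing cost order, exactly as with Dijkstra."""
    dist = {start: 0}
    dq = deque([start])
    while dq:
        x = dq.popleft()
        if is_goal(x):
            return dist[x]
        for y, cost in neighbors(x):
            d = dist[x] + cost
            if y not in dist or d < dist[y]:
                dist[y] = d
                if cost == 0:
                    dq.appendleft(y)
                else:
                    dq.append(y)
    return None                              # goal unreachable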
Suppose I have data presented with a variable-length encoding, where I can retrieve an item by parsing some virtual binary tree and stopping when I reach the item (similar to Huffman encoding). There is an unknown number of items (in the best case, only the upper limit is known). Is there an algorithm to generate uniformly distributed numbers? The problem is that a coin-based algorithm will give a non-uniform result in this case; for example, if there's a number encoded as 101 and a number encoded as 10010101, the latter will appear very rarely compared to the former.
UPDATE: In other words, I have a set of at most N elements (but maybe fewer), where every element can be addressed with an arbitrary number of bits (in accordance with information theory, so if one element is encoded 101, then no other element can be encoded with the same prefix). So it's more like a binary trie where I go left or right depending on a bit and at some moment reach the data item. I want to get a sequence of random numbers addressed with this technique, but their distribution should be uniform (the example of why randomly choosing left/right won't work is above: the numbers 101 and 10010101).
Thanks
Max
I can think of three basic methods, one of which involves frequent reguessing and one of which involves keeping extra information. I think that doing one or the other of these things is unavoidable. I'm going to begin with the extra information one:
In each node, store a number count which represents the number of leaf descendants it has (the number of leaves in its subtree). To pick a node, you'll draw a number n between 1 and root.count and walk down the tree, deciding whether to go left or right by comparing n to the left child's count. Here's the algorithm:
n := random integer between 1 and root.count
node := root
while node.count != 1
    if n <= node.left.count
        node := node.left
    else
        n := n - node.left.count
        node := node.right
So, essentially, we're imposing a left-to-right ordering on all leaves and selecting the nth one from the left. This is fairly quick, taking only O(depth of tree) time, which is likely the best we can do without doing something like also building a vector which contains all the node labels. This also adds an overhead of O(depth of tree) to any changes to the tree, since the counts must be corrected. If you're going the other way, never changing the tree at all, but going to be selecting random nodes a lot, just bite the bullet and put all of the node labels in a vector. That way you can select a random one in O(1) after O(N) initial set-up time.
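A minimal Python sketch of this count-based selection (the Node layout and names are my own; a full binary trie is assumed, as with Huffman codes):

import random

class Node:
    """Binary trie node; count = number of leaves in this subtree."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        if left is None and right is None:
            self.count = 1                   # a leaf counts itself
        else:
            self.count = left.count + right.count

def random_leaf(root):
    """Select the n-th leaf from the left for a uniform n in 1..root.count."""
    n = random.randint(1, root.count)
    node = root
    while node.count != 1:
        if n <= node.left.count:
            node = node.left
        else:
            n -= node.left.count
            node = node.right
    return node

root = Node(Node(Node(), Node()), Node())    # leaves at codes 00, 01 and 1
print(random_leaf(root))                     # each leaf comes up with probability 1/3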
If, however, you don't want to use up any storage space, here's an alternative with a lot of reguessing. First find a bound (which I'll label B) for the depth of the tree (we can use N-1 if needed, but obviously that's a very loose bound; the tighter the bound, the faster the algorithm runs). Next we're going to generate a possible node label in a random but even way. There are 2^(B+1)-1 possibilities. It's not just 2^B because, for example, the strings "0011" and "11" are completely different strings. As a result, we need to count all possible binary strings of length between 0 and B. Obviously, we have 2^i strings of length i. So for strings of length B or less, we have sum(i=0 to B){2^i} = 2^(B+1)-1. So, we can just choose a number between 0 and 2^(B+1)-2 and then find the corresponding node label. Of course, the mapping from numbers to node labels isn't trivial, so I'll provide it here.
We convert the number we have chosen into a string of bits in the ordinary way. Then, reading from the left, if the first digit is a 0, then the node label is the remaining string to the right (possibly the empty string, which is a valid node label although not likely to be in use). If the first digit is a 1, then we throw it away and repeat this process. Thus, if B=4, then the node label "0001" would come from the number "00001". The node label "001" would come from the number "10001". The node label "01" would come from the number "11001". The node label "1" would come from the number "11101". And the node label "" would come from the number "11110". We did not include the number 2^(B+1)-1 ("11111" in this case) which has no valid interpretation under this scheme. I'll leave it as an exercise to the reader to prove to themselves that every string from length 0 to B can be represented under this scheme. Rather than trying to prove it, I'll just assert that it will work.
So now we have a node label. The next step is to see if that label exists by traversing the tree. If it does, we're done. If it doesn't, then choose a new number and start over (that's the reguessing part). It's likely to have to reguess a lot, since only a small fraction of legal node labels will be in use, but this won't skew the fairness, just increase the time.
Here's a pseudo-code version of this process in four functions:
function num_to_binary_list(n, digits) =
    // produces the bits of n, least significant first, padded to 'digits' bits
    if digits == 0 return ()
    if n mod 2 == 0 return 0 :: num_to_binary_list(n/2, digits-1)
    else return 1 :: num_to_binary_list((n-1)/2, digits-1)

function binary_list_to_node_label_list(l) =
    // strip 1s up to and including the first 0; the rest is the node label
    if l.head() == 0 return l.tail()
    else return binary_list_to_node_label_list(l.tail())

function check_node_label_list_against_tree(str, node) =
    if node == null return false, null
    if str.isEmpty()
        if node.isLeaf() return true, node
        else return false, null
    if str.head() == 0 return check_node_label_list_against_tree(str.tail(), node.left)
    else return check_node_label_list_against_tree(str.tail(), node.right)

function generate_random_node(tree, b) =
    found := false
    while (not found)
        x := random(0, 2**(b+1)-2) // we're assuming that this random selects inclusively
        node_label := binary_list_to_node_label_list(num_to_binary_list(x, b+1))
        found, node := check_node_label_list_against_tree(node_label, tree)
    return node
The timing analysis for this, of course, is pretty horrendous. Basically, the while loop will run an average of (2^(B+1)-1)/N times. So, in the worst case, it's O((2^N)/N) which is terrible. In the best case, B would be on the order of log(N), so it would be roughly O(1), but that requires that the tree be fairly balanced which it may not be. Still, if you really want no extra space, this method does that.
I don't really think that you can do better than this last method without storing some information. It sounds appealing to be able to traverse the tree, making random decisions as you go, but without storing additional information about the structure, you're just not going to be able to do that. Every time you make a branching decision, you could have just one node on the left side and a million nodes on the right side or it could have a million nodes on the left side and just one on the right side. Because those are both possible and you don't know which is the case, there's simply no way to make an even random decision between the two sides. Obviously 50-50 doesn't work and any other choice is going to be similarly problematic.
So, if you don't want extra space, the second method will work, but be slow. If you don't mind adding some extra space, the first method will work and be fast. And, as I said earlier, if you're not going to be changing the tree and you'll be selecting a lot of random nodes, then bite the bullet and just traverse the tree and stick all leaf nodes in a self-growing array or vector and then pick from that.