Markov chains and Random walks on top of biological data - algorithm

I come from a biology background, and thus I have some difficulty understanding (intuitively?) some of the ideas in this paper. I really tried my best to decipher it step by step using a lot of Google and YouTube, but now I feel it's time to turn to the professionals in this field.
Before filling the whole universe with (unordered) questions, let me set the whole thing down and try to introduce you to the subject, while at the same time explaining what I have understood so far from my own research.
Microarrays
For those who have no idea what this is: you can imagine that it is literally an array (matrix) where each cell contains a probe for a specific gene. To make a long story short, by the end of a microarray experiment you have a matrix (in computational terms) in which each column represents a sample and each row a different gene, while the contents of the matrix are the expression values of the genes for each sample.
Pathways
In biology, a pathway (or gene set) is what they call a set of genes that interact with each other, forming a small network responsible for a specific function. These pathways are not isolated; they talk/interact with each other too. The first thing the paper does is expand the initial pathway (let us call it the target pathway) by including genes from other pathways that might interact with it.
Procedure
1.
Let's assume now that we have a G × S matrix, where G stands for genes and S for samples. We construct a gene co-expression network (G × G) using as weights the Pearson correlation coefficients between pairs of genes (a). This can also be represented as an undirected weighted graph.
2.
For each gene (row or column) we calculate the weighted degree (d), which is nothing more than the sum of all correlation coefficients of that gene.
3.
From the two previous matrices they construct the transition matrix, producing the probabilities (P) of transiting from one gene to another; as far as I can tell, each entry of the correlation matrix is divided by the weighted degree of its row, i.e. P_ij = a_ij / d_i.
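To pin down my understanding of steps 1-3, here is a tiny Python sketch of how I would compute them (expr is a made-up G × S matrix; whether the paper uses raw or absolute correlation values is one of the things I am unsure about):

import numpy as np

expr = np.random.rand(5, 20)   # made-up G x S expression matrix (rows = genes)

A = np.corrcoef(expr)          # step 1: G x G co-expression weights (Pearson)
d = A.sum(axis=1)              # step 2: weighted degree of each gene
P = A / d[:, None]             # step 3: transition matrix, each row sums to 1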
Q1. Why do they call this transition probability? Is there any intuitive way to see this as a probability in the biological context?
4.
Since we have the whole transition matrix, we can define a subnetwork of the initial network that we want to expand; say it consists of 15 genes. In this step they use formula (3) of the paper, which transforms the values of the initial transition matrix as described. They set the probability to 1 on the nodes that belong to the selected subnetwork, because they define them as absorbing states.
Q2. In that same formula (3), I cannot understand what the second condition does. When should the probability be 0? Intuitively, in my opinion, all nodes that are not in the subnetwork should keep the value P_ij as their probability.
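For what it's worth, here is how I currently read formula (3) in code (the function and variable names are mine, and this may be exactly where I go wrong):

import numpy as np

def make_absorbing(P, absorbing):
    # my reading of formula (3): rows of genes in the target subnetwork
    # become identity rows, so a walk that reaches them never leaves
    Q = P.copy()
    for i in absorbing:
        Q[i, :] = 0.0   # the second condition? leaving an absorbing gene is impossible...
        Q[i, i] = 1.0   # ...except to itself, with probability 1
    return Q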
5.
After that, the newly constructed transition matrix is shown in formula (4) of the paper, and I managed to understand it using this excellent article.
6.
Here is where everything gets blurrier for me and where I need the most help. What I imagine at this step is that the algorithm starts from a random node and keeps walking around the network. In order to construct a relevance function (what does that mean exactly?), they first calculate the joint probability of visiting a node/edge E(i,j), denoted as:
On the other hand, they seem to calculate another probability, the probability of a walk of length L starting in x, denoted as:
7.
In the next step, they divide the previously calculated probabilities to calculate the number of times a random walk starting in x uses the transition from i to j, and I don't really understand what this means.
After that step, I lost their reasoning at all :-P.
I'm not expecting an expert to come, open my mind and make me understand the procedure. What I'm expecting are some guidelines, hints, ideas, useful resources or more intuitive approaches to understanding the whole procedure. Once I fully understand it, I will try to implement it in R or Python.
So any ideas / critiques are welcome.
Thanks.

Related

How is the class center for a decision attribute calculated in class center based fuzzification algorithm?

I came across the class center based fuzzification algorithm on page 16 of this research paper on TRFDT. However, I fail to understand what is happening in step 2 of this algorithm (titled in the paper as Algorithm 2: Fuzzification). If someone could explain it with a small example, it would certainly be helpful.
It is not clear from your question which parts of the article you understand, and IMHO the article is not written in the clearest possible way, so this is going to be a long answer.
Let's start with some intuition behind this article. In short I'd say it is: "let's add fuzziness everywhere to decision trees".
How does a decision tree work? We have a classification problem, and we say that instead of analyzing all attributes of a data point in a holistic way, we'll analyze them one by one in an order defined by the tree, and navigate the tree until we reach some leaf node. The label at that leaf node is our prediction. So the trick is how to build a good tree, i.e. a good order of attributes and good splitting points. This is a well studied problem, and the idea is to build a tree that encodes as much information as possible by some metric. There are several metrics, and this article uses entropy, which is similar to the widely used information gain.
The next idea is that we can make the classification (i.e. the split of values into classes) fuzzy rather than exact (aka "crisp"). The idea here is that in many real life situations not all members of a class are equally representative: some are more "core" examples and some are more "edge" examples. If we can capture this difference, we can provide a better classification.
And finally there is a question of how similar the data points are (generally or by some subset of attributes) and here we can also have a fuzzy answer (see formulas 6-8).
So the idea of the main algorithm (Algorithm 1) is the same as in the ID3 tree: recursively find the attribute a* that classifies the data best and perform the best split along that attribute. The main difference is in how the information gain for the best-attribute selection is measured (see the heuristic in formulas 20-24), and that, because of fuzziness, the usual stopping rule of "only one class left" doesn't work anymore, so another entropy (Kosko fuzzy entropy in 25) is used to decide whether it is time to stop.
Given this skeleton of Algorithm 1, there are quite a few parts that you can (or should) select:
How do you measure μ(ai)τ(Cj)(x) used in (20)? This is a measure of how well x represents the class Cj with respect to attribute ai; note that here being not in Cj and far from the points of Cj is also good. There are two obvious choices: the lower bounds (16 and 18) and the upper bounds (17 and 19).
How do you measure μRτ(x, y) used in (16-19)? Given that R is induced by ai, this becomes μ(ai)τ(x, y), which is a measure of the similarity between two points with respect to attribute ai. Here you can choose one of the metrics (6-8).
How do you measure μCi(y) used in (16-19)? This is the measure of how well the point y fits the class Ci. If you already have the data as a fuzzy classification, there is nothing you need to do here. But if your input classification is crisp, then you have to somehow produce μCi(y) from that, and this is what Algorithm 2 does.
There is a trivial solution of μCj(xi) = "1 if xi ∈ Cj and 0 otherwise", but this is not fuzzy at all. The process of building fuzzy data is called "fuzzification". The idea behind Algorithm 2 is that we assume every class Cj is actually some kind of cluster in the space of attributes, so we can measure the degree of membership μCj(xi) from the distance of xi to the center of the cluster cj (the closer we are, the higher the membership should be, so it is really some inverse of a distance). Note that since distance is measured over attributes, you should normalize your attributes somehow, or one of them might dominate the distance. And this is exactly what Algorithm 2 does:
it estimates the center of the cluster for class Cj as the center of mass of all known points in that class, i.e. just the average of all points by each coordinate (attribute);
it calculates the distances from each point xi to each estimated class center cj;
looking at the formula at step #12, it uses the inverse square of the distance as a measure of proximity and just normalizes the value, because for fuzzy sets Sum[over all Cj](μCj(xi)) should be 1.
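A minimal numpy sketch of that reading of Algorithm 2 (assuming the attributes are already normalized; the epsilon guard against a zero distance is my addition):

import numpy as np

def fuzzify(X, y):
    # X: n x d attribute matrix, y: crisp class labels
    classes = np.unique(y)
    centers = np.array([X[y == c].mean(axis=0) for c in classes])  # class centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # squared distances
    inv = 1.0 / np.maximum(d2, 1e-12)            # inverse square distance as proximity
    return inv / inv.sum(axis=1, keepdims=True)  # normalize so each row sums to 1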

A good randomizer for puzzle-15

I have implemented a 15-puzzle for people to compete online. My current randomizer works by starting from the goal configuration and moving tiles around for 100 moves (an arbitrary number).
Everything is fine; however, every once in a while the tiles are shuffled too easily and it takes only a few moves to solve the puzzle, so the game is really unfair: some people reach better scores at a much higher speed.
What would be a good way to randomize the initial configuration so it is not "too easy"?
You can generate a completely random configuration (that is solvable) and then use some solver to determine the optimal sequence of moves. If the sequence is long enough for you, good, otherwise generate a new configuration and repeat.
Update & details
There is an article on Wikipedia about the 15-puzzle and when it is (and isn't) solvable. In short, if the empty square is in the lower-right corner, then the puzzle is solvable if and only if the number of inversions with respect to the goal permutation is even (an inversion is a pair of tiles, not necessarily adjacent, that appear in the opposite order relative to the goal).
You can then easily generate a solvable start state by applying an even number of swaps (transpositions) to the goal state, which may lead to a not-so-easy-to-solve state far quicker than doing regular moves, and the state is guaranteed to remain solvable.
In fact, you don't need to use a search algorithm as I mentioned above, just an admissible heuristic. Such a heuristic never overestimates the number of moves needed to solve the puzzle, i.e. you are guaranteed that the solution will take at least as many moves as the heuristic tells you.
A good heuristic is the sum of Manhattan distances of each tile to its goal position.
Summary
In short, a possible (very simple) algorithm for generating starting positions might look like this:
1: current_state <- goal_state
2: swap two arbitrary (randomly selected) pieces
3: swap two arbitrary (randomly selected) pieces again (to ensure solvability)
4: h <- heuristic(current_state)
5: if h > desired threshold
6: return current_state
7: else
8: go to 2.
To be absolutely certain about how difficult a state is, you need to find the optimal solution using some solver. Heuristics will give you only an estimate.
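A minimal Python sketch of that recipe might look like this (the threshold value is arbitrary, and as said above the heuristic only gives an estimate of the real difficulty):

import random

GOAL = tuple(range(1, 16)) + (0,)   # 0 is the blank, kept in the lower-right corner

def manhattan(state):
    # sum of Manhattan distances of each tile to its goal square (blank excluded)
    return sum(abs(i // 4 - (t - 1) // 4) + abs(i % 4 - (t - 1) % 4)
               for i, t in enumerate(state) if t != 0)

def random_start(threshold=30):
    # swapping two tile pairs at a time keeps the permutation parity even,
    # so the state stays solvable
    state = list(GOAL)
    while manhattan(state) <= threshold:
        for _ in range(2):
            i, j = random.sample(range(15), 2)   # indices 0..14: never the blank
            state[i], state[j] = state[j], state[i]
    return state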
I would do this:
start from the solution (just like you did)
make a valid move in a random direction
So you must keep track of where the gap is, generate a random direction (N, E, S, W), and do the move. I think you have done this part too.
compute the randomness of your placements
So compute some coefficient dependent on the order of the array, such that ordered (solved) boards have low values and random ones have high values. The equation for the coefficient, however, is a matter of trial and error. Here are some ideas of what to use:
correlation coefficient
sum of the average absolute differences between each value and its neighbors
1 2 4
3 6 5
9 8 7
coeff(6)= (|6-3|+|6-5|+|6-2|+|6-8|)/4
coeff=coeff(1)+coeff(2)+...coeff(15)
absolute distance from the ordered array
You can combine several approaches together. You can also split the board into separate rows and columns and then combine the sub-coefficients.
loop #2 until the coefficient from #3 is high enough (threshold)
The threshold can also be used to change the difficulty.
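A sketch of the neighbor-difference coefficient in Python (skipping the blank, marked 0, is my own choice):

def disorder(board):
    # sum of the average absolute differences between each tile and its
    # orthogonal neighbors, as in the worked coeff(6) example above
    n = len(board)
    total = 0.0
    for r in range(n):
        for c in range(n):
            if board[r][c] == 0:
                continue
            diffs = [abs(board[r][c] - board[rr][cc])
                     for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= rr < n and 0 <= cc < n and board[rr][cc] != 0]
            total += sum(diffs) / len(diffs)
    return total

print(disorder([[1, 2, 4], [3, 6, 5], [9, 8, 7]]))   # includes coeff(6) = 2.5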

Pseudo-code for Network-only-bayes-classifier

I am trying to implement a classification toolkit for univariate network data using igraph and python.
However, my question is actually more of an algorithms question in the relational classification area than a programming one.
I am following Classification in Networked Data paper.
I am having difficulty understanding what this paper refers to as the "Network-Only Bayes Classifier" (NBC), which is one of the relational classifiers explained in the paper.
I implemented a Naive Bayes classifier for text data using a bag-of-words feature representation earlier, and the idea of Naive Bayes on text data is clear in my mind.
I think this method (NBC) is a simple translation of the same idea to the relational classification area. However, I am confused by the notation used in the equations, so I couldn't figure out what was going on. I also have a question on the notation used in the paper here.
NBC is explained on page 14 of the paper.
Summary:
I need the pseudo-code of the "Network-Only Bayes Classifier"(NBC) explained in the paper, page 14.
Pseudo-code notation:
Let's call vs the list of vertices in the graph. len(vs) is its length. vs[i] is the ith vertex.
Let's assume we have a univariate and binary scenario, i.e., vs[i].class is either 0 or 1 and there is no other given feature of a node.
Let's assume we ran a local classifier before, so that every node has an initial label calculated by that local classifier. I am only interested in the relational classifier part.
Let's call v the vertex we are trying to predict, and v.neighbors() is the list of vertices which are neighbors of v.
Let's assume all the edge weights are 1.
Now, I need the pseudo-code for:
def NBC(vs, v):
    # v.class is 0 or 1
    # v.neighbors is a list of neighbor vertices
    # vs is the list of all vertices
    # This function returns 0 or 1
Edit:
To make your job easier, I did this example. I need the answer for the last 2 equations.
In words...
The probability that node x_i belongs to the class c is equal to:
the probability of the neighbourhood of x_i (called N_i) if x_i did indeed belong to the class c; multiplied by...
the probability of the class c itself; divided by...
the probability of the neighbourhood N_i (of node x_i) itself.
As far as the probability of the neighbourhood N_i (of x_i) if x_i were to belong to the class c is concerned, it is equal to:
a product of some probability (which probability?);
the probability that some node (v_j) of the neighbourhood (N_i) belongs to the class c if x_i did indeed belong to the class c
(raised to the weight of the edge connecting the node being examined and the node being classified... but you are not interested in this... yet). (The notation is a bit off here, I think; why do they define v_j and then never use it?... Whatever.)
Finally, multiply that product by some 1/Z. Why? Because all the ps are probabilities and therefore lie within the range 0 to 1, but the weights w could be anything, meaning that in the end the calculated value could be out of that range.
The probability that some x_i belongs to a class c GIVEN THE EVIDENCE FROM ITS NEIGHBOURHOOD is a posterior probability. (AFTER something... What is this something? Please see below.)
The probability of the appearance of the neighbourhood N_i if x_i belonged to the class c is the likelihood.
The probability of the class c itself is the prior probability. (BEFORE something... What is this something? The evidence.) The prior tells you the probability of the class without any evidence presented, but the posterior tells you the probability of a specific event (that x_i belongs to c) GIVEN THE EVIDENCE FROM ITS NEIGHBOURHOOD.
The prior can be subjective. That is, it can be derived from limited observations or be an informed opinion. In other words, it doesn't have to be a population distribution. It only has to be accurate enough, not absolutely known.
The likelihood is a bit more challenging. Although we have a formula here, the likelihood must be estimated from a large enough population, or from as much "physical" knowledge about the phenomenon being observed as possible.
Within the product (the capital Pi in the second equation, which expresses the likelihood) you have a conditional. The conditional is the probability that a neighbourhood node belongs to some class if x_i belonged to class c.
In the typical application of the Naive Bayes Classifier, which is document classification (e.g. spam mail), the conditional that an email is spam GIVEN THE APPEARANCE OF SPECIFIC WORDS IN ITS BODY is derived from a huge database of observations, i.e. a huge database of emails for which we really, absolutely know which class they belong to. In other words, I must have an idea of what a spam email looks like, and eventually the majority of spam emails converge to some common theme (I am some bank official and I have a money opportunity for you; give me your bank details so I can wire money to you and make you rich...).
Without this knowledge, we can't use Bayes rule.
So, to get back to your specific problem. In your PDF, you have a question mark in the derivation of the product.
Exactly.
So the real question here is: What is the likelihood from your Graph / data?
(...or where are you going to derive it from? Obviously, either a large number of known observations OR some knowledge about the phenomenon: for example, what is the likelihood that a node is infected, given that a proportion of its neighbourhood is infected too?)
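Since the question asks for pseudo-code, here is a hedged Python sketch of one reading of the page-14 formula (all edge weights 1, as you assumed; cls stands in for .class, which is a reserved word in Python, and estimating the conditionals by counting over the labeled graph is my assumption, not something taken from the paper):

from collections import Counter

def NBC(vs, v):
    prior = Counter(u.cls for u in vs)        # P(c), estimated from the graph
    n = sum(prior.values())

    # P(neighbour has class c2 | node has class c), counted over all edges
    cond = {c: Counter() for c in prior}
    for u in vs:
        for nb in u.neighbors():
            cond[u.cls][nb.cls] += 1

    scores = {}
    for c in prior:
        z = sum(cond[c].values()) or 1
        p = prior[c] / n                      # the prior
        for nb in v.neighbors():
            p *= cond[c][nb.cls] / z          # the likelihood, weight 1 per edge
        scores[c] = p

    # dividing each score by sum(scores.values()) would give the 1/Z-normalised
    # posteriors; for a 0/1 decision the argmax is enough
    return max(scores, key=scores.get)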
I hope this helps.

What algorithm do I use to calculate voltage across a combination circuit?

I'm trying to programmatically calculate voltage changes over a very large circuit.
This question may seem geared toward electronics, but it's more about applying an algorithm over a set of data.
To keep things simple, here is a complete circuit with the voltages already calculated:
I'm originally only given the battery voltage and the resistances:
The issue I have is that voltage is calculated differently in parallel and in series circuits.
A somewhat similar question asked on SO.
Some formulas:
When resistors are in parallel:
Rtotal = 1/(1/R1 + 1/R2 + 1/R3 ... + 1/Rn)
When resistors are in series:
Rtotal = R1 + R2 + R3 ... + Rn
Ohm's Law:
V = IR
I = V/R
R = V/I
V is voltage (volts)
I is current (amps)
R is resistance (ohms)
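In code form, the two reduction formulas are simply (Python, for illustration):

def parallel(resistors):
    # reciprocal of the sum of reciprocals
    return 1.0 / sum(1.0 / r for r in resistors)

def series(resistors):
    return sum(resistors)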
Every tutorial I've found on the internet consists of people conceptually grouping parallel resistors together to get the total resistance, and then using that resistance for the series calculation.
This is fine for small examples, but it's difficult to derive an algorithm out of it for large scale circuits.
My question:
Given a matrix of all complete paths, is there a way for me to calculate all the voltage drops?
I currently have the system as a graph data structure.
All of the nodes are represented by (and can be looked up by) an id number.
So for the example above, if I run the traversals, I'll get back a list of paths like this:
[[0,1,2,4,0]
,[0,1,3,4,0]]
Each number can be used to look up the actual node and its corresponding data. What kind of transformations/algorithms do I need to perform on this set of data?
It's very likely that portions of the circuit will be compound, and those compound sections may find themselves in parallel or in series with other compound sections.
I think my problem is akin to this:
http://en.wikipedia.org/wiki/Series-parallel_partial_order
Some circuits cannot even be analyzed in terms of series and parallel, for example a circuit which includes the edges of a cube (there's some code at the bottom of that web page that might be helpful; I haven't looked at it). Another example that can't be analyzed into series/parallel is a pentagon/pentagram shape.
A more robust solution than thinking about series and parallel is to use Kirchhoff's laws.
You need to make variables for the currents in each linear section of the circuit.
Apply Kirchhoff's current law (KCL) to the nodes where linear sections meet.
Apply Kirchhoff's voltage law (KVL) to as many cycles as you can find.
Use Gaussian elimination to solve the resulting linear system of equations.
The tricky part is identifying cycles. In the example you give, there are three cycles: through battery and left resistor, battery and right resistor, and through left and right resistors. For planar circuits it's not too hard to find a complete set of cycles; for three dimensional circuits, it can be hard.
You don't actually need all the cycles. In the above example, two would be enough (corresponding to the two bounded regions into which the circuit divides the plane). Then you have three variables (currents in three linear parts of the circuit) and three equations (sum of currents at the top node where three linear segments meet, and voltage drops around two cycles). That is enough to solve the system for currents by Gaussian elimination, then you can calculate voltages from the currents.
If you throw in too many equations (e.g., currents at both nodes in your example, and voltages over three cycles instead of two), things will still work out: Gaussian elimination will just eliminate the redundancies and you'll still get the unique, correct answer. The real problem is if you have too few equations. For example, if you use KCL on the two nodes in your example and KVL around just one cycle, you'll have three equations, but one is redundant, so you'll only really have two independent equations, which is not enough. So I would say throw in every equation you can find and let Gaussian elimination sort it out.
And hopefully you can restrict to planar circuits, for which it is easy to find a nice set of cycles. Otherwise you'll need a graph cycle enumeration algorithm. I'm sure you can find one if you need it.
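For the two-resistor example the resulting system is tiny; here is a sketch with numpy (the component values are made up, and np.linalg.lstsq is used instead of plain Gaussian elimination so that redundant equations are harmless, as discussed above):

import numpy as np

V, R1, R2 = 12.0, 100.0, 200.0   # made-up battery and resistor values

# unknowns: i_batt, i_r1, i_r2
A = np.array([
    [1.0, -1.0, -1.0],           # KCL at the top node
    [0.0,  R1,   0.0],           # KVL: battery loop through the left resistor
    [0.0,  0.0,  R2 ],           # KVL: battery loop through the right resistor
])
b = np.array([0.0, V, V])

currents, *_ = np.linalg.lstsq(A, b, rcond=None)
print(currents)                  # [0.18, 0.12, 0.06] amps
# the voltage drop across each resistor then follows from Ohm's law: V = I * R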
Use a maximum flow algorithm (Dijkstra is your friend).
http://www.cs.princeton.edu/courses/archive/spr04/cos226/lectures/maxflow.4up.pdf
Pretend you are in front of a water flow problem (well, it actually IS a flow problem). You have to compute the flow of water in each segment (the current). Then you can easily compute the voltage drop (water pressure) across every resistor.
I think the way to go here would be something like this:
Sort all your paths into groups of the same length.
While there is more than one group, choose the group with the largest length and:
2a. Find two paths with one item of difference.
2b. "Merge" them into a path whose length is smaller by one; the merge depends on the actual items that differ.
2c. Add the new path to the relevant group.
2d. If there are only paths with more than one item of difference, merge the differing items so that you have only one different item between the paths.
2e. When there is only one differing item left, find a path from a "lower" group (= smaller length) with minimum differences, and merge the item to match.
When there is one group left with more than one item, keep doing #2 until there is one group left with one item.
Calculate the value of that item directly.
This is very initial, but I think the main idea is clear.
Any improvements are welcome.

Optimal placement of objects wrt pairwise similarity weights

OK, this is an abstract algorithmic challenge, and it will remain abstract, since where I am going to use it is top secret.
Suppose we have a set of objects O = {o_1, ..., o_N} and a symmetric similarity matrix S where s_ij is the pairwise correlation of objects o_i and o_j.
Assume also that we have a one-dimensional space with discrete positions where objects may be placed (like having N boxes in a row, or chairs for people).
Given a certain placement, we can measure the cost of moving from the position of one object to that of another as the number of boxes we need to pass to reach our target, multiplied by their pairwise object similarity. Moving from a position to the box immediately before or after it has zero cost.
Imagine an example where for three objects we have the following similarity matrix:
    1.0 0.5 0.8
S = 0.5 1.0 0.1
    0.8 0.1 1.0
Then the best ordering of the objects in the three boxes is obviously:
[o_3] [o_1] [o_2]
The cost of this ordering is the sum of costs (counting boxes) of moving from each object to all the others. Here the only non-zero cost is for the distance between o_2 and o_3, equal to 1 box * 0.1 sim = 0.1, the same as for the mirrored ordering:
[o_2] [o_1] [o_3]
On the other hand:
[o_1] [o_2] [o_3]
would have cost = cost(o_1-->o_3) = 1box * 0.8sim = 0.8.
The goal is to determine a placement of the N objects in the available positions that minimizes the above-mentioned overall cost over all possible pairs of objects!
An analogy: imagine that we have a table with chairs side by side in a single row (like the boxes), and you need to seat N people on the chairs. Now these people have some relations, that is, let's say, how probable it is that one of them will want to speak to another. Speaking means standing up, passing by a number of chairs, and talking to the person there. When two people sit on successive chairs, they don't need to move in order to talk to each other.
So how can we seat these people so that the overall distance-cost between pairs of people is minimized? This means that over the course of the night, the total distance walked by the guests is close to the minimum.
Greedy search is... ok forget it!
I am interested in hearing whether there is a standard formulation of such a problem for which I could find some literature, and also in different search approaches (e.g. dynamic programming, tabu search, simulated annealing, etc. from the combinatorial optimization field).
Looking forward to hearing your ideas.
PS. My question has something in common with this thread, Algorithm for ordering a list of Objects, but I think here it is better posed as a problem and is probably slightly different.
That sounds like an instance of the Quadratic Assignment Problem. The specialty here is that the locations are placed along one line only, but I don't think this makes it easier to solve. The QAP in general is NP-hard. Unless I misinterpreted your problem, you can't find an optimal algorithm that solves the problem in polynomial time without proving P=NP at the same time.
If the instances are small, you can use exact methods such as branch and bound. You can also use tabu search or other metaheuristics if the problem is more difficult. We have an implementation of the QAP and some metaheuristics in HeuristicLab. You can configure the problem in the GUI; just paste the similarity and the distance matrix into the appropriate parameters. Try starting with Robust Taboo Search. It's an older, but still quite well-performing algorithm. Taillard also has the C code for it on his website if you want to implement it yourself. Our implementation is based on that code.
There have been a lot of publications on the QAP. More modern algorithms combine genetic search abilities with local search heuristics (e.g. Genetic Local Search from Stützle, IIRC).
Here's a variation of the already posted method. I don't think this one is optimal, but it may be a start.
Create a list of all the pairs in descending cost order.
While the list is not empty:
Pop the head item from the list.
If neither element is in an existing group, create a new group containing the pair.
If one element is in an existing group, add the other element to whichever end puts it closer to the group member.
If both elements are in existing groups, combine the groups so as to minimize the distance between the pair.
Group combining may require reversing the order within a group, and the data structure should be designed to support that.
Let me help the thread (my own) with a simplistic ordering approach.
1. Order the upper half of the similarity matrix.
2. Start with the pair of objects having the highest similarity weight and place them in the two center positions.
3. The next object may be put on the left or the right side of them. Each time, select the object that, when put to the left or the right, has the highest cost with respect to the pre-placed objects. Repeat this step until all objects are placed.
The selection rule of step 3 is there because if you skip such an object and place it later, its cost will again be the greatest among the remaining ones, and even larger (it will be farther from the pre-placed objects). So the costly placements should be made as early as possible.
This is too simple and of course does not discover a good solution.
Another approach is to:
1. start with a complete ordering generated somehow (randomly or by another algorithm);
2. try to improve it using "swaps" of object pairs.
I believe local minima would be a huge deterrent.
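As a concrete starting point for that last approach, here is a minimal Python sketch (the cost function follows my reading of the "boxes passed" definition above, where adjacent objects cost nothing; this is plain hill climbing, so it will indeed stop at local minima):

import itertools, random

def cost(order, S):
    # boxes passed between each pair, weighted by the pair's similarity
    return sum((abs(a - b) - 1) * S[order[a]][order[b]]
               for a, b in itertools.combinations(range(len(order)), 2))

def improve_by_swaps(order, S):
    best = cost(order, S)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(order)), 2):
            order[i], order[j] = order[j], order[i]      # try a swap
            c = cost(order, S)
            if c < best:
                best, improved = c, True                 # keep the improvement
            else:
                order[i], order[j] = order[j], order[i]  # undo
    return order, best

S = [[1.0, 0.5, 0.8],
     [0.5, 1.0, 0.1],
     [0.8, 0.1, 1.0]]
order = list(range(3))
random.shuffle(order)
print(improve_by_swaps(order, S))   # e.g. ([2, 0, 1], 0.1), i.e. [o_3][o_1][o_2]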
