BigQuery: Sample a varying number of rows per group - random

I have two tables. One has a list of items, and for each item, a number n.
item | n
--------
a | 1
b | 2
c | 3
The second one has a list of rows containing item, uid, and other columns.
item | uid | data
------------------
a | x | foo
a | x | baz
a | x | bar
a | z | arm
a | z | leg
b | x | eye
b | x | eye
b | x | eye
b | x | eye
b | z | tap
c | y | tip
c | z | top
I would like to sample, for each (item, uid) pair, n rows (arbitrarily; ideally uniformly at random, but it doesn't have to be). In the example above, I want to keep at most one row per user for item a, two rows per user for item b, and three rows per user for item c:
item | uid | data
------------------
a | x | baz
a | z | arm
b | x | eye
b | x | eye
b | z | tap
c | y | tip
c | z | top
ARRAY_AGG with LIMIT n doesn't work for two reasons: first, I suspect that given that n can be large (on the order of 100,000), this won't scale. The second, more fundamental problem is that n needs to be a constant.
Table sampling also doesn't seem to solve my problem, since it's per-table, and also only supports sampling a fixed percentage of rows, rather than a fixed number of rows.
Are there any other options?

Consider the solution below:
select * except(n)
from rows_list
join items_list
using(item)
where true
qualify row_number() over win <= n
window win as (partition by item, uid order by rand())
If applied to the sample data in your question, the output is a random sample with at most n rows per (item, uid) pair, like the expected result shown above.
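To sanity-check the per-group logic outside BigQuery, here is a minimal Python sketch of the same idea, shuffling each (item, uid) group and keeping at most n rows; the data is copied from the question, and the names (n_per_item, rows) are just illustrative:

import random
from collections import defaultdict

# n per item, and the (item, uid, data) rows, copied from the question.
n_per_item = {"a": 1, "b": 2, "c": 3}
rows = [
    ("a", "x", "foo"), ("a", "x", "baz"), ("a", "x", "bar"),
    ("a", "z", "arm"), ("a", "z", "leg"),
    ("b", "x", "eye"), ("b", "x", "eye"), ("b", "x", "eye"), ("b", "x", "eye"),
    ("b", "z", "tap"),
    ("c", "y", "tip"), ("c", "z", "top"),
]

# Group by (item, uid), shuffle each group, keep at most n rows -- the same
# effect as row_number() over (partition by item, uid order by rand()) <= n.
groups = defaultdict(list)
for item, uid, data in rows:
    groups[(item, uid)].append((item, uid, data))

sample = []
for (item, uid), group in groups.items():
    random.shuffle(group)
    sample.extend(group[: n_per_item[item]])

for row in sorted(sample):
    print(row)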

Related

Randomness Comparison Experiment

I have a drug analysis experiment that needs to generate a value based on a given drug database and a set of 1000 random experiments.
The original database looks like this, where the number in each column represents the rank for that drug. This is a simplified version of the actual database; the actual database has more drugs and more genes.
+-------+-------+-------+
| Genes | DrugA | DrugB |
+-------+-------+-------+
| A | 1 | 3 |
| B | 2 | 1 |
| C | 4 | 5 |
| D | 5 | 4 |
| E | 3 | 2 |
+-------+-------+-------+
A score is calculated based on the user's input, A and C, using the following formula:
# Compute Function
# ['A','C'] as array input
computeFunction(array) {
    # do some stuff with the array ...
}
The formula used is the same for any provided values.
For the randomness test, each experiment requires the algorithm to provide randomized values of A and C, so both A and C can take any number from 1 to 5.
Now I have two methods of selecting values to generate the 1000 sets for the p-value calculation, but I need someone to point out whether one is better than the other, or whether there is any way to compare the two methods.
Method 1
Generate 1000 randomized databases based on the given database input shown above, meaning each table should contain a different set of value pairs.
Example of one database from the 1000 randomized databases:
+-------+-------+-------+
| Genes | DrugA | DrugB |
+-------+-------+-------+
| A | 2 | 3 |
| B | 4 | 4 |
| C | 3 | 2 |
| D | 1 | 5 |
| E | 5 | 1 |
+-------+-------+-------+
Next we perform computeFunction() with the new A and C values.
Method 2
Pick random genes from the original database and use their values as the new randomized gene values.
For example, we pick the values from E and B as the new values for A and C.
In the original database, E is 3 and B is 2.
So now A is 3 and C is 2. Next we perform computeFunction() with the new A and C values.
Summary
Since both methods produce completely randomized input, it seems to me that they will produce similar 1000-value outcomes. Is there any way I could show that they are similar?
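One way to check this empirically is to simulate both methods and compare the score distributions they produce. Below is a minimal Python sketch; it assumes, purely for illustration, that computeFunction is the sum of the two ranks and that only the DrugA column is used, so substitute your real formula and data:

import random
from collections import Counter

ranks = {"A": 1, "B": 2, "C": 4, "D": 5, "E": 3}   # DrugA column from the question
genes = list(ranks)

def compute_function(values):
    # Toy stand-in for computeFunction(); replace with the real formula.
    return sum(values)

def method1():
    # Shuffle the whole rank column, then read off the values at A and C.
    shuffled = random.sample(list(ranks.values()), k=len(ranks))
    permuted = dict(zip(genes, shuffled))
    return compute_function([permuted["A"], permuted["C"]])

def method2():
    # Pick two random (distinct) genes and reuse their original ranks for A and C.
    g1, g2 = random.sample(genes, k=2)
    return compute_function([ranks[g1], ranks[g2]])

trials = 1000
dist1 = Counter(method1() for _ in range(trials))
dist2 = Counter(method2() for _ in range(trials))
print("Method 1:", sorted(dist1.items()))
print("Method 2:", sorted(dist2.items()))
# A chi-square or two-sample KS test on these counts would formalize the comparison.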

Any algorithm to fill a space with smallest number of boxes

Take a 3D grid, just like a checkerboard but with an extra dimension. Now let's say I have a certain number of cubes in that grid, each cube occupying a 1x1x1 cell. Let's say that each of these cubes is an item.
What I would like to do is replace/combine these cubes into larger boxes occupying any number of cells on the X, Y and Z axes, so that the resulting number of boxes is as small as possible while preserving the overall "appearance".
It's probably unclear, so I'll give a 2D example. Say I have a 2D grid containing several squares occupying 1x1 cells. A letter represents the cells occupied by a given item, each item having a different letter from the others. In the first example we have 10 different items, each of them occupying a 1x1 cell.
+---+---+---+---+---+---+
| | | | | | |
+---+---+---+---+---+---+
| | A | B | C | D | |
+---+---+---+---+---+---+
| | E | F | G | H | |
+---+---+---+---+---+---+
| | | K | L | | |
+---+---+---+---+---+---+
| | | | | | |
+---+---+---+---+---+---+
That's my input data. I could now optimize it, i.e. reduce the number of items while still occupying the same cells, in multiple possible ways, one of which could be:
+---+---+---+---+---+---+
| | | | | | |
+---+---+---+---+---+---+
| | A | B | B | C | |
+---+---+---+---+---+---+
| | A | B | B | C | |
+---+---+---+---+---+---+
| | | B | B | | |
+---+---+---+---+---+---+
| | | | | | |
+---+---+---+---+---+---+
Here, instead of 10 items, I only have 3 (i.e. A, B and C). However, it can be optimized even further:
+---+---+---+---+---+---+
| | | | | | |
+---+---+---+---+---+---+
| | A | A | A | A | |
+---+---+---+---+---+---+
| | A | A | A | A | |
+---+---+---+---+---+---+
| | | B | B | | |
+---+---+---+---+---+---+
| | | | | | |
+---+---+---+---+---+---+
Here I only have two items, A and B. This is as optimized as it can get.
What I am looking for is an algorithm capable of finding the best item sizes and arrangement, or at least a reasonably good one, so that I have as few items as possible while occupying the same cells, and in 3D!
Is there such an algorithm? I'm sure there are domains where that kind of algorithm would be useful, and I need it for a video game. Thanks!
Perhaps a simpler algorithm is possible, but a set partition should suffice.
min  x1 + x2 + x3 + ...    // x1 is 1 if the 1st candidate box is chosen, 0 otherwise
s.t. x1 + x3 = 1           // if the 1st and 3rd candidate boxes contain the 1st item
     x2 + x3 = 1           // if the 2nd and 3rd candidate boxes contain the 2nd item, and so on
     x1, x2, x3, ... are binary
You have 1 constraint for each item. Each constraint stipulates that each item can be part of exactly one box. The objective minimizes the total number of boxes.
This is an NP-hard integer program, however.
The number of variables in this problem could be exponential. You need an efficient way of enumerating them -- that is, figuring out when a contiguous box can be found that covers a given set of cells. It is here that you have to take into account information such as whether the grid is 2D or 3D, how you define a contiguous "box", etc.
Such problems are usually solved by column generation, where the columns of the integer program are generated dynamically on the fly.
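As a complement to the exact formulation, here is a minimal 2D greedy sketch in Python: repeatedly carve out the largest axis-aligned rectangle whose cells are all occupied. It is only a heuristic (it can be suboptimal), it uses a brute-force rectangle search, so it is meant for small grids, and extending it to 3D means enumerating boxes instead of rectangles; the helper names (largest_filled_rect, greedy_partition) are just illustrative.

from itertools import product

def largest_filled_rect(cells):
    # Brute force: try every pair of occupied cells as opposite corners and
    # keep the largest rectangle that lies entirely inside `cells`.
    best, best_area = None, 0
    for (x0, y0), (x1, y1) in product(cells, repeat=2):
        if x1 < x0 or y1 < y0:
            continue
        box = {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}
        if box <= cells and len(box) > best_area:
            best, best_area = (x0, y0, x1, y1), len(box)
    return best

def greedy_partition(cells):
    # Carve the occupied cells into disjoint rectangles, largest first.
    cells = set(cells)
    boxes = []
    while cells:
        x0, y0, x1, y1 = largest_filled_rect(cells)
        boxes.append((x0, y0, x1, y1))
        cells -= {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}
    return boxes

# The 2D example from the question: a 4x2 block with two extra cells below it.
occupied = {(1, 1), (2, 1), (3, 1), (4, 1),
            (1, 2), (2, 2), (3, 2), (4, 2),
            (2, 3), (3, 3)}
print(greedy_partition(occupied))   # two boxes, e.g. [(1, 1, 4, 2), (2, 3, 3, 3)]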
If I understand David Eppstein's explanation¹ (see section 3), then a solution can be found via a maximum independent set in the bipartite intersection graph of axis-aligned diagonals connecting one concave vertex to another. (This would be 2D; I'm not sure about 3D, although perhaps it involves evaluating hyperplanes instead of lines?)
In your example, there is only one such diagonal:
________
| |
|_x....x_|
|____|
The two xs represent connected concave vertices. The maximal independent set of edges here contains only one edge, splitting the polygon in two.
Here's another example with only one axis-parallel edge connecting two concave vertices, x and x. This polygon, though, also has two concave vertices, a and b, that do not have an opposite, axis-parallel match. In that case, it seems to me, each concave vertex without a partner splits the polygon it's on in two (either vertically or horizontally):
____________
| |
| |x
| . |
| . |a
|___ . |
b| . |
| .___|
|________|x
results in 4 rectangles:
____________
| |
| |x
| . |
| ..|a
|___.......... |
b| . |
| .___|
|________|x
Here's one with two intersecting axis-parallel diagonals, each connecting two concave vertices, (x,x) and (y,y):
____________
| |
| |x_
| . |
| . |
|___ . . . .z. .|y
y| . |
| .____|
|________|x
In this case, as I understand, the intersection graph again contains only one independent set:
(y,z) (z,y) (x,z) (z,x)
yielding 4 rectangles as a solution.
Since I'm not completely sure how the "intersection graph" in the paper is defined, I would welcome any clarifying comments.
1. Graph-Theoretic Solutions to Computational Geometry Problems, David Eppstein (Submitted on 26 Aug 2009)

MapReduce matrix multiplication complexity

Assume that we have a large file which contains descriptions of the cells of two matrices (A and B):
+---------------------------------+
| i | j | value | matrix |
+---------------------------------+
| 1 | 1 | 10 | A |
| 1 | 2 | 20 | A |
| | | | |
| ... | ... | ... | ... |
| | | | |
| 1 | 1 | 5 | B |
| 1 | 2 | 7 | B |
| | | | |
| ... | ... | ... | ... |
| | | | |
+---------------------------------+
And we want to calculate the product of these matrices: C = A x B
By definition: C_i_j = sum over k ( A_i_k * B_k_j )
And here is a two-step MapReduce algorithm for the calculation of this product (in pseudocode):
First step:
function Map (input is a single row of the file from above):
    i = row[0]
    j = row[1]
    value = row[2]
    matrix = row[3]
    if (matrix == 'A')
        emit(j, {i, value, 'A'})   // key by k = A's column index
    else
        emit(i, {j, value, 'B'})   // key by k = B's row index
The complexity of this Map function is O(1).
function Reduce (Key, List of tuples from the Map function):
    Matrix_A_tuples =
        filter( List of tuples from the Map function, where matrix == 'A' )
    Matrix_B_tuples =
        filter( List of tuples from the Map function, where matrix == 'B' )
    for each tuple_A from Matrix_A_tuples
        i = tuple_A[0]
        value_A = tuple_A[1]
        for each tuple_B from Matrix_B_tuples
            j = tuple_B[0]
            value_B = tuple_B[1]
            emit({i, j}, {value_A * value_B, 'C'})
The complexity of this Reduce function is O(N^2).
After the first step we will get something like the following file (which contains O(N^3) lines):
+---------------------------------+
| i | j | value | matrix |
+---------------------------------+
| 1 | 1 | 50 | C |
| 1 | 1 | 45 | C |
| | | | |
| ... | ... | ... | ... |
| | | | |
| 2 | 2 | 70 | C |
| 2 | 2 | 17 | C |
| | | | |
| ... | ... | ... | ... |
| | | | |
+---------------------------------+
So all we have to do is sum the values from the lines that contain the same i and j.
Second step:
function Map (input is a single row of the file produced in the first step):
    i = row[0]
    j = row[1]
    value = row[2]
    emit({i, j}, value)

function Reduce (Key, List of values from the Map function):
    i = Key[0]
    j = Key[1]
    result = 0
    for each Value from List of values from the Map function
        result += Value
    emit({i, j}, result)
After the second step we will get the file which contains the cells of the matrix C.
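To make the data flow concrete, here is a small Python sketch that simulates both steps in memory on a 2x2 example (an ordinary dict plays the role of the shuffle); it only illustrates the algorithm above and is not a distributed implementation:

from collections import defaultdict

# Toy input: (i, j, value, matrix) records for 2x2 matrices A and B.
records = [(1, 1, 10, 'A'), (1, 2, 20, 'A'), (2, 1, 30, 'A'), (2, 2, 40, 'A'),
           (1, 1, 5, 'B'), (1, 2, 7, 'B'), (2, 1, 6, 'B'), (2, 2, 8, 'B')]

# Step 1 map: key A cells by their column index k and B cells by their row index k.
shuffle1 = defaultdict(list)
for i, j, value, matrix in records:
    if matrix == 'A':
        shuffle1[j].append(('A', i, value))
    else:
        shuffle1[i].append(('B', j, value))

# Step 1 reduce: cross every A tuple with every B tuple that shares the key k.
partials = []
for k, tuples in shuffle1.items():
    a_side = [(i, v) for m, i, v in tuples if m == 'A']
    b_side = [(j, v) for m, j, v in tuples if m == 'B']
    for i, value_a in a_side:
        for j, value_b in b_side:
            partials.append(((i, j), value_a * value_b))

# Step 2: sum the partial products per (i, j) cell of C.
C = defaultdict(int)
for key, value in partials:
    C[key] += value

print(sorted(C.items()))   # C = A x B = [[170, 230], [390, 530]]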
So the question is:
Taking into account that there are multiple instances in the MapReduce cluster, what is the most correct way to estimate the complexity of the provided algorithm?
The first estimate that comes to mind is this:
Assume that the number of instances in the MapReduce cluster is K.
Then, because the number of lines in the file produced after the first step is O(N^3), the overall complexity can be estimated as O((N^3)/K).
But this estimate doesn't take into account many details, such as the network bandwidth between instances of the MapReduce cluster, the ability to distribute data between instances and perform most of the calculations locally, etc.
So, I would like to know what the best approach is for estimating the efficiency of the provided MapReduce algorithm, and whether it makes sense to use Big-O notation to estimate the efficiency of MapReduce algorithms at all.
As you said, Big-O estimates the computational complexity and does not take into consideration networking issues such as bandwidth, congestion, and delay.
If you want to measure how efficient the communication between instances is, you need other networking metrics.
However, I want to point something out: if your file is not big enough, you will not see an improvement in terms of execution speed. This is because MapReduce works efficiently only with big data. Moreover, your code has two steps, which means two jobs. Between one job and the next, MapReduce takes time to write out the file and start the next job, which can slightly affect performance.
I think you can evaluate efficiency in terms of speed and time, as the MapReduce approach is certainly faster when it comes to big data, compared with sequential algorithms.
Moreover, efficiency can also be considered with regard to fault tolerance: MapReduce handles failures by itself, so programmers do not need to handle instance or networking failures.

turned on bits counter

Suppose I have a black box with 3 inputs (each input is 1 bit) and a 2-bit output.
The black box counts the number of turned-on input bits.
Using only such black boxes, one needs to implement a counter of the turned-on bits in a 7-bit input. The implementation should use the minimum possible number of black boxes.
// This is a job interview question
You're making a binary adder. Try this...
Two black boxes for input with one input remaining:
7 6 5 4 3 2 1
| | | | | | |
------- ------- |
| | | | |
| H L | | H L | |
------- ------- |
| | | | |
Take the two low outputs and the remaining input (1) and feed them to another black box:
L L 1
| | |
-------
| |
| C L |
-------
| |
The low output from this black box will be the low bit of the result. The high output is the carry bit. Feed this carry bit along with the high bits from the first two black boxes into the fourth black box:
H H C L
| | | |
------- |
| | |
| H M | |
------- |
| | |
The result should be the number of "on" bits in the input expressed in binary by the High, Middle and Low bits.
Suppose that each BB outputs a 2-bit binary count 00, 01, 10, or 11 when 0, 1, 2, or 3 of its inputs are on. Also suppose that the desired ultimate output O₄O₂O₁ is a 3-bit binary count 000 ... 111 when 0, 1, ... 7 of the 7 input bits i₁...i₇ are on. For problems like this in general, you can write a boolean expression for what the BB does and a boolean expression for the desired output, and then synthesize the output.

In this particular case, however, try the obvious approach of putting i₁, i₂, i₃ into a first box B₁, i₄, i₅, i₆ into a second box B₂, and i₇ into one input of a third box B₃. Looking at this, it's clear that if you run the units outputs from B₁ and B₂ into the other two inputs of B₃, then the units output from B₃ is equal to the desired value O₁. You can get the sum of the twos outputs from B₁, B₂, B₃ via a box B₄, and this sum is equal to the desired values O₄O₂.
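A quick way to convince yourself that this four-box wiring works is to simulate it and check all 2^7 inputs. Here is a small Python sketch of that check (black_box and count_seven are just illustrative names):

from itertools import product

def black_box(a, b, c):
    # 3-input bit counter: returns the (high, low) bits of the count a + b + c.
    s = a + b + c
    return s >> 1, s & 1

def count_seven(bits):
    # Count the turned-on bits among 7 inputs using four black boxes,
    # wired as described above.
    i1, i2, i3, i4, i5, i6, i7 = bits
    h1, l1 = black_box(i1, i2, i3)   # B1
    h2, l2 = black_box(i4, i5, i6)   # B2
    h3, l3 = black_box(l1, l2, i7)   # B3: its low output is O1
    h4, l4 = black_box(h1, h2, h3)   # B4: its outputs are O4 and O2
    return (h4 << 2) | (l4 << 1) | l3

# Exhaustive check over all 2^7 possible inputs.
assert all(count_seven(bits) == sum(bits) for bits in product((0, 1), repeat=7))
print("four black boxes suffice")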

Optimization problems with linq and searching within a groupby

I have a Collection 'Col':
ID|Type|Type1|Type2|Count
1| a | a | b |2
1| b | a | b |1
2| d | b | d |2
2| b | b | d |2
3| x | y | x |0
3| y | y | x |1
I want to return a collection which contains, for each unique ID, the possible types, the sum of all the types' counts, and the individual count for each type.
Ideally, the output should be:
ID|Type1|Type2|TotalCount|Type1Count|Type2Count
1 | a | b | 3 | 2 | 1
2 | b | d | 4 | 2 | 2
3 | y | x | 1 | 0 | 1
Currently, I do a groupBy and a Select on the collection:
Col.GroupBy(g => new { g.ID, g.Type1, g.Type2 }).Select(s => new
{
    s.Key.ID, s.Key.Type1, s.Key.Type2,
    TotalCount = s.Sum(x => x.Count),
    Type1Count = s.Count(x => x.Type == s.Key.Type1), // addition of individual count severely slows down the query
    Type2Count = s.Count(x => x.Type == s.Key.Type2)  // same here
});
As indicated above, including Type1Count and Type2Count drastically increases the time the query takes compared to running it without those columns. I have tried various other ways to get the value of the "Count" column from the initial Col collection for each grouped row, but again, performance takes a big hit. Eventually the right results are returned, though.
Is there an easier, better and or faster way to achieve the same results?
Any help is appreciated!
