A stack is said to be the ideal data structure for arithmetic evaluation. Why is that?
Why do we even need a data structure for arithmetic evaluation at all? I've been studying this for some time now and am still confused. I don't understand what the use of prefix and postfix expressions is, because the infix expression is quite readable.
Why postfix/prefix is preferred over infix is explained well here. In summary: infix is readable, but not easily parsed.
As for why a stack is used:
1: push and pop run in O(1) time, which is exactly what the evaluation loop needs.
2: push: place an operand on the stack.
3: pop: remove the operands of a (binary) operator and evaluate the expression.
4: the final result is the only value left on the stack after parsing.
The infix expression is readable, yes. But if you want to write an algorithm that can evaluate an arithmetic expression, how would you do it?
Take the following expression:
3 + 4 * 5 + 2 ^ 3 * 12 + 6
How does your algorithm proceed from there?
A simple and naive way is to look for the highest-precedence operation, evaluate it, then rewrite the string, and keep doing that until all operations have been performed. You'd get this result:
3 + 4 * 5 + 2 ^ 3 * 12 + 6
3 + 4 * 5 + 8 * 12 + 6
3 + 20 + 96 + 6
23 + 102
125
That is one way to do it. But not a particularly efficient way. Looking through the string for the highest-precedence operation takes time linear in the length of the string, and you have to do that once per operation, and rewrite the string every time. You end up with something like a quadratic complexity. There might be a few tricks to be slightly more efficient, but it's not going to be as efficient as other existing methods.
Another possible method is to put the expression into a tree, called a "syntax tree" or "abstract syntax tree". We get this:
+
/ / \ \
3 * * 6
/ \ / \
4 5 ^ 12
/ \
2 3
This tree is easier to evaluate for an algorithm, compared to the expression we had before: it is a linked structure, in which you can easily replace one branch by the value of that branch without having to rewrite everything else in the tree. So you replace 2^3 with 8 in the tree, then 8 * 12 with 96, etc.
Postfix (or prefix) notation is harder to read for humans, but much easier to manipulate for an algorithm. My previous example becomes this in postfix:
3 4 5 * + 2 3 ^ 12 * + 6 +
This can be evaluated easily reading it from left to right; every time you encounter a number, push it onto a stack; every time you encounter an operator, pop the two numbers on top of the stack, perform the operation, and push the result.
Assuming the postfix expression was correct, there should be a single number in the stack at the end of the evaluation.
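As a minimal sketch, that loop looks like this in Python (the operator set and the eval_postfix name are just illustrative choices, not part of the original answer):

def eval_postfix(tokens):
    """Evaluate a postfix expression given as a list of tokens."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b,
           '^': lambda a, b: a ** b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()                  # the right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))         # operand: just push it
    return stack[0]                          # a correct expression leaves one value

print(eval_postfix("3 4 5 * + 2 3 ^ 12 * + 6 +".split()))   # 125.0

The trace below walks through the same expression step by step.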
EXPR | [3] 4 5 * + 2 3 ^ 12 * + 6 +
STACK | 3
EXPR | 3 [4] 5 * + 2 3 ^ 12 * + 6 +
STACK | 3 4
EXPR | 3 4 [5] * + 2 3 ^ 12 * + 6 +
STACK | 3 4 5
EXPR | 3 4 5 [*] + 2 3 ^ 12 * + 6 +
STACK | 3 20
EXPR | 3 4 5 * [+] 2 3 ^ 12 * + 6 +
STACK | 23
EXPR | 3 4 5 * + [2] 3 ^ 12 * + 6 +
STACK | 23 2
EXPR | 3 4 5 * + 2 [3] ^ 12 * + 6 +
STACK | 23 2 3
EXPR | 3 4 5 * + 2 3 [^] 12 * + 6 +
STACK | 23 8
EXPR | 3 4 5 * + 2 3 ^ [12] * + 6 +
STACK | 23 8 12
EXPR | 3 4 5 * + 2 3 ^ 12 [*] + 6 +
STACK | 23 96
EXPR | 3 4 5 * + 2 3 ^ 12 * [+] 6 +
STACK | 119
EXPR | 3 4 5 * + 2 3 ^ 12 * + [6] +
STACK | 119 6
EXPR | 3 4 5 * + 2 3 ^ 12 * + 6 [+]
STACK | 125
And there we have the result. We only had to read through the expression once, so the execution time is linear. This is much better than the quadratic execution time we had when trying to evaluate the infix expression directly, where we had to read through it several times looking for the next operation to perform.
Note that converting from infix to postfix can also be done in linear time, using the so-called Shunting Yard algorithm, which uses two stacks. Stacks are awesome!
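For completeness, here is a rough sketch of such a conversion in Python for the operators used above, assuming already-tokenized input and ignoring parentheses and error handling (to_postfix is just an illustrative name):

def to_postfix(tokens):
    """Convert infix tokens to postfix (binary operators only, no parentheses)."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
    right_assoc = {'^'}
    out, opstack = [], []
    for tok in tokens:
        if tok in prec:
            # Pop operators of higher precedence (or equal, if tok is left-associative).
            while opstack and (prec[opstack[-1]] > prec[tok] or
                               (prec[opstack[-1]] == prec[tok] and tok not in right_assoc)):
                out.append(opstack.pop())
            opstack.append(tok)
        else:
            out.append(tok)                  # operands go straight to the output
    while opstack:
        out.append(opstack.pop())
    return out

print(' '.join(to_postfix("3 + 4 * 5 + 2 ^ 3 * 12 + 6".split())))
# 3 4 5 * + 2 3 ^ 12 * + 6 +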
I'd like to create a function that, for an arbitrary integer input value (let's say unsigned 32-bit) and a given number of digits d, returns a d-digit base-B number, where B is the smallest base that can represent the given input in d digits.
Here is a sample input - output of what I have in mind for 3 digits:
Input Output
0 0 0 0
1 0 0 1
2 0 1 0
3 0 1 1
4 1 0 0
5 1 0 1
6 1 1 0
7 1 1 1
8 0 0 2
9 0 1 2
10 1 0 2
11 1 1 2
12 0 2 0
13 0 2 1
14 1 2 0
15 1 2 1
16 2 0 0
17 2 0 1
18 2 1 0
19 2 1 1
20 0 2 2
21 1 2 2
22 2 0 2
23 2 1 2
24 2 2 0
25 2 2 1
26 2 2 2
27 0 0 3
28 0 1 3
29 1 0 3
30 1 1 3
.. .....
The assignment should be 1:1, for each input value there should be exactly one, unique output value. Think of it as if the function should return the nth value from the list of strangely sorted B base numbers.
Actually, this is the only approach I could come up with so far: given an input value, generate all the numbers in the smallest possible base B that can represent the input in d digits, then apply a custom sorting to the results ('penalizing' the higher digit values and putting them further back in the sort), and return the nth value from the sorted array. This would work, but it is a spectacularly inefficient implementation; I'd like to do this without generating all the numbers up to the input value.
What would be an efficient approach for implementing this function? Any language or pseudocode is fine.
MBo's answer shows how to find the smallest base that will represent an integer number with a given number of digits.
I'm not quite sure about the ordering in your example. My answer is based on a different ordering: create all possible n-digit numbers up to base b (e.g. all numbers up to 999 for max. base 10 and 3 digits) and sort them according to their maximum digit first. Numbers are sorted normally within a group with the same maximum digit. This retains the characteristic that all values from 8 to 26 must be base 3, but the internal ordering is different:
8 0 0 2
9 0 1 2
10 0 2 0
11 0 2 1
12 0 2 2
13 1 0 2
14 1 1 2
15 1 2 0
16 1 2 1
17 1 2 2
18 2 0 0
19 2 0 1
20 2 0 2
21 2 1 0
22 2 1 1
23 2 1 2
24 2 2 0
25 2 2 1
26 2 2 2
When your base is two, life is easy: Just generate the appropriate binary number.
For other bases, let's look at the first digit. In the example above, five numbers start with 0, five start with 1 and nine start with 2. When the first digit is 2, the maximum digit is assured to be 2. Therefore, we can combine 2 with all nine 2-digit numbers of base 3.
When the first digit is smaller than the maximum digit in the group, we can combine it with the nine 2-digit numbers of base 3, but we must not use the four 2-digit numbers that coincide with the four 2-digit numbers of base 2. That gives us five possibilities for the digits 0 and 1. These possibilities – 02, 12, 20, 21 and 22 – can be described as the unique numbers with two digits according to the same scheme, but with an offset:
4 0 2
5 1 2
6 2 0
7 2 1
8 2 2
That leads to a recursive solution:
for one digit, just return the number itself;
for base two, return the straightforward representation in base 2;
if the first digit is the maximum digit for the determined base, combine it with a straightforward representation in that base;
otherwise combine it with a recursively determined representation of the same algorithm with one fewer digit.
Here's an example in Python. The representation is returned as a list of numbers, so that you can represent 2^32 − 1 as [307, 1290, 990].

def repres(x, ndigit, base):
    """Straightforward representation of x in the given base."""
    s = []
    while ndigit:
        s += [x % base]
        x //= base
        ndigit -= 1
    return s

def encode(x, ndigit):
    """Encode according to min-base, fixed-digit order."""
    if ndigit <= 1:
        return [x]
    base = int(x ** (1.0 / ndigit)) + 1
    if base <= 2:
        return repres(x, ndigit, 2)
    x0 = (base - 1) ** ndigit             # first value that needs this base
    nprev = (base - 1) ** (ndigit - 1)    # count of (ndigit-1)-digit numbers in the previous base
    ncurr = base ** (ndigit - 1)          # count of (ndigit-1)-digit numbers in this base
    ndiff = ncurr - nprev
    area = (x - x0) // ndiff
    if area < base - 1:
        xx = x0 // (base - 1) + x - x0 - area * ndiff
        return [area] + encode(xx, ndigit - 1)
    xx0 = x0 + (base - 1) * ndiff
    return [base - 1] + repres(x - xx0, ndigit - 1, base)

for x in range(32):
    r = encode(x, 3)
    print(x, r)
Assuming that all values are positive, let's do some simple math:
A d-digit base-B number can hold value N if
B^d > N
so
B > N^(1/d)
So calculate N^(1/d), round it up (incrementing if it is an exact integer), and you will get the smallest base B.
(note that numerical errors might occur)
Examples:
d=2, N=99 => 9.95 => B=10
d=2, N=100 => 10 => B=11
d=2, N=57 => 7.55 => B=8
d=2, N=33 => 5.74 => B=6
Delphi code
function GetInSmallestBase(N, d: UInt32): string;
const
  Digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
var
  Base, i: Byte;
begin
  Base := Ceil(Power(N, 1/d) + 1.0E-12);
  if Base > 36 then
    Exit('Big number, few digits...');
  SetLength(Result, d);
  for i := d downto 1 do begin
    Result[i] := Digits[1 + N mod Base]; //Delphi string is 1-based
    N := N div Base;
  end;
  Result := Result + Format(' : base [%d]', [Base]);
end;

begin
  Memo1.Lines.Add(GetInSmallestBase(99, 2));
  Memo1.Lines.Add(GetInSmallestBase(100, 2));
  Memo1.Lines.Add(GetInSmallestBase(987, 2));
  Memo1.Lines.Add(GetInSmallestBase(1987, 2));
  Memo1.Lines.Add(GetInSmallestBase(87654321, 6));
  Memo1.Lines.Add(GetInSmallestBase(57, 2));
  Memo1.Lines.Add(GetInSmallestBase(33, 2));
99 : base [10]
91 : base [11]
UR : base [32]
Big number, few digits...
H03LL7 : base [22]
71 : base [8]
53 : base [6]
I am learning about TSP and understand it quite well, but I could not understand how bit masking can be used to generate all permutations.
If I have 3 cities, I will find the cost as:
0 1 2 3
0 1 3 2
0 2 1 3
0 2 3 1
0 3 1 2
0 3 2 1
or:
g(0,{1,2,3})
/ | \
g(1,{2,3}) g(2,{1,3}) g(3,{1,2})
/ \ / \ | \
g(2,{3}) g(3,{2}) g(1,{3}) g(3,{1}) g(1,{2}) g(2,{1})
/ | | | | |
0 0 0 0 0 0
g(3,null) g(2,null) g(3,null) g(1,null) g(2,null) g(1,null)
How is bit masking used in this?
Here is a dynamic programming solution with O(2^n * n^2) time and O(2^n * n) space complexity which uses bit masks.
Let's define f(mask, last) as the length of the shortest path that visits all cities in mask, starts at city 0 and ends at city last (last must be in mask).
The base case is simple: f(1, 0) = 0 (it corresponds to the case when the only city we have visited so far is the start city).
Transitions:
for each mask containing city 0
    for each last in mask
        for each cur not in mask
            f(mask or 2^cur, cur) = min(f(mask or 2^cur, cur), f(mask, last) + dist(last, cur))
The answer is min(f(2^n - 1, last) + dist(last, 0) for last = 1 ... n - 1)
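A minimal Python sketch of this recurrence (dist is assumed to be a full n x n cost matrix; the tsp name and the sample distances are only for illustration):

def tsp(dist):
    """Held-Karp DP over bit masks: f[mask][last] = shortest path over the cities
    in mask that starts at city 0 and ends at city last."""
    n = len(dist)
    INF = float('inf')
    f = [[INF] * n for _ in range(1 << n)]
    f[1][0] = 0                              # only city 0 visited, standing at city 0
    for mask in range(1 << n):
        for last in range(n):
            if f[mask][last] == INF or not (mask >> last) & 1:
                continue
            for cur in range(n):
                if (mask >> cur) & 1:
                    continue                 # cur is already in mask
                nxt = mask | (1 << cur)
                cand = f[mask][last] + dist[last][cur]
                if cand < f[nxt][cur]:
                    f[nxt][cur] = cand
    full = (1 << n) - 1
    return min(f[full][last] + dist[last][0] for last in range(1, n))

dist = [[0, 1, 15, 6],
        [2, 0, 7, 3],
        [9, 6, 0, 12],
        [10, 4, 8, 0]]
print(tsp(dist))                             # length of the cheapest tour from city 0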
I have a very large matrix (100M rows by 100M columns) that has lots of duplicate values right next to each other. For example:
8 8 8 8 8 8 8 8 8 8 8 8 8
8 4 8 8 1 1 1 1 1 8 8 8 8
8 4 8 8 1 1 1 1 1 8 8 8 8
8 4 8 8 1 1 1 1 1 8 8 8 8
8 4 8 8 1 1 1 1 1 8 8 8 8
8 4 8 8 1 1 1 1 1 8 8 8 8
8 8 8 8 8 8 8 8 8 8 8 8 8
8 8 3 3 3 3 3 3 3 3 3 3 3
I want a data structure/algorithm to store matrices like these as compactly as possible. For instance, the matrix above should only take O(1) space (even if the matrix was stretched out arbitrarily big), because there is only a constant number of rectangular regions, and each region holds a single value.
The repetition happens both across rows and down columns, so the simple approach of compressing the matrix row-by-row isn't good enough. (That would require a minimum of O(num_rows) space to store any matrix.)
The representation of the matrix also needs to be accessible row by row, so that I can multiply the matrix by a column vector.
You could store the matrix as a quadtree with the leaves containing single values. Think of this as a two-dimensional "run" of values.
Now for my preferred method.
OK, as I mentioned in my previous answer, rows with the same entries in each column of matrix A will multiply out to the same result in matrix AB. If we can maintain that relationship then we can theoretically speed up calculations significantly (a profiler is your friend).
In this method we maintain the row * column structure of the matrix.
Each row is compressed with whatever method can decompress fast enough not to affect the multiplication speed too much. RLE may be sufficient.
We now have a list of compressed rows.
We use an entropy encoding method (like Shannon-Fano, Huffman or arithmetic coding), but we don’t compress the data in the rows with this, we use it to compress the set of rows.
We use it to encode the relative frequency of the rows. I.e. we treat a row the same way standard entropy encoding would treat a character/byte.
In this example RLE compresses a row, and Huffman compresses the entire set of rows.
So, for example, given the following matrix (prefixed with row numbers, Huffman used for ease of explanation)
0 | 8 8 8 8 8 8 8 8 8 8 8 8 8 |
1 | 8 4 8 8 1 1 1 1 1 8 8 8 8 |
2 | 8 4 8 8 1 1 1 1 1 8 8 8 8 |
3 | 8 4 8 8 1 1 1 1 1 8 8 8 8 |
4 | 8 4 8 8 1 1 1 1 1 8 8 8 8 |
5 | 8 4 8 8 1 1 1 1 1 8 8 8 8 |
6 | 8 8 8 8 8 8 8 8 8 8 8 8 8 |
7 | 8 8 3 3 3 3 3 3 3 3 3 3 3 |
Run length encoded
0 | 8{13} |
1 | 8{1} 4{1} 8{2} 1{5} 8{4} |
2 | 8{1} 4{1} 8{2} 1{5} 8{4} |
3 | 8{1} 4{1} 8{2} 1{5} 8{4} |
4 | 8{1} 4{1} 8{2} 1{5} 8{4} |
5 | 8{1} 4{1} 8{2} 1{5} 8{4} |
6 | 8{13} |
7 | 8{2} 3{11} |
So, the pattern of rows 0 and 6 appears twice, the pattern of rows 1–5 appears five times, and the pattern of row 7 appears only once.
Frequency table
A: 5 (1-5) | 8{1} 4{1} 8{2} 1{5} 8{4} |
B: 2 (0,6) | 8{13} |
C: 1 7 | 8{2} 3{11} |
Huffman tree
0|1
/ \
A 0|1
/ \
B C
So in this case it takes one bit (for each row) to encode rows 1 – 5, and 2 bits to encode rows 0, 6, and 7.
(If the runs are longer than a few bytes then do freq count on a hash that you build up as you do the RLE).
You store the Huffman tree, unique strings, and the row encoding bit stream.
The nice thing about Huffman is that it has a unique prefix property, so you always know when you are done. Thus, given the bit string 10000001011 you can rebuild the matrix A from the stored unique strings and the tree. The encoded bit stream tells you the order that the rows appear in.
You may want to look into adaptive Huffman encoding, or its arithmetic counterpart.
Seeing as rows in A with the same column entries multiply to the same result in AB over vector B you can cache the result and use it instead of calculating it again (it’s always good to avoid 100M*100M multiplications if you can).
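As a rough sketch of the row-level part of this idea (not the exact scheme above; rle_row and huffman_codes are made-up helper names, and the code stream is kept as a Python string of '0'/'1' characters for readability), using the example matrix:

from collections import Counter
import heapq

def rle_row(row):
    """Run-length encode one row as a tuple of (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return tuple(runs)

def huffman_codes(freqs):
    """Build a prefix code (symbol -> bit string) from a {symbol: frequency} map."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    codes = {s: '' for s in freqs}
    tie = len(heap)
    while len(heap) > 1:
        fa, _, left = heapq.heappop(heap)
        fb, _, right = heapq.heappop(heap)
        for s in left:
            codes[s] = '0' + codes[s]
        for s in right:
            codes[s] = '1' + codes[s]
        heapq.heappush(heap, (fa + fb, tie, left + right))
        tie += 1
    return codes

matrix = [[8] * 13,
          [8, 4, 8, 8, 1, 1, 1, 1, 1, 8, 8, 8, 8],
          [8, 4, 8, 8, 1, 1, 1, 1, 1, 8, 8, 8, 8],
          [8, 4, 8, 8, 1, 1, 1, 1, 1, 8, 8, 8, 8],
          [8, 4, 8, 8, 1, 1, 1, 1, 1, 8, 8, 8, 8],
          [8, 4, 8, 8, 1, 1, 1, 1, 1, 8, 8, 8, 8],
          [8] * 13,
          [8, 8, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]]
rows = [rle_row(r) for r in matrix]          # only 3 distinct run patterns
codes = huffman_codes(Counter(rows))
bitstream = ''.join(codes[r] for r in rows)  # rows 1-5 get a 1-bit code, the rest 2 bits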
Links to further info:
Arithmetic Coding + Statistical Modeling = Data Compression
Priority Queues and the STL
Arithmetic coding
Huffman coding
A Comparison
Uncompressed
0 1 2 3 4 5 6 7
=================================
0 | 3 3 3 3 3 3 3 3 |
|-------+ +-------|
1 | 4 4 | 3 3 3 3 | 4 4 |
| +-----------+---+ |
2 | 4 4 | 5 5 5 | 1 | 4 4 |
| | | | |
3 | 4 4 | 5 5 5 | 1 | 4 4 |
|---+---| | | |
4 | 5 | 0 | 5 5 5 | 1 | 4 4 |
| | +---+-------+---+-------|
5 | 5 | 0 0 | 2 2 2 2 2 |
| | | |
6 | 5 | 0 0 | 2 2 2 2 2 |
| | +-------------------|
7 | 5 | 0 0 0 0 0 0 0 |
=================================
= 64 bytes
Quadtree
0 1 2 3 4 5 6 7
=================================
0 | 3 | 3 | | | 3 | 3 |
|---+---| 3 | 3 |---+---|
1 | 4 | 4 | | | 4 | 4 |
|-------+-------|-------+-------|
2 | | | 5 | 1 | |
| 4 | 5 |---+---| 4 |
3 | | | 5 | 1 | |
|---------------+---------------|
4 | 5 | 0 | 5 | 5 | 5 | 1 | 4 | 4 |
|---+---|---+---|---+---|---+---|
5 | 5 | 0 | 0 | 2 | 2 | 2 | 2 | 2 |
|-------+-------|-------+-------|
6 | 5 | 0 | 0 | 2 | 2 | 2 | 2 | 2 |
|---+---+---+---|---+---+---+---|
7 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
=================================
0 +- 0 +- 0 -> 3
| +- 1 -> 3
| +- 2 -> 4
| +- 3 -> 4
+- 1 -> 3
+- 2 -> 4
+- 3 -> 5
1 +- 0 -> 3
+- 1 +- 0 -> 3
| +- 1 -> 3
| +- 2 -> 4
| +- 3 -> 4
+- 2 +- 0 -> 5
| +- 1 -> 1
| +- 2 -> 5
| +- 3 -> 1
+- 3 -> 4
2 +- 0 +- 0 -> 5
| +- 1 -> 0
| +- 2 -> 5
| +- 3 -> 0
+- 1 +- 0 -> 5
| +- 1 -> 5
| +- 2 -> 0
| +- 3 -> 2
+- 2 +- 0 -> 5
| +- 1 -> 0
| +- 2 -> 5
| +- 3 -> 0
+- 3 +- 0 -> 0
+- 1 -> 2
+- 2 -> 0
+- 3 -> 0
3 +- 0 +- 0 -> 5
| +- 1 -> 1
| +- 2 -> 2
| +- 3 -> 2
+- 1 +- 0 -> 4
| +- 1 -> 4
| +- 2 -> 2
| +- 3 -> 2
+- 2 +- 0 -> 2
| +- 1 -> 2
| +- 2 -> 0
| +- 3 -> 0
+- 3 +- 0 -> 2
+- 1 -> 2
+- 2 -> 0
+- 3 -> 0
((1*4) + 3) + ((2*4) + 2) + (4 * 8) = 49 leaf nodes
49 * (2 + 1) = 147 (2 * 8 bit indexer, 1 byte data)
+ 14 inner nodes -> 2 * 14 bytes (2 * 8 bit indexers)
= 175 Bytes
Region Hash
0 1 2 3 4 5 6 7
=================================
0 | 3 3 3 3 3 3 3 3 |
|-------+---------------+-------|
1 | 4 4 | 3 3 3 3 | 4 4 |
| +-----------+---+ |
2 | 4 4 | 5 5 5 | 1 | 4 4 |
| | | | |
3 | 4 4 | 5 5 5 | 1 | 4 4 |
|---+---| | | |
4 | 5 | 0 | 5 5 5 | 1 | 4 4 |
| + - +---+-------+---+-------|
5 | 5 | 0 0 | 2 2 2 2 2 |
| | | |
6 | 5 | 0 0 | 2 2 2 2 2 |
| +-------+-------------------|
7 | 5 | 0 0 0 0 0 0 0 |
=================================
0: (4,1; 4,1), (5,1; 6,2), (7,1; 7,7) | 3
1: (2,5; 4,5) | 1
2: (5,3; 6,7) | 1
3: (0,0; 0,7), (1,2; 1,5) | 2
4: (1,0; 3,1), (1,6; 4,7) | 2
5: (2,2; 4,4), (4,0; 7,0) | 2
Regions: (3 + 1 + 1 + 2 + 2 + 2) * 5
= 55 bytes (4 bytes rectangle, 1 byte data)
{Lookup table is a sorted array, so it does not need extra storage}.
Huffman encoded RLE
0 | 3 {8} | 1
1 | 4 {2} | 3 {4} | 4 {2} | 2
2,3 | 4 {2} | 5 {3} | 1 {1} | 4 {2} | 4
4 | 5 {1} | 0 {1} | 5 {3} | 1 {1} | 4 {2} | 5
5,6 | 5 {1} | 0 {2} | 2 {5} | 3
7 | 5 {1} | 0 {7} | 2
RLE Data: (1 + 3+ 4 + 5 + 3 + 2) * 2 = 36
Bit Stream: 20 bits packed into 3 bytes = 3
Huffman Tree: 10 nodes * 3 = 30
= 69 Bytes
One Giant RLE stream
3{8};4{2};3{4};4{4};5{3};1{1};4{4};5{3};1{1};4{2};5{1};0{1};
5{3};1{1};4{2};5{1};0{2};2{5};5{1};0{2};2{5};5{1};0{7}
= 2 * 23 = 46 Bytes
One Giant RLE stream encoded with common prefix folding
3{8};
4{2};3{4};
4{4};5{3};1{1};
4{4};5{3};
1{1};4{2};5{1};0{1};5{3};
1{1};4{2};5{1};0{2};2{5};
5{1};0{2};2{5};
5{1};0{7}
0 + 0 -> 3{8};4{2};3{4};
+ 1 -> 4{4};5{3};1{1};
1 + 0 -> 4{2};5{1} + 0 -> 0{1};5{3};1{1};
| + 1 -> 0{2}
|
+ 1 -> 2{5};5{1} + 0 -> 0{2};
+ 1 -> 0{7}
3{8};4{2};3{4} | 00
4{4};5{3};1{1} | 01
4{4};5{3};1{1} | 01
4{2};5{1};0{1};5{3};1{1} | 100
4{2};5{1};0{2} | 101
2{5};5{1};0{2} | 110
2{5};5{1};0{7} | 111
Bit stream: 000101100101110111
RLE Data: 16 * 2 = 32
Tree: : 5 * 2 = 10
Bit stream: 18 bits in 3 bytes = 3
= 45 bytes
If your data is really regular, you might benefit from storing it in a structured format; e.g. your example matrix might be stored as the following list of "fill-rectangle" instructions:
(0,0)-(13,7) = 8
(4,1)-(8,5) = 1
(Then to look up the value of a particular cell, you'd iterate backwards through the list until you found a rectangle that contained that cell)
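For example, a minimal sketch of that lookup (using the two rectangles above; the coordinates and names are only illustrative):

rects = [((0, 0), (13, 7), 8),               # listed in drawing order
         ((4, 1), (8, 5), 1)]                # later rectangles overwrite earlier ones

def value_at(col, row):
    """Scan the fill-rectangle list backwards; the last rectangle drawn wins."""
    for (c0, r0), (c1, r1), val in reversed(rects):
        if c0 <= col <= c1 and r0 <= row <= r1:
            return val
    return None

print(value_at(5, 3))                        # inside the inner rectangle -> 1
print(value_at(0, 0))                        # only the background rectangle -> 8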
As Ira Baxter suggested,
you could store the matrix as a quadtree with the leaves containing single values.
The simplest way to do this is for every node of the quadtree to cover an area 2^n x 2^n,
and each non-leaf node points to its 4 children of size 2^(n-1) x 2^(n-1).
You might get slightly better compression with an adaptive quadtree that allows irregular sub-division.
Then each non-leaf node stores the cut-point (B,G) and points to its 4 children.
For example, if some non-leaf node covers an area from (A,F) in the upper-left corner to (C,H) in the lower-right corner,
then its 4 children cover areas
(A,F) to (B-1, G-1)
(A,G) to (B-1, H)
(B,F) to (C,G-1)
(B,G) to (C,H).
You would try to pick the (B,G) cut-point for each non-leaf node such that it lines up with some real division in your data.
For example, say you have a matrix with a small square in the middle filled with nines and zero elsewhere.
With the simple powers-of-two quadtree, you'll end up with at least 21 nodes: 5 non-leaf nodes, 4 leaf nodes of nines, and 12 leaf nodes of zeros.
(You'll get even more nodes if the centered small square is not precisely some power-of-two distance from the left and top edges, and not itself some precise power-of-two).
With an adaptive quadtree, if you are smart enough to pick the cut-point for the root node at the upper-left corner of that square, and then for the root's lower-right child you pick a cut-point at the lower-right corner of the square, you can represent the entire matrix in 9 nodes: 2 non-leaf nodes, 1 leaf node for the nines, and 6 leaf nodes for the zeros.
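Here is a compact Python sketch of the simple powers-of-two variant (build and lookup are illustrative names; the adaptive version differs only in how the cut-point is chosen):

def build(grid, r0, c0, size):
    """Build a quadtree over the size x size square whose top-left cell is (r0, c0).
    Returns a bare value for a uniform region, or a tuple of 4 children otherwise."""
    first = grid[r0][c0]
    if all(grid[r][c] == first
           for r in range(r0, r0 + size)
           for c in range(c0, c0 + size)):
        return first                         # uniform region: a single leaf
    h = size // 2
    return (build(grid, r0, c0, h),          # NW
            build(grid, r0, c0 + h, h),      # NE
            build(grid, r0 + h, c0, h),      # SW
            build(grid, r0 + h, c0 + h, h))  # SE

def lookup(node, r, c, size):
    """Read a single cell back out of the quadtree."""
    while isinstance(node, tuple):
        h = size // 2
        node = node[(r >= h) * 2 + (c >= h)]
        r, c, size = r % h, c % h, h
    return node

grid = [[8, 8, 1, 1],
        [8, 8, 1, 1],
        [8, 8, 8, 8],
        [3, 3, 8, 8]]
tree = build(grid, 0, 0, 4)                  # assumes a square, power-of-two size
print(lookup(tree, 3, 0, 4))                 # 3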
Do you know about.... interval trees ?
Interval trees are a way to store intervals efficiently, and then query them. A generalization is the Range Tree, which can be adapted to any dimension.
Here you could effectively describe your rectangles and attach a value to them. Of course the rectangles can overlap, that's what will make it efficient.
0,0-n,n --> 8
4,4-7,7 --> 1
8,8-8,n --> 3
Then when querying for a value in one particular spot, you are returned a list of several rectangles and need to determine the innermost one: this is the value in this spot.
The simplest approach is to use run-length encoding on one dimension and not worry about the other dimension.
(If the dataset weren't so incredibly huge, interpreting it as an image and using a standard lossless image compression method would be very simple also--but since you'd have to work on making the algorithm work on sparse matrices, it wouldn't end up being all that simple.)
Another simple approach is to try a rectangular flood fill--start at the top-right pixel and increase it into the largest rectangle you can (breadth-first); then mark all those pixels as "done" and take the top-right most remaining pixel, repeat until done. (You'd probably want to store these rectangles in some sort of BSP or quad-tree.)
A highly effective technique--not optimal, but probably good enough--is to use a binary space partitioning tree where "space" is measured not spatially but by number of changes. You'd recursively cut so that you have equal numbers of changes on the left and right (or top and bottom--presumably you'd want to keep things square) and, as your sizes got smaller, so that you would cut as many changes as possible. Eventually, you'll end up cutting two rectangles apart from each other, each of which has all the same number; then stop. (Encoding by RLE in x and y will quickly tell you where the change points are.)
Your description of O(1) space for a matrix of size 100M x 100M is confusing. When you have a finite matrix, its size is a constant (unless the program that generates the matrix alters it), so the amount of space required to store it is also a constant, even if you multiply it by a scalar. The time to read and write the matrix is definitely not going to be O(1).
Sparse matrix is what I could think of to reduce the amount of space required to store such a matrix. You can write this sparse matrix to a file and store it as a tar.gz which will further compress the data.
I do have a question what does M in 100M denote? Does it mean Megabyte/million? If yes, this matrix size will be 100 x 10^6 x 100 x 10^6 bytes = 10^16 / 10^6 MB = 10^10/10^6 TB = 10^4 TB!!! What kind of a machine are you using?
I'm not sure why this question was made Community Wiki, but so it goes.
I'll rely on the assumption that you have a linear algebra application, and that your matrix has a rectangular type of redundancy. If so, then you can do something much better than quadtrees, and cleaner than cutting the matrix into rectangles (which is generally the right idea).
Let M be your matrix, let v be the vector that you want to multiply by M, and let
A be the special matrix
A = [1 -1 0 0 0]
[0 1 -1 0 0]
[0 0 1 -1 0]
[0 0 0 1 -1]
[0 0 0 0 1]
You'll also need the inverse matrix to A, which I'll call B:
B = [1 1 1 1 1]
[0 1 1 1 1]
[0 0 1 1 1]
[0 0 0 1 1]
[0 0 0 0 1]
Multiplying a vector v by A is fast and easy: you just take differences of consecutive pairs of elements of v. Multiplying a vector v by B is also fast and easy: the entries of Bv are partial sums of the elements of v. Then you want to use the equation
Mv = B (AMA) (Bv)
The matrix AMA is sparse: In the middle, each entry is an alternating sum of 4 entries of M that make a 2 x 2 square. You have to be at a corner of one of the rectangles in M for this alternating sum to be non-zero. Since AMA is sparse, you can store its non-zero entries in an associative array and use sparse matrix multiplication to apply it to a vector.
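A small numpy check of this identity on a toy matrix (numpy and the names below are my own illustration, not part of the answer):

import numpy as np

n = 5
A = np.eye(n) - np.eye(n, k=1)               # the difference matrix from above
B = np.triu(np.ones((n, n)))                 # its inverse: partial sums of v

M = np.array([[8, 8, 8, 8, 8],
              [8, 1, 1, 1, 8],
              [8, 1, 1, 1, 8],
              [8, 8, 8, 8, 8],
              [3, 3, 3, 3, 3]], dtype=float)
v = np.arange(1.0, n + 1)

S = A @ M @ A                                # non-zero only near rectangle corners
print(np.count_nonzero(S), "non-zero entries out of", n * n)
print(np.allclose(B @ (S @ (B @ v)), M @ v)) # True: Mv == B (AMA) (Bv)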
I do not have a specific answer for the matrix you have shown. In finite element analysis (FEA), you have matrices with redundant data. In implementing an FEA package in my undergrad project, I used the skyline storage method.
Some links:
Intel page for sparse matrix storage
Wikipedia link
The first thing to try is always the existing libraries and solutions. It is a lot of work getting custom formats working with all the operations you're going to want in the end. Sparse matrices is an old problem, so make sure you read up on the existing stuff.
Assuming you don't find something suitable, I would recommend a row-based format. Don't try to be too fancy with super-compact representations, you will end up with lots of processing needed for every little operation and bugs in your code. Instead try to compress each row separately. You know you are going to have to scan through each row for the matrix-vector multiplication, make life easy for yourself.
I would start with run-length-encoding, see how that works first. Once that is working, try adding some tricks like references to sections of the previous row. So a row might be encoded as: 126 zeros, 8 ones, 1000 entries copied directly from row above, 32 zeros. That seems like it might be very efficient with your given example.
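A rough sketch of such a row encoding (the copy-from-previous-row reference plus run-length literals; encode_row/decode_row and the length threshold are arbitrary choices of mine):

def encode_row(row, prev):
    """Encode a row as (op, value, length) entries: 'run' for a literal run,
    'copy' for a stretch taken verbatim from the previous row."""
    out, i, n = [], 0, len(row)
    while i < n:
        j = i
        while prev is not None and j < n and row[j] == prev[j]:
            j += 1
        if j - i > 2:                        # long enough to be worth a reference
            out.append(('copy', None, j - i))
            i = j
            continue
        j = i
        while j < n and row[j] == row[i]:
            j += 1
        out.append(('run', row[i], j - i))
        i = j
    return out

def decode_row(runs, prev):
    row = []
    for op, val, length in runs:
        if op == 'copy':
            row.extend(prev[len(row):len(row) + length])
        else:
            row.extend([val] * length)
    return row

prev = [8, 8, 8, 8, 8, 8]
row = [8, 8, 8, 1, 1, 8]
runs = encode_row(row, prev)                 # [('copy', None, 3), ('run', 1, 2), ('run', 8, 1)]
assert decode_row(runs, prev) == row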
Many of the above solutions are fine.
If you are working with a file, consider file-oriented compression tools like compress, bzip, zip, bzip2 and friends. They work very well, especially if the data contains redundant ASCII characters. Using an external compression tool eliminates problems and challenges inside your code and will compress both binary and ASCII data.
In your example you are displaying one-character numbers. The numbers 0-9 can be represented by a smaller four-bit encoding pattern, and you can use the additional bits in a byte as a count. Four bits give you extra codes to escape to extras... But there is a caution that reaches back to the old Y2K bugs, where two characters were used for a year: byte encoding from an offset would have given 255 years, and the same two bytes would span all of written history and then some.
You may want to take a look at GIF format and its compression algorithm. Just think about your matrix as a Bitmap...
Let me check my assumptions, if for no other reason than to guide my thinking about the problem:
The matrix is highly redundant, not necessarily sparse.
We want to minimize storage (on disk and RAM).
We want to be able to multiply A[m*n] by vector B[n*1] to get to AB[m*1] without first decompressing either (at least not more than required to do the calculations).
We don’t need random access to any A[i*j] entry --all operations are over the matrix.
The multiplication is done online (as needed), and so must be as efficient as possible.
The matrix is static.
One can try all kinds of clever schemes to detect rectangles or self similarity etc, but that is going to end up hurting performance when doing the multiplication. I propose 2 relatively simple solutions.
I am going to have to work backwards a bit, so please be patient with me.
If the data is predominantly biased towards horizontal repetition then the following may work well.
Think of the matrix flattened into an array (this is really the way it is stored in memory anyway). E.g.
A
| w0 w1 w2 |
| x0 x1 x2 |
| y0 y1 y2 |
| z0 z1 z2 |
becomes
A’
| w0 w1 w2 x0 x1 x2 y0 y1 y2 z0 z1 z2 |
We can use the fact that any index [i,j] maps to the flat index i * n + j, where n is the number of columns.
So, when we do the multiplication we iterate over the “matrix” array A’ with k = [0..m*n-1] and index into the vector B using (k mod n) and into vector AB with (k div n), “div” being integer division.
So, for example, A’[10] = z1: 10 mod 3 = 1 and 10 div 3 = 3, so A[3,1] = z1.
Now, on to the compression.
We do normal run of the mill Run Length Encoding (RLE), but against the A’, not A. With the flat array there will be longer sequences of repetition, hence better compression. Then after encoding the runs we do another process where we extract common substrings. We can either do a form of dictionary compression, or process the run data into some form of space optimized graph like a radix tree/suffix tree or a device of your own creation that merges tops and tails. The graph should have a representation of all the unique strings in the data. You can pick any number of methods to break the stream into strings: matching prefixes, length, or something else (whatever suits your graph best) but do it on a run boundary, not bytes or your decoding will be made more complicated. The graph becomes a state machine when we decompress the stream.
I’m going to use a bit stream and a Patricia trie as an example, because it is simplest, but you can use something else (more bits per state change, better merging, etc.; look for papers by Stefan Nilsson).
To compress the run data we build a hash table against the graph. The table maps a string to a bit sequence. You can do this by walking the graph and encoding each left branch as 0 and right branch as 1 (arbitrary choice).
Process the run data and build up a bit string until you get a match in the hash table, output the bits and clear the string (the bits will not be on a byte boundary, so you may have to buffer until you get a sequence long enough to write out). Rinse and repeat until you have processed the complete run data stream. You store the graph and the bit stream. The bit stream encodes strings, not bytes.
If you reverse the process, using the bit stream to walk the graph until you reach a leaf/terminal node you get back the original run data, which you can decode on the fly to produce the stream of integers that you multiply against the vector B to get AB. Each time you run out of runs you read the next bit and lookup its corresponding string. We don’t care that we don’t have random access into A, because we only need it in B (B which can be range / interval compressed but doesn’t need to be).
So even though RLE is biased towards horizontal runs we still get good vertical compression because common strings are stored only once.
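Setting aside the dictionary/graph layer, here is a minimal sketch of the multiplication itself over an RLE-compressed A’ (multiply_rle and the toy numbers are mine; a real implementation would exploit long runs instead of expanding them one element at a time):

def multiply_rle(runs, B, m, n):
    """Multiply an m x n matrix, stored as RLE runs over its flattened form A',
    by the vector B, streaming over the runs without decompressing A' first."""
    AB = [0.0] * m
    k = 0                                    # flat index into A'
    for value, length in runs:
        for _ in range(length):
            AB[k // n] += value * B[k % n]   # row = k div n, column = k mod n
            k += 1
    return AB

# 2 x 3 example: A = [[7, 7, 7], [7, 2, 2]] flattens to A' = 7{4} 2{2}
runs = [(7, 4), (2, 2)]
print(multiply_rle(runs, [1.0, 1.0, 1.0], 2, 3))   # [21.0, 11.0]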
I will explain the other method in a separate answer as this is getting too long as it is, but that method can actually speed up calculation due to the fact that repeat rows in matrix A multiplies to the same result in AB.
OK, you need a compression algorithm: try RLE (Run Length Encoding). It works very well when the data is highly redundant.
Can anyone tell me which is the best algorithm to find the value of determinant of a matrix of size N x N?
Here is an extensive discussion.
There are a lot of algorithms.
A simple one is to take the LU decomposition. Then, since
det M = det LU = det L * det U
and both L and U are triangular, the determinant is a product of the diagonal elements of L and U. That is O(n^3). There exist more efficient algorithms.
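For instance, using numpy/scipy (my own illustration, not part of the answer); with partial pivoting the permutation's sign has to be included:

import numpy as np
from scipy.linalg import lu

M = np.array([[2.0, 3, 3, 1],
              [0, 4, 3, -3],
              [2, -1, -1, -3],
              [0, -4, -3, 2]])

P, L, U = lu(M)                              # M = P @ L @ U, with partial pivoting
det = np.linalg.det(P) * np.prod(np.diag(L)) * np.prod(np.diag(U))
print(det)                                   # ~8.0, same as np.linalg.det(M)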
Row Reduction
The simplest way (and not a bad way, really) to find the determinant of an nxn matrix is by row reduction. By keeping in mind a few simple rules about determinants, we can solve in the form:
det(A) = α * det(R), where R is the row echelon form of the original matrix A, and α is some coefficient.
Finding the determinant of a matrix in row echelon form is really easy; you just find the product of the diagonal. Solving the determinant of the original matrix A then just boils down to calculating α as you find the row echelon form R.
What You Need to Know
What is row echelon form?
See this [link](http://stattrek.com/matrix-algebra/echelon-form.aspx) for a simple definition
**Note:** Not all definitions require 1s for the leading entries, and it is unnecessary for this algorithm.
You Can Find R Using Elementary Row Operations
Swapping rows, adding multiples of another row, etc.
You Derive α from Properties of Row Operations for Determinants
If B is a matrix obtained by multiplying a row of A by some non-zero constant β, then
det(B) = β * det(A)
In other words, you can essentially 'factor out' a constant from a row by just pulling it out front of the determinant.
If B is a matrix obtained by swapping two rows of A, then
det(B) = -det(A)
If you swap rows, flip the sign.
If B is a matrix obtained by adding a multiple of one row to another row in A, then
det(B) = det(A)
The determinant doesn't change.
Note that you can find the determinant, in most cases, with only Rule 3 (when the diagonal of A has no zeros, I believe), and in all cases with only Rules 2 and 3. Rule 1 is helpful for humans doing math on paper, trying to avoid fractions.
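Those rules translate almost directly into code. Here is a minimal sketch (my own, not part of the answer; it uses only Rules 2 and 3, tracking just the sign of α):

def determinant(A):
    """Determinant by row reduction: Rule 3 operations leave it unchanged,
    each Rule 2 row swap flips its sign."""
    M = [row[:] for row in A]                # work on a copy
    n = len(M)
    sign = 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return 0                         # a whole column of zeros: det is 0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign                     # Rule 2
        for r in range(i + 1, n):
            ratio = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= ratio * M[i][c]   # Rule 3
    result = sign
    for i in range(n):
        result *= M[i][i]                    # product of the diagonal of R
    return result

print(determinant([[2, 3, 3, 1],
                   [0, 4, 3, -3],
                   [2, -1, -1, -3],
                   [0, -4, -3, 2]]))         # 8.0, matching the worked example below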
Example
(I do unnecessary steps to demonstrate each rule more clearly)
| 2 3 3 1 |
A=| 0 4 3 -3 |
| 2 -1 -1 -3 |
| 0 -4 -3 2 |
R2 <-> R3, -α -> α (Rule 2)
| 2 3 3 1 |
-| 2 -1 -1 -3 |
| 0 4 3 -3 |
| 0 -4 -3 2 |
R2 - R1 -> R2 (Rule 3)
| 2 3 3 1 |
-| 0 -4 -4 -4 |
| 0 4 3 -3 |
| 0 -4 -3 2 |
R2/(-4) -> R2, -4α -> α (Rule 1)
| 2 3 3 1 |
4| 0 1 1 1 |
| 0 4 3 -3 |
| 0 -4 -3 2 |
R3 - 4R2 -> R3, R4 + 4R2 -> R4 (Rule 3, applied twice)
| 2 3 3 1 |
4| 0 1 1 1 |
| 0 0 -1 -7 |
| 0 0 1 6 |
R4 + R3 -> R4 (Rule 3)
| 2 3 3 1 |
4| 0 1 1 1 | = 4 ( 2 * 1 * -1 * -1 ) = 8
| 0 0 -1 -7 |
| 0 0 0 -1 |
def echelon_form(A, size):
    """Reduce the square matrix A (a list of lists) to row echelon form in place."""
    for i in range(size - 1):
        # Work from the bottom row up, clearing column i below the pivot.
        for j in range(size - 1, i, -1):
            if A[j][i] == 0:
                continue
            try:
                req_ratio = A[j][i] / A[j - 1][i]
            except ZeroDivisionError:
                # The row above has a zero in this column: swap the rows and retry.
                # (Note: each swap flips the sign of the determinant, Rule 2.)
                A[j], A[j - 1] = A[j - 1], A[j]
                continue
            # A[j] = A[j] - req_ratio * A[j-1]  (Rule 3: determinant unchanged)
            for k in range(size):
                A[j][k] = A[j][k] - req_ratio * A[j - 1][k]
    return A
If you did an initial research, you've probably found that with N>=4, calculation of a matrix determinant becomes quite complex. Regarding algorithms, I would point you to Wikipedia article on Matrix determinants, specifically the "Algorithmic Implementation" section.
From my own experience, you can easily find a LU or QR decomposition algorithm in existing matrix libraries such as Alglib. The algorithm itself is not quite simple though.
I am not too familiar with LU factorization, but I know that in order to get either L or U, you need to make the initial matrix triangular (either upper triangular for U or lower triangular for L). However, once you get the matrix in triangular form for some nxn matrix A, and assuming the only operation your code uses is Rb - k*Ra, you can just compute det(A) = Π T(i,i) for i = 0 to n-1 (i.e. det(A) = T(0,0) x T(1,1) x ... x T(n-1,n-1)) for the triangular matrix T. Check this link to see what I'm talking about: http://matrix.reshish.com/determinant.php