Background
I apologize for the music-based question, but the details don't really matter all that much. I'm sequentially going through a MIDI file, and I'm looking for an efficient way to find a pattern in the data that identifies something called a tuplet. See the image below:
The tuplets have a number (3 or 6) above them. I need to know at which position they begin in the data file. The numbers below the notes are the values you would see sequentially in the data file. Just in case you can't decipher them from the image, here they are:
1, 2, 2.3333, 2.6666, 3, 3.5, 3.6666, 3.83333, 4, 4.1666, 4.3333, 4.5, 4.6666, 4.8333,
5, 6.3333, 6.6666, 7.1666, 7.3333, 7.5, 7.6666, 7.8333, 8, 8.1666, 8.333, 8.5, 8.6666.
The first tuplet begins at position 2 and the difference between the position of notes is 0.3333 (repeating)
The second tuplet begins at position 3.5 and the difference between the position of notes is 0.1666 (repeating)
The main issue is that, unlike in the image, position 7 will not appear in the data file, because the file only lists note locations. The symbol you see at that position is called a rest, which is not notated in the data file.
Question
How can I find an efficient method to find the start of each tuplet? Is there some sort of recursive method?
I don't think you need any recursion for this.
Normal note values can only be represented by fractions of the beat of the form a / 2^b. Tuplets can be arbitrary fractions, but mostly I've seen things like triplets, quintuplets or (in your case) sextuplets.
So the simplest way would be to compute the length of every note (maybe the time difference between two MIDI events? Or is the length stored explicitly in MIDI? I'm not that familiar with the format) and compute the rational representation of this length.
Every group of notes with a denominator that is not a power of two belongs to such a tuplet. To group the notes together, I would recommend the following approach (assuming that all notes of a tuplet have the same value):
Factorize the denominator into a power of two a and the rest b (e.g. a * b = 4 * 5)
Initialize an empty tuplet of size b
For every note, compute the distance to the beginning of the tuplet and store the note at the corresponding position, inserting rests if necessary. The length of the tuplet can be computed by taking the minimum length l of all notes in the tuplet, so greedily add notes until the end of a note exceeds a distance of l * b from the beginning of the tuplet.
This way, you base the tuplet on the minimum note length and add all notes that fit into it.
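For example, here is a rough sketch in Python (not tied to any particular MIDI library; it assumes you already have the list of note positions in beats, as in the question) that flags the start of each run of non-power-of-two gaps:

from fractions import Fraction

def tuplet_starts(positions, max_denominator=24):
    # Sketch: a gap whose rational form has a denominator that is not a power
    # of two indicates a tuplet subdivision; report where each run begins.
    starts = []
    in_tuplet = False
    for a, b in zip(positions, positions[1:]):
        gap = Fraction(b - a).limit_denominator(max_denominator)
        d = gap.denominator
        is_tuplet_gap = (d & (d - 1)) != 0   # True when d is not a power of two
        if is_tuplet_gap and not in_tuplet:
            starts.append(a)                 # a new tuplet begins at this note
        in_tuplet = is_tuplet_gap
    return starts

positions = [1, 2, 2.3333, 2.6666, 3, 3.5, 3.6666, 3.83333, 4]
print(tuplet_starts(positions))              # -> [2, 3.5]

This only covers the detection step; grouping the notes of each tuplet and inserting rests would follow the steps above.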
Related
My actual vector has 110 elements that I'll use to extract features from images in MATLAB; I took this one (tb) to simplify:
tb=[22.9 30.0 30.3 27.8 24.1 28.2 26.4 12.6 39.7 38.0];
normalized_V = tb/norm(tb);
I = mat2gray(tb);
For normalized_V I got 0.2503 0.3280 0.3312 0.3039 0.2635 0.3083 0.2886 0.1377 0.4340 0.4154.
For I I got 0.3801 0.6421 0.6531 0.5609 0.4244 0.5756 0.5092 0 1.0000 0.9373. Which of these two methods should I use, if either, and why? Also, should I transform the feature vector to a single element after extraction for better training, or leave it as a 110-element vector?
Normalization can be performed in several ways, such as the following:
Normalizing the vector between 0 and 1. In that case, just use: (tb-min(tb))/(max(tb)-min(tb)) (this is what mat2gray gave you).
Making the maximum 1. In that case, just use: tb/max(tb) (similar in spirit to the tb/norm(tb) you used, but dividing by the maximum instead of the vector norm).
Making the mean 0 and the standard deviation 1. This is the most common method when the returned values are used as features in a classification procedure, and thus I think it is the one you should use right now: zscore(tb) (or (tb-mean(tb))/std(tb)).
So, your final values would be:
zscore(tb)
ans =
-0.6664
0.2613
0.3005
-0.0261
-0.5096
0.0261
-0.2091
-2.0121
1.5287
1.3066
Edit:
In regard to your second question, it depends on the number of observations. Every classifier takes an MxN matrix of data and an Mx1 vector of labels as input, where M is the number of observations and N the number of features. Usually, in order to avoid over-fitting, it is recommended to keep the number of features below one tenth of the number of observations (i.e., M > 10N).
So, in your case, if you use the entire set of 110 features, you should have a minimum of 1100 observations; otherwise you may run into over-fitting problems.
I've been given an assignment to write, using Prolog, a solver for the Battleships solitaire puzzle. For those unfamiliar with it, the puzzle deals with a 6 by 6 grid on which a series of ships are placed according to the provided constraints on each row and column, e.g. the first row must contain 3 squares with ships, the second row must contain 1 square with a ship, the third row must contain 0 squares, and so on for the other rows and columns.
Each puzzle comes with its own set of constraints and revealed squares, typically two. An example can be seen here:
battleships
So, here's what I've done:
step([ShipCount,Rows,Cols,Tiles],[ShipCount2,Rows2,Cols2,Tiles2]):-
ShipCount2 is ShipCount+1,
nth1(X,Cols,X1),
X1\==0,
nth1(Y,Rows,Y1),
Y1\==0,
not(member([X,Y,_],Tiles)),
pairs(Tiles,TilesXY),
notdiaglist(X,Y,TilesXY),
member(T,[1,2,3,4,5,6]),
append([X,Y],[T],Tile),
append([Tile],Tiles,Tiles2),
dec_elem1(X,Cols,Cols2),dec_elem1(Y,Rows,Rows2).
dec_elem1(1,[A|Tail],[B|Tail]):- B is A-1.
dec_elem1(Count,[A|Tail],[A|Tail2]):- Count1 is Count-1,dec_elem1(Count1,Tail,Tail2).
neib(X1,Y1,X2,Y2) :- X2 is X1,(Y2 is Y1 -1;Y2 is Y1+1; Y2 is Y1).
neib(X1,Y1,X2,Y2) :- X2 is X1-1,(Y2 is Y1 -1;Y2 is Y1+1; Y2 is Y1).
neib(X1,Y1,X2,Y2) :- X2 is X1+1,(Y2 is Y1 -1;Y2 is Y1+1; Y2 is Y1).
notdiag(X1,Y1,X2,Y2) :- not(neib(X1,Y1,X2,Y2)).
notdiag(X1,Y1,X2,Y2) :- neib(X1,Y1,X2,Y2),((X1 == X2,t(Y1,Y2));(Y1 == Y2,t(X1,X2))).
notdiaglist(X1,Y1,[]).
notdiaglist(X1,Y1,[[X2,Y2]|Tail]):-notdiag(X1,Y1,X2,Y2),notdiaglist(X1,Y1,Tail).
t(X1,X2):- X is abs(X1-X2), X==1.
pairs([],[]).
pairs([[X,Y,Z]|Tail],[[X,Y]|Tail2]):-pairs(Tail,Tail2).
I represent a state with a list: [Count,Rows,Columns,Tiles]. The last state must be
[10,[0,0,0,0,0,0],[0,0,0,0,0,0], somelist]. A puzzle starts from an initial state, for example
initial([1, [1,3,1,1,1,2] , [0,2,2,0,0,5] , [[4,4,1],[2,1,0]]]).
I try to find a solution in the following manner:
run:-initial(S),step(S,S1),step(S1,S2),....,step(S8,F).
Now, here's the difficulty: if I restrict myself to one type of ship part by using member(T,[1])
instead of
member(T,[1,2,3,4,5,6])
it works fine. However, when I use the full range of possible values for T, which are needed
later, the query never ends since it runs for too long. This puzzles me, since:
(a) it works for 6 types of ships but only for 8 steps instead of 9
(b) going from a single type of ship to 6 types increases the number
of options for just the last step by a factor of 6, which
shouldn't have such a dramatic effect.
So, what's going on?
To answer your question directly, what's going on is that Prolog is trying to sift through an enormous space of possibilities.
You're correct that altering that line increases the search space of the last call by a factor of six, but note that the size of the search space of, say, nine calls isn't proportional to 9 times the size of one call. Prolog will backtrack on failure, so it's proportional (bounded above, actually) to the size of the possible results of one call raised to the ninth power.
That means we can expect the size of the space Prolog needs to search to grow by at most a factor of 6^9 = 10077696 when we allow T to take on 6 times as many values.
Of course, it doesn't help that (as far as I was able to tell) a solution doesn't exist if we call step 9 times starting with initial anyway. Since that last call is going to fail, Prolog will keep trying until it has exhausted all possibilities (of which there are a great many) before it finally gives up.
As far as a solution goes, I'm not sure I know enough about the problem. If the value of T is the kind of ship part that fits in the grid (e.g. single square, half of a 2-square ship, part of a 3-square ship), you should note that this gives you a lot more information than the numbers on the rows/columns.
Right now, in pseudocode, your step looks like this:
Find a (X,Y) pair that has non-zero markings on its row/column
Check that there isn't already a ship there
Check that it isn't diagonal to a ship
Pick a kind of ship-part for it to be.
I'd suggest an approach like this:
Finish any already placed ship-bits to form complete ships (if we can't: fail)
Until we're finished:
Find acceptable places to place ship
Check that the markings on the row/column aren't zero
Try to place an entire ship here. (instead of a single part)
By using the most specific information that we have first (in this case, the previously placed parts), we can reduce the amount of work Prolog has to do and make things return reasonably fast.
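For illustration only, here is roughly what that "place an entire ship" feasibility test could look like, sketched in Python rather than Prolog just to spell out the checks (a 6x6 board is assumed, and occupied stands for the set of cells that already hold ship parts; the actual predicate would encode the same conditions):

from collections import Counter

def can_place(ship_len, x, y, horizontal, rows, cols, occupied):
    # Sketch: a whole ship placed at (x, y) must stay on the board, fit the
    # remaining row/column counts, and not touch any occupied cell, even diagonally.
    cells = [(x + i, y) if horizontal else (x, y + i) for i in range(ship_len)]
    if any(not (1 <= cx <= 6 and 1 <= cy <= 6) for cx, cy in cells):
        return False                                    # off the board
    col_need = Counter(cx for cx, _ in cells)
    row_need = Counter(cy for _, cy in cells)
    if any(cols[cx - 1] < n for cx, n in col_need.items()):
        return False                                    # a column runs out of ship squares
    if any(rows[cy - 1] < n for cy, n in row_need.items()):
        return False                                    # a row runs out of ship squares
    return all((cx + dx, cy + dy) not in occupied       # no contact with existing ships
               for cx, cy in cells
               for dx in (-1, 0, 1) for dy in (-1, 0, 1))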
I tried coming up with a compression algorithm. I know a little bit about compression theory, so I am aware that this scheme I have come up with may very well never achieve compression at all.
Currently it only works for a string with no consecutive repeating letters/digits/symbols. Once it is properly established, I hope to extend it to binary data etc. But first, the algorithm:
Assuming there are only 4 letters, a, b, c, d, we create a matrix/array with one entry per letter. Whenever a letter is encountered, its entry is set to its current value plus the current largest element in the array plus 2, so the entry of the most recently seen letter is always the largest (and the previous largest becomes the second largest). An example to clarify:
Array = [a,b,c,d]
Initial state = [0,0,0,0]
Letter = a
New state = [2,0,0,0]
Letter = b
New state = [2,4,0,0]
Letter = c
New state = [2,4,6,0]
Letter = d
New state = [2,4,6,8]
Letter = a
New state = [12,4,6,8]
//Explanation for the above state: a's old value 2, plus the previous largest 8, plus 2, gives 12. Equivalently, New value - Second largest - 2 = Old value.
Letter = d
New state = [12,4,6,22]
and so on...
Decompression is just this logic in reverse.
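Roughly, the reverse pass would look something like this (just an untested sketch, not my actual code; it relies on the fact that the most recently encoded letter always holds the largest value, and its old value is the new value minus the second largest minus 2):

def decompress(matrix):
    # Untested sketch: repeatedly undo the most recent update.
    matrix = list(matrix)                  # work on a copy
    letters = []
    while any(matrix):
        i = matrix.index(max(matrix))      # the largest value marks the last letter encoded
        previous_largest = max(v for j, v in enumerate(matrix) if j != i)
        letters.append(chr(i + 32))
        matrix[i] -= previous_largest + 2  # undo: new = old + previous largest + 2
    return ''.join(reversed(letters))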
A rudimentary implementation of compression (in python):
(This function is very rudimentary so not the best kind of code...I know. I can optimize it once I get the core algorithm correct.)
import copy

def compress(text):
    matrix = [0]*95  # we are concerned with 95 printable chars for now
    for i in text:
        temp = copy.deepcopy(matrix)
        temp.sort()
        largest = temp[-1]  # current largest entry (max(matrix) would also do)
        if matrix[ord(i)-32] == 0:
            matrix[ord(i)-32] = largest+2
        else:
            matrix[ord(i)-32] = largest+matrix[ord(i)-32]+2
    return matrix
The returned matrix is then used for decompression. Now comes the tricky part:
I can't really call this compression at all, because each number in the matrix generated by the function is of the order of 10**200 for a string of length 50000. So storing the matrix actually takes more space than storing the original string. I know... totally useless. But I had hoped, prior to doing all this, that I could use the mathematical properties of a matrix to represent it effectively in some kind of mathematical shorthand. I have tried many possibilities and failed. Some things that I tried:
Rank of the matrix. Failed because not unique.
Denote it using the mod function. Failed because either the quotient or the remainder is still too large.
Store each integer as a generator using pickle.
Store the matrix as a bitmap file but then the integers are too large to be able to store as color codes.
Let me reiterate that the algorithm could be optimized, e.g. instead of adding 2 we could add 1 and proceed, but that doesn't really result in any compression. The same goes for the code. Minor optimizations can come later... first I want to improve the main algorithm.
Furthermore, it is very likely that this product of a mediocre and idle mind like mine will never achieve compression after all. In that case, I would like your help and ideas on what it could be useful for.
TL;DR: Check coded parts which depict a compression algorithm. The compressed result is longer than the original string. Can this be fixed? If yes, how?
PS: I have the entire code on my PC. Will create a repo on github and upload in some time.
Compression is essentially a predictive process. Look for patterns in the input and use them to encode the more likely next character(s) more efficiently than the less likely. I can't see anything in your algorithm that tries to build a predictive model.
Can anyone please explain arithmetic encoding for data compression with implementation details? I have searched the internet and found Mark Nelson's post, but the implementation technique is still unclear to me after trying for many hours.
Mark Nelson's explanation of arithmetic coding can be found at
http://marknelson.us/1991/02/01/arithmetic-coding-statistical-modeling-data-compression/
The main idea of arithmetic compression is its capability to code a probability using exactly the amount of data it requires.
This amount of data is known (proven by Shannon) and can be calculated simply by using the following formula: -log2(p)
For example, if p=50%, then you need 1 bit.
And if p=25%, you need 2 bits.
That's simple enough for probabilities that are (negative) powers of 2 (and in this special case, Huffman coding could be enough). But what if the probability is 63%? Then you need -log2(0.63) = 0.67 bits. Sounds tricky...
This property is especially important if your probability is high. If you can predict something with a 95% accuracy, then you only need 0.074 bits to represent a good guess. Which means you are going to compress a lot.
Now, how to do that ?
Well, it's simpler than it sounds. You will divide your range depending on probabilities. For example, if you have a range of 100, 2 possible events, and a probability of 95% for the 1st one, then the first 95 values will say "Event 1", and the last 5 remaining values will say "Event 2".
OK, but on computers, we are accustomed to using powers of 2. For example, with 16 bits you have a range of 65536 possible values. Just do the same: take the first 95% of the range (which is 62259 values) to say "Event 1", and the rest to say "Event 2". You obviously have a problem of "rounding" (precision), but as long as you have enough values to distribute, it does not matter too much. Furthermore, you are not constrained to 2 events; you could have a myriad of events. All that matters is that values are allocated in proportion to the probabilities of each event.
OK, but now I have 62259 possible values to say "Event 1", and 3277 to say "Event 2". Which one should I choose?
Well, any of them will do. Whether it is 1, 30, 5500 or 62258, it still means "Event 1".
In fact, deciding which value to select will not depend on the current guess, but on the next ones.
Suppose I'm having "Event 1". So now I have to choose any value between 0 and 62258. On the next guess, I have the same distribution (95% Event 1, 5% Event 2). I will simply allocate the distribution map with these probabilities, except that this time it is distributed over 62259 values. And we continue like this, reducing the range of values with each guess.
So in fact, we are defining "ranges", which narrow with each guess. At some point, however, there is a problem of accuracy, because very few values remain.
The idea is simply to "inflate" the range again. For example, each time the range goes below 32768 (2^15), you output the highest bit and multiply the rest by 2 (effectively shifting the values one bit to the left). By continuing like this, you output bits one by one as they are settled by the series of guesses.
Now the relation with compression becomes obvious: when the range narrows sharply (e.g. a 5% event), you output a lot of bits to get the range back above the limit. On the other hand, when the probability is very high, the range narrows very slowly. You can even have a lot of guesses before outputting your first bits. That's how it is possible to compress an event to "a fraction of a bit".
I've intentionally used the terms "probability", "guess" and "events" to keep this article generic. But for data compression, you just have to replace them with the way you want to model your data. For example, the next event could be the next byte; in this case, you have 256 of them.
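If it helps to see the narrowing-range idea in actual code, here is a toy sketch in Python. It uses exact fractions and skips the renormalization and carry handling discussed above, so it is illustrative rather than practical; the symbols and probabilities are whatever you pass in.

from fractions import Fraction
from math import ceil

def encode(symbols, probs):
    # Toy arithmetic encoder: narrow the interval [low, high) once per symbol,
    # then output the bits of a dyadic interval that fits entirely inside it.
    # probs maps each symbol to its probability (Fractions summing to 1).
    cum, acc = {}, Fraction(0)
    for s, p in probs.items():             # build cumulative ranges per symbol
        cum[s] = (acc, acc + p)
        acc += p
    low, high = Fraction(0), Fraction(1)
    for s in symbols:                      # narrow the range for every symbol
        width = high - low
        lo, hi = cum[s]
        low, high = low + width * lo, low + width * hi
    n, scale = 0, Fraction(1)              # shortest dyadic interval inside [low, high)
    while 2 * scale > high - low:
        scale, n = scale / 2, n + 1
    g = ceil(low / scale)                  # first multiple of 2^-n that is >= low
    return format(g, '0{}b'.format(n))

# Two events at 95% / 5%, encoding one "Event 1" followed by one "Event 2":
probs = {'Event 1': Fraction(19, 20), 'Event 2': Fraction(1, 20)}
print(encode(['Event 1', 'Event 2'], probs))   # -> '111010'

The cheap 95% event contributes almost nothing to the output; the expensive 5% event is worth about -log2(0.05) = 4.3 bits on its own, and the naive bit-emission step here wastes a bit or two on top of that.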
Maybe this script could be useful for building a better mental model of an arithmetic coder: gen_map.py. Originally it was created to facilitate debugging of an arithmetic coder library and to simplify generation of unit tests for it. However, it creates nice ASCII visualizations that could also be useful for understanding arithmetic coding.
A small example. Imagine we have an alphabet of 3 symbols: 0, 1 and 2, with probabilities 1/10, 2/10 and 7/10 respectively, and we want to encode the sequence [1, 2]. The script gives the following output (ignore the -b N option for now):
$ ./gen_map.py -b 6 -m "1,2,7" -e "1,2"
000000111111|1111|111222222222222222222222222222222222222222222222
------011222|2222|222000011111111122222222222222222222222222222222
---------011|2222|222-------------00011111122222222222222222222222
------------|----|-------------------------00111122222222222222222
------------|----|-------------------------------01111222222222222
------------|----|------------------------------------011222222222
==================================================================
000000000000|0000|000000000000000011111111111111111111111111111111
000000000000|0000|111111111111111100000000000000001111111111111111
000000001111|1111|000000001111111100000000111111110000000011111111
000011110000|1111|000011110000111100001111000011110000111100001111
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
001100110011|0011|001100110011001100110011001100110011001100110011
010101010101|0101|010101010101010101010101010101010101010101010101
The first 6 lines (before the ==== line) represent the range from 0.0 to 1.0, recursively subdivided into intervals proportional to the symbol probabilities. Annotated first line:
[1/10][ 2/10 ][ 7/10 ]
000000111111|1111|111222222222222222222222222222222222222222222222
Then we subdivide each interval again:
[ 0.1][ 0.2 ][ 0.7 ]
000000111111|1111|111222222222222222222222222222222222222222222222
[ 0.7 ][.1][ 0.2 ][ 0.7 ]
------011222|2222|222000011111111122222222222222222222222222222222
[.1][ .2][ 0.7 ]
---------011|2222|222-------------00011111122222222222222222222222
Note that some intervals are not subdivided. That happens when there is not enough space to represent every subinterval within the given precision (which is specified by the -b option).
Each line corresponds to a symbol from the input (in our case, the sequence [1, 2]). By following the subintervals for each input symbol we get the final interval that we want to encode with a minimal number of bits. In our case it's the first '2' subinterval on the second line:
[ This one ]
------011222|2222|222000011111111122222222222222222222222222222222
The following 7 lines (after the ==== line) represent the same 0.0 to 1.0 interval, but subdivided according to binary notation. Each line is one bit of output, and by choosing between 0 and 1 you choose the left or right half-subinterval. For example, the bits 01 correspond to the subinterval [0.25, 0.5) on the second line:
[ This one ]
000000000000|0000|111111111111111100000000000000001111111111111111
The idea of the arithmetic coder is to output bits (0 or 1) until the corresponding interval is entirely inside (or equal to) the interval determined by the input sequence. In our case it's 0011. The ~~~~ line shows where we have enough bits to unambiguously identify the interval we want.
Vertical lines formed by the | symbol show the range of bit sequences that could be used to encode the input sequence.
First of all thanks for introducing me to the concept of arithmetic compression!
I can see that this method has the following steps:
Creating the mapping: calculate the fraction of occurrences for each letter, which gives a range size for each symbol of the alphabet. Then order them and assign actual ranges from 0 to 1.
Given a message calculate the range (pretty straightforward IMHO)
Find the optimal code
The third part is a bit tricky. Use the following algorithm.
Let b be the optimal representation. Initialize it to the empty string (''). Let x be the minimum value and y the maximum value of the range.
Double x and y: x = 2*x, y = 2*y.
If both of them are greater than 1, append 1 to b, subtract 1 from both x and y, and go to step 1.
If both of them are less than 1, append 0 to b and go to step 1.
If x < 1 but y > 1, append 1 to b and stop.
b essentially contains the fractional part of the number you are transmitting, e.g. if b = 011, then the fraction corresponds to 0.011 in binary.
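As a sanity check of that third step, here is the doubling loop written out in Python (just a sketch; x and y are the interval bounds from step 2, and the example interval is made up):

def interval_bits(x, y):
    # Emit bits until the binary fraction 0.b1 is known to lie inside [x, y).
    b = ""
    while True:
        x, y = 2 * x, 2 * y
        if y <= 1:                 # interval is in the lower half: emit 0
            b += "0"
        elif x >= 1:               # interval is in the upper half: emit 1 and shift down
            b += "1"
            x, y = x - 1, y - 1
        else:                      # interval straddles the midpoint: 0.b1 is inside it
            return b + "1"

print(interval_bits(0.2, 0.35))    # -> '01', i.e. 0.01 in binary = 0.25, which lies in [0.2, 0.35)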
What part of implementation do you not understand?
I am looking for an existing path truncation algorithm (similar to what the Win32 static control does with SS_PATHELLIPSIS) for a set of paths, one that focuses on the distinct elements.
For example, if my paths are like this:
Unit with X/Test 3V/
Unit with X/Test 4V/
Unit with X/Test 5V/
Unit without X/Test 3V/
Unit without X/Test 6V/
Unit without X/2nd Test 6V/
When not enough display space is available, they should be truncated to something like this:
...with X/...3V/
...with X/...4V/
...with X/...5V/
...without X/...3V/
...without X/...6V/
...without X/2nd ...6V/
(Assuming that an ellipsis generally is shorter than three letters).
This is just an example of a rather simple, ideal case (e.g. they'd all end up at different lengths now, and I wouldn't know how to create a good suggestion when a path "Thingie/Long Test/" is added to the pool).
There is no given structure to the path elements; they are assigned by the user, but items will often have similar segments. It should work for proportional fonts, so the algorithm should take a measure function (and not call it too heavily) or generate a list of suggestions.
Data-wise, a typical use case would contain 2..4 path segments and 20 elements per segment.
I am looking for previous attempts in that direction, and whether this is solvable with a sensible amount of code or dependencies.
I'm assuming you're asking mainly about how to deal with the set of folder names extracted from the same level of hierarchy, since splitting by rows and path separators and aggregating by hierarchy depth is simple.
Your problem reminds me a lot of the longest common substring problem, with the differences that:
You're interested in many substrings, not just one.
You care about order.
These may appear substantial, but if you examine the dynamic-programming solution in the article you can see that it revolves around creating a table of "character collisions" and then looking for the longest diagonal in this table. I think that you could instead enumerate all diagonals in the table by the order in which they appear, and then for each path replace, by order, all appearances of these strings with ellipses.
Enforcing a minimal substring length of 2 will return a result similar to what you've outlined in your question.
It does seem like it requires some tinkering with the algorithm (for example, ensuring a certain substring is first in all strings), and then you need to invoke it over your entire set... I hope this at least gives you a possible direction.
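To make the starting point concrete, here is the classic dynamic-programming table for one pair of strings (a sketch in Python; extending it to collect all sufficiently long common substrings, in order, is the tinkering mentioned above):

def longest_common_substring(a, b):
    # Build the "character collision" table row by row; the longest diagonal
    # run of matches is the longest common substring.
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return a[best_end - best:best_end]

print(longest_common_substring("Unit with X/Test 3V/", "Unit without X/Test 4V/"))  # -> 'Unit with'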
Well, the "natural number" ordering part is actually easy, simply replace all numbers with formatted number where there is enough leading zeroes, eg. Test 9V -> Test 000009V and Test 12B -> Test 000012B. These are now sortable by standard methods.
For the actual ellipsizing: unless this is actually a huge system, I'd just add a manual ellipsizing "list" (of regexes, for flexibility and pain) that would turn certain words into ellipses. This does require continuous work, but coming up with an algorithm eats your time too, and there are myriad corner cases.
I'd probably try a "flood fill" approach. Arrange the first level of directories as you would a bitmap, where every letter is a pixel. Iterate over all characters that occur in the directory names. For each of them, "paint" that character, then "paint" the next character from the first string such that it follows the previous character (and so on). Then select the longest painted string that you find.
Example (if prefixed with *, it's painted)
Foo
BarFoo
*Foo
Bar*Foo
*F*oo
Bar*F*oo
...
note that:
*ofoo
b*oo
*o*foo
b*oo
.. painting of first 'o' stops since there are no continuing characters.
of*oo
b*oo
...
And then you get to the second "o" and it will find a substring of at least 2.
So you will have to iterate over most possible character instances (one optimization is to stop in each string at position Length-n, where n is the length of the longest common substring found so far). But then there is yet another problem (here with "Beta Beta"):
| <- visibility cutout
Alfa Beta Gamma Delta 1
Alfa Beta Gamma Delta 2
Alfa Beta Beta 1
Alfa Beta Beta 2
Beta Beta 1
Beta Beta 2
Beta Beta 3
Beta Beta 4
What do you want to do? Cut Alfa Beta Gamma Delta or Alfa Beta or Beta Beta or Beta?
This is a bit rambling, but might be entertaining :).