There is a big data file whose format is:
111111 11 22 33 44 55 66 77
222222 21 22 23 29 99 98 00
...... ..
then how can I use Prolog to calculate each number's frequency?
Sincerely!
You have two problems: Parsing the file and calculating the frequencies.
For parsing the file, I recommend library(pio). With it you can use DCGs to process the file, so I'd recommend you first learn about DCGs. They are Prolog's way to describe, generate, and parse text. They are actually even more general than that, but to start with, just think of them that way.
You can then combine this with calculating the frequencies. To make that efficient for very large data sets, see this question.
Let's consider the following example with symbol - code length - canonical code data.
A - 2 - 00
B - 2 - 01
D - 2 - 10
C - 3 - 110
E - 3 - 111
I was wondering what the contents of the encoded bit stream would be. Is it 00 01 10 110 111 (basically all the codes), or the binary equivalent of the corresponding code lengths 2,2,2,3,3? I'd add that some resources say to just transmit the codes as the encoded bit stream, while a few other resources talk about throwing the codes away and transmitting only the code length data.
Encoded bitstream
The code is:
00 01 10 110 111
Note that if we instead sent the code lengths 2,2,2,3,3, it would be impossible to decide whether the input was AAACC or BBBEE (or many other equivalent choices).
Because Huffman codes are a prefix code, we can unambiguously decode the bitstream despite not knowing where the spaces are.
In other words, when given the output 000110110111, we can uniquely decode it as ABDCE.
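To make the prefix property concrete, here is a small C sketch (my own illustration, using just the five-entry table from the question) that decodes 000110110111 by matching codes from left to right:

#include <stdio.h>
#include <string.h>

/* Toy decoder for the code above: A=00, B=01, D=10, C=110, E=111.
   Because no code is a prefix of another, we can walk the bit string
   left to right and emit a symbol as soon as a code matches. */
int main(void) {
    const char *codes[]   = {"00", "01", "10", "110", "111"};
    const char  symbols[] = {'A', 'B', 'D', 'C', 'E'};
    const char *bits = "000110110111";       /* the example bitstream */

    while (*bits) {
        int matched = 0;
        for (int i = 0; i < 5; i++) {
            size_t len = strlen(codes[i]);
            if (strncmp(bits, codes[i], len) == 0) {
                putchar(symbols[i]);         /* prints ABDCE */
                bits += len;
                matched = 1;
                break;
            }
        }
        if (!matched)
            break;                           /* truncated or invalid input */
    }
    putchar('\n');
    return 0;
}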
Transmitting code table
I think the confusion may be because you need to possess two things to decode the bitstream:
The coded bitstream
The lookup table
These two things are often coded in very different ways.
In many cases the lookup table is fixed in advance so does not need to be transmitted.
However, if the probabilities can change, then we need to tell the recipient what code table to use. In this case we can just transmit the lengths of each code word and this gives enough information for the receiver to construct the canonical Huffman code. Alternatives are also possible, for example we can send the number of each code word length followed by the values. This alternative is used by JPEG and explained more below.
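For illustration (this is my own sketch, not part of the JPEG scheme described below), a few lines of C that rebuild the canonical code for the question's five symbols from nothing but their code lengths:

#include <stdio.h>

/* Rebuild a canonical Huffman code from code lengths alone.  Symbols must
   be taken in order of (length, symbol value); for the example they already
   are: A, B, D have length 2 and C, E have length 3. */
int main(void) {
    const char symbols[] = {'A', 'B', 'D', 'C', 'E'};
    const int  lengths[] = { 2,   2,   2,   3,   3 };
    unsigned   code = 0;
    int        prev_len = lengths[0];

    for (int i = 0; i < 5; i++) {
        code <<= (lengths[i] - prev_len);    /* append zeros when the length grows */
        prev_len = lengths[i];
        for (int b = lengths[i] - 1; b >= 0; b--)
            putchar((code >> b) & 1 ? '1' : '0');
        printf(" = %c\n", symbols[i]);       /* prints 00=A, 01=B, 10=D, 110=C, 111=E */
        code++;                              /* next code of the same length */
    }
    return 0;
}

This is exactly why sending only the lengths 2,2,2,3,3 (plus the symbol order) is enough for the receiver.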
Example
The JPEG image codec uses Huffman tables. Normally some default tables are used, but it is possible to optimize the size of images by transmitting a custom Huffman code. A tutorial about this is here.
Another description of the way of transmitting the Huffman table is here. The counts of codes of each length are sent (as bytes), followed by the code values (again as bytes).
Code to read it (taken from the link) is:
// Next sixteen bytes are the counts for each code length
// (fp, ctr, table, huffData and huffKey() are set up earlier in the linked source)
u8 counts[16];
for (i = 0; i < 16; i++) {
    counts[i] = fgetc(fp);
    ctr++;
}

// Remaining bytes are the data values to be mapped
// Build the Huffman map of (length, code) -> value
code = 0;   // assumed starting value for the first canonical code (not shown in the excerpt)
for (i = 0; i < 16; i++) {
    for (j = 0; j < counts[i]; j++) {
        huffData[table][huffKey(i + 1, code)] = fgetc(fp);
        code++;
        ctr++;
    }
    code <<= 1;
}
What you are asking is how to send a description of the code to the receiver, so that the receiver knows how to decode the following code values.
There are many ways of varying levels of sophistication, depending on how much effort you want to put into compressing the description of the code. Peter de Rivaz describes a simple approach used by JPEG, which is to send 16 counts of the number of codes of each length, followed by the byte values of each of those symbols. So for your code that would be (in hex):
00 03 02 00 00 00 00 00 00 00 00 00 00 00 00 00 41 42 43 44 45
That's not terribly compact, and it can't represent one of the possible codes, which is 256 8-bit codes, since you are limited to a count of 255 for each length.
The first thing you can do is cut off the code lengths when you have a complete code. It is easy to calculate how many code patterns are left, in which case you can simply end it when there are none left. Follow that with the symbols. You then have:
00 03 02 41 42 43 44 45
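As a sketch of that "stop when the code is complete" calculation (illustration only, using the example counts 0, 3, 2):

#include <stdio.h>

/* Read one count per code length and track how many code patterns are still
   unused; when none are left the code is complete and the counts can stop.
   With counts 0, 3, 2 the code completes after length 3. */
int main(void) {
    const int counts[] = {0, 3, 2};          /* codes of length 1, 2, 3 */
    int remaining = 1;                       /* one "pattern" of length 0 */

    for (int len = 1; len <= 3; len++) {
        remaining = remaining * 2 - counts[len - 1];
        printf("after length %d: %d patterns left\n", len, remaining);
        if (remaining == 0)                  /* complete code: stop sending counts */
            break;
    }
    return 0;
}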
We don't need eight bits for each count, since they are limited by the constraints on those counts. For example, you can't have more than two one-bit codes. So we could code these in fewer bits, e.g. n+1 bits for the count of codes of length n. So two bits, three bits, and so on until the code is complete. For your code, now in binary:
00 011 0010
followed by the bytes 41 42 43 44 45, offset in the bit stream appropriately. Now the list of counts takes nine bits instead of 24. Since we know that there can only be 256 symbols, we can cap off the number of bits for each count at nine, allowing for the count 256, solving the previous problem of not being able to represent the flat code. Then if the code is limited to 16 bits in length (as it is for JPEG), the largest number of bytes needed for the counts is 14.5, less than the original 16. Often the counts will end before 14.5 bytes.
You can get even more sophisticated, noting that at each code length, you have a limit on the possible count of codes of that length due to the shorter code lengths using up patterns. Then the number of bits for each count can be variable, based on how many possible values there are. Then the counts description would be:
00 011 10, then the eight-bit values 41 42 43 44 45
Since we have no preceding patterns used up for lengths one and two, those still need to be two and three bits respectively. However we now have only three possibilities left for length three: the counts 0, 1, or 2. A count of 3 would oversubscribe the code. So we can use two bits for that last one. It is now seven bits instead of nine, and this greatly reduces the number of bits in the counts for codes that use longer code lengths.
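A small C sketch of this variable-width idea (again with the example counts 0, 3, 2; illustration only, not a standard format): before each count is read, both sides know how many patterns are still free, so they also agree on how many bits the next count needs.

#include <stdio.h>

/* Width of each count = enough bits for 0..remaining possible values.
   For counts 0, 3, 2 this gives 2, 3 and 2 bits, i.e. seven bits in total. */
int main(void) {
    const int counts[] = {0, 3, 2};
    int remaining = 1, total_bits = 0;

    for (int len = 1; len <= 3; len++) {
        remaining *= 2;                      /* patterns available at this length */
        int possibilities = remaining + 1;   /* the count may be 0..remaining */
        int bits = 0;
        while ((1 << bits) < possibilities)  /* smallest width that fits */
            bits++;
        printf("length %d: count %d sent in %d bits\n", len, counts[len - 1], bits);
        total_bits += bits;
        remaining -= counts[len - 1];
        if (remaining == 0)                  /* the code is complete */
            break;
    }
    printf("total: %d bits\n", total_bits);  /* prints 7 */
    return 0;
}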
An entirely different scheme is the one used by the deflate format (used in zip, gzip, zlib, png, etc.). There the number of code lengths to follow is sent first, followed by the code length of each symbol in order, up to the last one. The symbols themselves are implied by the code length positions. That results in lots of zeros to represent symbols that are not present. So for your code there would be a 70 to go up to symbol 69 ("E"), followed by 65 zeros, then 2 2 2 3 3. That seems awfully long, and it is. deflate then run-length encodes and Huffman codes that list of lengths to compress it. The long strings of zeros get compressed to a few bits, and the short lengths are also just a few bits each. So then you first have to send a description of the code used for the code lengths (!) so that you can decode that.
You can read the deflate specification for more information on that scheme. brotli uses a similar scheme, with more sophistication still.
I have a survey with 29 questions, each with a 5-point Likert scale (0=None of the time; 4=Most of the time). I'd like to compress the total set of responses to a small number of alpha or alphanumeric characters, adding a check digit to the end.
So, the set of responses 00101244231023110242231421211 would get turned into something like A2CR7HW4. This output would be part of a printout that a non-techie user would enter on a website as a shortcut to entering the entire string. I'd want to avoid ambiguous characters, such as 0,O,D,I,l,5,S, leaving me with 21 or 22 characters to use (uppercase only). Alternatively, I could just stick with capital alpha only and use all 26 characters.
I'm thinking to convert each pair of digits to a letter (5^2=25, so the whole alphabet is adequate). That would reduce the sequence to 15 characters, which is still longish to type without errors.
Any other suggestions on how to minimize the length of the output?
EDIT: BTW, for context, the survey asks 29 questions about mental health symptoms, generating a predictive risk for 4 psychiatric conditions. Need a code representing all responses.
If the five answers are all equally likely, then the best you can do is ceiling(29 * log(5) / log(n)) symbols, where n is the number of symbols in your alphabet. (The base of the logarithm doesn't matter, so long as they're both the same.)
So for your 22 symbols, the best you can do is 16. For 26 symbols, the best is 15, as you described for 25. If you use 49 characters (e.g. some subset of the upper and lower case characters and the digits), you can get down to 12. The best you'll be able to do with printable ASCII characters would be 11, using 70 of the 94 characters.
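If you want to check those numbers, here is a quick C evaluation of the formula (my own snippet; link with -lm):

#include <stdio.h>
#include <math.h>

/* Evaluate ceiling(29 * log(5) / log(n)) for the alphabet sizes discussed
   above: 22, 25, 26, 49 and 70 symbols. */
int main(void) {
    const int sizes[] = {22, 25, 26, 49, 70};
    for (int i = 0; i < 5; i++) {
        double chars = ceil(29.0 * log(5.0) / log((double)sizes[i]));
        printf("%2d symbols -> %2.0f characters\n", sizes[i], chars);
    }
    return 0;                                /* prints 16, 15, 15, 12, 11 */
}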
The only way to make it smaller would be if the responses are not all equally likely and are heavily skewed. Though if that's the case, then there's probably something wrong with the survey.
First, choose a set of permissible characters, i.e.
characters = "ABC..."
Then, prefix the input digits with a 1 (the leading 1 preserves any leading zeros) and interpret the result as a quinary (base-5) number:
100101244231023110242231421211
Now, convert this quinary number to a number in base-"strlen(characters)", i.e. base 26 if 26 characters are to be used:
02 23 18 12 10 24 04 19 00 15 14 20 00 03 17
Then, use these numbers as indices into "characters", and you have your encoding:
CVSMKWETAPOUADR
For decoding, just reverse the steps.
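A possible C sketch of these steps (long division on the digit string, so no big-integer support is needed; the alphabet and input are the ones from the example above):

#include <stdio.h>

/* Treat "1" + answers as a base-5 digit string and repeatedly divide it by
   26, collecting the remainders as base-26 digits (least significant first).
   Decoding would do the reverse: rebuild the base-5 digits and drop the
   leading 1. */
int main(void) {
    const char *alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    const char *answers  = "00101244231023110242231421211";

    int digits[64], n = 0;
    digits[n++] = 1;                         /* the leading 1 preserves leading zeros */
    for (const char *p = answers; *p; p++)
        digits[n++] = *p - '0';              /* base-5 digits, most significant first */

    char out[64];
    int outlen = 0;
    while (n > 0) {
        int rem = 0, m = 0;
        for (int i = 0; i < n; i++) {        /* long division of the digit string by 26 */
            int cur = rem * 5 + digits[i];
            int q = cur / 26;
            rem = cur % 26;
            if (m > 0 || q > 0)              /* drop leading zeros of the quotient */
                digits[m++] = q;
        }
        out[outlen++] = alphabet[rem];       /* next base-26 digit */
        n = m;
    }
    for (int i = outlen - 1; i >= 0; i--)    /* print most significant character first */
        putchar(out[i]);
    putchar('\n');
    return 0;
}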
Are you doing this in a specific language?
If you want to be really thrifty about it you might want to consider encoding the data at bit level.
Since there are only 5 possible answers per question you could do this with only 3 bits:
000
001
010
011
100
Your end result would be a string of bits, at 3 bits per answer, so a total of 87 bits, or 10 and a bit bytes.
EDIT - misread the question slightly, there are 5 possible answers not 4, my mistake.
The only problem now is that for 4 of your 5 possible answers you're wasting a bit. I wouldn't say you're going to benefit much from going to this much trouble, but it's worth considering.
EDIT:
I've been playing about with it and it's difficult to work out a mechanism that allows you to use both 2 and 3 bit values.
Since your output would be an 87-bit binary value, you'd need to be able to make the distinction between 2-bit and 3-bit values when converting back to the original values.
If you're working with a larger number of values there are some methods you could use, like having a reserved bit for each value that can be used to sort of type a value and give it some meaning. But working with so few bits as it is, it's hard to shave anything off.
Your output at 87 bits could be padded out to 128 bits, which would give you 4 32-bit values if you wanted to simplify it. This 128-bit value would be like a unique fingerprint representing a specific set of answers. There are many ways you can represent 128 bits.
But in the end, working at bit level is about as good as it gets when it comes to actual compression and encoding of data... if you can express 5 unique values in fewer than 3 bits I'd be suitably impressed.
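For what it's worth, here is a small C sketch of the 3-bits-per-answer packing, using the example responses from the question (illustration only):

#include <stdio.h>
#include <stdint.h>

/* 29 answers at 3 bits each is 87 bits, which fits in 11 bytes (one spare bit). */
int main(void) {
    const int answers[29] = {0,0,1,0,1,2,4,4,2,3,1,0,2,3,1,
                             1,0,2,4,2,2,3,1,4,2,1,2,1,1};
    uint8_t packed[11] = {0};

    for (int i = 0; i < 29; i++) {
        int bitpos = i * 3;                          /* absolute bit offset */
        for (int b = 0; b < 3; b++)
            if (answers[i] & (1 << (2 - b)))         /* MSB of each 3-bit group first */
                packed[(bitpos + b) / 8] |= 1 << (7 - (bitpos + b) % 8);
    }

    for (int i = 0; i < 11; i++)
        printf("%02x ", packed[i]);                  /* hex dump of the 87-bit result */
    printf("\n");
    return 0;
}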
I ran into a problem when doing curve fitting with this equation:
y=a*exp(-x/b)
x is fixed: x=[13 26 39 52 65 78 91]. y is the input. a and b are unknowns. b is the output. I use LSQ estimation to do the curve fitting, and add a constraint for the output b: b should be in the range [0,1000].
Now the system works like this: when I have an input sequence like
y=[460 434 288 218 164 114 89]
The output is b=51.46, which is good.
If the input sequence is
y=[599 640 592 609 550 588 573 626]
The estimation result is b=1000. This is also good. No problem.
But when I input a pure noise sequence:
y=[24 19 31 5 27 31 17]
The result I get from my curve fitting algorithm is b=1000. In this case the output b is very high, which is not acceptable for the system. I expect a low value of b here, say b = 0.
I tried to add a threshold on y, say
if y<50 then b=0
But the system is not very stable; the noise level changes from time to time. Is there another way to solve this problem? Thank you in advance.
First, note that this category of problems commonly appears in the literature in terms of the logistic growth model (or see here). I believe your specific problem should be considered in the context of a mixed model, a statistical model containing both fixed effects and random effects.
More concretely, you might use Matlab's nlmefit from its statistics toolbox.
A bird's-eye view of nlme can be found in this ppt.
Given a block size of n and another size k, I'm looking for a way to output only those blocks whose offset from the start of the input is a multiple of k.
Imagine a file consisting of a number of 4-tuples of 2-byte values. Given this input, I want only the first entry of each tuple.
example input:
00 00 11 11 22 22 33 33
44 44 55 55 66 66 77 77
88 88 99 99 aa aa bb bb
cc cc dd dd ee ee ff ff
example output with n=2 and k=8:
00 00 44 44 88 88 cc cc
which is only the first "column" of the input.
Now, while it would be simple to do this in Perl or Python, I need this functionality in a shell script, as the target system has neither Perl nor Python, only basic utilities. I'm hoping there is a way to misuse an existing tool for that. If it is not possible, I would write some C to do it, but I would like to avoid that.
One use case would be to extract one audio channel from a raw audio file.
A term you might search for (other than "zebra stripes") is "stride." That's what some people call this idea of skipping k bytes each time.
It's not entirely clear from your post, but it looks like you actually want to be able to insert this filter in a pipeline and have it consume raw bytes and output the same. If this is the case, I'm not sure how it can be done easily in plain shell script, so I would suggest you either hunker down and write it in C, or get Python or something installed on the target system.
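If you do go the C route, a minimal sketch of such a filter could look like this (n and k are hard-coded for the 2-of-every-8-bytes example; taking them from argv is left out for brevity):

#include <stdio.h>

/* Stride filter: copy the first N bytes of every K-byte block from stdin to
   stdout.  With N=2 and K=8 this extracts the first 2-byte entry of each
   4-tuple, as in the example (or one channel of interleaved raw audio). */
#define N 2
#define K 8

int main(void) {
    unsigned char buf[K];
    size_t got;

    while ((got = fread(buf, 1, K, stdin)) > 0) {
        size_t keep = got < N ? got : N;     /* handle a short trailing block */
        fwrite(buf, 1, keep, stdout);
        if (got < K)                         /* end of input */
            break;
    }
    return 0;
}

Used in a pipeline it would be something like ./stride < input.raw > channel.raw.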
Using only pure Ruby (or justifiably commonplace gems), is there an efficient way to search a large binary document for a specific string of bytes?
Deeper context: the mpeg4 container format is a 4-byte-indexed serialised data structure; without having to parse the structure fully (I can assume it is valid), I want to pull out specific tags.
For those of you that haven't come across this 'dmap' serialization before it works something like this:
<4-byte length><4-byte tag><4-byte length><4-byte type definition><8 bytes of something I can't remember><data>
eg, this defines the 'tvsh' (or TV Show) tag as being 'Futurama'
00 00 00 20 ...
74 76 73 68 tvsh
00 00 00 18 ....
64 61 74 61 data
00 00 00 01 ....
00 00 00 00 ....
46 75 74 75 Futu
72 61 6D 61 rama
The exact structure isn't really important; I'd like to write a method that can pull out the show name when I give it 'tvsh', or tell me that it's season 2 when I give it 'tvsn'.
My first plan would be to use Regular Expressions, but I get the (unjustified) feeling that this would be slow.
Let me know your thoughts! Thanks in advance
In Ruby you can use the /n flag when creating your regex to tell Ruby that your input is 8-bit data.
You could use /(.{4})tvsh(.{4})data(.{8})([\x20-\x7F]+)/n to match 4 bytes, tvsh, 4 bytes, data, 8 bytes, and any number of ASCII characters. I don't see any reason why this regex would be significantly slower to execute than hand-coding a similar search. If you don't care about the 4-byte and 8-byte blocks, /tvsh.{4}data.{8}([\x20-\x7F]+)/n should be nearly as fast as a literal text search for tvsh.
If I understand your description correctly, the whole file consists of a number of such "blocks" of a fixed structure?
In that case, I suggest scanning the blocks one by one and skipping the ones that are not of interest to you (a sketch of this loop follows below). Each step should do the following:
Read 8 bytes (using IO#readbytes or a similar method)
From the read header, extract the size (first 4 bytes), and the tag (second 4)
If the tag is the one you need, skip the following 16 bytes and read size-24 bytes.
If the tag is not of interest, skip the following size-8 bytes (the 8 header bytes have already been read).
Repeat.
For skipping bytes, you can use IO#seek.
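Here is a rough sketch of that loop in C (the same logic translates directly to Ruby's IO#read and IO#seek; the 24-byte header layout, big-endian lengths and the file name are assumptions based on the example above):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Walk blocks of the form <len><tag><len><type>... and print the payload of
   the wanted tag.  "movie.m4v" is a placeholder input file. */
static uint32_t read_be32(FILE *fp) {
    unsigned char b[4];
    if (fread(b, 1, 4, fp) != 4) return 0;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) | ((uint32_t)b[2] << 8) | b[3];
}

int main(void) {
    const char *wanted = "tvsh";
    FILE *fp = fopen("movie.m4v", "rb");
    if (!fp) return 1;

    for (;;) {
        uint32_t size = read_be32(fp);           /* whole block, header included */
        char tag[4];
        if (size < 24 || fread(tag, 1, 4, fp) != 4)
            break;                               /* end of file or malformed block */

        if (memcmp(tag, wanted, 4) == 0) {
            fseek(fp, 16, SEEK_CUR);             /* inner length, "data", type, reserved */
            long datalen = (long)size - 24;
            char *value = malloc((size_t)datalen + 1);
            if (!value) break;
            fread(value, 1, (size_t)datalen, fp);
            value[datalen] = '\0';
            printf("%s = %s\n", wanted, value);  /* e.g. tvsh = Futurama */
            free(value);
        } else {
            fseek(fp, (long)size - 8, SEEK_CUR); /* skip the rest of this block */
        }
    }
    fclose(fp);
    return 0;
}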
Theoretically you can use regexes against any arbitrary data, including binary strings. HTH.