I need to encode an integer in the range 0 to 7^20. I want to use the minimum number of bytes to encode such a number. What is the minimum number of bytes needed? Please help me.
Thank you so much.
You need to encode values from 0 to 7^20, which means 7^20 + 1 values in total. Without any information about the input distribution, the best you can do is ⌈log2(7^20 + 1)⌉ = 57 bits per number.
You can use 8 bytes per number, which is also easy to decode, but that wastes 7 bits per number.
Another way is to store exactly 57 bits per number and pack the numbers tightly together. This saves 7 bytes per 8 numbers stored (so 8 numbers take up 57 bytes instead of 64 bytes). However, it is slightly trickier to recover the original numbers.
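For concreteness, here is a minimal sketch of the tight-packing approach in Python (the function names and the use of one big integer as the bit buffer are my own; a C implementation would shift into a byte array instead):

# Sketch: pack a list of 57-bit numbers tightly, then unpack them.
BITS = 57  # ceil(log2(7**20 + 1))

def pack(numbers):
    """Concatenate each number's 57 bits into one byte string."""
    acc = 0
    for i, n in enumerate(numbers):
        acc |= n << (i * BITS)
    total_bits = len(numbers) * BITS
    return acc.to_bytes((total_bits + 7) // 8, "little")

def unpack(data, count):
    """Recover `count` 57-bit numbers from the packed bytes."""
    acc = int.from_bytes(data, "little")
    mask = (1 << BITS) - 1
    return [(acc >> (i * BITS)) & mask for i in range(count)]

nums = [0, 7**20, 12345]
packed = pack(nums)
assert unpack(packed, len(nums)) == nums
print(len(packed), "bytes for", len(nums), "numbers")  # 22 bytes vs. 24 at 8 bytes each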
I don't know of any method that can do better than this.
Rough estimate: 3 bits are enough to encode 0..7, so 3 ⋅ 20 = 60 bits are enough to encode 0..7^20.
More accurate: ⌈log2 7^20⌉ = 57 bits are enough.
If the numbers to encode are uniformly distributed over this range, you cannot do better. Otherwise, you can do better on average by giving shorter codes to the more common numbers.
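As an illustration of that last point (the scheme and the 90% figure below are made up for the example, not taken from the question): a two-length prefix code that pays off when small values dominate.

# Sketch: a two-length code. One marker bit distinguishes a short form
# (16 data bits) from the full 57-bit form. Worthwhile only if values
# below 2**16 are common; an illustration, not an optimal code.

def encoded_bits(n):
    return 1 + 16 if n < 2**16 else 1 + 57

# If 90% of the inputs were < 2**16, the average cost would be:
avg = 0.9 * (1 + 16) + 0.1 * (1 + 57)  # 21.1 bits vs. a flat 57
print(avg)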
I'm building a circuit that will read PAL/NTSC (576i, 480i) frames from an analog input. The microcontroller has 32 kB of memory. My goal is to scale the input down to 32x16 resolution and forward the image to an LED matrix.
A PAL frame can take ~400 kB of memory, so I thought about down-scaling online: read 18 pixels, decimate to 1; read 45 lines, decimate to 1. Peak memory usage: 45 x 32 = 1.44 kB (45 decimated lines awaiting decimation).
Question: What other online image down-scaling algorithms are there, besides the naive one above? Googling is extremely hard because it only turns up online services (PDF resizers, etc.).
Note that the mentioned formats are interlaced, so you first read the 0th, 2nd, 4th, ... lines (first field), then the 1st, 3rd, ... lines (second field).
If you use simple averaging of the pixel values in each resulting cell (I suspect this is OK for such a small output matrix), then create an output array (16x32 = 512 entries) and sum the appropriate input values into every cell. You also need a buffer for a single input line (768 or 640 entries). In pseudocode, run once per input line:
x_coeff = input_width / out_width
y_coeff = input_height / out_height

// accumulate this line into the output cells; inputrow starts at 0
out_y = inputrow / y_coeff
for (inputcol = 0 .. input_width - 1)
    out_x = inputcol / x_coeff
    out_array[out_y][out_x] += input_line[inputcol]

// advance in interlaced order: even lines first, then odd lines
inputrow = inputrow + 2
if (inputrow == input_height)      // even field done, start the odd field
    inputrow = 1
else if (inputrow > input_height)  // odd field done, frame complete
    inputrow = 0
    divide out_array[][] entries by (x_coeff * y_coeff), output them, then clear the array
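A runnable sketch of the same idea in Python, for reference; the 768x576 input size is an assumption taken from the 768-entry line buffer mentioned above, and on the microcontroller this would be C with fixed-size buffers:

# Sketch: online box-average downscaling of an interlaced frame.
IN_W, IN_H = 768, 576    # assumed digitized PAL width and height
OUT_W, OUT_H = 32, 16
X_COEFF = IN_W // OUT_W  # 24 input pixels per output column
Y_COEFF = IN_H // OUT_H  # 36 input rows per output row

# Accumulators need 32-bit cells on an MCU: 24*36*255 = 220320 > 65535.
acc = [[0] * OUT_W for _ in range(OUT_H)]

def process_line(row, line):
    """Accumulate one input line into the output cells."""
    out_y = row // Y_COEFF
    for col, value in enumerate(line):
        acc[out_y][col // X_COEFF] += value

def finish_frame():
    """Average the accumulators and reset them for the next frame."""
    frame = [[v // (X_COEFF * Y_COEFF) for v in row] for row in acc]
    for row in acc:
        for i in range(OUT_W):
            row[i] = 0
    return frame

# Feed lines in interlaced order: even field, then odd field.
line = [128] * IN_W
for row in list(range(0, IN_H, 2)) + list(range(1, IN_H, 2)):
    process_line(row, line)
print(finish_frame()[0])  # every cell averages to 128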
How are real numbers stored in the Ruby language? How can I keep 7.125 as a real number in memory? In this code:
myNumber = 7.125
puts("The number is #{myNumber}")
I do not understand how the number is kept in memory.
In Ruby 1.8 & 1.9, floats are never immediates, so all floats require a new memory allocation.
In Ruby 2.0.0, on 64-bit systems, many floats are now immediates. This means that typical floats no longer require memory allocation and deallocation, which makes operations much faster.
Ruby stores its values in a pointer-sized word (32 or 64 bits, depending on the platform). It actually uses a trick to store immediates in that word. This is the reason why Fixnum can only hold 31 / 63 bits.
On 32-bit platforms there's no clever way to store floats, but on 64-bit platforms it's possible to use the first few bits to flag the value as an immediate float and the remaining 60 or so to hold the data. The floats that do require the full 64 bits cannot be immediates, though, so those are stored as before, using an actual pointer.
More info on this optimization can be found in the feature request: https://bugs.ruby-lang.org/issues/6763
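To make the tagging idea concrete, here is a small illustration (in Python, with simplified tag values; it mirrors the spirit of MRI's scheme rather than its exact bit layout):

# Sketch: tagged pointer-sized words, in the spirit of MRI's immediates.
# A small integer n is stored as (n << 1) | 1; the low bit marks it as an
# immediate Fixnum, which is why Fixnum holds only 31/63 bits. Heap
# pointers are aligned, so their low bits are always 0 and never collide.

FIXNUM_TAG = 0b1

def make_fixnum(n):
    return (n << 1) | FIXNUM_TAG  # costs 1 bit of range

def is_fixnum(word):
    return word & FIXNUM_TAG == FIXNUM_TAG

def fixnum_value(word):
    return word >> 1

w = make_fixnum(42)
assert is_fixnum(w) and fixnum_value(w) == 42

# Flonums work similarly: a different low-bit tag marks the word as an
# immediate float, and the remaining bits hold most IEEE-754 doubles
# after a bit rotation. Doubles that don't fit the squeezed form fall
# back to a heap-allocated object, as described in issue 6763.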
There is no way to keep arbitrary real numbers in Ruby. The closest things are Float and Rational, neither of which can express real numbers fully.
How about the BigDecimal class (arbitrary-precision arithmetic)? See the API documentation, which follows the standards on the topic.
I am working on software that takes commands in ASCII to define alert thresholds.
These thresholds correspond to a quantity in litres, depending on the number of gas tanks connected together.
Examples:
-If it's 1 or 2 tanks, the threshold will be 30 L and the ASCII code is S6350130 ("S635" to set up, 01 for the first tank, 30 for 30 litres).
-If it's 3 tanks, the threshold will be 45 L and the ASCII code is S6350145 ("S635" to set up, 01 for the first tank, 45 for 45 litres).
-If it's 4 or 5 tanks, the threshold will be 60 L and the ASCII code is S6350160 ("S635" to set up, 01 for the first tank, 60 for 60 litres).
-If it's 6 or 7 tanks, the threshold will be 80 L and the ASCII code is S6350180 ("S635" to set up, 01 for the first tank, 80 for 80 litres).
My problem is that I have to repeat the command for each tank, and I want to know whether it is possible to express conditions in ASCII so that I only have to write one command.
I hope it is clear.
Thank you in advance.
No.
ASCII is a way to encode text in bytes.
It is not a programming language. It has no means of expressing logic.
Just about every programming language can be expressed in ASCII and can manipulate ASCII encoded data, so you can pick any programming language you like and write your logic in that.
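For example, here is a hedged sketch in Python (the threshold table comes from the question; the command layout "S635" + two-digit tank index + two-digit litre value is my reading of the examples):

# Sketch: build one S635 command per tank from the tank count.

def threshold_litres(tank_count):
    """Threshold table from the question."""
    if tank_count <= 2:
        return 30
    if tank_count == 3:
        return 45
    if tank_count <= 5:
        return 60
    return 80  # 6 or 7 tanks

def commands(tank_count):
    litres = threshold_litres(tank_count)
    return ["S635%02d%02d" % (tank, litres)
            for tank in range(1, tank_count + 1)]

print(commands(3))  # ['S6350145', 'S6350245', 'S6350345']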
I'm new to assembly language, though I've seen a lot of code.
However, I'm working with an 8086 emulator that only works with 16-bit numbers.
This is homework that I'm really stuck on:
How can I write assembly code that does the following:
1- Read 20 decimal numbers of at most 6 digits each and store them in an array.
2- Sort the array in ascending order.
It's really hard for me to understand how to manage the registers and the stack for such long numbers.
Any help will be appreciated.
In order to sort 32-bit (or wider) numbers with 16-bit registers, you have to compare the upper and lower parts of each number separately.
Assume we have these two random 32-bit numbers (shown in hex): 4567afdf and 321abc09.
Now when you look at them as 16-bit values, they look like this:
4567 afdf
321a bc09
As you can easily see, you can compare the upper 16 bits on their own.
If one number's upper 16 bits are higher or lower than the other's, then you already know the ordering and the lower parts don't matter for the comparison; you sort accordingly.
If the upper 16 bits are equal, you compare the lower 16 bits. If those are also equal, both numbers are equal and no sorting is needed; otherwise you shuffle the lower halves accordingly. Since the upper 16 bits are equal, you don't even need to shuffle them.
If the upper 16 bits are different and you swap the numbers, you still have to shuffle the lower 16 bits along with the upper ones, as they might differ.
The basics of this approach can be used for an arbitrary number of bits, not just 32. Generally, when you have a seemingly hard problem, try to think of easy examples and how you would solve those; then extend the solution to the more complicated cases.
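Here is the comparison logic sketched in Python for clarity; in 8086 assembly the (hi, lo) pairs would live in 16-bit register pairs and the branches would be CMP/JA/JB sequences:

# Sketch: compare two 32-bit numbers held as (hi16, lo16) pairs,
# the way 16-bit registers force you to.

def compare32(a, b):
    """Return -1, 0, or 1 for a < b, a == b, a > b."""
    a_hi, a_lo = a
    b_hi, b_lo = b
    if a_hi != b_hi:              # upper halves decide on their own
        return -1 if a_hi < b_hi else 1
    if a_lo != b_lo:              # equal upper halves: compare lower
        return -1 if a_lo < b_lo else 1
    return 0

x = (0x4567, 0xafdf)  # 0x4567afdf
y = (0x321a, 0xbc09)  # 0x321abc09
assert compare32(x, y) == 1       # x > y, decided by the upper halves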
EDIT:
An alternative approach works if you have the numbers as decimal strings and you want to sort them based on the string representation instead of the binary values.
In this case, you can do it as follows:
If the lengths of the two number strings are different, the shorter one is the smaller number (assuming no leading zeros).
If the lengths are equal, look at each digit individually (starting with the first digit) until you hit a non-equal digit or the end of the string. If you reached the end of the string, the numbers are the same; otherwise the first differing digit tells you which one is higher/lower.
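The same rule as a short Python sketch (assuming non-negative numbers with no leading zeros):

# Sketch: compare two non-negative decimal numbers stored as strings.

def compare_decimal_strings(a, b):
    if len(a) != len(b):          # shorter string = smaller number
        return -1 if len(a) < len(b) else 1
    for da, db in zip(a, b):      # equal length: first differing digit wins
        if da != db:
            return -1 if da < db else 1
    return 0                      # ran off the end: equal

assert compare_decimal_strings("123456", "99999") == 1   # longer wins
assert compare_decimal_strings("123456", "123457") == -1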
Please recommend an error-correcting code for a very strange data channel.
The channel consists of two parts: Corrupter and Eraser.
Corrupter receives a word consisting of 10000 symbols in a 3-symbol alphabet, say {'a','b','c'}.
Corrupter changes each symbol with probability 10%.
Example:
Corrupter input: abcaccbacbbaaacbcacbcababacb...
Corrupter output: abcaacbacbbaabcbcacbcababccb...
Eraser receives the corrupter output and erases each symbol with probability 94%.
Eraser produces a word of the same length in the 4-symbol alphabet {'a','b','c','*'}.
Example:
Eraser input: abcaacbacbbaabcbcacbcababccb...
Eraser output: *******a*****************c**...
So, on the eraser output, approximately 6% × 10000 = 600 symbols would not be erased; approximately 90% × 600 = 540 of them would preserve their original values, and approximately 60 would be corrupted.
What encoding-decoding algorithm with error correction is best suited for this channel?
What amount of useful data could be transmitted providing > 99.99% probability of successful decoding?
Is it possible to transmit 40 bytes of data through this channel? (256^40 ~ 3^200)
Here's something you can at least analyze:
Break your 40 bytes up into 13 25-bit chunks (with some wastage, so this step can obviously be improved).
2^25 < 3^16, so you can encode the 25 bits into 16 a/b/c "trits"; again, the wastage means scope for improvement.
With 10,000 trits available you can give each of your 13 encoded chunks 769 output trits. Pick (probably at random) 769 different linear (mod 3) functions on 16 trits; each function is specified by 16 trits, and you take the vector dot product (mod 3) between those trits and the 16 input trits. This gives you your 769 output trits.
Decode by considering all possible (2^25) chunks and picking the one that matches the most surviving trits. You have some hope of getting the right answer as long as there are at least 16 surviving trits, which (Excel's BINOMDIST() tells me) happens often enough that there is a pretty good chance it will happen for all 13 of the 25-bit chunks.
I have no idea what error rate you get from the garbling, but random linear codes have a pretty good reputation, even if this one has a short block size because of my brain-dead decoding technique. At worst you could try simulating the encoding, transmission, and decoding of 25-bit chunks and work it out from there. You can get a slightly more accurate lower bound on the error rate than the one above if you pretend that the garbling stage erases as well, and recalculate with a correspondingly higher probability of erasure.
I think this might actually work in practice if you can afford the 2^25 guesses per 25-bit block to decode. OTOH, if this is a question in a class, my guess is you need to demonstrate your knowledge of some less ad-hoc techniques already discussed in your class.
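To make the scheme testable, here is a scaled-down simulation in Python (10-bit chunks instead of 25-bit so the brute-force decode runs in milliseconds; all parameter names are mine):

# Sketch: random linear code over GF(3) with brute-force nearest decoding.
# Scaled down: 10-bit chunks -> 7 trits (3**7 = 2187 > 2**10 = 1024).
import random

K_BITS, K_TRITS, N_TRITS = 10, 7, 300
FUNCS = [[random.randrange(3) for _ in range(K_TRITS)]
         for _ in range(N_TRITS)]  # 300 random linear (mod 3) functions

def to_trits(n, width):
    return [(n // 3**i) % 3 for i in range(width)]

def encode(chunk):
    msg = to_trits(chunk, K_TRITS)
    return [sum(f * m for f, m in zip(func, msg)) % 3 for func in FUNCS]

def channel(word):
    out = []
    for t in word:
        if random.random() < 0.10:                   # corrupter: 10% flips
            t = (t + random.randrange(1, 3)) % 3
        out.append(t if random.random() >= 0.94 else None)  # eraser: 94%
    return out

def decode(received):
    surviving = [(i, t) for i, t in enumerate(received) if t is not None]
    def agreements(c):
        msg = to_trits(c, K_TRITS)
        return sum((sum(f * m for f, m in zip(FUNCS[i], msg)) % 3) == t
                   for i, t in surviving)
    return max(range(2**K_BITS), key=agreements)     # most agreements wins

chunk = random.randrange(2**K_BITS)
print(chunk, decode(channel(encode(chunk))))         # equal on most runs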