What are online down-scaling algorithms? [closed] - algorithm

I'm building a circuit that will read PAL/NTSC (576i, 480i) frames from an analog input. The microcontroller has 32 kB of memory. My goal is to scale the input down to 32x16 resolution and forward the image to an LED matrix.
A PAL frame can take ~400 kB of memory, so I thought about down-scaling online: read 18 pixels, decimate to 1; read 45 lines, decimate to 1. Peak memory usage: 45 x 32 = 1.44 kB (45 horizontally decimated lines waiting for vertical decimation).
Question: what online image down-scaling algorithms are there besides the naive one above? Googling is extremely hard because the results are all online services (PDF resize, etc.).

Note that the mentioned formats are interlaced, so you first read lines 0, 2, 4, ... (the first field), then lines 1, 3, ... (the second field).
If you use simple averaging of the pixel values falling into each output cell (I suspect that is fine for such a small output matrix), create an output array (16x32 = 512 entries) and sum the appropriate input values into every cell. You also need a buffer for a single input line (768 or 640 entries).
x_coeff = input_width / out_width        // input pixels per output column
y_coeff = input_height / out_height      // input lines per output row

// for every captured line:
out_y = inputrow / y_coeff
for (inputcol = 0 .. input_width - 1)
    out_x = inputcol / x_coeff
    out_array[out_y][out_x] += input_line[inputcol]

// step to the next line of the current field (interlaced input)
inputrow = inputrow + 2
if (inputrow == input_height)    // even field finished, start the odd field
    inputrow = 1
if (inputrow > input_height)     // odd field finished, the frame is complete
    inputrow = 0

// once the whole frame has been accumulated:
divide out_array[][] entries by (x_coeff * y_coeff)
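A minimal C++ sketch of the same accumulate-and-average scheme, assuming 8-bit grayscale samples and an active area cropped to 704x576 so both ratios are integers. read_line() and show_frame() are hypothetical stand-ins for the capture and LED-matrix sides, and the loop walks the lines progressively; with an interlaced source you would drive row from the field/line counters as in the pseudocode above.

#include <stdint.h>

#define IN_W   704   // assumed active width, cropped so IN_W / OUT_W is an integer
#define IN_H   576
#define OUT_W  32
#define OUT_H  16

static uint32_t acc[OUT_H][OUT_W];   // running sums, ~2 kB, one cell per LED
static uint8_t  line_buf[IN_W];      // buffer for a single input line

extern void read_line(uint8_t *dst, int row);            // hypothetical capture hook
extern void show_frame(uint8_t out[OUT_H][OUT_W]);       // hypothetical LED driver

void downscale_frame(void)
{
    for (int row = 0; row < IN_H; ++row) {
        read_line(line_buf, row);
        int out_y = row / (IN_H / OUT_H);                      // 36 input lines per output row
        for (int col = 0; col < IN_W; ++col)
            acc[out_y][col / (IN_W / OUT_W)] += line_buf[col]; // 22 input pixels per output column
    }

    // every output cell accumulated (IN_W/OUT_W) * (IN_H/OUT_H) samples, so divide to average
    uint8_t out[OUT_H][OUT_W];
    for (int y = 0; y < OUT_H; ++y)
        for (int x = 0; x < OUT_W; ++x) {
            out[y][x] = (uint8_t)(acc[y][x] / ((IN_W / OUT_W) * (IN_H / OUT_H)));
            acc[y][x] = 0;                                     // reset for the next frame
        }
    show_frame(out);
}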

Related

(DX12) Read-back buffer for 2D-Texture UAV [closed]

I am trying to read-back ray-traced intersections of a ray's recursive path from the GPU to the CPU in DXR.
I am able to render the intersections into a layered unordered access view Texture2D array, so that each layer of the ray-tree corresponds to one layer in this UAV array.
The problem comes when I try to read this data back from the GPU so the CPU can use it. I have not found a way to copy texture data from the GPU to the CPU - I cannot create a Texture2D resource on the readback heap. I am now looking into writing the intersection information into a flattened 1D UAV buffer instead - essentially a g-buffer. However, I am having difficulty sizing it: since every pixel may contain an intersection at each recursion level, I need a buffer of screen-width * screen-height * RAY_RECURSION_DEPTH (6 in my case) elements, but the number of elements in a UAV buffer is limited to 345599.
Getting to the point: is there a way to read back from a UAV Texture2D resource? Is there a way to create a UAV buffer with more than 345599 elements? Or is there another way I should be going about this altogether?
Thanks.
Readback resources for Direct3D 12 must be buffers (D3D12_RESOURCE_DIMENSION_BUFFER). You create one large enough to hold the Texture2D data (rowpitch * height) and then use CopyTextureRegion to copy it from the GPU to CPU.
D3D12_RESOURCE_DESC bufferDesc = {};
bufferDesc.Alignment = desc.Alignment;                    // 'desc' is the source Texture2D description
bufferDesc.DepthOrArraySize = 1;
bufferDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;   // readback resources must be buffers
bufferDesc.Flags = D3D12_RESOURCE_FLAG_NONE;
bufferDesc.Format = DXGI_FORMAT_UNKNOWN;                  // buffers are typeless
bufferDesc.Height = 1;
bufferDesc.Width = srcPitch * desc.Height;                // srcPitch = row pitch aligned to D3D12_TEXTURE_DATA_PITCH_ALIGNMENT (256)
bufferDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
bufferDesc.MipLevels = 1;
bufferDesc.SampleDesc.Count = 1;
bufferDesc.SampleDesc.Quality = 0;
See ScreenGrab and Microsoft Docs
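A sketch of the copy and map steps under a few assumptions: desc is the source Texture2D description used above, texture is that resource already transitioned to D3D12_RESOURCE_STATE_COPY_SOURCE, readbackBuffer was created from bufferDesc on a D3D12_HEAP_TYPE_READBACK heap, and device / commandList are your existing objects.

// Ask the device how the texture rows are laid out when copied into a buffer.
D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint = {};
UINT numRows = 0;
UINT64 rowSizeInBytes = 0, totalBytes = 0;
device->GetCopyableFootprints(&desc, 0, 1, 0, &footprint, &numRows, &rowSizeInBytes, &totalBytes);

// GPU side: copy the whole subresource into the readback buffer.
D3D12_TEXTURE_COPY_LOCATION src = {};
src.pResource = texture;
src.Type = D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX;
src.SubresourceIndex = 0;

D3D12_TEXTURE_COPY_LOCATION dst = {};
dst.pResource = readbackBuffer;
dst.Type = D3D12_TEXTURE_COPY_TYPE_PLACED_FOOTPRINT;
dst.PlacedFootprint = footprint;

commandList->CopyTextureRegion(&dst, 0, 0, 0, &src, nullptr);

// ...execute the command list and wait on a fence, then map on the CPU:
void* mapped = nullptr;
D3D12_RANGE readRange = { 0, (SIZE_T)totalBytes };
readbackBuffer->Map(0, &readRange, &mapped);
// rows are footprint.Footprint.RowPitch bytes apart in 'mapped'
readbackBuffer->Unmap(0, nullptr);

For the layered Texture2D array in the question you would repeat this per array slice (subresource), offsetting each footprint further into the readback buffer.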

Is it possible to make conditions in ASCII? [closed]

I am working on software that needs ASCII commands to define alert thresholds.
These thresholds correspond to a quantity in litres, depending on the number of gas tanks connected to each other.
Examples:
- If there are 1 or 2 tanks, the threshold is 30 L and the ASCII code is S6350130 ("S635" to set up, 01 for the first tank, 30 for 30 litres).
- If there are 3 tanks, the threshold is 45 L and the ASCII code is S6350145 ("S635" to set up, 01 for the first tank, 45 for 45 litres).
- If there are 4 or 5 tanks, the threshold is 60 L and the ASCII code is S6350160 ("S635" to set up, 01 for the first tank, 60 for 60 litres).
- If there are 6 or 7 tanks, the threshold is 80 L and the ASCII code is S6350180 ("S635" to set up, 01 for the first tank, 80 for 80 litres).
My problem is that I have to repeat the command for each tank, and I want to know whether it is possible to make conditions in ASCII so that I only have to write one command.
I hope it is clear.
Thank you in advance.
No.
ASCII is a way to encode text in bytes.
It is not a programming language. It has no means of expressing logic.
Just about every programming language can be expressed in ASCII and can manipulate ASCII encoded data, so you can pick any programming language you like and write your logic in that.
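For illustration, a minimal C++ sketch of that logic, using the threshold table from the question; send_command() is a hypothetical stand-in for however the ASCII string actually reaches the device.

#include <cstdio>
#include <string>

// Hypothetical transport; replace with whatever actually sends the command.
void send_command(const std::string& cmd);

// Threshold in litres for a given number of connected tanks (table from the question).
int threshold_for(int tank_count) {
    if (tank_count <= 2) return 30;
    if (tank_count == 3) return 45;
    if (tank_count <= 5) return 60;
    return 80;                       // 6 or 7 tanks
}

// Emit one S635 command per tank: "S635" + 2-digit tank index + 2-digit threshold.
void configure_thresholds(int tank_count) {
    int litres = threshold_for(tank_count);
    for (int tank = 1; tank <= tank_count; ++tank) {
        char cmd[16];
        std::snprintf(cmd, sizeof cmd, "S635%02d%02d", tank, litres);
        send_command(cmd);
    }
}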

What is the minimum of bytes to encode a number? [closed]

I need to encode an integer in the range 0 to 7^20. I want to use the minimum number of bytes to encode such a number. What is that minimum number of bytes? Please help me.
Thank you so much.
You need to encode values from 0 to 7^20, which means 7^20 + 1 values in total. Without any information about the input, the best you can do is ⌈log2(7^20 + 1)⌉ = 57 bits per number.
You can use 8 bytes per number, which is also easy to decode, but that wastes 7 bits per number.
Another way is to store exactly 57 bits per number and pack the numbers tightly together. That saves 7 bits per number, i.e. 7 bytes for every 8 numbers stored (8 numbers take up 57 bytes instead of 64). However, it is slightly trickier to recover the original numbers.
My lack of knowledge does not allow me to talk about any method that can do better.
Rough estimate: 3 bits are enough to encode 0..7, so 3 · 20 = 60 bits are enough to encode 0..7^20.
More accurately: ⌈log2(7^20 + 1)⌉ = 57 bits are enough.
If the numbers to encode are uniformly distributed over this range, you cannot do better. Otherwise, you can do better on average, by giving shorter codes to the more common numbers.
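A small C++ sketch of the tight 57-bit packing, assuming the caller has already checked that every value lies in [0, 7^20]; it writes bit by bit for clarity (a real implementation would write whole words).

#include <cstdint>
#include <vector>

constexpr int BITS_PER_VALUE = 57;   // ceil(log2(7^20 + 1))

// Append 'value' to 'out' using exactly BITS_PER_VALUE bits, LSB first.
// 'bit_pos' tracks how many bits have been written in total.
void pack(std::vector<uint8_t>& out, size_t& bit_pos, uint64_t value) {
    for (int i = 0; i < BITS_PER_VALUE; ++i, ++bit_pos) {
        if (bit_pos % 8 == 0) out.push_back(0);             // start a new byte every 8 bits
        if ((value >> i) & 1) out.back() |= uint8_t(1u << (bit_pos % 8));
    }
}

// Recover the value at position 'index' from the packed stream.
uint64_t unpack(const std::vector<uint8_t>& in, size_t index) {
    uint64_t value = 0;
    size_t bit_pos = index * BITS_PER_VALUE;
    for (int i = 0; i < BITS_PER_VALUE; ++i, ++bit_pos)
        if ((in[bit_pos / 8] >> (bit_pos % 8)) & 1) value |= (uint64_t)1 << i;
    return value;
}

Eight numbers then occupy exactly 57 bytes instead of 64.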

Error correcting algorithm for very strange data channel [closed]

Please recommend an error-correcting algorithm for a very strange data channel.
The channel consists of two parts: Corrupter and Eraser.
Corrupter receives a word consisting of 10000 symbols in 3-symbol alphabet, say, {'a','b','c'}.
Corrupter changes each symbol with probability 10%.
Example:
Corrupter input: abcaccbacbbaaacbcacbcababacb...
Corrupter output: abcaacbacbbaabcbcacbcababccb...
Eraser receives corrupter output and erases each symbol with probability 94%.
Eraser produces word of the same length in 4-symbol alphabet {'a','b','c','*'}.
Example:
Eraser input: abcaacbacbbaabcbcacbcababccb...
Eraser output: *******a*****************c**...
So, on eraser output, approximately 6%*10000=600 symbols would not be erased, approximately 90%*600=540 of them would preserve their original values and approximately 60 would be corrupted.
What encoding-decoding algorithm with error correction is best suited for this channel?
What amount of useful data could be transmitted providing > 99.99% probability of successful decoding?
Is it possible to transmit 40 bytes of data through this channel? (256^40 ~ 3^200)
Here's something you can at least analyze:
Break your 40 bytes up into 13 25-bit chunks (with some wastage so this bit can obviously be improved)
2^25 < 3^16 so you can encode the 25 bits into 16 a/b/c "trits" - again wastage means scope for improvement.
With 10,000 trits available you can give each of your 13 encoded chunks 769 output trits. Pick (probably at random) 769 different linear (mod 3) functions on 16 trits - each function is specified by 16 trits, and you take the vector dot product (mod 3) between those trits and the 16 input trits. This gives you your 769 output trits.
Decode by considering all possible (2^25) chunks and picking the one that matches the most surviving trits. You have some hope of getting the right answer as long as at least 16 trits survive, which I think Excel is telling me via BINOMDIST() happens often enough that there is a pretty good chance it will happen for all 13 of the 25-bit chunks.
I have no idea what error rate you get from garbling but random linear codes have a pretty good reputation, even if this one has a short blocksize because of my brain-dead decoding technique. At worst you could try simulating the encoding transmission and decoding of 25-bit chunks and work it out from there. You can get a slightly more accurate lower bound on error rate than above if you pretend that the garbling stage erases as well and so recalculate with a slightly higher probability of erasure.
I think this might actually work in practice if you can afford the 2^25 guesses per 25-bit block to decode. OTOH if this is a question in a class my guess is you need to demonstrate your knowledge of some less ad-hoc techniques already discussed in your class.
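A C++ sketch of the encoding and the brute-force decode for a single 25-bit chunk, assuming 769 output positions per chunk and that sender and receiver build the 769 random linear functions from the same seed. The exhaustive 2^25 search is written out only to make the idea concrete; it is on the order of 2.6e10 trit dot products per chunk as written, so treat it as a starting point for analysis rather than production code.

#include <cstdint>
#include <vector>
#include <random>

constexpr int CHUNK_BITS = 25;      // 2^25 < 3^16
constexpr int INPUT_TRITS = 16;
constexpr int OUTPUT_TRITS = 769;   // ~10000 trits / 13 chunks

// Convert a 25-bit value to 16 base-3 digits ("trits").
std::vector<int> to_trits(uint32_t value) {
    std::vector<int> t(INPUT_TRITS);
    for (int i = 0; i < INPUT_TRITS; ++i) { t[i] = value % 3; value /= 3; }
    return t;
}

// 769 random linear (mod 3) functions, each specified by 16 trits.
// Encoder and decoder must build these from the same seed.
std::vector<std::vector<int>> make_functions(uint32_t seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> trit(0, 2);
    std::vector<std::vector<int>> f(OUTPUT_TRITS, std::vector<int>(INPUT_TRITS));
    for (auto& row : f) for (auto& x : row) x = trit(rng);
    return f;
}

// Encode: output trit j is the dot product (mod 3) of function j with the input trits.
std::vector<int> encode(uint32_t chunk, const std::vector<std::vector<int>>& f) {
    auto in = to_trits(chunk);
    std::vector<int> out(OUTPUT_TRITS);
    for (int j = 0; j < OUTPUT_TRITS; ++j) {
        int s = 0;
        for (int i = 0; i < INPUT_TRITS; ++i) s += f[j][i] * in[i];
        out[j] = s % 3;
    }
    return out;
}

// Decode: received[j] is 0..2 for a surviving trit, -1 for an erasure.
// Brute force over all 2^25 candidates, keeping the one matching the most survivors.
uint32_t decode(const std::vector<int>& received, const std::vector<std::vector<int>>& f) {
    uint32_t best = 0; int best_score = -1;
    for (uint32_t cand = 0; cand < (1u << CHUNK_BITS); ++cand) {
        auto enc = encode(cand, f);
        int score = 0;
        for (int j = 0; j < OUTPUT_TRITS; ++j)
            if (received[j] >= 0 && received[j] == enc[j]) ++score;
        if (score > best_score) { best_score = score; best = cand; }
    }
    return best;
}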

Determining the level of dissonance between two frequencies [closed]

Using continued fractions, I'm generating integer ratios between frequencies to a certain precision, along with the error (difference from integer ratio to the real ratio). So I end up with things like:
101 Hz with 200 Hz = 1:2 + 0.0005
61 Hz with 92 Hz = 2:3 - 0.0036
However, I've run into a snag when actually deciding which of these will be more dissonant than others. At first I thought lower numbers = better, but something like 1:51 would likely not be very dissonant, since the second frequency is many octaves above the first. It might be a screaming, ear-bleeding pitch, but I don't think it would have dissonance.
It seems to me that it must be related to the product of the two sides of the ratio compared to the constituents somehow. 1 * 51 = 51, which doesn't "go up much" from one side. 2 * 3 = 6, which I would think would indicate higher dissonance than 1:51. But I need to turn this feeling into an actual number, so I can compare 5:7 vs 3:8, or any other combinations.
And how could I work error into this? Certainly 1:2 + 0 would be less dissonant than 1:2 + 1. Would it be easier to apply an algorithm that works for the above integer ratios directly to the frequencies themselves? Or does having the integer ratio with an error allow for a simpler calculation?
edit: Thinking on it, an algorithm that could extend to any set of N frequencies in a chord would be awesome, but I get the feeling that would be much more difficult...
edit 2: Clarification:
Let's consider that I am dealing with pure sine waves, and either ignoring the specific thresholds of the human ear or abstracting them into variables. If there are severe complications, then they are ignored. My question is how it could be represented in an algorithm, in that case.
Have a look at Chapter 4 of http://homepages.abdn.ac.uk/mth192/pages/html/maths-music.html. From memory:
1) If two sine waves are just close enough for the human ear to be confused, but not so close that the human ear cannot tell they are different, there will be dissonance.
2) Pure sine waves are extremely rare - most tones carry all sorts of harmonics. Dissonance is much more likely to come from colliding harmonics than from colliding fundamentals. To roughly follow your example: two tones many octaves apart are unlikely to be dissonant because their harmonics may not meet, whereas with only a couple of octaves between them and plenty of harmonics, a flute can sound out of tune with a double bass. So dissonance depends not only on the frequencies of the fundamental tones but also on the harmonics present, and this has been demonstrated experimentally by constructing sounds with peculiar pseudo-harmonics.
The answer is in Chapter 4 of Music: a Mathematical Offering. In particular, see the following two figures:
consonance/dissonance plotted against the critical bandwidth, in 4.3 "History of consonance and dissonance"
dissonance vs. frequency, in 4.5 "Complex tones"
Of course you still have to find a nice way to turn these data into a formula / program that gives you a measure of dissonance but I believe this gives you a good start. Good luck!
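To show the kind of formula those figures lead to, here is a small C++ sketch of the Plomp-Levelt dissonance curve for a pair of pure tones, in the parameterization popularized by Sethares. The constants come from that model rather than from the book, and the amplitudes a1, a2 are assumptions you would take from your own signal.

#include <cmath>
#include <algorithm>

// Plomp-Levelt dissonance between two pure (sine) partials, Sethares' parameterization.
// f1, f2 in Hz; a1, a2 are loudness-like amplitudes in [0, 1].
double pair_dissonance(double f1, double f2, double a1 = 1.0, double a2 = 1.0) {
    if (f1 > f2) { std::swap(f1, f2); std::swap(a1, a2); }
    const double b1 = 3.5, b2 = 5.75;   // curve shape constants
    const double dstar = 0.24;          // interval of maximum roughness
    const double s1 = 0.021, s2 = 19.0; // critical-bandwidth scaling
    double s = dstar / (s1 * f1 + s2);
    double d = f2 - f1;
    // amplitude weighting: simple product here; some variants use min(a1, a2)
    return a1 * a2 * (std::exp(-b1 * s * d) - std::exp(-b2 * s * d));
}

// For a chord of N frequencies (or for complex tones with harmonics), sum
// pair_dissonance over every pair of partials - this also covers the
// "any set of N frequencies" extension asked about in the edit.

With pure sines, pair_dissonance(101, 200) comes out well below pair_dissonance(101, 107), matching the observation that nearby frequencies beat more harshly than widely separated ones.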
This will help:
http://www.acs.psu.edu/drussell/demos/superposition/superposition.html
You want to look at superposition.
Discrete or Fast Fourier Transform is the most generic means to get what you're asking for.
