Fetching integer/byte texture data "as is" in OpenGL ES 3.0

I am experimenting on iOS, trying to build a hybrid CPU/GPU JPEG encoder. From my tests with the CPU, I believe using the GPU for the DCT and quantization steps makes good sense and should boost the overall performance significantly (compressing a huge number of JPEGs is the bottleneck in my app). With transform feedback this should be doable, as I have used that to get great results in GPGPU computing. The tricky part is how to get the data (unsigned 8-bit RGBA) in efficiently.
As mentioned, I have used OpenGL ES 3.0 for GPGPU computing before, so I only have experience with floating-point textures, which are set up by
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, data);
and delivered to the shaders by
texelFetch()
But now my input data is stored as an array of unsigned bytes (or uint8), and I need to sequentially fetch 64 of them at a time. I think I can either fetch them as a texture of unsigned bytes or, more efficiently, as a texture of unsigned integers and then separate them with bit shifts.
My question is, how do I actually do either of these? More specifically, how should I set the internalFormat, format and type for glTexImage2D()? I have tried a lot of combinations, but all of them deliver only 0 in the shaders (and I double-checked that the source data is non-zero).

In ES 3, seriously consider creating a pixel unpack buffer and mapping it to get a destination into which to write your pixel data. That will at least save a driver-internal memcpy and can significantly decrease synchronisation. See GL_PIXEL_UNPACK_BUFFER on glBindBuffer and gl[Un]MapBuffer[Range]; you'll end up with a glTexImage2D(..., (void *)0); call to specify the pixel unpack buffer as the source, analogously to the way that bound buffers are specified as the source for attributes, elements, etc. See glFenceSync for synchronisation, assuming you use GL_MAP_UNSYNCHRONIZED_BIT and thereby intend to handle synchronisation yourself.
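As a hedged illustration of that path, a minimal C sketch might look like the following; the function name, buffer sizing and the GL_MAP_UNSYNCHRONIZED_BIT usage are assumptions for illustration, and it assumes the destination texture is already bound:

#include <OpenGLES/ES3/gl.h>   /* iOS OpenGL ES 3.0 header */
#include <string.h>

/* Upload width*height RGBA bytes through a pixel unpack buffer.
   Assumes the destination texture is already bound to GL_TEXTURE_2D. */
void upload_via_pbo(GLuint pbo, const unsigned char *pixels, int width, int height)
{
    GLsizeiptr size = (GLsizeiptr)width * height * 4;

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);

    /* Map, write the pixels straight into driver-owned memory, unmap.
       GL_MAP_UNSYNCHRONIZED_BIT means you handle synchronisation yourself
       (see glFenceSync). */
    void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
    memcpy(dst, pixels, (size_t)size);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    /* With a pixel unpack buffer bound, the last argument of glTexImage2D is
       an offset into that buffer, not a client pointer. The integer formats
       used here are explained in the next paragraph. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, (void *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}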
For full-integer RGBA (no scaling) use GL_RGBA8UI as the internal format, GL_RGBA_INTEGER as the format, and GL_UNSIGNED_BYTE as the type; then declare a usampler2D ('u' for unsigned, implicitly integer) and use a standard texture(sampler, coordinate) call to sample.
You'll also want the GL_CLAMP_TO_EDGE and GL_NEAREST texture parameters.
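For concreteness, here is a minimal C sketch of that setup (the function name is hypothetical; data points to width*height RGBA bytes). GL_NEAREST is not just a preference here: unnormalised integer textures are not filterable, and an incomplete texture is one common cause of reading only zeros in the shader.

#include <OpenGLES/ES3/gl.h>   /* iOS OpenGL ES 3.0 header */

/* Create an RGBA8UI texture that a usampler2D can read without scaling. */
GLuint create_rgba8ui_texture(const unsigned char *data, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, data);

    /* Integer textures must use nearest filtering. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    return tex;
}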
EDIT: also potentially worth mentioning, the values coming from a usampler2D are of type uvec4, so they're integral. Unlike ES 2, ES 3 has true integers, including bitwise operators — ES 2 permits them to be emulated by floats (for those of us from the '90s, this truly is an unexpected future). So, a simplified snippet, trivial enough to be worth sharing, from a recent emulation project of mine:
vec4 rgb_sample(usampler2D sampler, vec2 coordinate)
{
    uint texValue = texture(sampler, coordinate).r;
    return vec4(texValue & 4u, texValue & 2u, texValue & 1u, 1.0);
}
Which, of course, unpacks a TTL-style RGB-in-one-byte single-channel texture into a format suitable for gl_FragColor (relying upon saturation).

Related

NEXSYS A7 Board - I2S2 PMOD

I'm working on a guitar effects "pedal" using the NEXSYS A7 Board.
For this purpose, I've purchased the I2S2 PMOD and successfully got it up and running using the example code provided by Digilent.
Currently, the design is a "pass-through", meaning that audio comes into the FPGA and goes immediately out.
I'm wondering what the correct way would be to store the data, do some DSP on it to create the effects, and then transmit the modified data back to the I2S2 PMOD.
Maybe it's unnecessary to store the data?
Maybe I can pass it through an RTL block that's responsible for applying the effect and then simply transmit the modified data out?
Collated from comments and extended.
For a live performance pedal you don't want to store much data; usually tens of ms or less. Start with something simple: store 50 or 100 ms of data in a ring buffer (read old data, store new data, increment the address modulo the memory size). Output = Newdata = (incoming sample * 0.n + olddata * (1 - 0.n)) for variable n. Very crude reverb or echo.
Yes, ring = ring buffer FIFO. And you'll see my description is a very crude implementation of a ring buffer FIFO.
Now extend it to separate read and write pointers. Now read and write at different, harmonically related rates ... you have a pitch changer. With glitches when the pointers cross.
Think of ways to hide the glitches, and soon you'll be able to make the crappy noises Autotune adds to almost all modern music from that bloody Cher song onwards. (This takes serious DSP: something called interpolating filters is probably the simplest way. Live with the glitches for now.)
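As a point of reference, here is a minimal C model of the crude ring-buffer echo described above (the RTL version would do the same thing with a block RAM and a write-address counter); the buffer length and mix factor are arbitrary placeholders:

#define BUFFER_LEN 4800   /* ~100 ms at 48 kHz; placeholder value */
#define MIX        0.5f   /* the "0.n" mix factor; placeholder    */

static float ring[BUFFER_LEN];
static unsigned int addr = 0;

float echo_sample(float incoming)
{
    float olddata = ring[addr];                               /* read old data        */
    float newdata = incoming * MIX + olddata * (1.0f - MIX);  /* crude echo/reverb    */
    ring[addr] = newdata;                                     /* store new data       */
    addr = (addr + 1) % BUFFER_LEN;                           /* inc address mod size */
    return newdata;                                           /* output = newdata     */
}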
btw if I'm interested in a distortion effect, can it be accomplished by simply multiplying the incoming data by a constant?
Multiplying by a constant is ... gain.
Multiplying a signal by itself is squaring it ... aka second harmonic distortion or 2HD (which produces components on the octave of each tone in the input).
Multiplying a signal by the 2HD is cubing it ... aka 3HD, producing components a perfect fifth above the octave.
Multiplying the 2HD by the 2HD is the fourth power ... aka 4HD, producing components 2 octaves higher, or a perfect fourth above that fifth.
Multiply the 4HD by the signal to produce 5HD ... and so on, up to probably the 7th. Also note that these components will decrease dramatically in level; you probably want to add gain beyond 2HD: multiply by 4 (= shift left 2 bits) as a starting point, and increase or decrease as desired.
Now multiply each of these by a variable gain and mix them (mixing is simple addition) to add as many distortion components as you want, as loud as you want ... don't forget to add in the original signal!
There are other approaches to adding distortion. Try simply saturating all signals above 0.25 to 0.25, and all signals below -0.25 to -0.25, aka clipping. Sounds nasty but mix a bit of this into the above, for a buzz.
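A hedged C sketch of that recipe, with arbitrary placeholder gains, might look like this:

/* Input sample x assumed to be in -1..1. Gains are made-up starting points. */
float distort_sample(float x)
{
    float hd2 = x * x;        /* 2nd harmonic: signal squared */
    float hd3 = hd2 * x;      /* 3rd harmonic: signal cubed   */
    float hd4 = hd2 * hd2;    /* 4th harmonic: fourth power   */

    /* clipping: saturate everything outside +/-0.25 */
    float clipped = x;
    if (clipped >  0.25f) clipped =  0.25f;
    if (clipped < -0.25f) clipped = -0.25f;

    /* mix (simple addition) with per-component gains; the 4x factor on the
       higher harmonics follows the "shift left 2 bits" starting point */
    return x                  /* don't forget the original signal */
         + 0.50f * hd2
         + 0.50f * 4.0f * hd3
         + 0.25f * 4.0f * hd4
         + 0.30f * clipped;
}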
Learn how to make white noise (pseudo-random numbers, usually from an LFSR).
Multiply this by the input signal, and mix or match with the above, for some fuzz.
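For reference, a common 16-bit Galois LFSR written as a plain-C model (the seed and tap mask are the usual textbook values); multiplying its +/-1 output by the input gives the crude fuzz described above:

static unsigned short lfsr = 0xACE1u;    /* any non-zero seed */

float noise_sample(void)
{
    unsigned int lsb = lfsr & 1u;
    lfsr >>= 1;
    if (lsb)
        lfsr ^= 0xB400u;                 /* feedback taps */
    return lsb ? 1.0f : -1.0f;           /* map the bit stream to +/-1 noise */
}

float fuzz_sample(float in)
{
    return in * noise_sample();          /* noise-modulated input = crude fuzz */
}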
Learn digital filtering (low pass, high pass, band pass for EQ) and how to control filters with noise or the input signal, and the world of sound is open to you.

InterlockedAdd HLSL potential optimization

I was wondering if anyone might know whether there is some kind of optimization going on with HLSL InterlockedAdd, specifically when it is used on a single global atomic counter (the added value being constant across all threads) by a large number of threads.
Some information I dug up on the web says that atomic adds can create significant contention issues:
https://developer.nvidia.com/blog/cuda-pro-tip-optimized-filtering-warp-aggregated-atomics/
Granted, the article above is written for CUDA (and is also a little old, dating to 2014), whereas I am interested in HLSL InterlockedAdd. To that end, I wrote a dummy HLSL shader for Unity (compiled to D3D11 via FXC, to my knowledge), where I call InterlockedAdd on a single global atomic counter such that the added value is always the same across all the shaded fragments. The snippet in question (run in http://shader-playground.timjones.io/, compiled via FXC, optimization level 3, shader model 5.0):
**HLSL**:
RWStructuredBuffer<int> counter : register(u1);
void PSMain()
{
    InterlockedAdd(counter[0], 1);
}
----
**Assembly**:
ps_5_0
dcl_globalFlags refactoringAllowed
dcl_uav_structured u1, 4
atomic_iadd u1, l(0, 0, 0, 0), l(1)
ret
I then slightly modified the code, and instead of always adding some constant value, I now add a value that varies across fragments, so something like this:
**HLSL**:
RWStructuredBuffer<int> counter : register(u1);
void PSMain(float4 pixel_pos : SV_Position)
{
    InterlockedAdd(counter[0], int(pixel_pos.x));
}
----
**Assembly**:
ps_5_0
dcl_globalFlags refactoringAllowed
dcl_uav_structured u1, 4
dcl_input_ps_siv linear noperspective v0.x, position
dcl_temps 1
ftoi r0.x, v0.x
atomic_iadd u1, l(0, 0, 0, 0), r0.x
ret
I implemented the equivalents of the aforementioned snippets in Unity and used them as my fragment shaders for rendering a full-screen quad (granted, there is no output semantic, but that is irrelevant). I profiled the resulting shaders with Nsight Graphics. Suffice it to say that the difference between the two draw calls was massive, with the fragment shader based on the second snippet (InterlockedAdd with a variable value) being considerably slower.
I also made captures with RenderDoc to check the assembly, and it looks identical to what is shown above. Nothing in the assembly code suggests such a dramatic difference. And yet, the difference is there.
So my question is: is there some kind of optimization taking place when using HLSL InterlockedAdd on a single global atomic counter, such that the added value is a constant? Is it, perhaps, possible that the GPU driver can somehow rearrange the code?
System specs:
NVIDIA Quadro P4000
Windows 10
Unity 2019.4
The pixel shader on the GPU runs pixels in SIMD groups, called wavefronts. If the code currently executing would not change based on which pixel is being rendered, the code only has to be run once for the entire group. If it changes based on the pixel, then each of the pixels will need to run unique code.
In the first version, a 64-pixel wavefront would execute the code as a single SIMD InterlockedAdd<64>(counter[0], 1); or might even optimize it into InterlockedAdd(counter[0], 64);
In the second example it turns into a series of serial, non-SIMD adds and becomes 64 times as expensive.
This is an oversimplification, and there are other tricks the GPU uses to share computing resources. But a good general rule of thumb is to make as much code as possible sharable by every nearby pixel.

Lightweight (de)compression algorithm for embedded use

I have a low-resource embedded system with a graphical user interface. The interface requires font data. To conserve read-only memory (flash), the font data needs to be compressed. I am looking for an algorithm for this purpose.
Properties of the data to be compressed
transparency data for a rectangular pixel map with 8 bits per pixel
there are typically around 200..300 glyphs in a font (a typeface sampled at a certain size)
each glyph is typically from 6x9 to 15x20 pixels in size
there are a lot of zeros ("no ink") and somewhat fewer 255's ("completely inked"); otherwise the distribution of octets is quite even due to the nature of anti-aliasing
Requirements for the compression algorithm
The important metric for the decompression algorithm is the size of the data plus the size of the algorithm itself (as they will reside in the same limited memory).
There is very little RAM available for the decompression; it is possible to decompress the data for a single glyph into RAM but not much more.
To make things more difficult, the algorithm has to be very fast on a 32-bit microcontroller (ARM Cortex-M core), as the glyphs need to be decompressed while they are being drawn onto the display. Ten or twenty machine cycles per octet is ok, a hundred is certainly too much.
To make things easier, the complete corpus of data is known a priori, and there is a lot of processing power and memory available during the compression phase.
Conclusions and thoughts
The naïve approach of just packing each octet with some variable-length encoding does not give good results due to the relatively high entropy.
Any algorithm taking advantage of previously decompressed data seems to be out of the question, as it is not possible to store the decompressed data of other glyphs. This makes LZ algorithms less efficient, as they can only reference a small amount of data.
Constraints on the processing power seem to rule out most bitwise operations, i.e. decompression should handle the data octet by octet. This makes Huffman coding difficult and arithmetic coding impossible.
The problem seems to be a good candidate for static dictionary coding, as all data is known beforehand, and the data is somewhat repetitive in nature (different glyphs share same shapes).
Questions
How can a good dictionary be constructed? I know that finding the optimal dictionary for certain data is an NP-complete problem, but are there any reasonably good approximations? I have tried zstandard's dictionary builder, but the results were not very good.
Is there something in my conclusions that I've gotten wrong? (Am I on the wrong track and omitting something obvious?)
Best algorithm this far
Just to give some background information, the best useful algorithm I have been able to figure out is as follows:
All samples in the font data for a single glyph are concatenated (flattened) into a one-dimensional array (vector, table).
Each sample has three possible states: 0, 255, and "something else".
This information is packed five consecutive samples at a time into a 5-digit base-three number (0..242).
As there are some extra values available in an octet (2^8 = 256 vs. 3^5 = 243), they are used to signify longer strings of 0's and 255's.
For each "something else" value the actual value (1..254) is stored in a separate vector.
This data is fast to decompress, as the base-3 values can be decoded into base-4 values by a smallish (243 x 3 = 729 octets) lookup table. The compression ratios are highly dependent on the font size, but with my typical data I can get around 1:2. As this is significantly worse than LZ variants (which get around 1:3), I would like to try the static dictionary approach.
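To make the packing step concrete, here is a hedged C sketch of how five samples could be classified into trits and packed into one octet; the function name and the extras buffer are illustrative, and the run-length codes using the spare values 243..255 are omitted:

#include <stdint.h>

/* Pack five 8-bit samples into one base-3 octet (max value 242).
   "Something else" samples are appended, in order, to the extras vector. */
uint8_t pack_five(const uint8_t samples[5], uint8_t *extras, int *n_extras)
{
    static const uint8_t pow3[5] = { 1, 3, 9, 27, 81 };
    uint8_t packed = 0;

    for (int i = 0; i < 5; i++) {
        int trit;
        if (samples[i] == 0)
            trit = 0;                           /* "no ink" */
        else if (samples[i] == 255)
            trit = 1;                           /* "completely inked" */
        else {
            trit = 2;                           /* "something else" */
            extras[(*n_extras)++] = samples[i]; /* actual value stored separately */
        }
        packed += (uint8_t)(trit * pow3[i]);
    }
    return packed;
}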
Of course, the usual LZ variants use Huffman or arithmetic coding, which naturally makes the compressed data smaller. On the other hand, I have all the data available, and the compression speed is not an issue. This should make it possible to find much better dictionaries.
Due to the nature of the data I could be able to use a lossy algorithm, but in that case the most likely lossy algorithm would be reducing the number of quantization levels in the pixel data. That won't change the underlying compression problem much, and I would like to avoid the resulting bit-alignment hassle.
I do admit that this is a borderline case of being a good answer to my question, but as I have researched the problem somewhat, this answer both describes the approach I chose and gives some more information on the nature of the problem should someone bump into it.
"The right answer" a.k.a. final algorithm
What I ended up with is a variant of what I describe in the question. First, each glyph is split into trits 0, 1, and intermediate. This ternary information is then compressed with a 256-slot static dictionary. Each item in the dictionary (or look-up table) is a binary encoded string (0=0, 10=1, 11=intermediate) with a single 1 added to the most significant end.
The grayscale data (for the intermediate trits) is interspersed between the references to the look-up table. So, the data essentially looks like this:
<LUT reference><gray value><gray value><LUT reference>...
The number of gray scale values naturally depends on the number of intermediate trits in the ternary data looked up from the static dictionary.
Decompression code is very short and can easily be written as a state machine with only one pointer and one 32-bit variable giving the state. Something like this:
static uint32_t trits_to_decode;
static uint8_t *next_octet;

/* This should be called when starting to decode a glyph
   data : pointer to the compressed glyph data */
void start_glyph(uint8_t *data)
{
    next_octet = data;    // set the pointer to the beginning of the glyph
    trits_to_decode = 1;  // this triggers reloading a new dictionary item
}

/* This function returns the next 8-bit pixel value */
uint8_t next_pixel(void)
{
    // end sentinel only? if so, we are out of ternary data
    if (trits_to_decode == 1)
        // get the next ternary dictionary item
        trits_to_decode = dictionary[*next_octet++];

    // get the next pixel from the ternary word
    // check the LSB bit(s)
    if (trits_to_decode & 1)
    {
        trits_to_decode >>= 1;
        // either full value or gray value, check the next bit
        if (trits_to_decode & 1)
        {
            trits_to_decode >>= 1;
            // grayscale value; get next from the buffer
            return *next_octet++;
        }
        // if we are here, it is a full value
        trits_to_decode >>= 1;
        return 255;
    }
    // we have a zero, return it
    trits_to_decode >>= 1;
    return 0;
}
(The code has not been tested in exactly this form, so there may be typos or other stupid little errors.)
There is a lot of repetition with the shift operations. I am not too worried, as the compiler should be able to clean it up. (Actually, left shift could be even better, because then the carry bit could be used after shifting. But as there is no direct way to do that in C, I don't bother.)
One more optimization relates to the size of the dictionary (look-up table) items. There may be short and long items, and hence it can be built to support 32-bit, 16-bit, or 8-bit items. In that case the dictionary has to be ordered so that small numerical values refer to 32-bit items, middle values to 16-bit items, and large values to 8-bit items to avoid alignment problems. Then the look-up code looks like this:
static uint32_t dictionary_lookup(uint8_t octet)
{
    if (octet < NUMBER_OF_32_BIT_ITEMS)
        return dictionary32[octet];
    if (octet < NUMBER_OF_32_BIT_ITEMS + NUMBER_OF_16_BIT_ITEMS)
        return dictionary16[octet - NUMBER_OF_32_BIT_ITEMS];
    return dictionary8[octet - NUMBER_OF_16_BIT_ITEMS - NUMBER_OF_32_BIT_ITEMS];
}
Of course, if every font has its own dictionary, the constants will become variables looked up from the font information. Any half-decent compiler will inline that function, as it is called only once.
If the number of quantization levels is reduced, it can be handled as well. The easiest case is with 4-bit gray levels (1..15). This requires one 8-bit state variable to hold the gray levels. Then the gray level branch will become:
// new state value
static uint8_t gray_value;
...
// new variable within the next_pixel() function
uint8_t return_value;
...
// there is no old gray value available?
if (gray_value == 0)
    gray_value = *next_octet++;
// extract the low nibble
return_value = gray_value & 0x0f;
// shift the high nibble into low nibble
gray_value >>= 4;
return return_value;
This actually allows using 15 intermediate gray levels (a total of 17 levels), which maps very nicely onto a linear 0..255 system.
Three- or five-bit data is easier to pack into a 16-bit halfword, with the MSB always set to one. Then the same trick as with the ternary data can be used (shift until you get a 1).
It should be noted that the compression ratio starts to deteriorate at some point. The amount of compression with the ternary data does not depend on the number of gray levels. The gray level data is uncompressed, and the number of octets scales (almost) linearly with the number of bits. For a typical font the gray level data at 8 bits is 1/2 .. 2/3 of the total, but this is highly dependent on the typeface and size.
So, a reduction from 8 to 4 bits (which is visually quite imperceptible in most cases) typically reduces the compressed size by 1/4..1/3, whereas the further reduction offered by going down to three bits is significantly smaller. Two-bit data does not make sense with this compression algorithm.
How to build the dictionary?
If the decompression algorithm is very straightforward and fast, the real challenge is in building the dictionary. It is easy to prove that there is such a thing as an optimal dictionary (the dictionary giving the smallest number of compressed octets for a given font), but wiser people than me seem to have proven that the problem of finding such a dictionary is NP-complete.
With my arguably rather lacking theoretical knowledge of the field, I thought there would be great tools offering reasonably good approximations. There might be such tools, but I could not find any, so I rolled my own Mickey Mouse version. EDIT: the earlier algorithm was rather goofy; a simpler and more effective one was found:
start with a static dictionary of '0', 'g', '1' (where 'g' signifies an intermediate value)
split the ternary data for each glyph into a list of trits
find the most common consecutive combination of items (it will most probably be '0', '0' on the first iteration)
replace all occurrences of that combination with a new dictionary item and add the combination to the dictionary (e.g., the data '0', '1', '0', '0', 'g' will become '0', '1', '00', 'g' if '0', '0' is replaced by '00')
remove any unused items from the dictionary (they may occur, at least in theory)
repeat steps 3-5 until the dictionary is full (i.e. at least 253 rounds)
This is still a very simplistic approach, and it probably gives a very sub-optimal result. Its only merit is that it works.
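For illustration, here is a hedged C sketch of that greedy merge loop (essentially byte-pair encoding on the trit stream); it treats the corpus as one flat array of dictionary indices and, for brevity, ignores glyph boundaries and the unused-item cleanup:

#include <stdint.h>
#include <string.h>

#define MAX_SYMBOLS 256

static int corpus[200000];          /* trit stream as dictionary indices */
static int corpus_len;
static int pair_left[MAX_SYMBOLS];  /* definition of each composite item */
static int pair_right[MAX_SYMBOLS];
static int n_symbols = 3;           /* 0 = '0', 1 = 'g', 2 = '1' */

static void merge_step(void)
{
    /* count every adjacent pair of symbols */
    static int counts[MAX_SYMBOLS][MAX_SYMBOLS];
    memset(counts, 0, sizeof counts);
    for (int i = 0; i + 1 < corpus_len; i++)
        counts[corpus[i]][corpus[i + 1]]++;

    /* pick the most frequent pair */
    int best_a = 0, best_b = 0, best_n = 0;
    for (int a = 0; a < n_symbols; a++)
        for (int b = 0; b < n_symbols; b++)
            if (counts[a][b] > best_n) {
                best_n = counts[a][b];
                best_a = a;
                best_b = b;
            }

    /* add the pair as a new dictionary item and rewrite the corpus */
    pair_left[n_symbols] = best_a;
    pair_right[n_symbols] = best_b;
    int out = 0;
    for (int i = 0; i < corpus_len; ) {
        if (i + 1 < corpus_len && corpus[i] == best_a && corpus[i + 1] == best_b) {
            corpus[out++] = n_symbols;      /* reference to the new item */
            i += 2;
        } else {
            corpus[out++] = corpus[i++];
        }
    }
    corpus_len = out;
    n_symbols++;
}

/* Call merge_step() until n_symbols reaches 256 (the "at least 253 rounds"). */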
How well does it work?
One answer is: well enough. To elaborate on that a bit, here are some numbers from a font with 864 glyphs, a typical glyph size of 14x11 pixels, and 8 bits per pixel.
raw uncompressed size: 127101
number of intermediate values: 46697
Shannon entropies (octet-by-octet):
  total: 528914 bits = 66115 octets
  ternary data: 176405 bits = 22051 octets
  intermediate values: 352509 bits = 44064 octets
simply compressed ternary data (0=0, 10=1, 11=intermediate) (127101 trits): 207505 bits = 25939 octets
dictionary compressed ternary data: 18492 octets
  entropy: 136778 bits = 17097 octets
  dictionary size: 647 octets
full compressed data: 647 + 18492 + 46697 = 65836 octets
compression: 48.2 %
The comparison with octet-by-octet entropy is quite revealing. The intermediate value data has high entropy, whereas the ternary data can be compressed. This can also be interpreted by the high number of values 0 and 255 in the raw data (as compared to any intermediate values).
We do not do anything to compress the intermediate values, as there do not seem to be any meaningful patterns. However, we beat entropy by a clear margin with the ternary data, and even the total amount of data is below the entropy limit. So, we could do worse.
Reducing the number of quantization levels to 17 would reduce the data size to approximately 42920 octets (compression over 66 %). The entropy is then 41717 octets, so the algorithm gets slightly worse, as expected.
In practice, smaller font sizes are harder to compress. This should be no surprise, as a larger fraction of the information is in the grayscale data. Very big font sizes compress efficiently with this algorithm, but there run-length compression is a much better candidate.
What would be better?
If I knew, I would use it! But I can still speculate.
Jubatian suggests there would be a lot of repetition in a font. This must be true for the diacritics, as aàäáâå have a lot in common in almost all fonts. However, it does not seem to be true for letters such as p and b in most fonts. While the basic shape is close, it is not close enough. (Careful pixel-by-pixel typeface design is another story.)
Unfortunately, this inevitable repetition is not very easy to exploit in smaller font sizes. I tried creating a dictionary of all possible scan lines and then only referencing those. Unfortunately, the number of different scan lines is high, so the overhead added by the references outweighs the benefits. The situation changes somewhat if the scan lines themselves can be compressed, but there the small number of octets per scan line makes efficient compression difficult. This problem is, of course, dependent on the font size.
My intuition tells me that this would still be the right way to go, if both longer and shorter runs than full scan lines are used. This, combined with using 4-bit pixels, would probably give very good results, if only there were a way to create that optimal dictionary.
One hint in this direction is that an LZMA2-compressed file (with xz at the highest compression) of the complete font data (127101 octets) is only 36720 octets. Of course, this format fulfils none of the other requirements (fast to decompress, can be decompressed glyph-by-glyph, low RAM requirements), but it still shows there is more redundancy in the data than my cheap algorithm has been able to exploit.
Dictionary coding is typically combined with Huffman or arithmetic coding after the dictionary step. We cannot do it here, but if we could, it would save another 4000 octets.
You could consider using something already developed for a scenario similar to yours:
https://github.com/atomicobject/heatshrink
https://spin.atomicobject.com/2013/03/14/heatshrink-embedded-data-compression/
You could try lossy compression using a sparse representation with a custom dictionary.
The output of each glyph is a superposition of 1..N blocks from the dictionary:
most CPU time is spent in preprocessing
predetermined decoding time: (max, average or constant) N additions per pixel
controllable compressed size (dictionary size + x,y,n codes per glyph)
It seems that the simplest lossy method would be to reduce the number of bits per pixel. With glyphs of that size, 16 levels are likely to be sufficient. That would halve the data immediately; then you might apply your existing algorithm to the values 0, 16 or "something else" to perhaps halve it again.
I would go for Clifford's answer, that is, converting the font to 4 bits per pixel first which is sufficient for this task.
Then, since this is a font, you will have lots of row repetitions, that is, rows defining one character that match rows of another character. Take for example the letters 'p' and 'b': the middle parts of these letters should be the same (and you will have even more matches if the target language uses loads of diacritics). Your encoder could then first collect all distinct rows of the font, store those, and form each character image from a list of pointers to the rows.
The efficiency depends on the font, of course; depending on the source, you might need some preprocessing to get it to compress better with this method.
If you want more, you might rather choose to go for 3 bits per pixel or even 2 bits per pixel, depending on your goals (and some willingness to hand-tune the font images); these might still be satisfactory.
Overall, this method of course works very well for real-time display (you only need to traverse a pointer to get the row data).

OSX AudioUnit SMP

I'd like to know if someone has experience in writing a HAL AudioUnit rendering callback that takes advantage of multi-core processors and/or symmetric multiprocessing.
My scenario is the following:
A single audio component of sub-type kAudioUnitSubType_HALOutput (together with its rendering callback) takes care of additively synthesizing n sinusoidal partials with independent, individually varying and live-updated amplitude and phase values. In itself it is a rather straightforward brute-force nested-loop method (per partial, per frame, per channel).
However, upon reaching a certain upper limit for the number of partials n, the processor gets overloaded and starts producing dropouts, while the three other cores remain idle.
Aside from the general discussion about additive synthesis being "processor expensive" in comparison to, say, wavetable synthesis, I need to know whether this can be resolved the right way, i.e. by taking advantage of multiprocessing on a multi-processor or multi-core machine. Breaking the rendering thread into sub-threads does not seem the right way, since the render callback is already a time-constrained thread in itself, and the final output has to be sample-accurate in terms of latency. Has someone had positive experience and valid methods for resolving such an issue?
System: 10.7.x
CPU: quad-core i7
Thanks in advance,
CA
This is challenging because OS X is not designed for something like this. There is a single audio thread - it's the highest priority thread in the OS, and there's no way to create user threads at this priority (much less get the support of a team of systems engineers who tune it for performance, as with the audio render thread). I don't claim to understand the particulars of your algorithm, but if it's possible to break it up such that some tasks can be performed in parallel on larger blocks of samples (enabling absorption of periods of occasional thread starvation), you certainly could spawn other high priority threads that process in parallel. You'd need to use some kind of lock-free data structure to exchange samples between these threads and the audio thread. Convolution reverbs often do this to allow reasonable latency while still operating on huge block sizes. I'd look into how those are implemented...
Have you looked into the Accelerate.framework? You should be able to improve the efficiency by performing operations on vectors instead of using nested for-loops.
If you have vectors (of length n) for the sinusoidal partials, the amplitude values, and the phase values, you could apply a vDSP_vadd or vDSP_vmul operation, then vDSP_sve.
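A hedged sketch of how that could look per render quantum (vvsinf is the vectorized sine from Accelerate's vForce; the buffer names and the per-partial phase-increment vector are assumptions, and phase wrapping is omitted):

#include <Accelerate/Accelerate.h>

/* out: numFrames output samples; phase/amp/phaseInc: per-partial vectors of length n */
void render_frames(float *out, unsigned int numFrames,
                   float *phase, const float *amp, const float *phaseInc, int n)
{
    float sines[n];   /* scratch buffers (VLAs here; fixed-size in real code) */
    float prods[n];

    for (unsigned int f = 0; f < numFrames; f++) {
        vvsinf(sines, phase, &n);                   /* sines[p] = sinf(phase[p])    */
        vDSP_vmul(sines, 1, amp, 1, prods, 1, n);   /* prods[p] = sines[p] * amp[p] */
        vDSP_sve(prods, 1, &out[f], n);             /* out[f] = sum of prods        */

        for (int p = 0; p < n; p++)                 /* advance phases (no wrapping) */
            phase[p] += phaseInc[p];
    }
}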
As far as I know, AU threading is handled by the host. A while back, I tried a few ways to multithread an AU render using various methods (GCD, OpenCL, etc.) and they were all either a no-go OR unpredictable. There is (or at least WAS... I have not checked recently) a built-in AU called the 'deferred renderer', I believe, and it threads the input and output separately, but I seem to remember that there was latency involved, so that might not help.
Also, if you are testing in AU Lab, I believe that it is set up specifically to call on only a single thread (I think that is still the case), so you might need to tinker with another test host to see if it still chokes when the load is distributed.
Sorry I couldn't help more, but I thought those few bits of info might be helpful.
Sorry for replying to my own question; I don't know of another way to add some relevant information. Edit doesn't seem to work, and a comment is way too short.
First of all, sincere thanks to jtomschroeder for pointing me to the Accelerate.framework.
This would work perfectly for so-called overlap-add resynthesis based on the IFFT. Yet I haven't found a key to vectorizing the kind of process I'm using, which is called "oscillator-bank resynthesis" and is notorious for taxing the processor (F.R. Moore: Elements of Computer Music). Each momentary phase and amplitude has to be interpolated "on the fly" and the last value stored into the control struct for further interpolation. The direction of time and the time stretch depend on live input. Not all partials exist all the time, and the placement of breakpoints is arbitrary and possibly irregular. Of course, my primary concern is organizing the data in a way that minimizes the number of math operations...
If someone could point me at an example of positive practice, I'd be very grateful.
// Here's the simplified code snippet:
OSStatus AdditiveRenderProc(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    // local variables' declaration and behaviour-setting conditional statements
    // some local variables are here for debugging convenience
    // {... ... ...}

    // Get the time-breakpoint parameters out of the gen struct
    AdditiveGenerator *gen = (AdditiveGenerator *)inRefCon;

    // compute interpolated values for each partial's each frame
    // {deltaf[p]... ampf[p][frame]... ...}

    // here comes the brute-force "processor eater" (single channel only!)
    Float32 *buf = (Float32 *)ioData->mBuffers[channel].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        buf[frame] = 0.;
        for (UInt32 p = 0; p < candidates; p++) {
            if (gen->partialFrequencyf[p] < NYQUISTF)
                buf[frame] += sinf(phasef[p]) * ampf[p][frame];
            phasef[p] += (gen->previousPartialPhaseIncrementf[p] + deltaf[p] * frame);
            if (phasef[p] > TWO_PI) phasef[p] -= TWO_PI;
        }
        buf[frame] *= ovampf[frame];
    }

    for (UInt32 p = 0; p < candidates; p++) {
        // store the updated parameters back to the gen struct
        // {... ... ...}
        ;
    }

    return noErr;
}

Tips for improving performance of a 2d image 'tracing' CUDA kernel?

Can you give me some tips to optimize this CUDA code?
I'm running this on a device with compute capability 1.3 (I need it for a Tesla C1060, although I'm testing it now on a GTX 260, which has the same compute capability) and I have several kernels like the one below. The number of threads I need to execute this kernel is given by long SUM and depends on size_t M and size_t N, which are the dimensions of a rectangular image received as a parameter; it can vary greatly, from 50x50 to 10000x10000 pixels or more, although I'm mostly interested in working on the bigger images with CUDA.
Now each image has to be traced in all directions and angles, and some computations must be done over the values extracted from the tracing. So, for example, for a 500x500 image I need 229080 threads computing the kernel below, which is the value of SUM (that's why I check that the thread id idHilo doesn't go over it). I copied several arrays, all of length SUM, into the global memory of the device one after another, since I need to access them for the calculations. Like this:
cudaMemcpy(xb_cuda,xb_host,(SUM*sizeof(long)),cudaMemcpyHostToDevice);
cudaMemcpy(yb_cuda,yb_host,(SUM*sizeof(long)),cudaMemcpyHostToDevice);
...etc
So each value of every array can be accessed by one thread. All copies are done before the kernel calls. According to the CUDA profiler in Nsight, the highest memcpy duration is 246.016 us for a 500x500 image, so that is not taking too long.
But kernels like the one I copied below are taking too long for any practical use (3.25 seconds according to the CUDA profiler for the kernel below for a 500x500 image, and 5.052 seconds for the kernel with the highest duration), so I need to see if I can optimize them.
I arrange the data this way
First the block dimension
dim3 dimBlock(256,1,1);
then the number of blocks per Grid
dim3 dimGrid((SUM+255)/256);
That gives 895 blocks for a 500x500 image.
I'm not sure how to use coalescing and shared memory in my case, or even whether it's a good idea to call the kernel several times with different portions of the data. The data is independent, so I could in theory call that kernel several times, and not with all 229080 threads at once if need be.
Now take into account that the outer for loop
for(t=15;t<=tendbegin_cuda[idHilo]-15;t++){
depends on
tendbegin_cuda[idHilo]
the value of which depends on each thread, but most threads have similar values for it.
According to the CUDA profiler, the Global Store Efficiency is 0.619 and the Global Load Efficiency is 0.951 for this kernel. Other kernels have similar values.
Is that good? Bad? How can I interpret those values? Sadly, devices of compute capability 1.3 don't provide other useful info for assessing the code, like the Multiprocessor and Kernel Memory or Instruction analysis. The only results I get after the analysis are "Low Global Memory Store Efficiency" and "Low Global Memory Load Efficiency", but I'm not sure how to optimize those.
void __global__ t21_trazo(long SUM, int cT, double Bn, size_t M, size_t N,
                          float *imagen_cuda, double *vector_trazo_cuda,
                          long *xb_cuda, long *yb_cuda, long *xinc_cuda,
                          long *yinc_cuda, long *tbegin_cuda, long *tendbegin_cuda)
{
    long xi;
    long yi;
    int t;
    int k;
    int a;
    int ji;
    long idHilo = blockIdx.x * blockDim.x + threadIdx.x;
    int neighborhood[31];
    int v = 0;

    if (idHilo < SUM) {
        for (t = 15; t <= tendbegin_cuda[idHilo] - 15; t++) {
            xi = xb_cuda[idHilo] + floor((double)t * xinc_cuda[idHilo]);
            yi = yb_cuda[idHilo] + floor((double)t * yinc_cuda[idHilo]);
            neighborhood[v] = floor(xi / Bn);
            ji = floor(yi / Bn);
            if (fabs((double)neighborhood[v]) < M && fabs((double)ji) < N)
            {
                if (tendbegin_cuda[idHilo] > 30 && v == 30) {
                    if (t == 0)
                        vector_trazo_cuda[20 + idHilo * 31] = 0;
                    for (k = 1; k <= 15; k++)
                        vector_trazo_cuda[20 + idHilo * 31] =
                            vector_trazo_cuda[20 + idHilo * 31] +
                            fabs(imagen_cuda[ji * M + (neighborhood[v - (15 + k)])] -
                                 imagen_cuda[ji * M + (neighborhood[v - (15 - k)])]);
                    for (a = 0; a < 30; a++)
                        neighborhood[a] = neighborhood[a + 1];
                    v = v - 1;
                }
                v = v + 1;
            }
        }
    }
}
EDIT:
Changing the DP flops for SP flops only slightly improved the duration. Loop unrolling the inner loops practically didn't help.
Sorry for the unstructured answer; I'm just going to throw out some generally useful comments with references to your code to make this more useful to others.
Algorithm changes are always number one for optimizing. Is there another way to solve the problem that requires less math/iterations/memory etc.?
If precision is not a big concern, use single-precision floating point (or half-precision floating point with newer architectures). Part of the reason it didn't affect your performance much when you briefly tried it is that you're still using double-precision calculations on your floating-point data (fabs takes a double, so if you use it with a float, it converts your float to a double, does double math, returns a double and converts back to float; use fabsf).
If you don't need the absolute full precision of float, use fast math (a compiler option).
Multiplication is much faster than division (especially for full precision/non-fast math). Calculate 1/var outside the kernel and then multiply instead of dividing inside the kernel.
Don't know if it gets optimized out, but you should use increment and decrement operators. v=v-1; could be v--; etc.
Casting to an int will truncate toward zero, while floor() will truncate toward negative infinity. You probably don't need an explicit floor(); likewise, use floorf() for float, as above. When you use it in the intermediate computations on integer types, they're already truncated, so you're converting to double and back for no reason. Use the appropriately typed function (abs, fabs, fabsf, etc.).
if(fabs((double)neighborhood[v]) < M && fabs((double)ji)<N)
change to
if(abs(neighborhood[v]) < M && abs(ji)<N)
vector_trazo_cuda[20+idHilo*31]=vector_trazo_cuda[20+idHilo*31]+
fabs(imagen_cuda[ji*M+(neighborhood[v-(15+k)])]-
imagen_cuda[ji*M+(neighborhood[v-(15-k)])]);
change to
vector_trazo_cuda[20+idHilo*31] +=
fabsf(imagen_cuda[ji*M+(neighborhood[v-(15+k)])]-
imagen_cuda[ji*M+(neighborhood[v-(15-k)])]);
.
xi = xb_cuda[idHilo] + floor((double)t*xinc_cuda[idHilo]);
change to
xi = xb_cuda[idHilo] + t*xinc_cuda[idHilo];
The above line is needlessly complicated. In essence you are doing this,
convert t to double,
convert xinc_cuda to double and multiply,
floor it (returns double),
convert xb_cuda to double and add,
convert to long.
The new line will store the same result in much, much less time (it's also better because, if you exceed the precision of double in the previous case, you would be rounding to the nearest power of 2). Also, those four lines should be outside the for loop... you don't need to recompute them if they don't depend on t. Together, I wouldn't be surprised if this cuts your run time by a factor of 10-30.
Your structure results in a lot of global memory reads; try to read once from global memory, do the calculations in local memory, and write once to global (if at all possible).
Always compile with -lineinfo. It makes profiling easier, and I haven't been able to measure any overhead whatsoever (using kernels in the 0.1 to 10 ms execution time range).
Figure out with the profiler if you're compute or memory bound and devote time accordingly.
Try to allow the compiler to use registers when possible; this is a big topic.
As always, don't change everything at once. I typed all this out without compiling/testing, so I may have made an error.
You may be running too many threads simultaneously. The optimum performance seems to come when you run the right number of threads: enough threads to keep busy, but not so many as to over-fragment the local memory available to each simultaneous thread.
Last fall I built a tutorial to investigate optimization of the Travelling Salesman problem (TSP) using CUDA with CUDAFY. The steps I went through in achieving a several-times speed-up from a published algorithm may be useful in guiding your endeavours, even though the problem domain is different. The tutorial and code is available at CUDA Tuning with CUDAFY.
