(DX12) Read-back buffer for 2D-Texture UAV [closed] - memory-management

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am trying to read-back ray-traced intersections of a ray's recursive path from the GPU to the CPU in DXR.
I am able to render the intersections into a layered unordered access view Texture2D array, so that each layer of the ray-tree corresponds to one layer in this UAV array.
The problem comes when I try to read this data back from the GPU so the CPU can use it. I have not found a way to copy texture data from the GPU to the CPU: I cannot create a Texture2D resource on the readback heap. I am now looking into writing the intersection information into a flattened 1D UAV buffer instead - essentially a g-buffer. However, I am having difficulty sizing it: since every pixel may contain an intersection, I need a buffer of screen-width * screen-height * RAY_RECURSION_DEPTH elements (6 in my case), but the number of elements in a UAV buffer seems to be limited to 345599.
Getting to the point, is there a way for me to read-back from a UAV Texture2D resource? Is there a way for me to create a UAV-Buffer with a larger size than 345599? Or, is there another way I should be going about this altogether?
Thanks.

Readback resources for Direct3D 12 must be buffers (D3D12_RESOURCE_DIMENSION_BUFFER). You create one large enough to hold the Texture2D data (row pitch * height, where the row pitch is padded up to D3D12_TEXTURE_DATA_PITCH_ALIGNMENT) and then use CopyTextureRegion to copy the texture into it on the GPU; once the copy has completed, the CPU can Map the buffer and read the data.
D3D12_RESOURCE_DESC bufferDesc = {};
bufferDesc.Alignment = desc.Alignment;
bufferDesc.DepthOrArraySize = 1;
bufferDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
bufferDesc.Flags = D3D12_RESOURCE_FLAG_NONE;
bufferDesc.Format = DXGI_FORMAT_UNKNOWN;
bufferDesc.Height = 1;
bufferDesc.Width = srcPitch * desc.Height;
bufferDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
bufferDesc.MipLevels = 1;
bufferDesc.SampleDesc.Count = 1;
bufferDesc.SampleDesc.Quality = 0;
See the DirectX Tool Kit ScreenGrab source and Microsoft Docs for a complete example.
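To size the buffer correctly, note that each row of the footprint must start on a 256-byte boundary. In real code you would query this with GetCopyableFootprints; the arithmetic it performs looks roughly like this sketch (function names here are illustrative, not D3D12 API):

```cpp
#include <cassert>
#include <cstdint>

// CopyTextureRegion requires each row of a buffer footprint to start on a
// 256-byte boundary (D3D12_TEXTURE_DATA_PITCH_ALIGNMENT).
constexpr uint64_t kPitchAlignment = 256;

// Pad the tight row size up to the next multiple of the pitch alignment.
uint64_t PaddedRowPitch(uint64_t width, uint64_t bytesPerPixel) {
    uint64_t tight = width * bytesPerPixel;
    return (tight + kPitchAlignment - 1) & ~(kPitchAlignment - 1);
}

// Total readback-buffer size needed for one subresource.
uint64_t ReadbackBufferSize(uint64_t width, uint64_t height,
                            uint64_t bytesPerPixel) {
    return PaddedRowPitch(width, bytesPerPixel) * height;
}
```

In practice, prefer ID3D12Device::GetCopyableFootprints, which returns the padded row pitch and total size for each subresource directly.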


PIC18 Signal Measurement Timer SMT1 (Counter Mode) Not Incrementing [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers.
Closed 2 years ago.
I'm trying to use SMT1 on a PIC18F45K42 to count cycles of a square wave on pin RB0. I can't get the counter to increment and I'm not sure what I'm doing wrong. If I understand correctly, SMT1TMR should be incrementing, but it's not. (I also checked SMT1TMRL, etc., directly and it's not changing.)
1) I am trying to do a normal counter, not gated, so I'm not using the Window signal at all (I don't want to have to use it, I just want to zero the counter from time to time then check to see how many square cycles have arrived).
2) I have registers set as follows (pic below) according to the paused debugger in MPLAB X. I am putting a scope probe directly on the pin and I can see the square wave is arriving. I can also pause the debugger occasionally to read PORTB and see PORTB.0 is changing between high and low, so I believe it is being received.
3) I'm playing with square waves from 20 Hz to around 400 Hz created from a function generator.
I have attached an image of the registers. Here are the values for reference:
SMT1SIGPPS 0x08 (should be RB0)
SMT1CON0 0x80
SMT1CON1 0xC8
SMT1STAT 0x05
SMT1SIG 0x00
TRISB 0xE3
PMD6 0x17 (SMT1MD is 0, which should be "not disabled")
Any suggestions much appreciated. This seems like it should be so simple and straightforward.
Thank you.
I figured it out. The key is in data sheet section 25.1.2, Period Match Interrupt: the period register has to be set to a value larger than the counter will reach. It defaults to 0, so the counter could never increment. I fixed it by loading the three period registers with their maximum value; after adding the following to my init code, it works as expected.
SMT1PRU = 0xFF; //set max period for SMT1 so counter doesn't roll over
SMT1PRH = 0xFF;
SMT1PRL = 0xFF;
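The failure mode is easy to see in a toy model of the counter/period interaction (host C++ for illustration, not PIC code; the reset-on-match behavior is simplified):

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the SMT timer/period interaction: the timer increments on
// each counted edge, and a period match rolls it back to zero. With the
// power-on default period of 0, the timer can never leave zero.
struct SmtModel {
    uint32_t period = 0;  // SMT1PR defaults to 0 after reset
    uint32_t timer  = 0;  // SMT1TMR

    void clockEdge() {
        if (timer >= period)  // period match: counter rolls over
            timer = 0;
        else
            ++timer;
    }
};
```

With `period == 0` every edge is an immediate period match, which is exactly why SMT1TMR appeared frozen until the period registers were loaded.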

What are online down-scaling algorithms? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 6 years ago.
I'm building a circuit that will be reading PAL/NTSC (576i, 480i) frames from analog input. The microcontroller has 32 kB of memory. My goal is to scale down input to 32x16 resolution, and forward this image to LED matrix.
A PAL frame can take ~400 kB of memory, so I thought about down-scaling online: read 18 pixels, decimate to 1; read 45 lines, decimate to 1. Peak memory usage: 45 x 32 = 1.44 kB (45 horizontally-decimated lines awaiting vertical decimation).
Question: What are other online image down-scaling algorithms, other than the above naive one? Googling is extremely hard because online services are being found (PDF resize, etc.)
Note that mentioned formats are interlaced, so you read at first 0th, 2nd, 4th.. lines (first semi-frame), then 1st, 3rd, .. lines (second semi-frame).
If you use simple averaging of the pixel values that fall into each output cell (I suspect that is fine for such a small output matrix), then create an output array (16x32 = 512 entries) and sum the appropriate input values into every cell. You also need a buffer for a single input line (768 or 640 entries).
x_coeff = input_width / out_width
y_coeff = input_height / out_height

out_y = inputrow / y_coeff
for (inputcol = 0 .. input_width - 1)
    out_x = inputcol / x_coeff
    out_array[out_y][out_x] += input_line[inputcol]

inputrow = inputrow + 2            // interlaced: step within the semi-frame
if (inputrow == input_height)      // even lines done, start odd semi-frame
    inputrow = 1
if (inputrow > input_height)       // odd lines done, start next frame
    inputrow = 0

// after a full frame:
divide out_array[][] entries by (x_coeff * y_coeff)

Accessing an element without initializing the vector. Do I need extra space? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am reading about a "trick" (with references to Aho, Hopcroft, Ullman) for using a data vector without explicitly initializing it.
The trick is to use two extra vectors (From and To) and an integer Top.
Before accessing an element DATA[i], if a specific condition between From, To and Top is met, the element i is considered initialized.
If the condition is not met, the element is initialized and From, To and Top are updated as follows:
Top = Top + 1
From[i] = Top
To[Top] = i
Data[i] = 0
The condition to check whether an element has been initialized is:
From[i] <= Top && To[From[i]] == i
If true then it has been initialized.
My question is: why are the extra vectors needed?
From my point of view, if I access an element and i <= Top, then the element is initialized; I then increment i (i.e. i++).
In that case, i <= Top alone would tell me that DATA[i] has been initialized.
Am I not seeing a boundary case? It seems to me this is enough.
Or am I wrong?
If this is the example I am thinking of, then you don't know the order in which the elements of DATA[] will be accessed - it is used as a sparse array, for example for the values in an almost empty hash table. So the first 3 items to be accessed might be DATA[113], DATA[29], and DATA[123123], not DATA[0], DATA[1], and DATA[2]. You could in fact get away without From[], in which case To would store {113, 29, 123123} - but then you would have to search all of To every time you wanted to see if an element of DATA was valid. E.g. to check whether 123123 is valid: To[0] = 113, no luck; To[1] = 29, no luck; To[2] = 123123 - yes, 123123 is valid. From[] exists to make that check O(1) instead of a linear search.
The time-saving idea is that none of To, From, Data need to be initialized beforehand, and all of them can be arrays so large that initialization takes appreciable time.
At the outset, any entry of any of the arrays can have any value. It could be the case, by chance, that for some i, To[From[i]] == i. (That condition can be true by accident, or because Data[i] has genuinely been set.) However, Top counts the number of Data elements set so far, so the combined test From[i] <= Top && To[From[i]] == i distinguishes the two cases completely.
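The whole trick fits in a short class. This is a zero-based variant (From[i] < Top with Top incremented after registration, rather than the question's From[i] <= Top with a pre-incremented Top); std::vector happens to zero-initialize, but nothing below relies on it:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sparse array that never needs its backing storage initialized up front
// (the From/To/Top trick attributed to Aho, Hopcroft & Ullman). Only top_
// must start at 0; data_, from_ and to_ may hold arbitrary garbage.
class LazyArray {
public:
    explicit LazyArray(size_t n) : data_(n), from_(n), to_(n) {}

    // True iff index i has been touched before: from_[i] must point into
    // the used prefix of to_, and to_ must point back at i (the cross-check
    // that defeats garbage values).
    bool initialized(size_t i) const {
        return from_[i] < top_ && to_[from_[i]] == i;
    }

    int& at(size_t i) {
        if (!initialized(i)) {   // first touch: register i as valid
            from_[i] = top_;
            to_[top_] = i;
            ++top_;
            data_[i] = 0;        // default value
        }
        return data_[i];
    }

private:
    std::vector<int> data_;
    std::vector<size_t> from_, to_;
    size_t top_ = 0;
};
```

The payoff is O(1) access with zero setup cost, at the price of two extra words per element.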

Tutorial on Autocorrelation? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 9 years ago.
I've recently been considering using autocorrelation for pitch detection. However, I am finding it difficult to find good sources for learning autocorrelation - by this I mean sources that make it easy to understand step by step.
I'm not a very good programmer yet and not really big on formulas, so the sources I find are really difficult to understand.
Basically, what I know now is that autocorrelation is something like a compare-and-contrast method applied to a signal? I would really appreciate a better understanding of the autocorrelation algorithm.
Thank you very much!
UPDATE: Here is some sample code I got from a site; maybe you can use it as a reference. I've tested this code and it does return the correct pitch (albeit with some incorrect results here and there).
int maxOffset = sampleRate / minFreq;  // longest lag tested (lowest pitch)
int minOffset = sampleRate / maxFreq;  // shortest lag tested (highest pitch)
float maxCorr = 0;
int maxLag = 0;
for (int lag = maxOffset; lag >= minOffset; lag--)
{
    float corr = 0; // sum of products of the signal with its delayed copy
    for (int i = 0; i < framesize; i++)
    {
        int oldIndex = i - lag;
        float sample = ((oldIndex < 0) ? prevBuffer[frames + oldIndex] : buffer[oldIndex]);
        corr += (sample * buffer[i]);
    }
    if (corr > maxCorr)
    {
        maxCorr = corr;
        maxLag = lag;
    }
}
return sampleRate / maxLag;
Here's what I hope is a simple explanation.
Firstly consider how sonar works - you send out a known signal and then compare a received signal with the original - the signals are compared over a range of possible delays and the best match corresponds to the round trip time for the reflected signal.
OK - now think of a periodic signal, such as a sustained middle C note on a piano. If you compare the note with itself at a range of different delays you will get a match for any delay which corresponds to the pitch period of the note. This is essentially what autocorrelation is: comparing a signal with itself over a range of possible delays and getting a peak wherever signal matches the delayed version of itself. For most musical notes the first such peak corresponds to exactly one pitch period, and so you can deduce the pitch from this (pitch or frequency = reciprocal of delay).
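Here is that idea as a minimal, self-contained C++ sketch (no windowing, normalization, or previous-frame handling - real detectors add all three):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Estimate pitch by autocorrelation: compare the signal against delayed
// copies of itself over a range of lags, and report sampleRate / bestLag,
// where bestLag is the delay with the strongest match.
double DetectPitch(const std::vector<float>& x, double sampleRate,
                   double minFreq, double maxFreq) {
    int minLag = static_cast<int>(sampleRate / maxFreq);
    int maxLag = static_cast<int>(sampleRate / minFreq);
    int bestLag = minLag;
    double bestCorr = -1e30;
    for (int lag = minLag; lag <= maxLag; ++lag) {
        double corr = 0;
        for (size_t i = static_cast<size_t>(lag); i < x.size(); ++i)
            corr += x[i] * x[i - lag];  // signal vs. delayed self
        if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
    }
    return sampleRate / bestLag;
}
```

For a pure tone the correlation peaks at every multiple of the period, but the shortest such lag sums the most terms and therefore wins, which is why the first peak gives the pitch period.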

What is the optimal algorithm design for a water-saving urinal? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
At work, we have one of those nasty communal urinals. There is no flush handle. Rather, it has a motion sensor that sometimes triggers when you stand in front of it and sometimes doesn't. When it triggers, a tank fills, which when full is used to flush the urinal.
In my many trips before this nastraption, I have pondered both what algorithm the box uses to decide when to trigger and what the optimal algorithm would be, in terms of conserving water while still maintaining a relatively pleasant urinal experience.
I'll share my answer once folks have had a chance to share their ideas.
OnUserEnter()
{
    if (UsersDetected == 0)
    {
        FirstDetectionTime = Now();
    }
    UsersDetected++;
    CurrentlyInUse = true;
}

OnUserExit()
{
    CurrentlyInUse = false;
    if (UsersDetected >= MaxUsersBetweenFlushes ||
        Now() - FirstDetectionTime > StinkInterval)
    {
        Flush();
    }
}

OnTimer()
{
    if (!CurrentlyInUse &&
        UsersDetected > 0 &&
        Now() - FirstDetectionTime > StinkInterval)
    {
        Flush();
    }
}

Flush()
{
    FlushTheUrinal();
    UsersDetected = 0;
}
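The handlers above translate directly into a testable C++ class (a toy simulation with an integer tick clock; the threshold values are arbitrary examples):

```cpp
#include <cassert>

// Toy simulation of the flush policy above: flush when enough users have
// accumulated, or once the first unflushed use is older than StinkInterval.
class Urinal {
public:
    int flushes = 0;  // count of flushes, standing in for FlushTheUrinal()

    void onUserEnter(int now) {
        if (usersDetected_ == 0) firstDetectionTime_ = now;
        ++usersDetected_;
        inUse_ = true;
    }
    void onUserExit(int now) {
        inUse_ = false;
        if (usersDetected_ >= kMaxUsersBetweenFlushes ||
            now - firstDetectionTime_ > kStinkInterval)
            flush();
    }
    void onTimer(int now) {  // periodic check for the lone-user case
        if (!inUse_ && usersDetected_ > 0 &&
            now - firstDetectionTime_ > kStinkInterval)
            flush();
    }

private:
    static constexpr int kMaxUsersBetweenFlushes = 3;  // example value
    static constexpr int kStinkInterval = 100;         // example value

    void flush() { ++flushes; usersDetected_ = 0; }

    int usersDetected_ = 0;
    int firstDetectionTime_ = 0;
    bool inUse_ = false;
};
```

The OnTimer path matters: without it a single late-evening user would never trigger a flush until the next morning's crowd arrived.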
How do you know that it really isn't a camera that feeds its video to a bank of monitors in the basement where Milton triggers the flush when he sees you walk away from the urinal?
/me puts on his tin-foil hat
The best water-conserving algorithm is a urinal without a handle and a broken sensor.
This seems to be the state of our urinal most of the time, so I suppose it must have been intentionally designed that way in order to conserve precious drinking water.
I would trigger on sense but use a slow fill in the hope that by the time it actually flushes, someone else has had a slash. This approach would minimise stinky stagnation and occasionally skip a flush cycle.
The "parallel-processing" (aka "multi-user") urinals in our school always triggered a complete flush each time before the break bell rings and of course shortly after the "break-is-over" bell. Very simple and effective.
At the risk of sounding Ludditish, I think the best solution is a handle. But that isn't the question. I would assume the mechanism is very simple. Someone moves in front of it, a count starts. When the count is fulfilled, the urinal is "primed". When the person moves away, the trigger is pulled, and the sensor turns off for an arbitrary amount of time (I don't think it has or needs any awareness of the act of flushing/tank-refilling).
Am I overthinking this?
