How is buffer data accessed in D3D10?

Basically, I'm trying to copy the front or back buffer to a texture, grab the 1x1 mipmap level of said texture, and then spew the resulting color back to the Arduino to control my room's lighting. Everything else is up and running, and I've already gotten it to work via GetDC(NULL) and StretchBlt. But that ran at about 15 FPS, and the Windows GUI was choppy.
The downsampling is just DEMANDING to be done on the GPU.
In D3D9, it seemed like there was simply a GetBackBuffer() or something like that, but I see nothing similar in D3D10. And I'm not even sure it would grab anything from the Windows GUI.
Questions:
- What function(s) would I use?
- Do I need to explicitly create a swap chain beforehand?
- Would this only capture data from other D3D programs?
Okay, here's where I'm at as far as creating the texture goes, but I'm not seeing anything that gets me from the render target back to a texture:
//Create Texture
D3D10_TEXTURE2D_DESC tBufferDesc;
ID3D10Texture2D *tBuffer = NULL;
DXGI_SAMPLE_DESC iBufferSamples = {1,0};
tBufferDesc.Width = iScreenSizeX;
tBufferDesc.Height = iScreenSizeY;
tBufferDesc.MipLevels = 0;
tBufferDesc.ArraySize = 1;
tBufferDesc.Format = DXGI_FORMAT_B8G8R8A8_TYPELESS;
tBufferDesc.SampleDesc = iBufferSamples;
tBufferDesc.Usage = D3D10_USAGE_DEFAULT;
tBufferDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE | D3D10_BIND_RENDER_TARGET;
tBufferDesc.CPUAccessFlags = 0;
tBufferDesc.MiscFlags = D3D10_RESOURCE_MISC_GENERATE_MIPS;
HRESULT tBufferResult = pDevice->CreateTexture2D(&tBufferDesc, NULL , &tBuffer);
hrResult(tBufferResult);
ID3D10RenderTargetView *rtBuffer;
ID3D10Resource *rsBuffer;
pDevice->OMGetRenderTargets(1, &rtBuffer, NULL);
rtBuffer->GetResource(&rsBuffer);
// ...and this is where I'm stuck: how do I get from rsBuffer back to an ID3D10Texture2D?
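For reference, a rough (untested) sketch of the route that should work: get the back buffer from your own swap chain, copy it into the mip-chain texture above, have the GPU generate the mips, then copy the 1x1 level into a staging texture the CPU can Map. So yes, you need a swap chain you created yourself, and no, this only sees your own application's rendering, not the desktop or other programs (capturing those needs GDI or, on newer Windows, DXGI desktop duplication). pSwapChain below is assumed to be that swap chain; error handling is omitted, and tBuffer's format must be in the same format family as the back buffer (for GenerateMips it is simplest to create it as DXGI_FORMAT_B8G8R8A8_UNORM rather than TYPELESS):
// Grab the swap chain's back buffer as a texture.
ID3D10Texture2D *pBackBuffer = NULL;
pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (void**)&pBackBuffer);
// Copy the top level into mip 0 of tBuffer (CopyResource needs identical mip counts, so copy the subresource instead).
pDevice->CopySubresourceRegion(tBuffer, 0, 0, 0, 0, pBackBuffer, 0, NULL);
// Let the GPU build the rest of the mip chain down to 1x1.
ID3D10ShaderResourceView *pSRV = NULL;
pDevice->CreateShaderResourceView(tBuffer, NULL, &pSRV);
pDevice->GenerateMips(pSRV);
// Copy the smallest mip into a 1x1 staging texture the CPU can read.
D3D10_TEXTURE2D_DESC sd = tBufferDesc;
sd.Width = 1; sd.Height = 1; sd.MipLevels = 1;
sd.BindFlags = 0; sd.MiscFlags = 0;
sd.Usage = D3D10_USAGE_STAGING;
sd.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
ID3D10Texture2D *pStaging = NULL;
pDevice->CreateTexture2D(&sd, NULL, &pStaging);
D3D10_TEXTURE2D_DESC td;
tBuffer->GetDesc(&td); // MipLevels was 0 at creation, so read back the real count
UINT lastMip = D3D10CalcSubresource(td.MipLevels - 1, 0, td.MipLevels);
pDevice->CopySubresourceRegion(pStaging, 0, 0, 0, 0, tBuffer, lastMip, NULL);
// Read the single BGRA pixel back on the CPU and send it to the Arduino.
D3D10_MAPPED_TEXTURE2D mapped;
if (SUCCEEDED(pStaging->Map(0, D3D10_MAP_READ, 0, &mapped)))
{
    BYTE *bgra = (BYTE*)mapped.pData; // bgra[0]=B, bgra[1]=G, bgra[2]=R, bgra[3]=A
    pStaging->Unmap(0);
}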

Related

Direct2D fails when drawing a single-channel bitmap

I'm an experienced programmer specialized in Computer Graphics, mainly using Direct3D 9.0c, OpenGL and general algorithms. Currently, I am evaluating Direct2D as a rendering technology for a professional application dealing with medical image data. It is an x64 desktop application in windowed mode (not fullscreen).
Already with my very first steps I'm struggling with a task I thought would be a no-brainer: rendering a single-channel bitmap on screen.
Running on a Windows 8.1 machine, I create an ID2D1DeviceContext with a Direct3D swap chain buffer surface as render target. The swap chain is created from a HWND and buffer format DXGI_FORMAT_B8G8R8A8_UNORM. Note: See also the code snippets at the end.
Afterwards, I create a bitmap with pixel format DXGI_FORMAT_R8_UNORM and alpha mode D2D1_ALPHA_MODE_IGNORE. When calling DrawBitmap(...) on the device context, a debug breakpoint is triggered with the message "D2D DEBUG ERROR - This operation is not compatible with the pixel format of the bitmap".
I know that this output is quite clear. Also, when changing the pixel format to DXGI_FORMAT_R8G8B8A8_UNORM with DXGI_ALPHA_MODE_IGNORE everything works well and I see the bitmap rendered. However, I simply cannot believe that! Graphics cards have supported single-channel textures forever; every 3D graphics application can use them without thinking twice. That should go without saying.
I tried to find anything here and on Google, without success. The only hint I could find was the MSDN Direct2D page listing the supported pixel formats. The documentation suggests, by not mentioning it, that DXGI_FORMAT_R8_UNORM is indeed not supported as a bitmap format. I also found posts talking about alpha masks (using DXGI_FORMAT_A8_UNORM), but that's not what I'm after.
What am I missing that I can't convince Direct2D to create and draw a grayscale bitmap? Or is it really true that Direct2D doesn't support drawing of R8 or R16 bitmaps?
Any help is really appreciated as I don't know how to solve this. If I can't get these trivial basics to work, I think I'd have to stop digging deeper into Direct2D :-(.
And here are the relevant code snippets. Please note that they might not compile since I ported this on the fly from my C++/CLI code to plain C++. Also, I threw away all error checking and other noise:
Device, Device Context and Swap Chain Creation (D3D and Direct2D):
// Direct2D factory creation
D2D1_FACTORY_OPTIONS options = {};
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
ID2D1Factory1* d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &d2dFactory);
// Direct3D device creation
const auto type = D3D_DRIVER_TYPE_HARDWARE;
const auto flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
ID3D11Device* d3dDevice;
D3D11CreateDevice(nullptr, type, nullptr, flags, nullptr, 0, D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
// Direct2D device creation
IDXGIDevice* dxgiDevice;
d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
ID2D1Device* d2dDevice;
d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);
// Swap chain creation
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
IDXGIAdapter* dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);
IDXGIFactory2* dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void **>(&dxgiFactory));
IDXGISwapChain1* swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc, nullptr, nullptr, &swapChain);
// Direct2D device context creation
const auto options = D2D1_DEVICE_CONTEXT_OPTIONS_NONE;
ID2D1DeviceContext* deviceContext;
d2dDevice->CreateDeviceContext(options, &deviceContext);
// create render target bitmap from swap chain
IDXGISurface* swapChainSurface;
swapChain->GetBuffer(0, __uuidof(swapChainSurface), reinterpret_cast<void **>(&swapChainSurface));
D2D1_BITMAP_PROPERTIES1 bitmapProperties;
bitmapProperties.dpiX = 0.0f;
bitmapProperties.dpiY = 0.0f;
bitmapProperties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
bitmapProperties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
bitmapProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
bitmapProperties.colorContext = nullptr;
ID2D1Bitmap1* swapChainBitmap = nullptr;
deviceContext->CreateBitmapFromDxgiSurface(swapChainSurface, &bitmapProperties, &swapChainBitmap);
// set swap chain bitmap as render target of D2D device context
deviceContext->SetTarget(swapChainBitmap);
D2D single-channel Bitmap Creation:
const D2D1_SIZE_U size = { 512, 512 };
const UINT32 pitch = 512;
D2D1_BITMAP_PROPERTIES1 d2dProperties;
ZeroMemory(&d2dProperties, sizeof(D2D1_BITMAP_PROPERTIES1));
d2dProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
d2dProperties.pixelFormat.format = DXGI_FORMAT_R8_UNORM;
char* sourceData = new char[512*512];
ID2D1Bitmap1* d2dBitmap;
deviceContext->CreateBitmap(size, sourceData, pitch, &d2dProperties, &d2dBitmap);
Bitmap drawing (FAILING):
deviceContext->BeginDraw();
D2D1_COLOR_F d2dColor = {};
deviceContext->Clear(d2dColor);
// THIS LINE FAILS WITH THE DEBUG BREAKPOINT IF SINGLE CHANNELED
deviceContext->DrawBitmap(d2dBitmap, nullptr, 1.0f, D2D1_INTERPOLATION_MODE_LINEAR, nullptr);
deviceContext->EndDraw(); // end the D2D frame before presenting
swapChain->Present(1, 0);
From my limited experience, Direct2D does indeed seem very limited.
Have you tried Direct2D effects (ID2D1Effect)? You can write your own [it seems comparatively complicated], or use one of the built-in effects [which is rather simple].
There is one called the Color matrix effect (CLSID_D2D1ColorMatrix). It might work to use your DXGI_FORMAT_R8_UNORM bitmap (or DXGI_FORMAT_A8_UNORM; any single-channel format would do) as the input (inputs to effects are ID2D1Image, and ID2D1Bitmap inherits from ID2D1Image). Then set D2D1_COLORMATRIX_PROP_COLOR_MATRIX to copy the input channel to all output channels. I have not tried it, though.
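A rough sketch of that idea, untested, reusing deviceContext and d2dBitmap from the snippets above (requires d2d1_1.h and d2d1effects.h); the matrix broadcasts the input's red channel to R, G and B and forces alpha to 1:
ID2D1Effect* colorMatrixEffect = nullptr;
deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &colorMatrixEffect);
colorMatrixEffect->SetInput(0, d2dBitmap);
// Rows: input R, G, B, A and a constant offset; columns: output R, G, B, A.
const D2D1_MATRIX_5X4_F matrix = D2D1::Matrix5x4F(
    1, 1, 1, 0,   // input R feeds output R, G and B
    0, 0, 0, 0,
    0, 0, 0, 0,
    0, 0, 0, 0,
    0, 0, 0, 1);  // offset row: force output alpha to 1
colorMatrixEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, matrix);
deviceContext->BeginDraw();
deviceContext->DrawImage(colorMatrixEffect);
deviceContext->EndDraw();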

How to DEBUG OpenGL a gray/black texture box?

I'm altering someone else's code. They used PNGs, which are loaded via BufferedImage. I need to load a TGA instead, which is simply an 18-byte header followed by BGR data. I have the textures loaded and running, but I get a gray box instead of the texture, and I don't even know how to DEBUG this.
Textures are loaded in a ByteBuffer:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
static ByteBuffer buffer = ByteBuffer.allocateDirect(datasize);
FileInputStream fin = new FileInputStream("/Volumes/RAMDisk/shot00021.tga");
FileChannel inc = fin.getChannel();
inc.position(18); // skip header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
I've followed this: [how-to-manage-memory-with-texture-in-opengl][1] ... because I am updating the texture once per frame, like video.
Called once:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
Called repeatedly:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
return textureID;
The render code hasn't changed and is based on:
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
Make sure you set the texture sampling mode, especially the min filter: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR). The default setting is mipmapped (GL_NEAREST_MIPMAP_LINEAR), so unless you upload mipmaps the texture is incomplete and you will get a blank (white) result.
So either set the texture to a non-mipmapped filter or generate the mipmaps. One way to do the latter is to call glGenerateMipmap right after the glTexImage2D call; see the sketch below.
(See https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml.)
It's a very common GL pitfall and something people just tend to know after getting bitten by it a few times.
There is no easy way to debug stuff like this. There are good GL debugging tools in, for example, Xcode, but they will not tell you about this case.
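For reference, a minimal C-style sketch of the two options (in LWJGL these correspond to GL11.glTexParameteri and GL30.glGenerateMipmap); width, height and pixels stand in for your own data:
/* Option 1: disable mipmapping for this texture. */
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
/* Option 2: keep a mipmapped min filter and regenerate the levels after each upload
   (glGenerateMipmap needs OpenGL 3.0+ or GL_EXT_framebuffer_object). */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);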
Debugging GPU code is always a hassle. I would bet my money on big industry progress in this area as more companies discover the power of the GPU. Until then, I'll share my two best GPU debugging friends:
1) Define a function to print OGL errors:
#include <stdio.h>
#include <GL/glu.h>  /* for gluErrorString */
int printOglError(const char *file, int line)
{
    /* Returns 1 if an OpenGL error occurred, 0 otherwise. */
    GLenum glErr;
    int retCode = 0;
    glErr = glGetError();
    while (glErr != GL_NO_ERROR) {
        printf("glError in file %s @ line %d: %s\n", file, line, gluErrorString(glErr));
        retCode = 1;
        glErr = glGetError();
    }
    return retCode;
}
#define printOpenGLError() printOglError(__FILE__, __LINE__)
And call it after your render draw calls (possible earlier errors will also show up):
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
printOpenGLError();
This alerts you to invalid operations (which might just be your case), but you usually have to narrow down where the error occurs by trial and error.
2) Check out gDEBugger, free software with tons of GPU memory information.
[Edit]:
I would also recommend using the open-source lib DevIL; it's quite competent at loading various image formats.
Thanks to Felix: by not calling glTexSubImage2D (leaving the memory valid but uninitialized), I noticed a remnant pattern left by the default memory. This indicated that the texture was being displayed, and that the load was most likely the problem.
UPDATE:
The problem with the code above is essentially the buffer. The buffer is 1024*1024, but it is only partially filled by the read, leaving the limit marker of the ByteBuffer at 2359296 (1024*768*3) instead of 3145728 (1024*1024*3). This gives the error:
Number of remaining buffer elements is ..., must be at least ...
I thought that OpenGL needed space to return data, so I doubled the size of the buffer to compensate:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
This is wrong. What is needed is the flip() function (big THANKS to Reto Koradi for the small hint about the buffer rewind) to put the ByteBuffer in read mode. Since the buffer is only semi-full, the OpenGL buffer check gives an error. The correct thing to do is not to double the buffer size; instead, use buffer.position(buffer.capacity()) to fill the buffer before doing a flip():
final static int datasize = (WIDTH*HEIGHT*3); // not +18 no header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
buffer.position(buffer.capacity()); // make sure buffer is completely FILLED!
buffer.flip(); // flip buffer to read mode
To figure this out, it is helpful to hardcode the memory of the buffer to make sure the OpenGL calls are working, isolating the load problem. Then when the OpenGL calls are correct, concentrate on the loading of the buffer. As suggested by Felix K, it is good to make sure one texture has been drawn correctly before calling glTexSubImage2D repeatedly.
Some ideas which might cause the issue:
Your texture is disposed somewhere. I don't know the whole code, but I guess somewhere there is a glDeleteTextures, and this could cause some issues if called at the wrong time.
Are the texture width and height powers of two? If not, this might be an issue depending on your hardware; old hardware sometimes won't support non-power-of-two textures.
The texture parameters changed between the draw calls at some other point (make a debug check of the parameters with glGetTexParameter; see the sketch below).
There could be a loading issue when loading the next image (edit: or even the first image). Check whether the first image is displayed without loading the next images. If so, it must be one of the cases above.
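A tiny sketch of that parameter check in C-style GL (the LWJGL equivalent is GL11.glGetTexParameteri); textureID is assumed to be the texture handle in question:
/* Dump the current sampling state of the bound texture (needs <stdio.h>). */
GLint minFilter = 0, magFilter = 0, wrapS = 0, wrapT = 0;
glBindTexture(GL_TEXTURE_2D, textureID);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, &minFilter);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, &magFilter);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, &wrapS);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, &wrapT);
printf("min=0x%x mag=0x%x wrapS=0x%x wrapT=0x%x\n", minFilter, magFilter, wrapS, wrapT);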

mayavi volume animation not updating

I’m trying to animate a Mayavi pipeline volume:
src = mlab.pipeline.volume(mlab.pipeline.scalar_field(data),vmin=.1*np.max(data),vmax=.2*np.max(data))
that is combined in the pipeline by another dataset represented as a cut plane.
However, I can't get the volume visualization to update; only the first frame shows up. The animation is stepping through the data correctly (I get different values of np.max(data[t]) below), but nothing in the visualization changes.
My understanding is that mlab_source.set should re-render correctly, and there's nothing on the web anywhere that describes this (as far as I can tell).
The animation looks like:
@mlab.show
@mlab.animate(delay=250, ui=True)
def anim(src, data, tax, fig):
    """Animate."""
    t = 0
    nt = len(tax)
    while 1:
        vmin = .1*np.max(data[t])
        vmax = .2*np.max(data[t])
        print 'animation t = ', tax[t], ', max = ', np.max(data[t])
        src.mlab_source.set(scalar=mlab.pipeline.scalar_field(data[t]), vmin=vmin, vmax=vmax)
        t = mod(t+1, nt)
        yield
Any thoughts?

Creating X11 window to span multiple displays

I'm having the exact problem described here: How to make X11 window span multiple monitors.
I have six monitors and am trying to create a window larger than a single monitor. It keeps getting resized by the window manager.
Apologies if I should have posted within that thread; the etiquette is not clear to me.
Anyhow, I do the following in my code:
/* Pass some information along to the window manager to size the window */
sizeHints.flags = USSize; // | PMinSize;
sizeHints.width = sizeHints.base_width = width;
sizeHints.height = sizeHints.base_height = height;
// sizeHints.min_width = width;
// sizeHints.min_height = height;
// sizeHints.max_width = mScreenWidth;
// sizeHints.max_height = mScreenHeight;
if (geometry->x != DONT_CARE && geometry->y != DONT_CARE) {
sizeHints.x = geometry->x;
sizeHints.y = geometry->y;
sizeHints.flags |= USPosition;
}
XSetNormalHints(mDisplay, mWindow, &sizeHints);
SetTitle(suggestedName);
XSetStandardProperties(mDisplay, mWindow,
suggestedName.toAscii(), suggestedName.toAscii(),
None, (char **)NULL, 0, &sizeHints);
/* Bring it up; then wait for it to actually get here. */
XMapWindow(mDisplay, mWindow);
The problem I'm having is that if I set min_width and min_height, the user cannot resize the window, which is not what I want. But if I don't, then when I make any X11 call later, such as
XGetWindowAttributes(mDisplay, mWindow, &win_attributes);
the window manager resizes my window to fit into one monitor instead of leaving it larger than the monitor. I simply cannot get a window of the desired size. Note that WidthOfScreen and HeightOfScreen give me the combined width and height of all monitors, as expected.
Can anyone help? I hope I'm explaining myself clearly enough.
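One workaround that is sometimes suggested for this situation is to take the window out of the window manager's control entirely by setting override_redirect before mapping it. A rough sketch under that assumption, with the obvious trade-off that an override-redirect window gets no decorations and no WM-managed focus or resizing:
XSetWindowAttributes attrs;
attrs.override_redirect = True;
XChangeWindowAttributes(mDisplay, mWindow, CWOverrideRedirect, &attrs);
/* The WM now ignores the window, so set the geometry explicitly
   to span the whole virtual screen. */
XMoveResizeWindow(mDisplay, mWindow, 0, 0,
                  WidthOfScreen(DefaultScreenOfDisplay(mDisplay)),
                  HeightOfScreen(DefaultScreenOfDisplay(mDisplay)));
XMapRaised(mDisplay, mWindow);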

ExtCreatePen and Windows 7 GDI

I created DIBPATTERN pens with the ExtCreatePen API for custom pattern pens.
It successfully draws the desired lines on Windows XP,
but on Windows 7 (x64 in my case) it does not draw any lines; nothing changes on screen.
(Other, simpler pens, for example CreatePen(PS_DOT,1,0), work fine.)
I found that calling SetROP2(hdc, R2_XORPEN) makes the following line-drawing API calls draw something, but with an XOR operation. I don't want XOR drawing.
Here is my code to create the pen. It has no problem on Windows XP:
LOGBRUSH lb;
lb.lbStyle = BS_DIBPATTERN;
lb.lbColor = DIB_RGB_COLORS;
int cb = sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 2 + 8*4;
HGLOBAL hg = GlobalAlloc(GMEM_MOVEABLE, cb);
BITMAPINFO* pbmi = (BITMAPINFO*) GlobalLock(hg);
ZeroMemory(pbmi, cb);
pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
pbmi->bmiHeader.biWidth = 8;
pbmi->bmiHeader.biHeight = 8;
pbmi->bmiHeader.biPlanes = 1;
pbmi->bmiHeader.biBitCount = 1;
pbmi->bmiHeader.biCompression = BI_RGB;
pbmi->bmiHeader.biSizeImage = 8;
pbmi->bmiHeader.biClrUsed = 2;
pbmi->bmiHeader.biClrImportant = 2;
pbmi->bmiColors[1].rgbBlue =
pbmi->bmiColors[1].rgbGreen =
pbmi->bmiColors[1].rgbRed = 0xFF;
DWORD* p = (DWORD*) &pbmi->bmiColors[2];
for(int k=0; k<8; k++) *p++ = patterns[k];
GlobalUnlock(hg);
lb.lbHatch = (ULONG_PTR) hg; // lbHatch is a ULONG_PTR; a LONG cast would truncate the handle in a 64-bit build
s_aSelectionPens[i] = ExtCreatePen(PS_GEOMETRIC, 1, &lb, 0, NULL);
ASSERT(s_aSelectionPens[i]); // success on both XP and Win7
GlobalFree(hg);
Is this a bug only on my PC? Please check this problem.
Thank you.
This is a known bug with the Windows 7 GDI, though good luck getting Microsoft to acknowledge it.
http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/a70ab0d5-e404-4e5e-b510-892b0094caa3
-Noel
I will admit, I was dubious at first, but I compiled and ran your program, and it does indeed fail to draw the second line on Windows 7, but only in Aero mode.
By switching to Windows Basic or Classic mode, all four lines are drawn, as expected.
I can only assume that this is some kind of bad interaction between your custom pen and the new way Aero mode implements GDI calls. This seems like it might be a Microsoft bug; perhaps you can post this question on one of their message boards?
So you are creating an 8x8 black/white (monochrome) bitmap as a DIB, and then using that to create a pen. I see nothing wrong with this code. This definitely looks like a Windows bug, but there may be a workaround.
Try setting
pbmi->bmiHeader.biClrUsed = 0;
pbmi->bmiHeader.biClrImportant = 0;
In this context, setting the values to 0 should mean the same thing as setting them to 2, but 0 is more standard for situations where you are using the full palette. You still need two entries in your palette; 0 just means "full size based on biBitCount".
Also, each palette entry is an RGBQUAD, which means there is room for alpha, and your alpha is set to 0. It should be ignored, but maybe it isn't, so try setting the high byte (rgbReserved) of your two palette entries to 0xFF or 0x80.
Finally, it's possible that your palette is being ignored entirely and Windows is using the BkMode, BkColor and TextColor of the destination DC for everything, so make sure those are set to values you can see.
My guess is that this has something to do with alpha transparency, since GDI ignores alpha entirely but Aero doesn't.
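Pulling those suggestions together, a sketch of the tweaks relative to the original pen-creation code (untested; pbmi is the question's variable, and hdc stands for whatever DC the lines are drawn into):
// Let GDI derive the palette size from biBitCount.
pbmi->bmiHeader.biClrUsed = 0;
pbmi->bmiHeader.biClrImportant = 0;
// Experiment: force the "alpha" byte of both palette entries, in case Aero honors it.
pbmi->bmiColors[0].rgbReserved = 0xFF;
pbmi->bmiColors[1].rgbReserved = 0xFF;
// Make sure the destination DC's colors are visible before drawing.
SetBkMode(hdc, OPAQUE);
SetBkColor(hdc, RGB(255, 255, 255));
SetTextColor(hdc, RGB(0, 0, 0));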
