We have a piece of software programmed against the DirectX 7 SDK (i.e. the code uses LPDIRECTDRAWSURFACE7 and the like) that runs in fullscreen. Its main task is to put something on the screen in response to external triggers, in a reliable manner. This behaves very well on Windows XP: basically the software waits for a trigger and, when triggered, renders a new frame, puts it in the backbuffer, and tells DirectX to flip the buffers. The resulting delay between the trigger and the frame actually appearing on screen is, depending on video card and drivers, about 3 frames, i.e. 50 ms on a 60 Hz screen. This is tested on a variety of systems, all running NVidia cards. On some systems with higher-end cards we even get 2 frames.
When running the same software on Windows 7 (with no other software installed at all), however, we cannot get lower than 5 frames. Somewhere in the pipeline the OS, the driver, or both eat 2 extra frames, which is close to unacceptable for this application. We tried disabling Aero/desktop composition, different driver versions, and different video cards, but to no avail.
Where does this come from? Is it documented somewhere?
Is there an easy way to fix it? I know DirectX 7 is old, but upgrading the code to compile against a more recent version could be tons of work, so another kind of fix would be nice. Maybe some flag that can be set in code?
Edit: here is some code that seems relevant.
Creation of front/back surfaces:
ddraw7->SetCooperativeLevel( GetSafeHwnd(),
    DDSCL_EXCLUSIVE | DDSCL_FULLSCREEN | DDSCL_ALLOWMODEX | DDSCL_MULTITHREADED );

DDSURFACEDESC2 desc;
ZeroMemory( &desc, sizeof(desc) );
desc.dwSize = sizeof( desc );
desc.dwFlags = DDSD_CAPS | DDSD_BACKBUFFERCOUNT;
desc.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE | DDSCAPS_FLIP |
                      DDSCAPS_COMPLEX | DDSCAPS_3DDEVICE |
                      DDSCAPS_VIDEOMEMORY | DDSCAPS_LOCALVIDMEM;
desc.dwBackBufferCount = 1;
ddraw7->CreateSurface( &desc, &primsurf, 0 );

DDSCAPS2 surfcaps;
ZeroMemory( &surfcaps, sizeof( surfcaps ) );
surfcaps.dwCaps = DDSCAPS_BACKBUFFER;
primsurf->GetAttachedSurface( &surfcaps, &backsurf );
Creation of surfaces used to render frames before they get drawn:
DDSURFACEDESC2 desc;
ZeroMemory( &desc, sizeof(desc) );
desc.dwSize = sizeof(desc);
desc.dwFlags = DDSD_WIDTH | DDSD_HEIGHT | DDSD_CAPS
             | DDSD_PIXELFORMAT; // needed for ddpfPixelFormat below to take effect
desc.dwWidth = w;
desc.dwHeight = h;
desc.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_VIDEOMEMORY;
desc.ddpfPixelFormat.dwSize = sizeof( DDPIXELFORMAT );
desc.ddpfPixelFormat.dwFlags = DDPF_PALETTEINDEXED8;
LPDIRECTDRAWSURFACE7 surf;
HRESULT r = ddraw7->CreateSurface( &desc, &surf, 0 );
Rendering loop, in OnIdle:
// clear the backbuffer
DDBLTFX bltfx;
ZeroMemory( &bltfx, sizeof(bltfx) );
bltfx.dwSize = sizeof( bltfx );
bltfx.dwFillColor = RGBtoPixel( r, g, b );
backsurf->Blt( rect, 0, 0, DDBLT_COLORFILL | DDBLT_WAIT, &bltfx );

// blit some prerendered surface onto it (x/y/rect etc. are calculated properly)
backsurf->BltFast( x, y, sourceSurf, &sourceRect, DDBLTFAST_WAIT );

// present the frame (the code shows both a flip and a blit to the primary surface)
primsurf->Flip( 0, DDFLIP_WAIT );
primsurf->Blt( &drect, backsurf, &srect, DDBLT_WAIT, 0 );
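One knob we can still try in code (a sketch, not yet verified on our systems): force a CPU/GPU sync right after the flip so the driver cannot queue several frames ahead. Locking a tiny region of the primary surface cannot complete until the flip has actually happened, trading a CPU stall for latency:

// Experimental: drain the driver's frame queue. A 1x1 lock of the primary
// surface acts like a "finish" call after Flip().
RECT r = { 0, 0, 1, 1 };
DDSURFACEDESC2 sd;
ZeroMemory( &sd, sizeof(sd) );
sd.dwSize = sizeof(sd);
if( SUCCEEDED( primsurf->Lock( &r, &sd, DDLOCK_WAIT | DDLOCK_READONLY, 0 ) ) )
    primsurf->Unlock( &r );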
I think the Windows XP thing is a red herring. The last version of Windows that ran DirectX 7 natively was Windows 2000. Windows XP is just emulating DX7 on DX9, the same as Windows 7 does.
I'll venture a guess that your application uses palettized textures, and that when DX emulates that functionality (it was dropped after DX7) it generates a texture from the indexed colors. You might try profiling the app with GPUView to see if there's a delay in pushing the texture to the GPU; for example, perhaps the Win7 driver is compressing it first?
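If that guess is right, a quick experiment (a sketch, with w, h and ddraw7 as in your code) is to create the offscreen surfaces without an explicit pixel format, so they inherit the primary surface's RGB format and no palette conversion is needed at blit time. If the two extra frames disappear, the palettized path is the culprit:

DDSURFACEDESC2 desc;
ZeroMemory( &desc, sizeof(desc) );
desc.dwSize = sizeof(desc);
desc.dwFlags = DDSD_WIDTH | DDSD_HEIGHT | DDSD_CAPS; // no DDSD_PIXELFORMAT:
desc.dwWidth = w;                                    // the surface inherits the
desc.dwHeight = h;                                   // primary's pixel format
desc.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_VIDEOMEMORY;
LPDIRECTDRAWSURFACE7 surf = 0;
HRESULT hr = ddraw7->CreateSurface( &desc, &surf, 0 );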
I'm trying to build a Qt 5.6 project in MSVS 2013 Express (I wrote all the code under Qt Creator on Linux). In Visual Studio I could only build it in Release mode, and there it works fine. Then I used the windeployqt.exe utility to create a deployment pack. I also put assimp32.dll next to it (I use it for model loading).
And everything works fine, except the pixel-buffer (PBO) functionality: I draw some stuff to a texture in an additional framebuffer, do some analysis of the result, then prepare another texture and push it for drawing.
I also get some errors in Dependency Walker (msvcr90.dll, Dcomp.dll, API-MS-WIN-CORE-*.dll) even though I've installed every MS Redistributable that exists in this world.
Here is the code that I'm trying to use:
void AUVGBO::PrepareGBO(QOpenGLShaderProgram *shader) {
    if (mEnabled) {
        this->PrepareRender(shader, AUVCamera::PR_PROJECTION | AUVCamera::PR_VIEW |
                            AUVCamera::PR_VIEW_POS | AUVCamera::PR_LIGHT | AUVCamera::PR_LIGHT_POS);
        // Attach framebuffer for intermediate rendering
        m_GL->glBindFramebuffer(GL_FRAMEBUFFER, mFBO);
        m_GL->glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        m_GL->glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        m_GL->glViewport(0, 0, mGBO_Width, mGBO_Height);
    }
}
void AUVGBO::FinishGBO(QOpenGLShaderProgram *shader) {
    // Read pixels from FBO texture
    GLuint indexAsync = mPBO_inIndex;
    GLuint indexSync  = (mPBO_inIndex + 1) % PBO_NUM;

    // Bind pixel buffer for asynchronous reading from the framebuffer
    m_GL->glReadBuffer(GL_COLOR_ATTACHMENT0);
    m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, mPBO_in[indexAsync]);
    m_GL->glReadPixels(0, 0, mGBO_Width, mGBO_Height, GL_RGB, GL_FLOAT, 0); // this call will be async

    if (firstAsyncCalls && (indexSync != 0)) {
        mPBO_inIndex = indexSync;
        return;
    } else {
        firstAsyncCalls = false;
    }

    // Bind the pixel buffer whose data was fetched one step ago
    m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, mPBO_in[indexSync]);
    memcpy(mGBO_Pixels,
           m_GL->glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_WRITE),
           mGBO_Size * 3 * sizeof(GLfloat));
    m_GL->glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // Swap buffers for sync/async readback
    mPBO_inIndex = indexSync;
    m_GL->glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
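One debugging step that costs little (a sketch reusing the same m_GL / mPBO_in members as above): check whether the map itself fails, because memcpy from a NULL pointer is undefined, and a GL error here points at an unsupported readback path.

m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, mPBO_in[indexSync]);
void* ptr = m_GL->glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (!ptr) {
    // GL_INVALID_OPERATION here usually means this PBO path is not
    // supported by the context the deployed exe ended up with.
    qDebug() << "glMapBuffer failed, GL error:" << m_GL->glGetError();
} else {
    memcpy(mGBO_Pixels, ptr, mGBO_Size * 3 * sizeof(GLfloat));
    m_GL->glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}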
So now every pixel in the memory area [mGBO_Pixels : mGBO_Size * 3 * sizeof(GLfloat)] is black, but they shouldn't be. As a test I put the value 0.12345f into the third channel of every pixel:

color = vec4(cos, length(r), 0.12345f, 1.0f);

but *(mGBO_Pixels + 2) is 0.0 when I run the exe file, whereas in Visual Studio everything is OK, as I already said (*(mGBO_Pixels + 2) == 0.12345f).

I found some SO answers where people said it could be a Qt OpenGL bug (their applications crash in initializeGLContext() and the like), but in my situation pushing the created texture to OpenGL works fine, so I guess I've made a mistake somewhere. It drives me crazy. I wish Sarah and John Connor would take the T-800 to Microsoft instead of Cyberdyne.

P.S. I create the release build using CMake on my Arch Linux machine, and everything works like an Avtomat Kalashnikova, so at least on Linux the Qt OpenGL stuff works. If I find enough time I will try CMake with MinGW on Windows.
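One thing worth ruling out (an assumption, not a confirmed diagnosis): on Windows, Qt can silently fall back to ANGLE or a software rasterizer when it dislikes the installed OpenGL driver, and on an ANGLE (GLES2) context a float RGB readback through GL_PIXEL_PACK_BUFFER is not available, which would leave the destination buffer untouched. Forcing a desktop OpenGL context before the QApplication is created, and logging what was actually created, shows whether the deployed exe runs on the same GL implementation as the build inside Visual Studio:

#include <QApplication>
#include <QOpenGLContext>
#include <QDebug>

int main(int argc, char* argv[])
{
    // Must be called before the QApplication object is constructed.
    QCoreApplication::setAttribute(Qt::AA_UseDesktopOpenGL);
    QApplication app(argc, argv);

    // ... later, with a context current, log what we actually got:
    // qDebug() << "OpenGL ES?" << QOpenGLContext::currentContext()->isOpenGLES();

    return app.exec();
}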
Basically, I'm trying to copy the front or back buffer to a texture, grab the 1x1 mipmap level of that texture, and then send the resulting color back to an Arduino to control my room's lighting. Everything else is up and running, and I've already gotten it to work via GetDC(NULL) and StretchBlt, but that ran at about 15 FPS and made the Windows GUI choppy. The downsampling is just DEMANDING to be done on a GPU.

In D3D9 there was simply GetBackBuffer() or something like that, but I see nothing similar in D3D10, and I'm not even sure it would grab anything from the Windows GUI.
Questions:

- What function(s) would I use?
- Do I need to explicitly create a swap chain beforehand?
- Would this only capture data from other D3D programs?
Okay, here's where I'm at as far as creating the texture goes, but I'm not seeing anything that gets me back to textures:
//Create Texture
D3D10_TEXTURE2D_DESC tBufferDesc;
ID3D10Texture2D *tBuffer = NULL;
DXGI_SAMPLE_DESC iBufferSamples = {1,0};
tBufferDesc.Width = iScreenSizeX;
tBufferDesc.Height = iScreenSizeY;
tBufferDesc.MipLevels = 0;
tBufferDesc.ArraySize = 1;
tBufferDesc.Format = DXGI_FORMAT_B8G8R8A8_TYPELESS;
tBufferDesc.SampleDesc = iBufferSamples;
tBufferDesc.Usage = D3D10_USAGE_DEFAULT;
tBufferDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE | D3D10_BIND_RENDER_TARGET;
tBufferDesc.CPUAccessFlags = 0;
tBufferDesc.MiscFlags = D3D10_RESOURCE_MISC_GENERATE_MIPS;
HRESULT tBufferResult = pDevice->CreateTexture2D(&tBufferDesc, NULL , &tBuffer);
hrResult(tBufferResult);
ID3D10RenderTargetView *rtBuffer;
ID3D10Resource *rsBuffer;
pDevice->OMGetRenderTargets(1, &rtBuffer, NULL);
rtBuffer->GetResource(&rsBuffer);
// stuck here: how do I get from rsBuffer back into a texture?
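For what it's worth, here is a sketch of how the remaining steps could look. This is a guess at the intended pipeline, not verified code, and it assumes tBuffer was created with a typed format matching the swap chain (e.g. DXGI_FORMAT_R8G8B8A8_UNORM rather than a _TYPELESS format, so the NULL view description below is legal):

ID3D10Texture2D* backBufTex = NULL;
rsBuffer->QueryInterface(__uuidof(ID3D10Texture2D), (void**)&backBufTex);

// Copy the current back buffer into the mip-mapped texture...
pDevice->CopyResource(tBuffer, backBufTex);

// ...and let the GPU downsample it to 1x1 through the mip chain.
ID3D10ShaderResourceView* srv = NULL;
pDevice->CreateShaderResourceView(tBuffer, NULL, &srv);
pDevice->GenerateMips(srv);

// Pull the 1x1 mip into a staging texture the CPU can map.
D3D10_TEXTURE2D_DESC stagingDesc = tBufferDesc;
stagingDesc.Width = 1;
stagingDesc.Height = 1;
stagingDesc.MipLevels = 1;
stagingDesc.BindFlags = 0;
stagingDesc.MiscFlags = 0;
stagingDesc.Usage = D3D10_USAGE_STAGING;
stagingDesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
ID3D10Texture2D* staging = NULL;
pDevice->CreateTexture2D(&stagingDesc, NULL, &staging);

D3D10_TEXTURE2D_DESC realDesc;
tBuffer->GetDesc(&realDesc); // MipLevels was 0 at creation, so query the real count
pDevice->CopySubresourceRegion(staging, 0, 0, 0, 0,
                               tBuffer, realDesc.MipLevels - 1, NULL);

D3D10_MAPPED_TEXTURE2D mapped;
if (SUCCEEDED(staging->Map(0, D3D10_MAP_READ, 0, &mapped)))
{
    BYTE* rgba = (BYTE*)mapped.pData; // one RGBA texel: the average screen color
    // ... send rgba[0..3] to the Arduino ...
    staging->Unmap(0);
}
// (Release backBufTex, srv, staging, rsBuffer and rtBuffer when done.)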
This is the code:
CvMemStorage* mem123 = cvCreateMemStorage(0);
CvSeq* ptr123;
CvRect face_rect123;
// detects the face if it's frontal
CvHaarClassifierCascade* cascade123 =
    (CvHaarClassifierCascade*)cvLoad("haarcascade_frontalface_alt2.xml");

// detects the position of the head and feeds it into CvRect* face as a rectangle
void HeadDetection(IplImage* frame, CvRect* face){
    ptr123 = cvHaarDetectObjects(frame, cascade123, mem123, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING);
    if (!ptr123) { return; }
    if (!(ptr123->total)) { return; }
    face_rect123 = *(CvRect*)cvGetSeqElem(ptr123, 0); // position of the rectangle
    face->height = face_rect123.height;
    face->width  = face_rect123.width;
    face->x      = face_rect123.x;
    face->y      = face_rect123.y;
}

int main(){
    IplImage* oldframe = cvCreateImage(cvSize(640,480), 8, 3);
    IplImage* frame = cvCreateImage(cvSize(160,120), 8, 3); // [missing in the original snippet] the 4x-downscaled frame
    CvCapture* capture = cvCaptureFromCAM(CV_CAP_ANY);
    CvRect a; a.height = 0; a.width = 0; a.x = 0; a.y = 0;
    while(1){
        oldframe = cvQueryFrame(capture); // real frame captured at 640x480
        cvFlip(oldframe, oldframe, 1);
        cvResize(oldframe, frame);        // frame scaled down 4 times
        HeadDetection(frame, &a);
        cvShowImage("frame", frame);
        cvWaitKey(1);
    }
}
Here if "HeadDetection(frame,&a);" is commented, then using task manager i see that angledetection.exe (name of my project) consumes 20188 Kb memory (No memory leak happening then).
However if I don't comment that the taskmanager shows that some memory leak is happening (around 300Kb/s )
I'm using VS 2010 on 64 bit windows 7 bit OS (core 2 duo).
This code is trying to detect face and get the four corners of square by haar detection in OpenCV 2.1
In case anything is unclear please ask. :-)
Thanks in advance.
You are getting a pointer to an object when you call cvHaarDetectObjects, but you never free the object that ptr123 points to; more precisely, the storage it was allocated from is never cleared, so it grows with every call.

By the way, you should consider refactoring the code and giving the variables better names.
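The usual fix (a sketch against the OpenCV 1.x C API used in the question; untested against this exact code) is to clear the detector's storage every frame and release everything at shutdown:

void HeadDetection(IplImage* frame, CvRect* face){
    // Reclaim everything cvHaarDetectObjects appended to the storage on the
    // previous frame - without this it grows forever (the ~300 KB/s you see).
    cvClearMemStorage(mem123);
    ptr123 = cvHaarDetectObjects(frame, cascade123, mem123, 1.2, 2,
                                 CV_HAAR_DO_CANNY_PRUNING);
    // ... rest of the function as before ...
}

// At shutdown:
cvReleaseMemStorage(&mem123);
cvReleaseHaarClassifierCascade(&cascade123);
cvReleaseCapture(&capture);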
Where, in Windows, is this icon stored? (The icon in question, shown here as an image, is the blue-and-yellow UAC shield.) I need to use it in a TaskDialog emulation for XP and am having a hard time tracking it down.
It's not in shell32.dll, explorer.exe, ieframe.dll or wmploc.dll (as these contain a lot of icons commonly used in Windows).
Edit: For clarification, I am emulating a certain type of dialog on XP, where the icon is (most likely) not present, so I want to extract it from the library that holds it on Windows 7. I am extending an existing implementation of this emulation and want to provide a full feature set.
I want to point this out explicitly: you are supposed to put a shield on UI elements that will trigger an elevation (MSDN: Step 4: Redesign Your UI for UAC Compatibility).
Of course, you don't have to go spelunking around DLLs to extract images (although having the image does make things easier at design time, when you lay out your UI in a designer).
Microsoft provides a couple of supported (and therefore guaranteed) ways that you can get hold of the shield icon at runtime:
Add a shield icon to the user interface:
Extract a small icon
SHSTOCKICONINFO sii;
sii.cbSize = sizeof(sii);
SHGetStockIconInfo(SIID_SHIELD, SHGSI_ICON | SHGSI_SMALLICON, &sii);
hiconShield = sii.hIcon;
Extract a large icon
SHSTOCKICONINFO sii;
sii.cbSize = sizeof(sii);
SHGetStockIconInfo(SIID_SHIELD, SHGSI_ICON | SHGSI_LARGEICON, &sii);
hiconShield = sii.hIcon;
Extract an icon of custom size
SHSTOCKICONINFO sii;
sii.cbSize = sizeof(sii);
SHGetStockIconInfo(SIID_SHIELD, SHGSI_ICONLOCATION, &sii);
hiconShield = ExtractIconEx(sii. ...);
Add a Shield Icon to a Button
Button_SetElevationRequiredState(hwndButton, TRUE);
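For reference, here is the small-icon case as one self-contained C++ snippet (my consolidation of the above, Vista and later; remember that SHGSI_ICON hands you an icon you own):

#include <windows.h>
#include <shellapi.h>   // SHGetStockIconInfo / SHSTOCKICONINFO
#pragma comment(lib, "shell32.lib")

HICON LoadShieldIconSmall()
{
    SHSTOCKICONINFO sii = { sizeof(sii) };  // cbSize must be set
    if (FAILED(SHGetStockIconInfo(SIID_SHIELD, SHGSI_ICON | SHGSI_SMALLICON, &sii)))
        return NULL;
    return sii.hIcon;  // caller must DestroyIcon() this when done
}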
The article forgot to mention LoadIcon:

hIconShield = LoadIcon(0, IDI_SHIELD);

although LoadIcon has been "superseded" by LoadImage:

hIconShield = LoadImage(0, IDI_SHIELD, IMAGE_ICON, desiredWidth, desiredHeight, LR_SHARED); // you must pass LR_SHARED - and LR_SHARED causes the requested size to be ignored
Loading the size you want - by avoiding shared images
In order to avoid loading a "shared" version of the icon, you have to load the icon directly out of the file.
We all know that the shield exists in user32.dll as resource id 106:
| Icon            | Standard Icon ID | Real Resource ID |
|-----------------|------------------|------------------|
| IDI_APPLICATION | 32512            | 100              |
| IDI_WARNING     | 32515            | 101              |
| IDI_QUESTION    | 32514            | 102              |
| IDI_ERROR       | 32513            | 103              |
| IDI_INFORMATION | 32516            | 104              |
| IDI_WINLOGO     | 32517            | 105              |
| IDI_SHIELD      | 32518            | 106              |
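Concretely, that mapping means the shield can be pulled straight out of user32.dll. A sketch that relies on the undocumented resource ID from the table above, so it may break in any future Windows:

// Undocumented: resource 106 in user32.dll is the shield (see the table).
HMODULE hUser32 = GetModuleHandleW(L"user32.dll"); // already loaded in any GUI process
HICON hShield = (HICON)LoadImage(hUser32, MAKEINTRESOURCE(106), IMAGE_ICON,
                                 48, 48, 0);       // no LR_SHARED: we own the icon
// ... use it ...
DestroyIcon(hShield);                              // non-shared icons must be destroyed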
That, though, was undocumented spelunking.
SHGetStockIconInfo can give us the actual, current, guaranteed to change in the future, path and index:
TSHStockIconInfo sii;
ZeroMemory(@sii, SizeOf(sii));
sii.cbSize := SizeOf(sii);
SHGetStockIconInfo(SIID_SHIELD, SHGSI_ICONLOCATION, {var}sii);
resulting in:
sii.szPath: C:\WINDOWS\System32\imageres.dll
sii.iIcon: -78
You can load this image for the size you desire using LoadImage:
HMODULE hmod := LoadLibrary(sii.szPath);
Integer nIconIndex := Abs(sii.iIcon); //-78 --> 78
ico = LoadImage(hmod, MAKEINTRESOURCE(nIconIndex), IMAGE_ICON, 256, 256, 0);
Another slightly easier way is to use SHDefExtractIcon:
HICON GetStockIcon(DWORD StockIconID, Integer IconSize)
{
HRESULT hr;
TSHStockIconInfo sii;
ZeroMemory(@sii, SizeOf(sii));
sii.cbSize := SizeOf(sii);
hr = SHGetStockIconInfo(StockIconID, SHGSI_ICONLOCATION, {var}sii);
OleCheck(hr);
HICON ico;
hr = SHDefExtractIcon(sii.szPath, sii.iIcon, 0, ref ico, null, IconSize);
OleCheck(hr);
return ico;
}
It does the loading for you, and it handles the negative icon index (and the secret meaning that has):
HICON shieldIcon = GetStockIcon(SIID_SHIELD, 256);
Personally, I then use WIC to wrap that in an IWICBitmap:
IWICBitmap GetStockWicBitmap(DWORD StockIconID, Integer IconSize)
{
HICON ico = GetStockIcon(StockIconID, IconSize);
IWICBitmap bitmap;
IWICImagingFactory factory = new WICImagingFactory();
HRESULT hr = factory.CreateBitmapFromHICON(ico, out bitmap);
OleCheck(hr);
return bitmap;
}
and so:
IWICBitmap bmp = GetStockWicBitmap(SIID_SHIELD, 256);
Now that you have the bitmap, at runtime, do with it what you want.
Small and Large
The problem with ExtractIconEx is that you're again stuck with the two shell sizes:
"small" (i.e. GetSystemMetrics(SM_CXSMICON))
"large" (i.e. GetSystemMetrics(SM_CXICON))
Loading icons is quite a dark art in Windows:
LoadIcon
LoadImage
LoadImage(..., LR_SHARED)
ExtractIcon
ExtractIconEx
IExtractImage
SHDefExtractIcon
SHGetFileInfo(..., SHGFI_ICON, ...);
SHGetFileInfo(..., SHGFI_SYSICONINDEX, ...);
SHGetFileInfo(..., SHGFI_ICONLOCATION, ...);
IThumbnailProvider
Icons available through SHGetStockIconInfo

Microsoft provides a handy page with an example and description of all the stock icons: SHSTOCKICONID (archive). That page also shows the 256px shield icon (as of Windows 10).
The shield icon is located in the file C:\Windows\System32\imageres.dll (at least, in my copy of English 32-bit Windows 7). There are several versions of the shield icon there, including the blue and yellow version you have above (icon 78).
Icons extracted from Windows 7 x64 SP1 English (the 16x16, 24x24, and 32x32 shield icons were shown here as images).
You are asking the wrong question. It doesn't matter where this icon is stored on any version of Windows: if Microsoft doesn't tell you, you should not use it, because it might not be there in Windows 8 (or whatever comes after 7).

If you want the icon that badly, there is a decent graphical representation of it above in this question. You could do Alt+PrtScn and then use your favourite graphics app to turn it into an icon you ship with your app. This may not be legal, though (remember, IANAL).
I created DIBPATTERN pens with the ExtCreatePen API for custom pattern pens. They successfully draw the desired lines on Windows XP, but on Windows 7 (x64 in my case) they draw nothing; there are no changes on screen. (Other, simply created pens, for example CreatePen(PS_DOT,1,0), do work.)

I found that calling SetROP2(hdc, R2_XORPEN) makes the following line-drawing API calls draw something, but with an XOR operation, and I don't want XOR drawing.

Here is my code to create the pen; it has no problem on Windows XP:
LOGBRUSH lb;
lb.lbStyle = BS_DIBPATTERN;
lb.lbColor = DIB_RGB_COLORS;

int cb = sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 2 + 8*4;
HGLOBAL hg = GlobalAlloc(GMEM_MOVEABLE, cb);
BITMAPINFO* pbmi = (BITMAPINFO*) GlobalLock(hg);
ZeroMemory(pbmi, cb);
pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
pbmi->bmiHeader.biWidth = 8;
pbmi->bmiHeader.biHeight = 8;
pbmi->bmiHeader.biPlanes = 1;
pbmi->bmiHeader.biBitCount = 1;
pbmi->bmiHeader.biCompression = BI_RGB;
pbmi->bmiHeader.biSizeImage = 8;
pbmi->bmiHeader.biClrUsed = 2;
pbmi->bmiHeader.biClrImportant = 2;
pbmi->bmiColors[1].rgbBlue =
pbmi->bmiColors[1].rgbGreen =
pbmi->bmiColors[1].rgbRed = 0xFF;

DWORD* p = (DWORD*) &pbmi->bmiColors[2];
for (int k = 0; k < 8; k++) *p++ = patterns[k]; // patterns[] holds the 8 scanlines

GlobalUnlock(hg);
lb.lbHatch = (ULONG_PTR) hg; // lbHatch is ULONG_PTR; a LONG cast would truncate in 64-bit builds
s_aSelectionPens[i] = ExtCreatePen(PS_GEOMETRIC, 1, &lb, 0, NULL);
ASSERT(s_aSelectionPens[i]); // succeeds on both XP and Win7
GlobalFree(hg);
Is this a bug only on my PC? Please check this problem.

Thank you.
This is a known bug with the Windows 7 GDI, though good luck getting Microsoft to acknowledge it.
http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/a70ab0d5-e404-4e5e-b510-892b0094caa3
-Noel
I will admit, I was dubious at first, but I compiled and ran your program, and it does indeed fail to draw the second line on Windows 7, but only in Aero mode. By switching to Windows Basic or Classic mode, all four lines are drawn, as expected.

I can only assume this is some kind of bad interaction between your custom pen and the new way Aero mode implements GDI calls. This seems like it might be a Microsoft bug; perhaps you can post this question on one of their message boards?
So you are creating an 8x8 black/white (monochrome) bitmap as a DIB, and then using that to create a pen. I see nothing wrong with this code; this definitely looks like a Windows bug, but there may be a workaround.
Try setting

pbmi->bmiHeader.biClrUsed = 0;
pbmi->bmiHeader.biClrImportant = 0;

In this context, setting the values to 0 should mean the same thing as setting them to 2, but 0 is the more standard choice when you are using the full palette. You still need two entries in your palette; 0 just means "full size based on biBitCount".
Also, each palette entry is an RGBQUAD, which means there is room for alpha, and your alpha is set to 0. That should be ignored, but maybe it isn't, so try setting the high byte of your two palette entries to 0xFF or 0x80.
Finally, it's possible that your palette is being ignored entirely and Windows is using the BkMode, BkColor, and TextColor of the destination DC for everything, so make sure those are set to values you can actually see.

My guess is that this has something to do with alpha transparency, since GDI ignores alpha entirely but Aero doesn't.
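If the DIB path stays broken, one workaround worth trying (my suggestion, untested against this exact bug) is to sidestep BS_DIBPATTERN entirely and give ExtCreatePen a monochrome HBITMAP through BS_PATTERN instead, since device-bitmap pattern brushes take a different code path in GDI:

// Sketch: the same 8x8 pattern, but as a DDB-based pattern pen.
// For a 1bpp bitmap 8 pixels wide, CreateBitmap expects each scanline
// padded to a 16-bit boundary: one data byte plus one pad byte per row.
BYTE bits[8][2] = {};
for (int k = 0; k < 8; k++)
    bits[k][0] = (BYTE)patterns[k];          // patterns[] as in the question
HBITMAP hbmPattern = CreateBitmap(8, 8, 1, 1, bits);

LOGBRUSH lb;
lb.lbStyle = BS_PATTERN;
lb.lbColor = 0;                              // ignored for BS_PATTERN
lb.lbHatch = (ULONG_PTR)hbmPattern;
HPEN hPen = ExtCreatePen(PS_GEOMETRIC, 1, &lb, 0, NULL);
// Keep hbmPattern alive while the pen is in use, then DeleteObject both.
// Note the foreground/background mapping of the monochrome bits may be
// inverted relative to the DIB version (it follows the DC's text/back colors).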