I've managed to get a buffer pointer from a Bitmap object as follows:
HWND hDesktopWnd = GetDesktopWindow();
HDC hDesktopDC = GetDC(hDesktopWnd);
// Get Screen Dimensions
int nWidth=GetSystemMetrics(SM_CXSCREEN);
int nHeight=GetSystemMetrics(SM_CYSCREEN);
CImage objCImg;
objCImg.Create( nWidth, nHeight, 24, BI_RGB);
HDC hdcMemDC = objCImg.GetDC();
if(!BitBlt(hdcMemDC,0,0,nWidth, nHeight,hDesktopDC, 0,0,SRCCOPY))
{
printf("Error during BitBlt\n");
}
unsigned char* pData = (unsigned char*)objCImg.GetBits();
I can modify the image using this pData.
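(A note on indexing into pData: DIB rows are padded to 4-byte boundaries, so the byte offset of a pixel depends on the row stride, not just width * 3. The sketch below shows the standard DIB stride arithmetic; in CImage the actual value comes from GetPitch(), which can be negative for bottom-up bitmaps.)

```cpp
#include <cstdint>

// Standard Windows DIB stride: each row is padded to a 4-byte boundary.
int32_t dibStride(int32_t width, int32_t bitsPerPixel) {
    return ((width * bitsPerPixel + 31) / 32) * 4;
}

// Byte offset of pixel (x, y) in a top-down buffer with the given stride.
int32_t pixelOffset(int32_t x, int32_t y, int32_t stride, int32_t bytesPerPixel) {
    return y * stride + x * bytesPerPixel;
}
```

For a 24-bpp image 640 pixels wide the stride is 1920 (640 * 3 happens to need no padding), but a 3-pixel-wide row occupies 12 bytes, not 9.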
I am trying to optimize screen-capture timing, but even though this approach involves only a single BitBlt, it takes 94 ms on my PC. Meanwhile, this approach:
// Get the Desktop windows handle and device context
HWND hDesktopWnd = GetDesktopWindow();
HDC hDesktopDC = GetDC(hDesktopWnd);
// Get the handle to the existing DC and create a bitmap file object
HDC hBmpFileDC=CreateCompatibleDC(hDesktopDC);
HBITMAP hBmpFileBitmap=CreateCompatibleBitmap(hDesktopDC,nWidth,nHeight);
// Assign the object in Memory DC to the bitmap and perform a bitblt
SelectObject(hBmpFileDC,hBmpFileBitmap);
BitBlt(hBmpFileDC,0,0,nWidth,nHeight,hDesktopDC,0,0,SRCCOPY|CAPTUREBLT);
// bmpInfo.bmiHeader was initialized earlier (biSize, dimensions, 24 bpp, BI_RGB)
pBuf = malloc(bmpInfo.bmiHeader.biSizeImage); // size of the image in bytes
GetDIBits(hBmpFileDC, hBmpFileBitmap, 0, bmpInfo.bmiHeader.biHeight, pBuf, &bmpInfo, DIB_RGB_COLORS);
This version performs one BitBlt plus a memory copy inside GetDIBits, yet the whole operation takes only 46 ms on my PC.
Can someone clarify the discrepancy between the two BitBlt operations? Why does BitBlt take more time when the destination DC is not created as a compatible DC (CreateCompatibleDC), as in the first case? As I understand it, BitBlt is essentially similar to a memcpy operation.
In turn, I would also like to ask: is there any way to access the image buffer pointer directly from the HDC? That is, can I get a pointer to the image buffer directly from the HDC?
HWND hDesktopWnd = GetDesktopWindow();
HDC hDesktopDC = GetDC(hDesktopWnd);
// Now derive buffer from this **hDesktopDC**
I've also asked the same question before: Unable to access buffer data
Also, can someone please comment on whether Windows allows this kind of data handling?
In my Vulkan application, I'd like to have one memory buffer that can store multiple textures of different sizes. Then, I'd like to have a VkImageView corresponding to each texture in the buffer. I'm unsure exactly how I can create such a buffer; here's what I came up with:
// Create images
VkImage images[TEXTURE_COUNT];
for (int i = 0; i < TEXTURE_COUNT; i++) {
VkImageCreateInfo imageCreateInfo{};
// Specific to texture i
imageCreateInfo.extent.width = ...
imageCreateInfo.extent.height = ...
// Other imageCreateInfo properties are constant across all textures
...
vkCreateImage(device, &imageCreateInfo, nullptr, &images[i]);
}
// Find total size of memory buffer & image offsets
VkDeviceSize totalSize = 0;
VkDeviceSize offsets[TEXTURE_COUNT];
for (int i = 0; i < TEXTURE_COUNT; i++) {
VkMemoryRequirements memoryRequirements;
vkGetImageMemoryRequirements(device, images[i], &memoryRequirements);
offsets[i] = totalSize;
totalSize += memoryRequirements.size;
}
// Get memory type index of memory buffer
VkMemoryRequirements firstImageMemoryRequirements;
vkGetImageMemoryRequirements(device, images[0], &firstImageMemoryRequirements);
int memoryTypeIndex = ... // Get memory type index using firstImageMemoryRequirements.memoryTypeBits
// Allocate memory
VkMemoryAllocateInfo memoryAllocateInfo{};
memoryAllocateInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
memoryAllocateInfo.allocationSize = totalSize;
memoryAllocateInfo.memoryTypeIndex = memoryTypeIndex;
VkDeviceMemory memory;
vkAllocateMemory(device, &memoryAllocateInfo, nullptr, &memory);
// Bind images to memory at corresponding offset
VkBindImageMemoryInfo bindImageMemoryInfos[TEXTURE_COUNT];
for (int i = 0; i < TEXTURE_COUNT; i++) {
VkBindImageMemoryInfo bindImageMemoryInfo{};
bindImageMemoryInfo.sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO;
bindImageMemoryInfo.image = images[i];
bindImageMemoryInfo.memory = memory;
bindImageMemoryInfo.memoryOffset = offsets[i];
bindImageMemoryInfos[i] = bindImageMemoryInfo;
}
vkBindImageMemory2(device, TEXTURE_COUNT, bindImageMemoryInfos);
// Create image views
VkImageView imageViews[TEXTURE_COUNT];
for (int i = 0; i < TEXTURE_COUNT; i++) {
VkImageViewCreateInfo imageViewCreateInfo{};
imageViewCreateInfo.image = images[i];
...
vkCreateImageView(device, &imageViewCreateInfo, nullptr, &imageViews[i]);
}
// Now I have a bunch of image views tied to each texture,
// where each texture is stored in one memory buffer at a certain offset.
Does this seem reasonable, or is this not the right way? One thing that seems a little odd to me is getting the memory type index of the memory buffer. To get it, you need an image, of which I have TEXTURE_COUNT, so I just pick the first one. The only image properties that vary per texture are the extent width and height, so I'm hoping that does not affect the memory type bits of each image. I'm assuming each image will have the same memory type bits, so I can get the memory type index using the memory type bits of the first image. Thoughts on this assumption would be great as well.
For every VkImage you use, it must be stored in a piece of memory appropriate to that particular VkImage object. This means that the offset/size must match the alignment and size for that image object, and the memory type it is being bound to must be one of the memory types that the image can be used with.
This must be queried independently for each VkImage object you use. Usually, anyway; two identical VkImage objects (i.e., created from identical VkImageCreateInfo structures) will have the same requirements, so if you repeat identically created VkImages, you don't need to query their requirements again. There are a few other circumstances that allow images with different creation parameters to have the same requirements, so if you want to take advantage of that, you'll need to look up the details.
If you're operating in an environment where you can control the sizes, formats, usage, and other creation parameters of images, then you can figure out what the requirements are ahead of time for these few kinds of images, and work within those restrictions. Otherwise, you're going to have to do the work of querying requirement information for your images before allocating their memory and then allocating memory for them once you know exactly what you need.
Alternatively, you can just allocate memory as needed in large slabs. That is, if you need a new memory allocation for a specific VkImage (either because the last slab is full or the image needs a new memory type), then you allocate a big block of it, then you can put later images into the same storage. This requires you to keep track of what you've put into which blocks of memory.
One problem with your code is that you don't take into account the memory alignment requirements for the images (VkMemoryRequirements::alignment). You also don't consider the possibility that not all of the images can share the same allocation; you assume they all can use the same memory type.
So you'll need to change your code accordingly.
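The alignment bookkeeping described above can be sketched portably. The numbers below are hypothetical stand-ins for what vkGetImageMemoryRequirements would report per image, and alignUp relies on Vulkan's guarantee that VkMemoryRequirements::alignment is a power of two:

```cpp
#include <cstdint>
#include <vector>

// Round `offset` up to the next multiple of `alignment`. This bit trick
// requires `alignment` to be a power of two, which Vulkan guarantees for
// VkMemoryRequirements::alignment.
uint64_t alignUp(uint64_t offset, uint64_t alignment) {
    return (offset + alignment - 1) & ~(alignment - 1);
}

// (size, alignment) pairs standing in for what vkGetImageMemoryRequirements
// reports for each image.
struct Req { uint64_t size; uint64_t alignment; };

// Compute each image's offset into one shared allocation, honoring each
// image's own alignment, and return the total allocation size via totalOut.
std::vector<uint64_t> computeOffsets(const std::vector<Req>& reqs, uint64_t* totalOut) {
    std::vector<uint64_t> offsets;
    uint64_t total = 0;
    for (const Req& r : reqs) {
        total = alignUp(total, r.alignment);  // align before placing this image
        offsets.push_back(total);
        total += r.size;
    }
    *totalOut = total;
    return offsets;
}
```

With requirements (300, 256), (5000, 4096), and (100, 16), the offsets come out as 0, 4096, and 9104, and the total allocation is 9204 bytes; naive accumulation as in the question's code would have placed the second image at an unaligned offset of 300.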
That being said, the limitations Vulkan imposes on implementations for VkImage memory requirements include a statement that effectively says the memory type an image requires will be the same for all color formats, assuming many of the other creation parameters are the same. So you shouldn't be concerned about getting different memory types just from changing image sizes or using different color formats.
The things that may kick you into using different memory types are mainly usage parameters (and color vs. depth formats). Images intended to be used as render targets can have their own memory types.
I'm writing a rendering app that communicates with an image processor as a sort of virtual camera, and I'm trying to figure out the fastest way to write the texture data from one process to the awaiting image buffer in the other.
Theoretically I think it should be possible with one DirectX copy from VRAM directly to the region of memory I want it in, but I can't figure out how to specify a region of memory for a texture to occupy, and thus must perform an additional memcpy. DX9 or DX11 solutions would be welcome.
So far, the docs here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174363(v=vs.85).aspx have held the most promise.
"In Windows Vista CreateTexture can create a texture from a system memory pointer allowing the application more flexibility over the use, allocation and deletion of the system memory"
I'm running on Windows 7 with the June 2010 DirectX SDK. However, whenever I try to use the function the way the docs describe, it fails with an invalid-arguments error code. Here is the call I tried as a test:
static char s_TextureBuffer[640*480*4]; //larger than needed
void* p = (void*)s_TextureBuffer;
HRESULT res = g_D3D9Device->CreateTexture(640,480,1,0, D3DFORMAT::D3DFMT_L8, D3DPOOL::D3DPOOL_SYSTEMMEM, &g_ReadTexture, (void**)p);
I tried several different texture formats, with no luck. I've begun looking into DX11 solutions; it's going slowly since I'm used to DX9. Thanks!
I have a class that represents a text box. In the constructor of the class I call the CreateWindow function, and I want to store a pointer to this object in the extra window memory, so that in the WndProc function I can get the pointer and use the class members.
I tried to do that with the code below, but it's not working. Can someone write an example of how to do this:
What value should I give the cbWndExtra member of the WNDCLASSEX structure?
How do I call SetWindowLong?
How do I call GetWindowLong?
the code I wrote:
wcex.cbWndExtra = 4;
and I wrote this in the constructor of the text box class:
hWnd = CreateWindow(...);
SetWindowLong(hWnd,0,(LONG)this);
and this in the WndProc function
unique_ptr<TextBox> pTextBox;
pTextBox.reset((TextBox*)GetWindowLong(hWnd,0));
=== edit ===
now I see that if I change the code in the WndProc function to this:
TextBox *pTextBox;
pTextBox = (TextBox*)GetWindowLong(hWnd,0);
it works as well, but with unique_ptr it does not work.
From the MSDN Documentation on "SetWindowLong", about the 'Index' parameter.
The zero-based offset to the value to be set. Valid values are in the range zero through the number of bytes of extra window memory, minus the size of an integer. To set any other value, specify one of the following values.
Positive offsets can point to any byte offset so long as you created the window with at least that amount of "cbWndExtra" in the WNDCLASS structure.
I suspect the issue in this case may be related to the size of the pointer. You are explicitly allocating 4 extra bytes to the end of the window structure, but if you are on a 64-bit system, the pointer size would be 8. This could explain it sometimes working, and sometimes not. (If the high order word happens to be all zeros, it may work even though the address is being truncated.)
If this is the case, you would need to either set the high and low word in two separate calls, or preferably use the 64-bit variant "SetWindowLongPtr".
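The suspected truncation can be demonstrated without any Win32 at all. This hypothetical helper squeezes a 64-bit value through a 32-bit slot, the way a 4-byte cbWndExtra slot (or SetWindowLong's LONG) would on a 64-bit process:

```cpp
#include <cstdint>

// Simulate storing a 64-bit pointer value in a 32-bit window slot:
// the high dword is silently lost on the way in.
uint64_t roundTrip32(uint64_t value) {
    uint32_t stored = static_cast<uint32_t>(value);  // truncation happens here
    return stored;
}
```

A value whose high dword happens to be zero survives the round trip (which is why the code may appear to work sometimes), while a typical 64-bit heap address comes back corrupted.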
Here's a simple example that uses this feature to store two pointers in a window's extra data region. (Note: the following works on both 32- and 64-bit.)
wndclass.cbWndExtra = sizeof(char*) * 2; // Reserve space for 2 pointers.
Then later set the values with:
SetWindowLongPtr(hwnd, 0, (LONG_PTR)firstPtr);
SetWindowLongPtr(hwnd, sizeof(char*), (LONG_PTR)secondPtr); // Index is byte offset.
And retrieve the values with:
LONG_PTR firstPtr = GetWindowLongPtr(hwnd, 0);
LONG_PTR secondPtr = GetWindowLongPtr(hwnd, sizeof(char*));
If you only need to store one single pointer, however, you can get away with not setting any extra memory: leave cbWndExtra at zero and just pass GWLP_USERDATA as the index. Like the other pre-defined values, GWLP_USERDATA is a negative offset 'backwards' into the class/window data. It is reserved space for exactly this kind of purpose, but it can only fit one pointer's worth of data.
Context: I have a piece of code that knows the value of a waveOut handle (HWAVEOUT). However the code did not create the handle, thus the WAVEFORMATEX that was passed to waveOutOpen when creating the handle is unknown.
I want to find out the contents of that WAVEFORMATEX struct that was passed to the waveOutOpen call.
Some more details where this is used: The code runs in a hook function that's invoked instead of waveOutWrite. Thus the code knows the handle value, but does not know the details of the handle creation.
Just so that people do not need to look it up:
The signature of waveOutOpen is
MMRESULT waveOutOpen(
LPHWAVEOUT phwo,
UINT uDeviceID,
LPCWAVEFORMATEX pwfx,
DWORD_PTR dwCallback,
DWORD_PTR dwInstance,
DWORD fdwOpen
);
The signature of waveOutWrite is:
MMRESULT waveOutWrite(
HWAVEOUT hwo,
LPWAVEHDR pwh,
UINT cbwh
);
Note: I am also hooking waveOutOpen, but it could already be called before I have a hook.
You can't get this information from the wave API. You'll have to get it from whoever opened the wave device.
You can get the playback rate using waveOutGetPlaybackRate(), and knowing that, you could (in theory) work out the sample-frame size by timing how long it takes to play a buffer of known size (a zero-filled buffer is always silence). But 8-bit stereo ends up taking the same amount of time to play back as 16-bit mono, and likewise 32-bit-float mono and 16-bit stereo.
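The ambiguity comes straight from the byte-rate arithmetic. This sketch mirrors how nAvgBytesPerSec is defined for PCM formats in WAVEFORMATEX:

```cpp
#include <cstdint>

// Average bytes per second of a PCM stream, as WAVEFORMATEX defines
// nAvgBytesPerSec for PCM: sample rate * channels * bytes per sample.
uint32_t pcmBytesPerSecond(uint32_t sampleRate, uint16_t channels, uint16_t bitsPerSample) {
    return sampleRate * channels * (bitsPerSample / 8u);
}
```

At 44100 Hz, 8-bit stereo and 16-bit mono both come to 88200 bytes per second, so timing a buffer cannot tell them apart.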
I'd say that 99% of the time 16-bit stereo will be the right answer, but when you guess wrong the result sounds really bad (and loud!), so guessing may not be a good idea.
You can also use waveOutMessage() to send custom messages to the wave driver. It's possible that there is some custom_query_wave_format message, but there is no message like that defined in the standard. It's assumed that whoever opened the wave device will keep track of what format (s)he opened it with.
You access the pwfx member just as you would access any other struct member:
myWaveOutOpen.pwfx->wFormatTag
Or the equivalent format in your language.
Your question is hard to understand. I'm not sure what you want...?
I want to get the adapter RAM (graphics RAM) that you can see in Display settings or Device Manager, using an API. I'm in a C++ application.
I have tried searching the net, and per my R&D I have come to the conclusion that we can get the graphics memory info from:
1. The DirectX SDK structure DXGI_ADAPTER_DESC. But what if I don't want to use the DirectX API?
2. Win32_VideoController: but this class does not always give you the AdapterRAM info if the availability of the video controller is offline. I have checked this on Vista.
Is there any other way to get the graphics RAM?
There is NO way to directly access graphics RAM on Windows; Windows prevents you from doing this, as it maintains control over what is displayed.
You CAN, however, create a DirectX device, get the back-buffer surface, and then lock it. After locking you can fill it with whatever you want, then unlock and call Present. This is slow, though, as you have to copy the video memory back across the bus into main memory. Some cards also use "swizzled" formats that have to be un-swizzled during the copy, which adds further time, and some cards will even ban you from doing it.
In general you want to avoid directly accessing the video card and let Windows/DirectX do the drawing for you. Under D3D10/11 I'm pretty sure you can do it via an IDXGIOutput, though. It really is something to try and avoid...
You can write to a linear array via standard Win32 (this example assumes C), but it's quite involved.
First you need the linear array.
unsigned int* pBits = malloc( width * height * sizeof(unsigned int) ); // 32 bits per pixel
Then you need to create a bitmap and select it to the DC.
HBITMAP hBitmap = ::CreateBitmap( width, height, 1, 32, NULL );
SelectObject( hDC, (HGDIOBJ)hBitmap );
You can then fill the pBits array as you please. When you've finished you can then set the bitmap's bits.
::SetBitmapBits( hBitmap, width * height * 4, (void*)pBits );
When you've finished using your bitmap, don't forget to delete it (using DeleteObject) AND free your linear array!
Edit: There is only one way to reliably get the video RAM, and that is to go through the DxDiag interfaces. Have a look at IDxDiagProvider and IDxDiagContainer in the DX SDK.
Win32_VideoController is your best course for getting the amount of graphics memory. That's how it's done in the Doom 3 source.
You say "..availability of the video controller is offline. I have checked this on Vista." Under what circumstances would the video controller be offline?
Incidentally, you can find the Doom3 source here. The function you're looking for is called Sys_GetVideoRam and it's in a file called win_shared.cpp, although if you do a solution wide search it'll turn it up for you.
User-mode threads cannot access memory regions and I/O mapped from hardware devices, including the framebuffer. Anyway, why would you want to do that? Suppose you could access the framebuffer directly: now you must handle a LOT of possible pixel formats. You can assume a 32-bit RGBA or ARGB organization, but there is also the possibility of 15/16/24-bit displays (RGB555, RGBA5551, RGBA4444, RGB565, RGB888...). That's if you don't also want to support video-surface (overlay) formats such as YUV-based ones.
So let the display driver and/or the underlying APIs do that work.
If you want to write to a display surface (which is not exactly the same thing as framebuffer memory, although conceptually it's almost the same), there are a lot of options: DX, Win32, or you may try the SDL library (libsdl).
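To illustrate the format zoo mentioned above, here is the kind of per-format conversion you would be signing up for; RGB565 is just one of the layouts listed, and every other 15/16/24-bit layout would need its own such routine:

```cpp
#include <cstdint>

// Expand a 16-bit RGB565 pixel to 8-bit-per-channel RGB. The 5- and 6-bit
// fields are widened by replicating their high bits into the low bits, so
// full intensity (0x1F or 0x3F) maps to exactly 0xFF.
void rgb565ToRgb888(uint16_t p, uint8_t* r, uint8_t* g, uint8_t* b) {
    uint8_t r5 = (p >> 11) & 0x1F;  // top 5 bits: red
    uint8_t g6 = (p >> 5)  & 0x3F;  // middle 6 bits: green
    uint8_t b5 =  p        & 0x1F;  // bottom 5 bits: blue
    *r = (uint8_t)((r5 << 3) | (r5 >> 2));
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));
}
```

For example, 0xF800 (pure red in 565) expands to (255, 0, 0), and 0xFFFF to (255, 255, 255).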