I think the headline already explains what I want to know.
Is there a way to open and save 16-bit images with Qt? And I don't mean 3*8 = 24-bit or 4*8 = 32-bit, which is essentially still an 8-bit image per channel; I mean a full 16 bits for each of R, G and B.
Contrary to what Patrice says, there is no 16 bits per component format in QImage. The most you can get is QImage::Format_ARGB32 at 8 bits per component. Even if you used 8 bits indexed mode, the color tables do not support more than 8 bits per component. Moreover, the QImageIOHandler class works in terms of QImage, so you cannot create a custom image format plug-in to work with 16 bits per color component, unfortunately.
You can use libpng (png++) for that purpose.
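For example, reading and writing a PNG with 16 bits per channel through png++ might look roughly like this (a minimal sketch; the file names are placeholders and the red-channel inversion is only there to show pixel access):

#include <png++/png.hpp>

int main()
{
    // Load a PNG with 16 bits per channel; png++ keeps the full precision.
    png::image<png::rgb_pixel_16> image("input16.png");

    // Example manipulation: invert the red channel (values are 0..65535).
    for (png::uint_32 y = 0; y < image.get_height(); ++y)
        for (png::uint_32 x = 0; x < image.get_width(); ++x)
            image[y][x].red = 65535 - image[y][x].red;

    image.write("output16.png");   // still 16 bits per channel
    return 0;
}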
QImage::Format does not define a pure 16-bit-per-channel format, but it does define a 10-bit one.
QImage::Format_A2RGB30_Premultiplied (http://doc.qt.io/qt-5/qimage.html) gives you 10 bits per R, G and B channel.
But if you use QImage::setPixel, the pixel is still defined from an 8-bit-per-channel QRgb/QColor value, so the extra precision is lost that way.
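If you are on Qt 5.6 or later, one way to keep more than 8 bits of precision when writing pixels is to go through QColor::fromRgba64 and QImage::setPixelColor instead of setPixel. A minimal sketch (the grey ramp is just an illustration, and the PNG export at the end may still be reduced to 8 bits per channel by the writer):

#include <QImage>
#include <QColor>

int main()
{
    // 10 bits per RGB channel, 2-bit alpha.
    QImage img(256, 256, QImage::Format_A2RGB30_Premultiplied);

    for (int y = 0; y < img.height(); ++y) {
        for (int x = 0; x < img.width(); ++x) {
            // QColor stores 16 bits per channel internally, so the value is
            // only reduced to 10 bits when it is written into the image.
            quint16 v = static_cast<quint16>(x * 257);   // 0..65535 ramp
            img.setPixelColor(x, y, QColor::fromRgba64(v, v, v));
        }
    }

    img.save("ramp.png");
    return 0;
}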
As others already mentioned, there is no format with 16 bits per component supported in Qt for now.
However there is a request opened to implement this, with a patch attached: https://bugreports.qt.io/browse/QTBUG-45858
I'm not sure what the status of this is, or whether it will get integrated.
Qt 5.13 supports this through QImage::Format_Grayscale16.
Here is an example:

QImage raw((uchar*)byte.data(), 480, 640, QImage::Format_Grayscale16);
raw.save(QString("%1/depth.png").arg(fileName));

Use the format QImage::Format_Grayscale16. (QImage::save picks the writer from the file extension and there is no "raw" writer, so save to a format such as PNG, which supports 16-bit grayscale.)
I am attempting to change the cursor in Windows 10 (version 1703) to a custom made one (conditional on some event when a script activates), that is larger than the default 32 by 32 size. The MWE based on my Autohotkey script is the following:
ImagePath = %A_ScriptDir%\foo.cur
Cursor_ID := 32512 ; Standard arrow
Cursor_Size := 128
^0::
SetSystemCursor( ImagePath, Cursor_ID, Cursor_Size, Cursor_Size )
return
SetSystemCursor( path, id, sizeX, sizeY )
{
    Cursor := DllCall( "LoadImage", UInt,0, Str,path, UInt,0x2, Int,sizeX, Int,sizeY, UInt,0x00000010, Ptr )
    DllCall( "SetSystemCursor", Ptr,Cursor, Int,id )
}
(My code is based off of that found at https://autohotkey.com/board/topic/32608-changing-the-system-cursor/.)
As far as I can tell from the documentation of LoadImage, the function SetSystemCursor(...) should load the image with dimensions (sizeX, sizeY) when those parameters are not 0 (since the flag LR_DEFAULTSIZE = 0x00000040 is not set), but instead I get the following behaviour: no matter what sizes I set, the image gets scaled to (sizeX, sizeY), and then down/upscaled to (32, 32). This is most obvious by setting, say Cursor_Size := 2, then I get an upscaled version of a 2 by 2 image.
After some searching around I have found both information suggesting that this should work, and also claims that the size of cursors is always dictated by GetSystemMetrics(SM_CXCURSOR) and GetSystemMetrics(SM_CYCURSOR): The biggest size of Windows Cursor (see also GetSystemMetrics).
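(For reference, the values I mean are just what GetSystemMetrics reports; checking them from C++ is a couple of Win32 calls:)

#include <windows.h>
#include <cstdio>

int main()
{
    // The nominal cursor size; the documentation says cursors of other
    // sizes cannot be created.
    int cx = GetSystemMetrics(SM_CXCURSOR);
    int cy = GetSystemMetrics(SM_CYCURSOR);
    std::printf("System cursor size: %d x %d\n", cx, cy);
    return 0;
}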
Additional tests/ideas I've tried:
I checked the dimensions of the image corresponding to the handle returned by LoadImage, and it seems to be (sizeX, sizeY), just as it should be, therefore the scaling to 32 most likely happens upon executing SetSystemCursor.
I wanted to see if an application-specific cursor could bypass the apparent 32 by 32 restriction, so using Resource Hacker, I replaced one of the resources in Paint. It was scaled down to size 32 in the same way.
Setting the values that are returned by GetSystemMetrics(SM_CXCURSOR) and GetSystemMetrics(SM_CYCURSOR) might be an option if these indeed restrict cursor sizes, but I could not find an appropriate function. I checked SystemParametersInfo, but the only remotely relevant option I found was SPI_SETCURSORS, and that just reloads the cursors from the registry.
It might be possible to change a registry value, though it would not be my preferred solution, as it would most likely require a reboot to take effect. Additionally, I haven't been able to find the relevant key.
My question would therefore be the following:
Is there a way to add an image of arbitrary size as a cursor in Windows 10, preferably without the need to reboot the computer? If so, how? Do SM_CXCURSOR and SM_CYCURSOR absolutely restrict the cursor's size? If they do, can these values be changed somehow?
EDIT:
It has been pointed out that, yes, the documentation of GetSystemMetrics states that "the system cannot create cursors of other sizes" than SM_CXCURSOR and SM_CYCURSOR, but at the same time, on some of the other webpages I linked, people seem to claim they can create arbitrarily sized cursors. Hence my request for confirmation/clarification of the matter.
Apart from that, the question about changing these values, or the existence of any other possible workaround would still be important to me.
I'm having trouble understanding how to port glReadBuffer() & glDrawBuffer() calls into Open GL ES 1.1. Various forum posts on the internet just say "use VBOs," without going into more depth.
Can you please help me understand an appropriate conversion? Say I have:
glReadBuffer(GL_FRONT);
followed by
glDrawBuffer(GL_BACK_LEFT);
state->paint(state_id, f);
How can I write the pixels out?
glReadBuffer and glDrawBuffer just set the source and target for subsequent drawing operations. Assuming you're targeting a monoscopic device, such as the iPhone or an Android device, and have requested two buffers, then you're already set up for drawing to the back buffer. The only means of reading the colour buffer in GL ES is glReadPixels, which will read from the same buffer that you're drawing to.
All of these are completely unrelated to VBOs, which pass off management of arrays of data to the driver, often implicitly allowing them to be put into the GPU's direct address space.
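A minimal sketch of reading back the colour buffer with glReadPixels in GL ES 1.1 (readBackBuffer is just an illustrative helper name; it assumes a width x height viewport and reads RGBA bytes, which is the combination glReadPixels always supports):

#include <GLES/gl.h>
#include <vector>

// Read the current colour buffer into client memory after drawing.
void readBackBuffer(int width, int height, std::vector<unsigned char>& out)
{
    out.resize(static_cast<size_t>(width) * height * 4);

    // Tightly packed rows.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, out.data());
}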
I wrote a program about 10 years ago in Visual Basic 6 which was basically a full-screen game similar to Breakout / Arkanoid but had 'demoscene'-style backgrounds. I found the program, but not the source code. Back then I hard-coded the display mode to 800x600x24, and the program crashes whenever I try to run it as a result. No virtual machine seems to support 24-bit display when the host display mode is 16/32-bit. It uses DirectX 7 so DOSBox is no use.
I've tried all sorts of decompilers, and at best they give me the form names and a bunch of assembly calls which mean nothing to me. The display-mode setting was a DirectX 7 call, but there's no clear reference to it in the decompilation.
In this situation, are there any pointers on how I can:
pin-point the function call in the program which is setting the display mode to 800x600x24 (ResHacker maybe?) and change the value being passed to it so it sets 800x600x32
view/intercept DirectX calls being made while it's running
or if that's not possible, at least
run the program in an environment that emulates a 24-bit display
I don't need to recover the source code (as nice as it would be) so much as just want to get it running.
One technique you could try in your disassembler is to do a search for the constants you remember, but as the actual bytes that would be contained within the executable. I guess you used the DirectDraw SetDisplayMode call, which is a COM object so can't be as easily traced to/from an entry point in a DLL. It takes parameters for width, height and bits per pixel and they are DWORDs (32-bit) so do a search for "58 02 00 00", "20 03 00 00" and "18 00 00 00". Hopefully that will narrow it down to what you need to change.
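If the disassembler's search doesn't cooperate, a crude alternative is to scan the raw executable for those byte sequences yourself. A minimal C++ sketch (findPattern is just an illustrative helper and "game.exe" a placeholder; expect plenty of false positives, especially for the 24 pattern):

#include <algorithm>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

// Print every file offset where a little-endian DWORD pattern occurs.
static void findPattern(const std::vector<unsigned char>& data,
                        const std::vector<unsigned char>& pattern,
                        const char* label)
{
    for (size_t i = 0; i + pattern.size() <= data.size(); ++i)
        if (std::equal(pattern.begin(), pattern.end(), data.begin() + i))
            std::printf("%s found at offset 0x%zx\n", label, i);
}

int main()
{
    std::ifstream f("game.exe", std::ios::binary);   // placeholder name
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(f)),
                                    std::istreambuf_iterator<char>());

    findPattern(data, {0x20, 0x03, 0x00, 0x00}, "800 (width)");
    findPattern(data, {0x58, 0x02, 0x00, 0x00}, "600 (height)");
    findPattern(data, {0x18, 0x00, 0x00, 0x00}, "24 (bpp)");
    return 0;
}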
By the way, which disassembler are you using?
This approach may be complicated somewhat if your VB6 program compiled to p-code rather than native code as you'll just get a huge chunk of data that represents the program rather than useful assembler instructions.
Check this:
http://www.sevenforums.com/tutorials/258-color-bit-depth-display-settings.html
If your graphics card doesn't have an entry for a 24-bit display... I guess hacking your code is the only possibility. That, or finding an old machine to throw Windows 95 on :P.
I want to get the adapter RAM or graphics RAM, which you can see in Display settings or Device Manager, using an API from a C++ application.
I have tried searching the net, and from my research I have come to the conclusion that we can get the graphics memory info from:
1. The DirectX SDK structure called DXGI_ADAPTER_DESC. But what if I don't want to use the DirectX API?
2. Win32_VideoController: but this class does not always give you the AdapterRAM info if the availability of the video controller is offline. I have checked this on Vista.
Is there any other way to get the graphics RAM?
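(For reference, this is roughly what option 1 looks like; a minimal DXGI sketch, which I'd prefer to avoid, assuming you link against dxgi.lib:)

#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        // DedicatedVideoMemory is the figure usually shown as "adapter RAM".
        std::wprintf(L"%ls: %llu MB dedicated video memory\n",
                     desc.Description,
                     static_cast<unsigned long long>(desc.DedicatedVideoMemory / (1024 * 1024)));
        adapter->Release();
    }
    factory->Release();
    return 0;
}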
There is NO way to directly access the graphics RAM on Windows; Windows prevents you from doing this, as it maintains control over what is displayed.
You CAN, however, create a DirectX device, get the back-buffer surface and then lock it. After locking it you can fill it with whatever you want, then unlock it and call Present. This is slow, though, as you have to copy the video memory back across the bus into main memory. Some cards also use "swizzled" formats that have to be un-swizzled during the copy. This adds further time, and some cards will even ban you from doing it.
In general you want to avoid accessing the video card directly and let Windows/DirectX do the drawing for you. Under D3D10/11 I'm pretty sure you can do it via an IDXGIOutput, though. It really is something to try and avoid...
You can write to a linear array via standard Win32 (this example assumes C), but it's quite involved.
First you need the linear array.
unsigned int* pBits = malloc( width * height * sizeof(unsigned int) );
Then you need to create a bitmap and select it to the DC.
HBITMAP hBitmap = CreateBitmap( width, height, 1, 32, NULL );
SelectObject( hDC, (HGDIOBJ)hBitmap );
You can then fill the pBits array as you please. When you've finished you can then set the bitmap's bits.
SetBitmapBits( hBitmap, width * height * 4, (void*)pBits );
When you've finished using your bitmap don't forget to delete it (Using DeleteObject) AND free your linear array!
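Putting those pieces together with the cleanup, a minimal C++ sketch might look like this (drawPixels, the solid red fill and the hdcTarget parameter are only illustrative; in a real program you'd fill pBits with your own data and use your window's DC):

#include <windows.h>

// Build a 32-bit bitmap from a linear pixel array and blit it to a target DC.
void drawPixels(HDC hdcTarget, int width, int height)
{
    // One 0x00RRGGBB value per pixel.
    unsigned int* pBits = new unsigned int[width * height];
    for (int i = 0; i < width * height; ++i)
        pBits[i] = 0x00FF0000;                      // solid red

    HBITMAP hBitmap = CreateBitmap(width, height, 1, 32, NULL);
    SetBitmapBits(hBitmap, width * height * 4, pBits);

    // Select the bitmap into a memory DC and copy it to the target.
    HDC hdcMem = CreateCompatibleDC(hdcTarget);
    HGDIOBJ old = SelectObject(hdcMem, hBitmap);
    BitBlt(hdcTarget, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    DeleteObject(hBitmap);
    delete[] pBits;
}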
Edit: There is only one way to reliably get the video RAM and that is to go through the DxDiag interfaces. Have a look at IDxDiagProvider and IDxDiagContainer in the DX SDK.
Win32_VideoController is your best bet for getting the amount of graphics memory. That's how it's done in the Doom 3 source.
You say "...availability of video controller is offline. I have checked it on Vista." Under what circumstances would the video controller be offline?
Incidentally, you can find the Doom 3 source here. The function you're looking for is called Sys_GetVideoRam and it's in a file called win_shared.cpp, although if you do a solution-wide search it'll turn it up for you.
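For reference, querying Win32_VideoController from C++ goes through WMI; a minimal sketch (error handling mostly omitted, and note that AdapterRAM may simply be missing for an offline controller, which is the case mentioned in the question):

#define _WIN32_DCOM
#include <comdef.h>
#include <Wbemidl.h>
#include <cstdio>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    CoInitializeEx(0, COINIT_MULTITHREADED);
    CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                         RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, (LPVOID*)&locator);

    IWbemServices* services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, 0, 0, 0, &services);

    IEnumWbemClassObject* enumerator = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT AdapterRAM, Name FROM Win32_VideoController"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        NULL, &enumerator);

    IWbemClassObject* obj = nullptr;
    ULONG returned = 0;
    while (enumerator->Next(WBEM_INFINITE, 1, &obj, &returned) == S_OK && returned)
    {
        VARIANT ram, name;
        obj->Get(L"AdapterRAM", 0, &ram, 0, 0);   // VT_UI4, in bytes (may be empty)
        obj->Get(L"Name", 0, &name, 0, 0);
        if (ram.vt == VT_UI4)
            wprintf(L"%ls: %lu MB\n", name.bstrVal, ram.ulVal / (1024 * 1024));
        VariantClear(&ram);
        VariantClear(&name);
        obj->Release();
    }

    enumerator->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}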
User mode threads cannot access memory regions and I/O mapped from hardware devices, including the framebuffer. Anyway, why would you want to do that? Suppose you could access the framebuffer directly: you would then have to handle a LOT of possible pixel formats. You can't assume a 32-bit RGBA or ARGB organization; there is also the possibility of 15/16/24-bit displays (RGB555, RGBA5551, RGBA4444, RGB565, RGB888...), and that's if you don't also want to support video-surface formats (overlays) such as the YUV-based ones.
So let the display driver and/or the underlying APIs do that work for you.
If you want to write to a display surface (which is not exactly the same thing as framebuffer memory, although it's conceptually very close) there are a lot of options: DX, Win32, or you may try the SDL library (libsdl).
I can't find any help on implementing encryption with the PROV_RSA_AES CSP in C++. Is there any article or book to help me out with it?
Here is an article about it.
Here is another one.
I just want to use one. I figured out how to get a context, but I'm still wondering about the size of the buffer I need to pass to CryptEncrypt() to get it working with AES-256. I also want to use a random salt.
AES-256 in CBC mode with PKCS#7 padding (which is the default) needs a buffer size that is the input length rounded up to the next multiple of 16, where an exact multiple still gets one extra full block of padding. I.e. 35 -> 48, 52 -> 64, 80 -> 96.
There is no salt involved in AES256. Are you talking about key-derivation? Or do you mean the IV?
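To make the buffer handling concrete, here is a minimal CryptoAPI sketch of encrypting a buffer with AES-256 under PROV_RSA_AES (the password and message are placeholders, the SHA-256 password hash is just one possible key-derivation choice, and error handling is omitted):

#include <windows.h>
#include <wincrypt.h>
#include <vector>
#include <cstring>
#pragma comment(lib, "advapi32.lib")

int main()
{
    const char* password = "correct horse battery staple";   // placeholder
    const char* message  = "hello, CryptEncrypt";            // placeholder

    HCRYPTPROV hProv = 0;
    CryptAcquireContext(&hProv, NULL, MS_ENH_RSA_AES_PROV, PROV_RSA_AES,
                        CRYPT_VERIFYCONTEXT);

    // Derive an AES-256 key from a SHA-256 hash of the password.
    HCRYPTHASH hHash = 0;
    CryptCreateHash(hProv, CALG_SHA_256, 0, 0, &hHash);
    CryptHashData(hHash, (const BYTE*)password, (DWORD)strlen(password), 0);

    HCRYPTKEY hKey = 0;
    CryptDeriveKey(hProv, CALG_AES_256, hHash, 0, &hKey);

    // Buffer sizing: input length rounded up to the next 16-byte boundary,
    // with a full extra block when the length is already a multiple of 16.
    DWORD dataLen = (DWORD)strlen(message);
    DWORD bufLen  = (dataLen / 16 + 1) * 16;

    std::vector<BYTE> buffer(bufLen);
    memcpy(buffer.data(), message, dataLen);

    // dataLen is updated to the padded ciphertext length on return.
    CryptEncrypt(hKey, 0, TRUE, 0, buffer.data(), &dataLen, bufLen);

    CryptDestroyKey(hKey);
    CryptDestroyHash(hHash);
    CryptReleaseContext(hProv, 0);
    return 0;
}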