I am trying to find out what actually happens in the background when we do this (please see the image).
As you can see in the image, I have added a few buttons and have checked Content View in Interface Builder for the window.
Now, as we know, this will make use of Core Animation, or rather will create layers. (Please correct me if I am wrong. Still studying...)
I want to know how these buttons are drawn.
My assumption is that when we tick Content View, these buttons are drawn into a CGBitmapContextRef and the bitmap created from it is handed over to Core Animation (OpenGL). But I have not been able to prove it so far. How do I prove it?
Any example or approach idea would be great.
The thing I am sure of is that the buttons are created from a CGBitmapContextRef. But what happens to those button images is unknown.
Can anyone explain how that integration is possible? How would those images have gotten onto the screen?
Edit:
To add some more information on the same topic, please check the image below for the layers of OpenGL. I think I am targeting the common OpenGL framework layer.
I would start by making a tight loop that re-draws your buttons forever. Then, while it's running, use Activity Monitor to do a sample trace of your process. You'll see all the code paths it's taking to draw the buttons. You should be able to see what's happening from the names of the routines in the drawing stack. If you can't make sense of it, post the relevant bits here and we can take a look.
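For example, a minimal sketch of such a redraw loop (the startRedrawLoop/forceRedraw: methods and the self.button outlet are just assumptions for illustration):

// Assumed: self.button is an outlet to one of the buttons on the window.
- (void)startRedrawLoop
{
    [NSTimer scheduledTimerWithTimeInterval:0.0   // fire as often as the run loop allows
                                     target:self
                                   selector:@selector(forceRedraw:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)forceRedraw:(NSTimer *)timer
{
    [self.button setNeedsDisplay:YES];   // mark the button dirty so AppKit redraws it
}

While that runs, Activity Monitor's "Sample Process" (or the sample command-line tool) should show you the drawing call stacks.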
Buttons are drawn on a CGBitmapContextRef.
Let's say we have a CGBitmapContextRef object created using:
CGContextRef CGBitmapContextCreate (
void *data,
size_t width,
size_t height,
size_t bitsPerComponent,
size_t bytesPerRow,
CGColorSpaceRef colorspace,
CGBitmapInfo bitmapInfo
);
Here, void *data is a pointer to the destination in memory where the drawing is to be rendered.
The CGContext API can then be used to perform various operations on the data; thus the buttons and the background can be drawn on it.
Once done, we can release the CGContextRef, but the data is still in memory and can be passed to an OpenGL context (CGLContextObj).
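A minimal sketch of that idea (the function name and sizes are just for illustration; it assumes an RGBA buffer we allocate ourselves):

#include <ApplicationServices/ApplicationServices.h>   // Core Graphics
#include <stdlib.h>

// Draw a button-like rectangle into a bitmap context whose backing memory we own.
// Afterwards 'data' still holds the rendered pixels and could, in principle, be
// uploaded to OpenGL (e.g. with glTexImage2D) or wrapped in a CGImage.
void DrawButtonLikeRect(size_t width, size_t height)
{
    size_t bytesPerRow = width * 4;                    // 4 bytes per pixel (RGBA)
    void *data = calloc(height, bytesPerRow);          // the destination buffer

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);

    // Ordinary Quartz drawing; a real button is of course far more elaborate.
    CGContextSetRGBFillColor(ctx, 0.8, 0.8, 0.8, 1.0);
    CGContextFillRect(ctx, CGRectMake(10, 10, width - 20, height - 20));

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);

    free(data);   // in a real pipeline you would hand 'data' off before freeing it
}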
I still do not know how it uploads the data to the CGLContextObj. It must be using some private API.
Related
While there are lots of variations of this question, there doesn't seem to be a specific answer for the simple case of wanting to use built-in common controls on a transparent window using Win32. I don't want the controls themselves to be transparent; I just want the border around them to be transparent. I can't believe MS didn't update the DLLs to handle transparency when they added it, but I guess they forgot? Is there a specific method that works? A button can get close with WS_EX_TRANSPARENT, but it's flaky: it works most of the time, but at times part of the border shows up. Edit controls change depending on whether they have focus or not.
So the question is simply:
Is there a way to make common controls on transparent window so there is no white border around them?
If not, is there a good replacement library that does it via owner draw?
If so, which ones, and what is the method?
Seems silly to reinvent the wheel just because of the area around the control.
TIA!!
If I am not mistaken, you can take the following steps to achieve this effect.
1. Create a GDI+ Bitmap object with the PixelFormat32bppPARGB pixel format.
2. Create a Graphics object to draw in this Bitmap object.
3. Do all your drawing into this object using GDI+.
4. Destroy the Graphics object created in step 2.
5. Call the GetHBITMAP method on the Bitmap object to get a Windows HBITMAP.
6. Destroy the Bitmap object.
7. Create a memory DC using CreateCompatibleDC and select the HBITMAP from step 5 into it.
8. Call UpdateLayeredWindow using the memory DC as a source.
9. Select the previous bitmap and delete the memory DC.
10. Destroy the HBITMAP created in step 5.
This method should allow you to control the alpha channel of everything that is drawn: transparent for the background, opaque for the button.
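A rough C++ sketch of the steps above (error handling omitted; it assumes a layered window created with WS_EX_LAYERED and that GDI+ has already been started with GdiplusStartup):

#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

void PaintLayered(HWND hwnd, int width, int height)
{
    Bitmap bmp(width, height, PixelFormat32bppPARGB);               // step 1
    {
        Graphics g(&bmp);                                           // step 2
        g.Clear(Color(0, 0, 0, 0));                                 // fully transparent background
        SolidBrush brush(Color(255, 40, 120, 220));
        g.FillRectangle(&brush, 20, 20, 100, 40);                   // step 3: an opaque area
    }                                                               // step 4: Graphics destroyed here

    HBITMAP hbmp = NULL;
    bmp.GetHBITMAP(Color(0, 0, 0, 0), &hbmp);                       // step 5

    HDC screenDC = GetDC(NULL);
    HDC memDC = CreateCompatibleDC(screenDC);                       // step 7
    HGDIOBJ oldBmp = SelectObject(memDC, hbmp);

    SIZE size = { width, height };
    POINT srcPos = { 0, 0 };
    BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
    UpdateLayeredWindow(hwnd, screenDC, NULL, &size,                // step 8
                        memDC, &srcPos, 0, &blend, ULW_ALPHA);

    SelectObject(memDC, oldBmp);                                    // step 9
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    DeleteObject(hbmp);                                             // step 10
}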
A similar discussion: Transparent window containing opaque text and buttons
First of all, keep in mind that I am a beginner in win32, so I am very likely to be missing the obvious.
I am working with Code::Blocks, C++, and Win32. I am making a program that:
would load an image from a file
would load some info from another file and draw it over the image.
The program would then draw additional stuff over the image later on. Also, I don't need this drawing to be actually incorporated into the image; the image only acts as a reference for the drawing.
I have managed to display the image in a child (static) window and I have successfully drawn the info onto the main window. When I wanted to combine the two so the drawing would go over the image, however, I got stuck - I didn't know what window to draw to and which message to process for the drawing. I have searched the Internet for any hints, examples, anything, but I found nothing. (This is probably because I didn't know exactly how to describe my problem.)
I have been trying different things over the past few days, like drawing to the static control with the image, and trying to paint to a transparent static control on top of the one for the image, but nothing worked.
If anyone could give me any hints, that would be great! Thanks!
Trap the WM_PAINT message for the window you want to draw on. In the handler, add code to draw the image first (perhaps with the BitBlt function) and then the drawing you want on top. You must also handle the WM_ERASEBKGND message, which is used to erase the background of the window when resizing, etc.
Refer: WM_PAINT message, WM_ERASEBKGND message
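A rough sketch of that handling (g_hBitmap stands in for your loaded image, and the line drawing stands in for your own info):

#include <windows.h>

extern HBITMAP g_hBitmap;   // assumed: loaded earlier, e.g. with LoadImage

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        return 1;   // we paint the whole client area ourselves, so skip erasing

    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        // First the reference image...
        HDC memDC = CreateCompatibleDC(hdc);
        HGDIOBJ oldBmp = SelectObject(memDC, g_hBitmap);
        BITMAP bm;
        GetObject(g_hBitmap, sizeof(bm), &bm);
        BitBlt(hdc, 0, 0, bm.bmWidth, bm.bmHeight, memDC, 0, 0, SRCCOPY);
        SelectObject(memDC, oldBmp);
        DeleteDC(memDC);

        // ...then your own drawing on top of it.
        MoveToEx(hdc, 10, 10, NULL);
        LineTo(hdc, 200, 150);

        EndPaint(hwnd, &ps);
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}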
OK, so I have a problem. I have a method called imageFromText; it requires one parameter, the string itself, and it returns an NSImage. I also have another one called imageFromView; this one basically has to "take a screenshot" of the view and return an NSImage, and it also has only one parameter, the view itself. So it looks like this:
-(NSImage*)imageFromText: (NSString*)text {
}
-(NSImage*)imageFromView: (NSView*)view {
}
There's only one problem: I have no idea how to do this. I spent my afternoon searching around and I didn't find anything. For the second one I've tried the method dataWithPDFInsideRect:, but obviously this method was not made for this purpose. Please help me out!
PLEASE NOTE: I'M NOT ASKING FOR READY-MADE CODE. LIKE THE OLD SAYING (IN MY COUNTRY): DON'T GIVE THEM THE FISH, TEACH THEM HOW TO FISH. (TRANSLATED).
An alternative way is to lock focus on the view, then create a bitmap image rep with the contents of the view's bounds. You can then create a blank image whose size is the size of the bounds, and add the image rep to it.
The third way is dataWithPDFInsideRect:. Yes, the one you tried and couldn't get to work (I wish you'd explained what problem you had with it instead of just dismissing it!). Pass the view's bounds, then pass the data to NSImage's initWithData:.
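A minimal sketch of those two approaches (the method names here are just placeholders; ARC assumed):

// 1. Lock focus on the view and grab a bitmap rep of its bounds.
- (NSImage *)imageFromViewUsingBitmapRep:(NSView *)view
{
    [view lockFocus];
    NSBitmapImageRep *rep =
        [[NSBitmapImageRep alloc] initWithFocusedViewRect:[view bounds]];
    [view unlockFocus];

    NSImage *image = [[NSImage alloc] initWithSize:[view bounds].size];
    [image addRepresentation:rep];
    return image;
}

// 2. Use the view's PDF data.
- (NSImage *)imageFromViewUsingPDF:(NSView *)view
{
    NSData *pdfData = [view dataWithPDFInsideRect:[view bounds]];
    return [[NSImage alloc] initWithData:pdfData];
}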
As for imageFromView: check the Organizer Documentation for Screen Capture.
And imageFromText: you want an image (PNG, I assume) that just shows some text? Don't you want to specify things like image size, font size, font color, background color, ...?
Summarizing, lock focus on the image, then draw. The NSImage docs should have more if you search for lockFocus.
The methods you're interested in are:
-[NSImage lockFocus]
+[NSGraphicsContext currentContext]
-[NSView displayRectIgnoringOpacity:inContext:]
-[NSImage unlockFocus]
To draw to an image, allocate one, and then lock focus on it, then issue drawing calls and then unlock focus.
To draw a view into an image, lock focus on an image, get the current graphic context (which is now the image), and pass that to -[NSView displayRectIgnoringOpacity:inContext:].
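A minimal sketch along those lines, using the signatures from the question (ARC assumed; the font and size choices are arbitrary):

- (NSImage *)imageFromView:(NSView *)view
{
    NSImage *image = [[NSImage alloc] initWithSize:[view bounds].size];
    [image lockFocus];
    NSGraphicsContext *ctx = [NSGraphicsContext currentContext];
    [view displayRectIgnoringOpacity:[view bounds] inContext:ctx];
    [image unlockFocus];
    return image;
}

- (NSImage *)imageFromText:(NSString *)text
{
    NSDictionary *attrs = @{ NSFontAttributeName : [NSFont systemFontOfSize:24] };
    NSSize size = [text sizeWithAttributes:attrs];

    NSImage *image = [[NSImage alloc] initWithSize:size];
    [image lockFocus];
    [text drawAtPoint:NSZeroPoint withAttributes:attrs];
    [image unlockFocus];
    return image;
}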
Reading https://learn.microsoft.com/en-us/windows/win32/direct2d/comparing-direct2d-and-gdi :
Presentation Model
When Windows was first designed, there was insufficient memory to allow every window to be stored in its own bitmap. As a result, GDI always rendered logically directly to the screen, with various clipping regions applied to ensure that it did not render outside of its window. In contrast, Direct2D follows a model where the application renders to a back-buffer and the result is atomically "flipped" when the application is done drawing. This allows Direct2D to handle animation scenarios much more fluidly than GDI can.
The author says Direct2D uses a back-buffer, and by "flipped" I guess they mean a swap chain. I created a simple demo that draws a rectangle at a random location on mouse click. But previous rectangles are not cleared, so it seems that it is drawn directly to the screen and does not use any back-buffer.
When you initialize the render target for your Direct2D operations, you can specify the D2D1_PRESENT_OPTIONS option in the second parameter.
I think what confuses you is the D2D1_PRESENT_OPTIONS_RETAIN_CONTENTS and the fact that the buffer isn't swapped but copied.
That doesn't disprove the existence of back-buffers, it only means the back-buffer isn't cleared between redraws. Right observation, wrong conclusion!
If you increase the number of back-buffers in the chain, you'll start noticing flickering rectangles as you keep clicking, so you should always clear your back-buffer between redraws.
Direct2D does indeed use a back-buffer.
Perhaps you forgot to clear your render target (which is the back-buffer) right after calling BeginDraw, and so previous draws stayed there?
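A minimal sketch of the usual pattern (error handling omitted; rects is an assumed container of the rectangles collected from mouse clicks):

#include <d2d1.h>
#include <vector>

void DrawFrame(ID2D1HwndRenderTarget* rt, const std::vector<D2D1_RECT_F>& rects)
{
    ID2D1SolidColorBrush* brush = nullptr;
    rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::CornflowerBlue), &brush);

    rt->BeginDraw();
    rt->Clear(D2D1::ColorF(D2D1::ColorF::White));   // clear the back-buffer first
    for (const auto& r : rects)
        rt->FillRectangle(r, brush);                // redraw everything you want visible
    rt->EndDraw();                                  // the finished frame is presented

    brush->Release();
}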
I've been banging my head against this seemingly easy task and I could really use some help.
I have a wide image loaded in the GUI (using the Designer) and I want to be able to draw only a portion of it, a rectangle.
I need to be able to change this rectangle's position over the large image in order to draw a different part of the larger image at will. In this process the rectangle must maintain its size.
Using the Ui::MainWindow object I'm able to access the label holding the image, and a solution that involves using this option is preferred (in order to stay consistent with the rest of the code I've already written).
Any solution will be much appreciated :)
Thanks,
Itamar
I would definitely (for ease of use) just place an empty label as a placeholder in Designer.
Then implement the paintEvent for this label (delegate it to your own method). You'll also have to look into QPainter, QPixmap, etc. It should be doable based on these hints and the documentation.
If you want more, I suggest you provide a small code snippet to work upon.
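A small sketch of that approach (ImageLabel, m_viewRect, and the sizes are placeholders): a label subclass that paints only a movable rectangle of a larger pixmap.

#include <QLabel>
#include <QPainter>
#include <QPixmap>

class ImageLabel : public QLabel
{
public:
    explicit ImageLabel(const QPixmap &source, QWidget *parent = nullptr)
        : QLabel(parent), m_source(source), m_viewRect(0, 0, 200, 200) {}

    // Move the visible window over the large image, keeping its size.
    void setViewOrigin(const QPoint &topLeft)
    {
        m_viewRect.moveTopLeft(topLeft);
        update();   // schedule a repaint
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        // Draw only the m_viewRect portion of the big pixmap into this widget.
        painter.drawPixmap(rect(), m_source, m_viewRect);
    }

private:
    QPixmap m_source;   // the wide image
    QRect   m_viewRect; // which part of it to show
};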
If you want to do this more or less purely through Designer, you could put a QScrollArea where you want the portion of the image to appear. If you set the scroll area's scrollbar policy to never show, you can then manually change which part is visible via the scroll area widget. However, this would probably be more complex than creating a derived widget and reimplementing the paint function.
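For completeness, a sketch of that scroll-area variant (the file name, sizes, and scroll values are placeholders):

#include <QApplication>
#include <QLabel>
#include <QPixmap>
#include <QScrollArea>
#include <QScrollBar>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QLabel *imageLabel = new QLabel;
    imageLabel->setPixmap(QPixmap("wide_image.png"));   // assumed image path
    imageLabel->adjustSize();

    QScrollArea viewport;                                // shows only a window onto the label
    viewport.setWidget(imageLabel);
    viewport.setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    viewport.setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    viewport.setFixedSize(200, 200);                     // the visible rectangle's size
    viewport.show();

    // Move the visible rectangle over the large image without resizing it.
    viewport.horizontalScrollBar()->setValue(350);
    viewport.verticalScrollBar()->setValue(0);

    return app.exec();
}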