Scrollable drawing in Gtk::Layout

I'd like to use custom drawing within a Gtk::Layout. That is, I'm using the C++ bindings for Gtk3 (GTKmm 3.14.0), and I have embedded widgets placed on the "canvas", on top of my custom drawing. Basically this works just fine.
Now the problem is related to scrolling. Gtk::Layout can be placed into a Gtk::ScrolledWindow, and when the scrollable area is set to something larger than the visible allocation, scrollbars will show up. Unfortunately, those scrollbars influence only the placement of the embedded widgets, while my custom drawing remains at a fixed position within the window.
This means that both the Gtk::Allocation and the Cairo context seem to refer precisely to the visible area, not to the extended virtual "canvas". I could work around that problem by accessing the adjustments from the scrollbars and then translating the Cairo context accordingly...
My question is:
is this the proper way to handle such a scrollable drawing?
or is there some way to let the framework do this work for me?

Judging from the source code of gtk+3.0-3.14.5 (which is in Debian/Stable), the Gtk::Layout does nothing to adjust the drawing context. It just invokes the inherited draw() function from GtkWidget. On the other hand, Gtk::Layout is a full-blown container (it inherits from Gtk::Container), and it is scrollable, which together means that it handles gtk_layout_size_allocate() by passing a suitable allocation (screen area) to each of the embedded child widgets -- and in this respect it does handle the moving and clipping related to scrolling the virtual canvas (calls gdk_window_move_resize()).
Thus, if we want to combine the embedded child widgets with custom drawing, we need to bridge this discrepancy manually. This is quite easy actually: all we need to do is to look into the Gtk::Adjustment objects corresponding to the scrollbars. The value of these adjustments is precisely the upper-left corner of the visible viewport. Now, if we want our custom drawing to use absolute canvas coordinates, we just have to translate() the given Cairo context. Beware: it is important to save() the state and to restore() it to pristine state when done, otherwise those translations will accumulate.
Here is some example code to demonstrate this custom drawing:
- we derive a custom container class called Canvas from Gtk::Layout
- we override the on_draw() handler, because only at that point has the size allocation of all embedded child widgets been processed
- layering: child widgets are always drawn in the order they were added to the Gtk::Layout container. Any custom drawing done before invoking the inherited on_draw() function appears below those widgets; any drawing done afterwards appears on top of them.
- if necessary, we can use the foreach(callback) mechanism to visit all child widgets and find out their current position and extent
void
Canvas::determineExtension()
{
    if (not recalcExtension_) return;

    uint extH=20, extV=20;
    Gtk::Container::ForeachSlot callback
        = [&](Gtk::Widget& chld)
            {
                auto alloc = chld.get_allocation();
                uint x = alloc.get_x();
                uint y = alloc.get_y();
                x += alloc.get_width();
                y += alloc.get_height();
                extH = std::max (extH, x);
                extV = std::max (extV, y);
            };
    foreach(callback);
    recalcExtension_ = false;
    set_size (extH, extV);   // define extension of the virtual canvas
}
bool
Canvas::on_draw(Cairo::RefPtr<Cairo::Context> const& cox)
{
    if (shallDraw_)
    {
        uint extH, extV;
        determineExtension();
        get_size (extH, extV);

        auto adjH = get_hadjustment();
        auto adjV = get_vadjustment();
        double offH = adjH->get_value();
        double offV = adjV->get_value();

        cox->save();
        cox->translate(-offH, -offV);

        // draw red diagonal line
        cox->set_source_rgb(0.8, 0.0, 0.0);
        cox->set_line_width (10.0);
        cox->move_to(0, 0);
        cox->line_to(extH, extV);
        cox->stroke();
        cox->restore();

        // cause child widgets to be redrawn
        bool event_is_handled = Gtk::Layout::on_draw(cox);

        // any drawing which follows happens on top of child widgets...
        cox->save();
        cox->translate(-offH, -offV);
        cox->set_source_rgb(0.2, 0.4, 0.9);
        cox->set_line_width (2.0);
        cox->rectangle(0,0, extH, extV);
        cox->stroke();
        cox->restore();

        return event_is_handled;
    }
    else
        return Gtk::Layout::on_draw(cox);
}
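For completeness, here is a minimal usage sketch (my own illustration, not part of the original answer): the Canvas above only needs to be wrapped into a Gtk::ScrolledWindow, and child widgets are placed in canvas coordinates with put(). The window title, application id and the assumption that Canvas default-initializes its shallDraw_ and recalcExtension_ members are mine.

#include <gtkmm.h>

int main(int argc, char* argv[])
{
    auto app = Gtk::Application::create(argc, argv, "org.example.canvasscroll");

    Gtk::Window window;
    window.set_default_size(400, 300);

    Gtk::ScrolledWindow scroller;       // provides the scrollbars and adjustments
    Canvas canvas;                      // the custom Gtk::Layout subclass shown above
    Gtk::Button button("embedded widget");

    canvas.put(button, 150, 80);        // place a child widget in canvas coordinates
    scroller.add(canvas);               // Gtk::Layout implements Gtk::Scrollable
    window.add(scroller);
    window.show_all();

    return app->run(window);
}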

Related

How to draw a new line on a Gtk::DrawingArea, while persisting previous lines that have already been drawn?

I am using C++11 with the GNU toolchain and gtkmm3, on Ubuntu 12.04 LTS 32 bit.
I have been playing with some of the examples for gtkmm3 in Programming with gtkmm 3.
Based on example 17.2.1 there, I inherited from Gtk::DrawingArea (MyDrawingArea here) and overrode the on_draw() event handler as follows:
MyDrawingArea.hpp
...
protected:
    bool on_draw ( const Cairo::RefPtr<Cairo::Context>& cr ) override;
MyDrawingArea.cpp
bool MyDrawingArea::on_draw( const Cairo::RefPtr<Cairo::Context>& cr )
{
    Gtk::Allocation allocation = get_allocation( );
    const int width = allocation.get_width( );
    const int height = allocation.get_height( );
    int coord1{ height - 3 };

    cr->set_line_width( 3.0 );
    this->get_window( )->freeze_updates( );

    cr->set_source_rgb( 0, 0.40, 0.60 );
    cr->move_to( 0, coord1 );
    cr->line_to( width, coord1 );
    cr->stroke( );

    cr->set_source_rgb( 1, 0.05, 1 );
    cr->move_to( mXStart, coord1 );
    cr->line_to( mXStart, mYAxis * 1.5 );
    cr->show_text( to_string( mYAxis ) );
    cr->stroke( );

    mXStart += 5;
    this->get_window( )->thaw_updates( );

    return true;
}
My goal is to draw a simple bar graph based on a calculation I do in a little test application, the idea being that each time the on_draw() event is called, the next bar is moved 5 units to the right on the x axis and a vertical line is drawn based on the new mYAxis value, which is computed from the results of the new calculation.
When I want to repaint my graph and trigger the MyDrawingArea::on_draw() event, I call MyDrawingArea.show_all() from my application after the calculation has completed, and new x and y axes have been set.
However, this does not work as I expected: MyDrawingArea.show_all() invalidates the entire drawing window and draws from scratch: the new graph line appears in its proper place, but the previous ones are erased. I also tried MyDrawingArea.queue_draw(), which had the same effect. But I want to persist the previous graph results so I can get a profile of the calculation results, as I calculate with different values.
This implementation also causes the bottom line of my graph (the x axis of the graph), drawn by the first stroke() call in my code example, to be rendered anew on each call to on_draw(). This should not be necessary, since that line persists for the lifetime of MyDrawingArea; it should not have to be invalidated and redrawn on each new on_draw() event, as my code currently does, but I haven't yet found a way to handle this.
I am very new to Cairo, so I'm sure I'm probably doing this completely wrong, but explicit, task-oriented documentation appears to be sparse - have not found anything that explains how to do this, although I'm sure it is quite simple.
What do I need to do to draw a new line on the Gtk::DrawingArea while persisting the graph lines that were already drawn on previous passes, and to establish graphics elements that persist for the lifetime of the Gtk::DrawingArea widget? Obviously, using show_all() or queue_draw() and doing it all in the on_draw() event is not the way to go.
In general, you must draw the entire widget and Cairo will clip the drawing to the predefined dirty region. See also GTK reference manual for the "GtkWidget::draw" signal for performance tips:
The signal handler will get a cr with a clip region already set to the
widget's dirty region, i.e. to the area that needs repainting.
Complicated widgets that want to avoid redrawing themselves completely
can get the full extents of the clip region with
gdk_cairo_get_clip_rectangle(), or they can get a finer-grained
representation of the dirty region with
cairo_copy_clip_rectangle_list().
So you may be able to redraw only the region you want with gtk_widget_queue_draw_area().
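To illustrate that approach, here is a minimal sketch (my own, not from the question or the GTK documentation): the widget keeps the values to be plotted in a member container and redraws all of them in on_draw(), relying on Cairo's clip to discard what lies outside the dirty region; the hypothetical addBar() helper invalidates only the small strip occupied by the new bar. The 5-pixel spacing and class name are assumptions.

#include <gtkmm.h>
#include <vector>

class BarGraphArea : public Gtk::DrawingArea
{
public:
    void addBar(double value)
    {
        mBars.push_back(value);
        int x = 5 * static_cast<int>(mBars.size());            // assumed 5-pixel spacing
        queue_draw_area(x - 3, 0, 6, get_allocated_height());  // repaint only the new bar's strip
    }

protected:
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override
    {
        const int height = get_allocated_height();
        cr->set_line_width(3.0);
        cr->set_source_rgb(1, 0.05, 1);

        int x = 0;
        for (double value : mBars)          // redraw everything; Cairo clips
        {                                   // the drawing to the dirty region
            x += 5;
            cr->move_to(x, height - 3);
            cr->line_to(x, height - 3 - value);
        }
        cr->stroke();
        return true;
    }

private:
    std::vector<double> mBars;              // the persistent drawing state
};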

GetObject() on bitmap handle from LoadImage() sometimes returns incorrect bitmap size

We are seeing an intermittent problem in which owner-drawn buttons under Windows XP that use a bitmap as a backdrop display the bitmap incorrectly. A window contains multiple buttons that all use the same bitmap file for the button backdrop; when the window is displayed, most of the buttons look correct, but in some cases one or two buttons show the bitmap backdrop reduced to a smaller size.
If you exit the application and then restart it, you may see the same incorrect display of the image on the buttons, but it may or may not affect the same buttons as before. Nor is the incorrect display always seen: sometimes it shows up and sometimes it does not. Since we load the bitmap for a button once and then keep it, a button that is displayed incorrectly will always remain displayed incorrectly.
Using the debugger we have finally found that what appears to happen is that when the GetObject() function is called, the data returned for the bitmap size is sometimes incorrect. For instance, in one case the bitmap was 75x75 pixels but the size returned by GetObject() was 13x13. Since this size is used when drawing the bitmap, the displayed backdrop becomes a small decoration on the button window.
The actual source code is as follows:
if (!hBitmapFocus) {
    CString iconPath;
    iconPath.Format(ICON_FILES_DIR_FORMAT, m_Icon);
    hBitmapFocus = (HBITMAP)LoadImage(NULL, iconPath, IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE);
}

if (hBitmapFocus) {
    BITMAP bitmap;
    int iNoBytes = GetObject(hBitmapFocus, sizeof(BITMAP), &bitmap);
    if (iNoBytes < 1) {
        char xBuff[128];
        sprintf (xBuff, "GetObject() failed. GetLastError = %d", GetLastError ());
        NHPOS_ASSERT_TEXT((iNoBytes > 0), xBuff);
    }
    cxSource = bitmap.bmWidth;
    cySource = bitmap.bmHeight;

    // Bitmaps cannot be drawn directly to the screen, so a
    // compatible memory DC is created to draw to, then the image is
    // transferred to the screen.
    CDC hdcMem;
    hdcMem.CreateCompatibleDC(pDC);
    HGDIOBJ hpOldObject = hdcMem.SelectObject(hBitmapFocus);

    int xPos;
    int yPos;

    // The horizontal and vertical alignment for images is set in the
    // Layout Manager; the proper attribute will have to be checked against.
    // For now the image is centered on the button.

    // Horizontal alignment
    if (btnAttributes.horIconAlignment == IconAlignmentHLeft) {          // image to the left
        xPos = 2;
    } else if (btnAttributes.horIconAlignment == IconAlignmentHRight) {  // image to the right
        xPos = myRect.right - cxSource - 5;
    } else {                                                             // horizontal center
        xPos = ((myRect.right - cxSource) / 2) - 1;
    }

    // Vertical alignment
    if (btnAttributes.vertIconAlignment == IconAlignmentVTop) {          // image to the top
        yPos = 2;
    } else if (btnAttributes.vertIconAlignment == IconAlignmentVBottom) {// image to the bottom
        yPos = myRect.bottom - cySource - 5;
    } else {                                                             // vertical center
        yPos = ((myRect.bottom - cySource) / 2) - 1;
    }

    pDC->BitBlt(xPos, yPos, cxSource, cySource, &hdcMem, 0, 0, SRCCOPY);
    hdcMem.SelectObject(hpOldObject);
}
Using the debugger we can see that the iconPath string is correct and that the bitmap is loaded, since hBitmapFocus is not NULL. Next we can see that the call to GetObject() is made and the value returned for iNoBytes equals 24. For those buttons that display correctly, the values in bitmap.bmWidth and bitmap.bmHeight are correct; for those that do not, the values are much too small, leading to incorrect sizing when drawing the bitmap.
The variable is defined in the class header as
HBITMAP hBitmapFocus;
As part of doing the research for this I found the Stack Overflow question "GetObject returns strange size", and I am wondering if there is some kind of alignment issue here.
Does the bitmap variable used in the call to GetObject() need to be on some kind of alignment boundary? While we do use packed data in places, we use pragma directives so that only specific portions of code containing specific structs in include files are packed on one-byte boundaries.
Please read this Microsoft KB article on how to load a bitmap with palette information. It has a great example as well.
On a side note: I do not see anywhere in your code where you call ::DeleteObject(hBitmapFocus). It is very important to call this, as you can run out of GDI objects very quickly.
It is always a good idea to use the Windows Task Manager to check that your program does not exhaust GDI resources. Just add the "GDI Objects" column to the Task Manager and verify that the number of objects is not constantly increasing in your app, but stays within an expected range, similar to other programs.
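As a minimal sketch of that side note (my own illustration; the MyButton class name is made up, and it assumes hBitmapFocus is a member of the button class), the handle would typically be released when the button is destroyed:

MyButton::~MyButton()
{
    if (hBitmapFocus) {
        ::DeleteObject(hBitmapFocus);   // free the HBITMAP loaded with LoadImage()
        hBitmapFocus = NULL;
    }
}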

FabricJS Canvas, Scrolling parent container moves child hit area

I am using FabricJS to create an application. I am finding that scrolling a parent div/container offsets the selectable area of an object to the right in direct relation to amount scrolled.
So, if I have a canvas that is 1200x600 and a container div that is 600x600 and I add a rect to that canvas at 400, 120; when I scroll 200px, I can't click on the rect to select it. Rather, I have to move my mouse to 600, 120 (empty space) to get the cross bar and select the rect.
Not sure if this is known, or has a work around - but I would appreciate any help possible.
You'll have to modify the FabricJS code to make it work.
The problem is in the getPointer function; if you search for it in all.js you'll notice the comment "this method needs fixing" from kangax.
A workaround is to substitute this function with
function getPointer(event) {
    // TODO (kangax): this method needs fixing
    return {
        x: pointerX(event) + document.getElementById("container").scrollLeft,
        y: pointerY(event) + document.getElementById("container").scrollTop
    };
}
where "container" is the wrapper div of your canvas. It's not nice, since you have to hard-code the exact id, but it works.
Hope this helps.

Convert a given point from the window’s base coordinate system to the screen coordinate system

I am trying to figure out how to convert a given point from the window's base coordinate system to the screen coordinate system. I mean something like - (NSPoint)convertBaseToScreen:(NSPoint)point.
But I want it from Quartz/Carbon.
I have a CGContextRef and its bounds, but the bounds are relative to the window to which the CGContextRef belongs. For example, if the window is at location (100, 100, 50, 50) with respect to the screen, the context for that window would be (0, 0, 50, 50); i.e. I am at location (0, 0) in the context but actually at (100, 100) on the screen.
Any suggestions are appreciated.
Thank you.
The window maintains its own position in global screen space and the compositor knows how to put that window's image at the correct location in screen space. The context itself, however doesn't have a location.
Quartz Compositor knows where the window is positioned on the screen, but Quartz 2D doesn't know anything more than how big the area it is supposed to draw in is. It has no idea where Quartz Compositor is going to put the drawing once it is done.
Similarly, when putting together the contents of a window, the frameworks provide the view system. The view system allows the OS to create contexts for drawing individual parts of a window and manages the placement of the results of drawing in those views, usually by manipulating the context's transform, or by creating temporary offscreen contexts. The context itself, however, doesn't know where the final graphic is going to be rendered.
I'm not sure you can do this with the CGContextRef directly; you need a window or view reference, or something similar, to do the conversion.
The code I use does the opposite, converting mouse coordinates from global (screen) to view-local, and it goes something like this:
Point mouseLoc;          // point you want to convert
HIPoint where;           // final coordinates
PixMapHandle portPixMap;

// portPixMap is needed to get the correct offset, otherwise the y coordinate
// is off by at least the menu bar height
portPixMap = GetPortPixMap( GetWindowPort( GetControlOwner( view ) ) );
QDGlobalToLocalPoint( GetWindowPort( GetControlOwner( view ) ), &mouseLoc );
where.x = mouseLoc.h - (**portPixMap).bounds.left;
where.y = mouseLoc.v - (**portPixMap).bounds.top;
HIViewConvertPoint( &where, NULL, view );
so I guess the opposite is needed for you (haven't tested if it actually works):
void convert_point_to_screen(HIViewRef view, HIPoint *where)
{
    Point point;   // used for the QuickDraw calls
    PixMapHandle portPixMap = GetPortPixMap( GetWindowPort( GetControlOwner( view ) ) );

    HIViewConvertPoint( where, view, NULL );   // view-local to window-local coordinates
    point.h = where->x + (**portPixMap).bounds.left;
    point.v = where->y + (**portPixMap).bounds.top;
    QDLocalToGlobalPoint( GetWindowPort( GetControlOwner( view ) ), &point );

    // convert the QuickDraw Point back to an HIPoint
    where->x = point.h;
    where->y = point.v;
}
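As a hedged alternative sketch (my own, not part of the answer above; it assumes Mac OS X 10.4 or later, where HIPointConvert() is available), the view-local point can be converted straight into screen coordinates without going through QuickDraw at all:

#include <Carbon/Carbon.h>

static void convert_point_to_screen_alt(HIViewRef view, HIPoint *where)
{
    // from the view's coordinate space directly to global screen pixels
    HIPointConvert(where, kHICoordSpaceView, view,
                          kHICoordSpaceScreenPixel, NULL);
}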

If window spans multiple monitors, I can't draw to it

If I have a window that spans both monitors on a multi-monitor system, I can't seem to erase (paint black) the entire window. Instead, only the part on the primary monitor is drawn black; the part on the secondary monitor remains the original white color. Has anyone seen this behavior?
wxwidgets:
wxClientDC dc(this);
Erase(dc);

void SpriteWindowFrame::Erase(wxDC& dc)
{
    dc.SetBackground(*wxBLACK_BRUSH);
    dc.SetBrush(*wxBLACK_BRUSH);
    dc.Clear();
    //wxLogDebug("Erase called. Rect is %i, %i w:%i, h:%i", GetPosition().x, GetPosition().y, GetSize().GetWidth(), GetSize().GetHeight());
}
Inside the dc.Clear() function, there is this code:
wxwidgets:
void wxDC::Clear()
{
    WXMICROWIN_CHECK_HDC

    RECT rect;
    if ( m_canvas )
    {
        GetClientRect((HWND) m_canvas->GetHWND(), &rect);
    }
    else
    {
        // No, I think we should simply ignore this if printing on e.g.
        // a printer DC.
        // wxCHECK_RET( m_selectedBitmap.Ok(), wxT("this DC can't be cleared") );
        if (!m_selectedBitmap.Ok())
            return;

        rect.left = -m_deviceOriginX; rect.top = -m_deviceOriginY;
        rect.right = m_selectedBitmap.GetWidth()-m_deviceOriginX;
        rect.bottom = m_selectedBitmap.GetHeight()-m_deviceOriginY;
    }

#ifndef __WXWINCE__
    (void) ::SetMapMode(GetHdc(), MM_TEXT);
#endif

    DWORD colour = ::GetBkColor(GetHdc());
    HBRUSH brush = ::CreateSolidBrush(colour);
    ::FillRect(GetHdc(), &rect, brush);
    ::DeleteObject(brush);

#ifndef __WXWINCE__
    int width = DeviceToLogicalXRel(VIEWPORT_EXTENT)*m_signX,
        height = DeviceToLogicalYRel(VIEWPORT_EXTENT)*m_signY;

    ::SetMapMode(GetHdc(), MM_ANISOTROPIC);
    ::SetViewportExtEx(GetHdc(), VIEWPORT_EXTENT, VIEWPORT_EXTENT, NULL);
    ::SetWindowExtEx(GetHdc(), width, height, NULL);
    ::SetViewportOrgEx(GetHdc(), (int)m_deviceOriginX, (int)m_deviceOriginY, NULL);
    ::SetWindowOrgEx(GetHdc(), (int)m_logicalOriginX, (int)m_logicalOriginY, NULL);
#endif
}
Using the debugger, I checked what GetClientRect returned and, sure enough, it returns a rectangle at location 0 with the combined width and height of the two monitors, so that part is right. Maybe the FillRect function is not capable of drawing to two displays?
Can you trace into the constructor of the wxClientDC?
wxClientDC dc(this);
A lot depends on what type of DC wx has given you. The Windows API call to retrieve a window DC is hdc = GetDC(hwnd), and, on multi-monitor systems, it retrieves a handle to a 'mirror driver' DC that is meant to reflect calls to all the underlying display device DCs that the window spans.
The only possible reason I can think of for this behaviour is that wx is somehow retrieving a display DC rather than a window DC.
I'm sure Chris is correct, that the "overlapping window" case is handled somewhere for you. But where?
Rendering with the Windows GDI and "display contexts" such as you mention is very primitive and prone to all sorts of problems. GDI is one of the poorest interfaces ever seen, poor even for Microsoft. Since most "window" programs work OK on multiple monitors, think of animating things in a "window"; how that "window" makes its way to the "display" is best left a mystery.
Maybe a DC is fundamentally not multi-monitor capable. Look for anything that allows multiple DCs to be treated uniformly. Rendering graphics onto a grid of paper sheets would be like a tiled "printer DC". A video wall would be a tiled "display DC", and you would be happy with a two-monitor hack, i.e. a "multimon DC" that echoes to the "owning" display and to "another one" if a window spans both.
If you want to do "real" animation on Windows, you will need to move to DirectX. It is also a lot to learn, but much more capable: scene graphs, textures, video, alpha channels, ...
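As a small diagnostic sketch (my own, plain Win32, not from the answers above; the function name is made up): fill the whole client area through a window DC obtained with GetDC(). If this paints both monitors black while the wxClientDC path does not, that points to the kind of DC wxWidgets hands back, as the first answer suggests.

#include <windows.h>

void FillClientAreaBlack(HWND hwnd)
{
    HDC hdc = GetDC(hwnd);                 // window DC for the whole client area
    RECT rc;
    GetClientRect(hwnd, &rc);
    HBRUSH brush = (HBRUSH)GetStockObject(BLACK_BRUSH);
    FillRect(hdc, &rc, brush);             // stock brushes need no DeleteObject()
    ReleaseDC(hwnd, hdc);
}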
