Mouse handling: printing pixel location - visual-studio-2010

I've been trying to do some work with OpenCV in VS2010, specifically in the area of mouse handling. So far, I have this:
case CV_EVENT_LBUTTONDOWN:
    drawing_line = true;
    cvLine( frame, cvPoint(x,y), cvPoint(350,500), CV_RGB(255,0,0), 15, CV_AA, 0 );
    fprintf( stdout, "Point found. %i, %i \n", object_x0, object_y0 );
    break;
What I want it to do is return the location of the pixels that I clicked on but all it returns is "Point found. 0,0" instead of the actual location. Eventually, I would like to use the points with cvLine to draw a line but right now I would just like to get some values returned to me. Any suggestions would be much appreciated. Thanks!

You can obtain the position of a mouse click by passing a pointer to a point through the user-data parameter of the mouse callback function, like so:
#include <opencv2/opencv.hpp>
#include <string>

void onMouse(int evt, int x, int y, int flags, void* param) {
    if (evt == CV_EVENT_LBUTTONDOWN) {
        cv::Point* ptPtr = (cv::Point*)param;
        ptPtr->x = x;
        ptPtr->y = y;
    }
}

int main() {
    cv::Point2i pt(-1, -1);
    std::string winName = "Output Window";
    cv::namedWindow(winName);
    cv::Mat frame = cv::imread("image.jpg");
    cv::imshow(winName, frame);
    cv::setMouseCallback(winName, onMouse, (void*)&pt);
    // Note that we passed '&pt' (a pointer to `pt`) to the mouse
    // callback function. Therefore `pt` will update its [x,y]
    // coordinates whenever the user left-clicks on the image in
    // "Output Window".
    cv::waitKey(0);
    return 0;
}

The click coordinates are passed in as arguments to the mouse callback function:
void onMouse(int event, int x, int y, int flags, void* param)
You'll want to save those x, y into a global when you click down, then a different global when you click up, then draw a line between the two.
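As a rough sketch of that approach (the global point names and the "Output Window" title are placeholders carried over from the answer above, not anything mandated by OpenCV):

#include <opencv2/opencv.hpp>
#include <cstdio>

// Hypothetical globals holding the click-down and click-up positions.
cv::Point g_downPt(-1, -1);
cv::Point g_upPt(-1, -1);

void onMouse(int evt, int x, int y, int flags, void* param) {
    cv::Mat* img = (cv::Mat*)param;      // image passed in via setMouseCallback()
    if (evt == CV_EVENT_LBUTTONDOWN) {
        g_downPt = cv::Point(x, y);      // remember where the drag started
        printf("Down at %i, %i\n", x, y);
    } else if (evt == CV_EVENT_LBUTTONUP) {
        g_upPt = cv::Point(x, y);        // remember where it ended
        printf("Up at %i, %i\n", x, y);
        // Draw the line between the two saved points and refresh the window.
        cv::line(*img, g_downPt, g_upPt, CV_RGB(255, 0, 0), 2, CV_AA);
        cv::imshow("Output Window", *img);
    }
}

Register it the same way as before, e.g. cv::setMouseCallback("Output Window", onMouse, (void*)&frame);, so the callback can draw directly onto the displayed image.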

Related

Unreliable behaviour of GTK function gtk_tree_view_scroll_to_cell()

I need some help understanding why a bug is happening in some code using a GtkTreeView, and how to fix it.
In the program, the user opens an image and runs a function to detect stars in the image. This is all reliable and works fine. Data about each star is populated into a GtkTreeView.
The user then clicks on a star in the display which triggers a callback:
gboolean on_drawingarea_button_release_event(GtkWidget *widget, GdkEventButton *event, gpointer user_data)
that in turn calls the function shown below:
void set_iter_of_clicked_psf(double x, double y) {
GtkTreeSelection *selection = GTK_TREE_SELECTION(gtk_builder_get_object(gui.builder, "treeview-selection"));
GtkTreeView *treeview = GTK_TREE_VIEW(lookup_widget("Stars_stored"));
GtkTreeModel *model = gtk_tree_view_get_model(treeview);
GtkTreeIter iter;
gboolean valid;
gboolean is_as;
const double radian_conversion = ((3600.0 * 180.0) / M_PI) / 1.0E3;
double invpixscalex = 1.0;
double bin_X = gfit.unbinned ? (double) gfit.binning_x : 1.0;
if (com.stars && com.stars[0]) {// If the first star has units of arcsec, all should have
is_as = (strcmp(com.stars[0]->units,"px"));
} else {
return; // If com.stars is empty there is no point carrying on
}
if (is_as) {
invpixscalex = 1.0 / (radian_conversion * (double) gfit.pixel_size_x / gfit.focal_length) * bin_X;
}
valid = gtk_tree_model_get_iter_first(model, &iter);
while (valid) {
gdouble xpos, ypos, fwhmx;
gtk_tree_model_get(model, &iter, COLUMN_X0, &xpos, -1);
gtk_tree_model_get(model, &iter, COLUMN_Y0, &ypos, -1);
gtk_tree_model_get(model, &iter, COLUMN_FWHMX, &fwhmx, -1);
fwhmx *= invpixscalex;
gdouble distsq = (xpos - x) * (xpos - x) + (ypos - y) * (ypos - y);
gdouble psflimsq = 6. * fwhmx * fwhmx;
if (distsq < psflimsq) {
GtkTreePath *path = gtk_tree_model_get_path(model, &iter);
if (!path) return;
gtk_tree_selection_select_path(selection, path);
gtk_tree_view_scroll_to_cell(treeview, path, NULL, TRUE, 0.5, 0.0);
gtk_tree_path_free(path);
gui.selected_star = get_index_of_selected_star(xpos, ypos);
display_status();
redraw(REDRAW_OVERLAY);
return;
}
valid = gtk_tree_model_iter_next(model, &iter);
}
siril_debug_print("Point clicked does not correspond to a known star\n");
return;
}
The code steps through each iter in the GtkTreeView and checks whether the location clicked is close enough to the centre of a star; the key part is the conditional towards the end of the function. If the location matches a star, a GtkTreePath is declared and initialised from the iter, the selection is changed to that path, and gtk_tree_view_scroll_to_cell() is called, which is supposed to scroll the GtkTreeView so that the selected star is shown.
For some users the code works every time and causes the tree view to scroll to the selected star. For other users it sometimes works and sometimes does not. When it doesn't work I can tell the correct code path is followed, because gui.selected_star is updated correctly (it triggers a drawing event in the subsequent redraw(REDRAW_OVERLAY) call). It is only gtk_tree_view_scroll_to_cell() that appears to behave erratically.
Neither I nor my fellow developers can see why it works sometimes but not always, nor can we see why some users don't see the bug at all. So if any GTK gurus could enlighten me I'd be most grateful!

How do I find out the width and height of the text without using a surface in SDL2?

I wanted to create a separate function to which I could just send a string and it would render the text appropriately, so that I didn't need to copy-paste the same code everywhere. The function I came up with follows.
void renderText(SDL_Renderer* renderer, char* text,
                char* font_name, int font_size,
                SDL_Color color, SDL_Rect text_area)
{
    /* If TTF was not initialized, initialize it */
    if (!TTF_WasInit()) {
        if (TTF_Init() < 0) {
            printf("Error initializing TTF: %s\n", SDL_GetError());
            return;
        }
    }
    TTF_Font* font = TTF_OpenFont(font_name, font_size);
    if (font == NULL) {
        printf("Error opening font: %s\n", SDL_GetError());
        return;
    }
    SDL_Surface* surface = TTF_RenderText_Blended(font, text, color);
    if (surface == NULL) {
        printf("Error rendering text: %s\n", SDL_GetError());
        TTF_CloseFont(font);
        return;
    }
    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
    if (texture == NULL) {
        printf("Error creating texture: %s\n", SDL_GetError());
        SDL_FreeSurface(surface);
        TTF_CloseFont(font);
        return;
    }
    SDL_RenderCopy(renderer, texture, NULL, &text_area);
    SDL_FreeSurface(surface);
    SDL_DestroyTexture(texture);
    TTF_CloseFont(font);
}
Now, sometimes I want to align the text with the window, for which I need to know the height and width of the surface that contains the text so that I can use something like (WINDOW_WIDTH - surfaceText->w) / 2 or (WINDOW_HEIGHT - surfaceText->h) / 2. But there is no way to know the height and width of the surface containing the text without creating the surface, and if I end up needing to create the surface then separating this function out would not live up to its objective.
How do I find out the height and width of the surface containing the text without actually creating the surface in SDL2_ttf library?
You can pass the string to the TTF_SizeText() function, which is defined:
int TTF_SizeText(TTF_Font *font, const char *text, int *w, int *h)
The documentation for this function states:
Calculate the resulting surface size of the LATIN1 encoded text rendered using font. No actual rendering is done, however correct kerning is done to get the actual width. The height returned in h is the same as you can get using 3.3.10 TTF_FontHeight.
Then, once you have the dimensions of the string, you can call your rendering function with the necessary information to align it.
There are also TTF_SizeUTF8() and TTF_SizeUNICODE() versions for different encodings.
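For instance, to centre the text before calling the renderText() function from the question, you might query the size first. This is only a sketch: WINDOW_WIDTH and WINDOW_HEIGHT are the constants mentioned in the question, and opening the font here as well as inside renderText() is wasteful but keeps the example self-contained.

int w = 0, h = 0;
TTF_Font* font = TTF_OpenFont(font_name, font_size);
if (font != NULL) {
    if (TTF_SizeText(font, text, &w, &h) == 0) {
        /* Build a destination rectangle that centres the text in the window. */
        SDL_Rect text_area = { (WINDOW_WIDTH - w) / 2, (WINDOW_HEIGHT - h) / 2, w, h };
        renderText(renderer, text, font_name, font_size, color, text_area);
    }
    TTF_CloseFont(font);
}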

RestoreCapture FMX - Win32

I took this example of subclassing a Form's HWND as a starting point and then added in jrohde's code from here that is designed to let you drag a Form by clicking anywhere on it (not on the caption bar). This code fails on the ReleaseCapture() line with this message: E2283 Use . or -> to call '_fastcall TCommonCustomForm::ReleaseCapture()
If I comment that line out the code runs and I can move the form by left mouse down and drag, but I can't let go of it. The mouse gets stuck to the form like flypaper. If I replace the ReleaseCapture() with a ShowMessage I can break out, but that is obviously not the way to go...
What do I need to do to allow that ReleaseCapture() to run? This is a Win32 app.
Below is the code I added to the original switch(uMsg) block:
// two ints defined above the switch statement
static int xClick;
static int yClick;

// new cases added to the switch
case WM_LBUTTONDOWN:
    SetCapture(hWnd);
    xClick = LOWORD(lParam);
    yClick = HIWORD(lParam);
    break;
case WM_LBUTTONUP:
    //ReleaseCapture(); // This is the problem spot <------------------------
    ShowMessage("Up");
    break;
case WM_MOUSEMOVE:
{
    if (GetCapture() == hWnd) // Check if this window has mouse input
    {
        RECT rcWindow;
        GetWindowRect(hWnd, &rcWindow);
        int xMouse = LOWORD(lParam);
        int yMouse = HIWORD(lParam);
        int xWindow = rcWindow.left + xMouse - xClick;
        int yWindow = rcWindow.top + yMouse - yClick;
        SetWindowPos(hWnd, NULL, xWindow, yWindow, 0, 0, SWP_NOSIZE | SWP_NOZORDER);
    }
    break;
}
thanks, russ
From the error message you can see that the compiler resolves the call ReleaseCapture() to the form's member function TCommonCustomForm::ReleaseCapture(), but you want to call the Win32 API function ReleaseCapture(). Use ::ReleaseCapture(); instead of ReleaseCapture(); to force the global function.
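Applied to the switch above, the WM_LBUTTONUP case would then look roughly like this:

case WM_LBUTTONUP:
    ::ReleaseCapture(); // scope-resolution operator picks the global Win32 function,
                        // not TCommonCustomForm::ReleaseCapture()
    break;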

Touch doesn't work with Qt 5.1.1 when used with Wayland & qtwayland

I want to make the Qt 5.1.1 touch example application work with the qtwayland module.
I get the window on the display, and I also get the touch traces from Weston. I can see that qtwayland's callback functions registered for touch-up, touch-down and touch-motion are being triggered as well.
But Qt doesn't invoke the QPushButton handler in the Qt application.
The connect call I am using is below:
connect(ui->b_audio, SIGNAL(pressed()), this, SLOT(on_b_audio_clicked()));
Any clue why this could happen? Please suggest probable problems so that I can explore and debug.
Thanks in Advance.
Bhushan.
In QtWayland, QWaylandInputDevice::touch_frame() passes the touch points to Qt's internal window system through QWindowSystemInterface::handleTouchEvent(). Weston does not send the WL_TOUCH_FRAME event at all, so the buttons or the QML MouseArea never receive touch events.
You can add the following line to the end of evdev_flush_motion() in weston/src/evdev.c:
notify_touch(master, time, 0, 0, 0, WL_TOUCH_FRAME);
And rewrite the notify_touch() in weston/src/input.c:
WL_EXPORT void
notify_touch(struct weston_seat *seat, uint32_t time, int touch_id,
wl_fixed_t x, wl_fixed_t y, int touch_type)
{
struct weston_compositor *ec = seat->compositor;
struct weston_touch *touch = seat->touch;
struct weston_touch_grab *grab = touch->grab;
struct weston_surface *es;
wl_fixed_t sx, sy;
struct weston_touch *grab_touch = grab->touch;
/* Update grab's global coordinates. */
touch->grab_x = x;
touch->grab_y = y;
switch (touch_type) {
case WL_TOUCH_DOWN:
weston_compositor_idle_inhibit(ec);
seat->num_tp++;
/* the first finger down picks the surface, and all further go
* to that surface for the remainder of the touch session i.e.
* until all touch points are up again. */
if (seat->num_tp == 1) {
es = weston_compositor_pick_surface(ec, x, y, &sx, &sy);
weston_touch_set_focus(seat, es);
} else if (touch->focus) {
es = (struct weston_surface *) touch->focus;
weston_surface_from_global_fixed(es, x, y, &sx, &sy);
} else {
/* Unexpected condition: We have non-initial touch but
* there is no focused surface.
*/
weston_log("touch event received with %d points down"
"but no surface focused\n", seat->num_tp);
return;
}
grab->interface->down(grab, time, touch_id, sx, sy);
if (seat->num_tp == 1) {
touch->grab_serial =
wl_display_get_serial(ec->wl_display);
touch->grab_time = time;
touch->grab_x = x;
touch->grab_y = y;
}
break;
case WL_TOUCH_MOTION:
es = (struct weston_surface *) touch->focus;
if (!es)
break;
weston_surface_from_global_fixed(es, x, y, &sx, &sy);
grab->interface->motion(grab, time, touch_id, sx, sy);
break;
case WL_TOUCH_UP:
weston_compositor_idle_release(ec);
seat->num_tp--;
grab->interface->up(grab, time, touch_id);
break;
case WL_TOUCH_FRAME:
if (grab_touch->focus_resource) {
wl_touch_send_frame(grab_touch->focus_resource);
if (seat->num_tp == 0)
weston_touch_set_focus(seat, NULL);
}
}
}
Meanwhile, I find that weston does not handle multi-touch properly because its mt structure (below) uses an int value 'slot' which can only track the current slot number.
struct {
int slot;
int32_t x[MAX_SLOTS];
int32_t y[MAX_SLOTS];
} mt;
In multi-touch protocol type B, each slot is associated with a finger contact, and you get multiple slot events in a touch frame; for example, a touch-down frame:
ABS_MT_SLOT
ABS_MT_TRACKING_ID
ABS_MT_POSITION_X
ABS_MT_POSITION_Y
ABS_MT_SLOT
ABS_MT_TRACKING_ID
ABS_MT_POSITION_X
ABS_MT_POSITION_Y
EV_SYN
Weston handles events from the first slot event to the EV_SYN event, and it calls notify_touch() when EV_SYN is encountered. Therefore, Weston cannot send the two touch-down events sequentially via notify_touch() with different slot number parameters.
In my reimplementation, I change the mt structure:
struct {
int current_slot;
uint32_t num_down_event;
uint32_t num_motion_event;
uint32_t num_up_event;
int slots[MAX_CONTACTS];
int32_t x[MAX_SLOTS];
int32_t y[MAX_SLOTS];
} mt;
num_down_event: track how many fingers touch down
num_motion_event: track how many fingers move
num_up_event: track how many fingers lift up
slots: track slot numbers in every touch frame
This is https://bugreports.qt-project.org/browse/QTBUG-36602 and we are planning to have a workaround in Qt 5.4.0. By the time we did that, it seemed that touch_frame was being sent for touch_motion but not for touch_up (testing with the latest versions of wayland, weston and libinput).

MouseDown and then MouseUp doesn't work

I am trying to redirect mouse input from my Windows 7 application to some other window.
If I do this when I get WM_LBUTTONUP, it works (MouseDown and MouseUp are wrappers around SendInput from the Win32 API):
SetForegroundWindow( other window );
SetCursorPos( somewhere on the window );
MouseDown();
MouseUp();
SetCursorPos( back );
SetForegroundWindow( main window );
But I don't want to act only on mouse releases; I want to be able to capture all mouse activity, including movements and dragging.
So this is the next logical thing to do, but it doesn't work:
WM_LBUTTONDOWN:
Do everything like before without MouseUp()
WM_LBUTTONUP:
Do everything like before without MouseDown()
This doesn't even work for regular clicks. I can't figure out why.
Can anybody help?
Mouse buttons are funky. When the system gets a down then an up, at some level those are converted to a click (and I think at some point the mouse-up gets eaten, but I may not be recalling that correctly).
It could be any number of things, but if the other window is (or is not) converting the button down/up into mouse clicks, it could be confusing your code.
I suggest you print a lot of debugging info and try to figure out exactly what the system is doing.
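As a rough sketch of that kind of logging (assuming a plain Win32 window procedure; DebugWndProc is just a placeholder name and only the messages relevant here are shown):

#include <windows.h>

// Log the mouse messages this window actually receives, and in what order,
// to see where the down/up sequence goes wrong.
LRESULT CALLBACK DebugWndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch (uMsg)
    {
    case WM_LBUTTONDOWN: OutputDebugStringA("WM_LBUTTONDOWN\n"); break;
    case WM_LBUTTONUP:   OutputDebugStringA("WM_LBUTTONUP\n");   break;
    case WM_MOUSEMOVE:   OutputDebugStringA("WM_MOUSEMOVE\n");   break;
    }
    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}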
It might be worth looking at the SendMessage/PostMessage P/Invoke calls, and sending the messages directly to the window of the other application. You need to do some translation on the parameters so that the co-ordinates of mouse events tie up with what you want them to in the other application, but it's not a big deal to do that...
Edit -> I dug out some code where I have done this before... This is from a window which appears over the top of a tree view and replaces the default Windows tooltip for that tree view.
private IntPtr _translate(IntPtr LParam)
{
// lparam is currently in client co-ordinates, and we need to translate those into client co-ordinates of
// the tree view we're attached to
int x = (int)LParam & 0xffff;
int y = (int)LParam >> 16;
Point screenPoint = this.PointToScreen(new Point(x, y));
Point treeViewClientPoint = _tv.PointToClient(screenPoint);
return (IntPtr)((treeViewClientPoint.Y << 16) | (treeViewClientPoint.X & 0xffff));
}
const int MA_NOACTIVATE = 3;
protected override void WndProc(ref Message m)
{
switch ((WM)m.Msg)
{
case WM.LBUTTONDBLCLK:
case WM.RBUTTONDBLCLK:
case WM.MBUTTONDBLCLK:
case WM.XBUTTONDBLCLK:
{
IntPtr i = _translate(m.LParam);
_hide();
InteropHelper.PostMessage(_tv.Handle, m.Msg, m.WParam, i);
return;
}
case WM.MOUSEACTIVATE:
{
m.Result = new IntPtr(MA_NOACTIVATE);
return;
}
case WM.MOUSEMOVE:
case WM.MOUSEHWHEEL:
case WM.LBUTTONUP:
case WM.RBUTTONUP:
case WM.MBUTTONUP:
case WM.XBUTTONUP:
case WM.LBUTTONDOWN:
case WM.RBUTTONDOWN:
case WM.MBUTTONDOWN:
case WM.XBUTTONDOWN:
{
IntPtr i = _translate(m.LParam);
InteropHelper.PostMessage(_tv.Handle, m.Msg, m.WParam, i);
return;
}
}
base.WndProc(ref m);
}
