Drag and Drop in Windows Phone 7.1

Windows Phone 7.1 (Mango), Silverlight 4, VS 2010/Expression Blend.
I have a UI element (an ellipse) that I've added the MouseDragElementBehavior behavior to, so now I can drag my ellipse all over my other elements. Great. What I can't figure out is how to determine where (what other UI element, specifically in this case a canvas) it was dropped on. Imagine a game board where you click and drag a piece from one square to another.
How can I determine if it's been "dropped" on another element?

The VisualTreeHelper class contains a method, FindElementsInHostCoordinates, to which you can pass the Point or Rect location of your Ellipse; it returns all the UIElements that exist at that location, so you can act accordingly.
You might find the remarks section in FindElementsInHostCoordinates useful.
I think you can use that method with no problem in basic scenarios. I used it for a while in one of my applications, then I switched to a manual method where I just loop over the controls, check whether an intersection (or Point containment) occurs, and take the first control that is hit.
Please tell me if you have reached a better solution because I'm looking for ideas better than what I have already done! Thanks.

Related

Appium: How to differentiate between two different iOS screens?

I am developing a testing algorithm for our iOS apps using Appium. To fully implement this algorithm I need to identify whether I have moved onto a different screen or am still on the same screen after performing some action. I need to know: what makes every screen unique/different from the others in terms of Appium?
Going through the pageSource of every screen, I found that most screens have an xpath attribute in the window element. Can I use the value of the window element's xpath to mark the screen as unique from the others, or do I need to do a trivial string comparison between the screens' pageSources to mark them as different? Or is there some other, better solution?
Not sure if xpath would be the best solution for this. Normally the UIAWindow would remain the same, and developers might use different containers within this UIAWindow to render different screens.
So to verify different screens, you might need to figure out what this container is and see whether the container's properties change when you move to a new screen (i.e., a new container).
If your app uses a different header for every new screen, then you can use this header to see if the screen has changed. Example: in WhatsApp, you would see a different person's name at the top, so in this case the person's name can be treated as the header.
If this doesn't work, then you can verify some of the other controls, or, say, the list of all the UIAStaticText elements on the screen. During a screen change the entire list of UIAStaticText elements might change, so this can indicate a screen change.
For our automation suite at work I've implemented a series of screen-check steps. Every time we switch screens I do a find_element command for an element that is unique to that screen. That way, if a button or option takes me to an incorrect screen, my test fails as expected; if it does find the element we're expecting, it adds minimal time to the test suite.
Anish Pillai made a good suggestion of using the header text if there is any. Otherwise a particular tab, menu text, resource_id, or whatever is unique about the page would suffice. All you would need to do is a find_element call and a failure message if it fails.

How to find absolute value of caret position in pixels using Cocoa in MacOS?

For mouse I'm using:
CGEventRef ourEvent = CGEventCreate(NULL);
CGPoint currentpos = CGEventGetLocation(ourEvent); // mouse location in global (screen) coordinates
What can I use for the caret?
First the bad news.
Not every app is Cocoa-based, and those that are neither Cocoa nor Carbon nor a straight mix of the two—i.e., those based on wxWidgets, Qt, or some other cross-platform framework—typically reimplement the entire GUI stack on top of raw event and drawing primitives.
That means that there is typically no way to get this information from those applications (unless they're scriptable and expose it that way).
The good news is, Cocoa apps and some Carbon apps may expose this via Accessibility.
The user will need to have assistive devices turned on in System Preferences. Once that condition is met, you can use the Accessibility framework to get the frontmost application, get its focused window, get its focused view, and get its selection ranges.
A text view with an insertion point has exactly one selection range, and that range is empty (length=0). The location is where the insertion point is.
Of course, those are character indexes, not on-screen bounds.
That's where parameterized attributes come in. There's one for converting ranges to bounds. That's the one you want.
Theoretically (I haven't tried this), you should be able to convert the empty range of the insertion point to an empty or nearly-empty rectangle whose location is somewhere within the vertical line of the insertion point.
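To make that range-to-bounds step concrete, here is a minimal sketch using the C-level Accessibility API (it compiles as C++ or Objective-C++; link against ApplicationServices). It assumes access for assistive devices is enabled and that the focused element is a text view; for brevity it uses the singular kAXSelectedTextRangeAttribute and skips most error handling.
#include <ApplicationServices/ApplicationServices.h>
#include <cstdio>

int main()
{
    AXUIElementRef systemWide = AXUIElementCreateSystemWide();

    // Focused UI element across all apps (usually the focused text view).
    AXUIElementRef focused = NULL;
    if (AXUIElementCopyAttributeValue(systemWide, kAXFocusedUIElementAttribute,
                                      (CFTypeRef *)&focused) != kAXErrorSuccess)
        return 1;

    // The selected text range; for a bare insertion point its length is 0.
    AXValueRef rangeValue = NULL;
    if (AXUIElementCopyAttributeValue(focused, kAXSelectedTextRangeAttribute,
                                      (CFTypeRef *)&rangeValue) != kAXErrorSuccess)
        return 1;

    // Parameterized attribute: convert the (empty) range to screen bounds.
    AXValueRef boundsValue = NULL;
    if (AXUIElementCopyParameterizedAttributeValue(
            focused, kAXBoundsForRangeParameterizedAttribute,
            rangeValue, (CFTypeRef *)&boundsValue) != kAXErrorSuccess)
        return 1;

    CGRect caretBounds;
    AXValueGetValue(boundsValue, kAXValueCGRectType, &caretBounds);
    printf("caret at %f, %f\n", caretBounds.origin.x, caretBounds.origin.y);

    CFRelease(boundsValue);
    CFRelease(rangeValue);
    CFRelease(focused);
    CFRelease(systemWide);
    return 0;
}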
Make sure you test this with text views that are in scroll views, particularly when the insertion point is scrolled partially or completely out of view.
You'll want to use the Accessibility Inspector to see for yourself where your application will need to look, and to test individual applications and investigate reported failures.
You can get it from the Developer Downloads page, in the “Accessibility Tools” disk image.
If you want to focus a window, forging a mouse event to click on it is a bad idea—anything can happen if you click on the wrong thing. Send the window a kAXRaiseAction action instead.
If you want to set a text view's insertion point (and are looking to find where you need to forge a mouse event to click to set it in the desired position), again, that's a bad way to do it. Set the view's kAXSelectedTextRangesAttribute attribute instead. Again, an insertion point is a single empty range.
Did you try something like this?
NSPoint p = [[NSApp currentEvent] locationInWindow]; // event location in window coordinates, not the text caret
CGFloat X = p.x;
CGFloat Y = p.y;
NSLog(@"%f %f", X, Y);

Qt: is my window overlapped by another?

I know a solution in WinAPI that enumerates all visible windows and checks whether they intersect with my window...
But I need a cross-platform solution for Qt (3 or 4, no matter); maybe someone can help me with it?
Thanks.
To simply check if your window is active/has the keyboard focus you can check whether the Qt::WindowState is Qt::WindowActive.
To check your window for overlappings/intersections with other windows (I think that was your question) I can only think of using a little work-around.
The QWidget class has a function QWidget::visibleRegion() which returns a QRegion. Basically this region is the space in which paint-events can occur, that means that this is the space not covered by anything else. You can check whether the size of this region roughly matches the size of your window to see if there is any space covered by something else.
I didn't test this, so I can't tell you if it works for all platforms you need it to work on.
Edit: According to your comment:
This is what I found in the qt 4.6 reference about visibleRegion():
Returns the unobscured region where paint events can occur. For visible widgets, this is an approximation of the area not covered by other widgets.
So if the size of this unobscured region is approximately the size of your window, then your window isn't covered by anything.
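For illustration, a rough sketch of that comparison against Qt 4 (the helper name and the 95% threshold are arbitrary choices of mine, not anything from the Qt API):
#include <QWidget>

// Rough check: compare the unobscured (paintable) region's area with the
// widget's own area. If a noticeable share is missing, something overlaps us.
bool isMostlyUnobscured(const QWidget *w, double threshold = 0.95)
{
    int visibleArea = 0;
    foreach (const QRect &r, w->visibleRegion().rects())
        visibleArea += r.width() * r.height();

    const int totalArea = w->width() * w->height();
    if (totalArea == 0)
        return false;

    return visibleArea >= threshold * totalArea;
}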

Stacking widgets in Gtk+

Is there a way in Gtk+ to stack one widget on top of another -- not counting GtkFixed? GtkFixed doesn't work well for two reasons: 1) I need Z order, and 2) I need one widget to stretch and fill provided space.
I had this exact issue using a Gtk::Fixed (actually gtk.Fixed -- pygtk -- but I think it's all the same underneath) and I was able to handle it quite easily by manipulating each widget's window.
In my case, the widgets already are EventBox instances, and I just needed to make sure that the one I was dragging around was on top, because otherwise it slid underneath others, which looked quite wrong. The solution was as simple as calling "widget.window.raise_()" to raise the widget's underlying window when the widget was clicked to begin the drag.
So I'm basically just reaffirming that the previous answer works, but I wanted to point out that it's actually pretty easy. It sounds like you may need to create some EventBoxes to hold your widgets, but after that it should just work.
You can see the code I was working on at http://github.com/divegeek/BlockHead
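For reference, the equivalent of that raise_() call against the GTK+ 2 C API would look roughly like this, in a handler connected to the event box's "button-press-event" signal (the handler name is illustrative):
/* Sketch: raise a dragged EventBox above its siblings when it is clicked. */
static gboolean
on_piece_pressed(GtkWidget *widget, GdkEventButton *event, gpointer user_data)
{
    GdkWindow *win = gtk_widget_get_window(widget); /* the widget's GdkWindow */
    if (win != NULL)
        gdk_window_raise(win);                      /* same as raise_() in pygtk */
    return FALSE;   /* let other handlers (e.g. the drag code) run too */
}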
I don't think there is a proper container in standard GTK. I would subclass Gtk::Fixed... it is still the closest one you can get, and if you use gtkmm then subclassing shouldn't be very difficult¹. Then you can control the dimensions of all widgets, stretching one selected child to fill space.
To control the Z-axis you will probably need to manipulate the widgets' X windows--check the GDK documentation on the topic of GDK windows². I remember that in PyGTK each widget has a gtk.Widget.window property; I guess the same holds for gtkmm. This assumes that all your child widgets have X windows, so e.g. you'll need to wrap a Gtk::Label inside a Gtk::EventBox. (A rough gtkmm sketch follows the footnote links below.)
¹ http://www.gtkmm.org/docs/gtkmm-2.4/docs/tutorial/html/chapter-customwidgets.html
² http://www.gtkmm.org/docs/gtkmm-2.4/docs/reference/html/classGdk_1_1Window.html#6eef65b862344ad01b01e527f2c39741
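As a sketch of that subclassing idea in gtkmm 2.4 (the class name and the single stretched "background" child are my own illustration, not an existing gtkmm widget):
#include <gtkmm.h>

// A Gtk::Fixed subclass that stretches one designated background child to
// fill the whole allocation; other children keep the coordinates given to put().
class StretchFixed : public Gtk::Fixed
{
public:
    explicit StretchFixed(Gtk::Widget& background)
    : m_background(background)
    {
        put(m_background, 0, 0);   // added first, so it stays at the bottom
    }

protected:
    virtual void on_size_allocate(Gtk::Allocation& allocation)
    {
        Gtk::Fixed::on_size_allocate(allocation);
        // Hand the background child the container's entire allocation so it
        // stretches to fill; other children stay where put() placed them.
        m_background.size_allocate(allocation);
    }

private:
    Gtk::Widget& m_background;
};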
I was able to do exactly that (display one widget filling the entire space and another on top at a fixed location, both visible at the same time) in gtk2 by using GtkFixed, working out a suitable insertion order (first inserted = bottom, last inserted = top), and, most importantly, forcing GtkFixed to have its own window:
GtkWidget *f = gtk_fixed_new();
gtk_widget_set_has_window(f, TRUE);
If you don't call gtk_widget_set_has_window(f, TRUE), all the children's windows will be inserted into some parent widget/window (see gtk_fixed_realize() in gtkfixed.c), and that might create a z-order mess.
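Putting the insertion-order point together with that call, a small illustrative helper (the board/piece widgets and the 64,64 position are placeholders):
#include <gtk/gtk.h>

/* Stack `piece` on top of `board` inside a GtkFixed.
   Insertion order decides the stacking: first in = bottom, last in = top. */
static GtkWidget *
make_stacked_area(GtkWidget *board, GtkWidget *piece)
{
    GtkWidget *fixed = gtk_fixed_new();
    gtk_widget_set_has_window(fixed, TRUE);         /* crucial for sane z-order */

    gtk_fixed_put(GTK_FIXED(fixed), board, 0, 0);   /* added first: bottom layer */
    gtk_fixed_put(GTK_FIXED(fixed), piece, 64, 64); /* added last: drawn on top  */
    return fixed;
}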

How do I detect which Window is obscuring another?

If I have the handles to two windows, how can I tell whether one is obscuring the other? Obviously I can easily do a collision test, but how do I test / find out their "z order"? The windows are from totally different apps.
I am probably missing something fairly obvious..
WindowFromPoint: use a point inside the area where the two windows overlap, and see whether you get back one window's handle or the other's.
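A rough Win32 sketch of that probe (the helper name is mine; note that WindowFromPoint can also return some third window that happens to cover both):
#include <windows.h>

/* Probe the middle of the two windows' intersection and see which top-level
   window the system reports at that point. Returns NULL if they don't overlap. */
HWND TopWindowAtOverlap(HWND hwndA, HWND hwndB)
{
    RECT ra, rb, ri;
    if (!GetWindowRect(hwndA, &ra) || !GetWindowRect(hwndB, &rb))
        return NULL;
    if (!IntersectRect(&ri, &ra, &rb))
        return NULL;                                /* no overlap at all */

    POINT pt = { (ri.left + ri.right) / 2, (ri.top + ri.bottom) / 2 };
    HWND hit = WindowFromPoint(pt);                 /* may be a child window */
    return hit ? GetAncestor(hit, GA_ROOT) : NULL;  /* normalize to top level */
}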
For partial obscuration, you can use the clipping system. I discuss this in more detail on my website here
This page talks about the Z ordering of windows. It doesn't mention a function to get the Z order directly, but it does point at GetNextWindow(), which given one window can return the next (or previous, don't let the name fool you) in the Z order. Using that, you should be able to figure it out.
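For example, walking down the Z order from the top with GetNextWindow() (which is just GetWindow() with GW_HWNDNEXT); whichever of the two handles you meet first is the higher window:
#include <windows.h>

/* Returns TRUE if hwndA sits above hwndB in the Z order (both top-level). */
BOOL IsAboveInZOrder(HWND hwndA, HWND hwndB)
{
    /* GetTopWindow(NULL) gives the highest top-level window; GW_HWNDNEXT
       then steps downward through the Z order. */
    for (HWND h = GetTopWindow(NULL); h != NULL; h = GetNextWindow(h, GW_HWNDNEXT))
    {
        if (h == hwndA)
            return TRUE;    /* reached A first, so A is higher */
        if (h == hwndB)
            return FALSE;   /* reached B first, so B is higher */
    }
    return FALSE;           /* neither handle found */
}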
