Move controls with the mouse in Cocoa

I want to build a simple forms designer in Cocoa and need to move controls around on a form using the mouse (click, hold, move around, release).
Do I need to subclass each control class to intercept those events? Or is there a way to implement this generically for any control?

One way might be to have a single large custom view that fills all the space the controls will be in. Implement the mouse-event methods in this view, doing hit detection on the control views and moving them around. This approach requires only one custom subclass of NSView, and you can move any views or controls you want without subclassing them.

Write a custom view to contain the controls. Override -hitTest: to ignore the controls and return self instead. Then, when you receive mouse events, figure out which control they apply to and move it as appropriate.
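For concreteness, here is a minimal Swift sketch of that approach (the class and property names are mine, not a standard API): the container view claims every hit via -hitTest:, finds the topmost control under the cursor on mouse-down, and drags it with the mouse.
import Cocoa

// Hypothetical designer canvas: intercepts all mouse events for its
// subviews and lets the user drag them around.
class DesignerCanvasView: NSView {
    private var draggedControl: NSView?
    private var dragOffset = NSPoint.zero

    // Claim every hit in this view tree so the embedded controls
    // never see the mouse themselves.
    override func hitTest(_ point: NSPoint) -> NSView? {
        return super.hitTest(point) == nil ? nil : self
    }

    override func mouseDown(with event: NSEvent) {
        let location = convert(event.locationInWindow, from: nil)
        // Later subviews draw on top, so search in reverse order.
        draggedControl = subviews.reversed().first { $0.frame.contains(location) }
        if let control = draggedControl {
            dragOffset = NSPoint(x: location.x - control.frame.origin.x,
                                 y: location.y - control.frame.origin.y)
        }
    }

    override func mouseDragged(with event: NSEvent) {
        guard let control = draggedControl else { return }
        let location = convert(event.locationInWindow, from: nil)
        control.setFrameOrigin(NSPoint(x: location.x - dragOffset.x,
                                       y: location.y - dragOffset.y))
    }

    override func mouseUp(with event: NSEvent) {
        draggedControl = nil
    }
}
Any NSButton, NSTextField, etc. added as a subview of this canvas becomes draggable without being subclassed.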

Related

Programming a custom GUI in OpenGL

I am creating my own GUI in OpenTK.
I want to fire a mouse event when the cursor is, for example, over one of the GUI controls. How can I do that? Right now I'm just iterating through a list of items in the main class, and in OpenTK's window MouseMove event I check whether the mouse coordinates are within the "region" of the component I'm drawing.
This works for now, but I think it could be done in a better way. As it stands, the code is disorganized and lives in the main class, and I would rather have it in the specific component class.
What I would like is to have an event attached to each component of my GUI, so that I can define many events for one component.
I mean, I would like to have, for example, a button component with a method I can override (or just use) that fires when an event occurs, the same as OpenTK's window, where you can override events.
This is not a complete answer, because your question is quite broad, but I hope it helps.
In order to implement such a system, here are the core components for a potential design:
UI Components: Some kind of standard interface where different component types can define logic for interactions (see the sketch after this list). Depending on the language, the most common approach is probably something like a parent class Component, with methods to be overridden. These would probably include things like:
Mouse Hover
Mouse Click (press / release)
Click drag
It will also likely need some additional associated information:
Some way to determine the component's location. Could be providing a bounding box, or perhaps a method that tests if a given point is within this component or not.
Information or functionality for drawing the component.
Display and layering settings (is it visible or hidden, should it draw on top of other components or behind).
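As a concrete sketch of that interface, here is one possible shape in Swift (the design is language-agnostic, and every name here is a placeholder, not a real API):
// Placeholder geometry types so the sketch is self-contained.
struct Point { var x: Double; var y: Double }
struct Rect {
    var x = 0.0, y = 0.0, width = 0.0, height = 0.0
    func contains(_ p: Point) -> Bool {
        p.x >= x && p.x < x + width && p.y >= y && p.y < y + height
    }
}

// Base class for all UI components; concrete types (Button, Slider, ...)
// override only the hooks they care about.
class Component {
    var bounds = Rect()
    var isVisible = true
    var zIndex = 0   // higher values draw and hit-test on top

    // Hit test: components with non-rectangular shapes can override this.
    func containsPoint(_ p: Point) -> Bool { isVisible && bounds.contains(p) }

    // Event hooks.
    func onHoverEnter() {}
    func onHoverExit() {}
    func onClickPressed(at p: Point) {}
    func onClickReleased(at p: Point, inside: Bool) {}
    func onDrag(to p: Point) {}

    // Drawing hook: issue whatever OpenGL/OpenTK draw calls the component needs.
    func draw() {}
}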
UI Context: The context is a structure that defines the set of components that exist in the UI. This could be something like a list of Components. To build your UI, you would add components to this context. The context will define some behaviour:
Managing components (add / remove / modify).
How to draw the entire context (for example, looping over each component and executing the draw functionality for each).
Handling of events (see the next section).
Event Dispatch: To make your UI usable, you can insert an "adapter" layer that handles events from your windowing library (OpenTK), translates them into usable events for your components, and dispatches them. Here is an example of how this might work for a "click" event (pseudo-code):
function TK_Event_ClickPressed(point) {
    for component in context {
        if component.ContainsPoint(point) {
            component.EventClickPressed()
        }
    }
}
This is actually the trickier part of the design, in my opinion, because there are some subtle conventions around how component-based UIs work. You don't necessarily have to follow them, but they're important to be aware of, because they are probably how people expect your UI to work:
After click press, click drag continues to occur until click release, even if the cursor leaves the component area.
"Actions" occur on click release.
Click release only takes action if the corresponding click press occurred on the same component.
The click release doesn't take any action if the cursor is no longer inside the component (leaving and re-entering the component before release still does the action, though).
You can only be actively clicking one component at a time (the one shown on top), even if multiple components overlap at that spot.
Assuming that you follow these conventions, dispatching events is a bit more complicated than just checking whether the event point is inside a given component. You need to maintain some state: whether the context is currently in a click, and which component, if any, is "consuming" the current click. That is, which component should be given the drag and click-release events if they occur.
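Continuing the Swift sketch from above (and reusing its hypothetical Point and Component types), a context that tracks the consuming component might look like this:
class UIContext {
    private(set) var components: [Component] = []
    private var activeComponent: Component?   // consumer of the current click

    func add(_ component: Component) { components.append(component) }

    func draw() {
        // Draw back-to-front so higher zIndex values appear on top.
        for c in components.sorted(by: { $0.zIndex < $1.zIndex }) where c.isVisible {
            c.draw()
        }
    }

    // Wire this to the window's mouse-down event.
    func clickPressed(at p: Point) {
        // The topmost component under the cursor consumes the click.
        activeComponent = components
            .filter { $0.containsPoint(p) }
            .max { $0.zIndex < $1.zIndex }
        activeComponent?.onClickPressed(at: p)
    }

    // Wire this to mouse-move: drags keep flowing to the consumer
    // even after the cursor leaves its bounds.
    func mouseMoved(to p: Point) {
        activeComponent?.onDrag(to: p)
    }

    // Wire this to mouse-up: the "action" fires on release, and only
    // if the cursor is still (or again) inside the consuming component.
    func clickReleased(at p: Point) {
        if let c = activeComponent {
            c.onClickReleased(at: p, inside: c.containsPoint(p))
        }
        activeComponent = nil
    }
}
In OpenTK, the adapter would forward the window's MouseDown, MouseMove and MouseUp events to these three methods; hover enter/exit can be derived inside mouseMoved by comparing the hit component between successive calls.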
With these systems in place, you just need to create a window, create a UI context, register the adapter layer with the window to act on that context, set up the window to draw the context each frame, and then use the context to add / remove / modify components in your program.

Using drag events to switch rectangles in a grid in Windows Phone 7.5

I've run into a bit of a pickle with my puzzle game for Windows Phone.
I want to swap two adjacent rectangles, both on the same grid.
The tap event was easily implemented, but implementing drag seems to be a really big pain.
I'm also using a custom user control to get the rectangles onto the grid, so I need to create custom delegates before attaching events to my rectangle matrix.
I am currently using the ManipulationStarted and ManipulationCompleted events to implement the drag gesture, but there are a couple of problems:
1) I have to tell the difference between a tap and an actual drag, both of which are covered by the ManipulationCompleted event. This is the way I do it right now:
if (e.TotalManipulation.Translation.X == 0 && e.TotalManipulation.Translation.Y == 0)
{
    // no movement: treat as a tap
}
else
{
    // do drag stuff here
}
However, the "do drag stuff here" part does not seem to work, even when the translations are nonzero; it always executes the tap branch.
I am currently stuck using manipulation events because, as I said, I am using a custom control as an object prototype for my rectangle matrix and I need custom delegates for that, and apparently the GestureListener has no constructors for its event classes.
So, any suggestions on how to do this?
I figured out the answer just after posting this question.
You can actually attach a gesture listener to a custom control and create custom delegates: pass the drag gesture event parameter from the gesture listener's drag event to the delegate you create, and it works.

Drag and drop within a view?

I've been experimenting with the drag-and-drop support in Cocoa - draggingEntered:, draggedImage:beganAt:, etc. It looks like OS X only triggers "drag" events when you drag something out of one view and into another.
I have a very large view which I draw stuff inside, and I'm looking for a way to drag objects within it; the objects never leave the view, so the above messages don't seem to be generated, and no drag starts. Is there a way to do "drag and drop within a view", or do I have to implement it myself?
I'm pretty sure you can't do that with drag and drop. If the things you're trying to drag are objects (like NSBezierPath objects), you can do a hit test on them and then use mouseDown: and mouseDragged: to change your object's origin, but it's all up to you.
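Here is a minimal Swift sketch of that hand-rolled approach (the class is hypothetical): hit-test the drawn paths on mouse-down, then translate the hit path as the mouse drags.
import Cocoa

// Hypothetical canvas that drags its drawn paths around internally,
// without using the Cocoa drag-and-drop machinery.
class ShapeCanvasView: NSView {
    var paths: [NSBezierPath] = []
    private var draggedPath: NSBezierPath?
    private var lastPoint = NSPoint.zero

    override func mouseDown(with event: NSEvent) {
        let location = convert(event.locationInWindow, from: nil)
        draggedPath = paths.last { $0.contains(location) }  // topmost hit
        lastPoint = location
    }

    override func mouseDragged(with event: NSEvent) {
        guard let path = draggedPath else { return }
        let location = convert(event.locationInWindow, from: nil)
        // Shift the whole path by the cursor delta since the last event.
        path.transform(using: AffineTransform(translationByX: location.x - lastPoint.x,
                                              byY: location.y - lastPoint.y))
        lastPoint = location
        needsDisplay = true
    }

    override func mouseUp(with event: NSEvent) {
        draggedPath = nil
    }

    override func draw(_ dirtyRect: NSRect) {
        NSColor.systemBlue.setFill()
        for path in paths { path.fill() }
    }
}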

Implementing NSTextInputClient without NSView

I have an application with custom widgets and a custom event-handling model (I'm rendering in OpenGL). I would like to implement a text edit view taking advantage of the Cocoa text input structures, but I don't know how to generate NSEvent objects to pass to NSTextInputContext. In particular, I'm having problems providing a window number, a graphics context, and mouse cursor coordinates (since I have to provide them in the window's coordinate system). The graphics context probably isn't needed, but the mouse coordinates are necessary to handle mouse selection events.
Is there any way I can solve this?

Scroll gestures not passed to an IScrollInfo-implementing panel in Windows Phone 7 CTP

I am using a custom panel as the ItemsPanel for an ItemsControl, with a custom template that provides a scroll viewer. (See the XAML below.) As long as my panel does not implement IScrollInfo, scrolling works in this scenario.
I then implement IScrollInfo and update my viewport and extent sizes in MeasureOverride. The scroll bar shows the correct relative size, and if I call the IScrollInfo methods directly, scrolling works as expected. However, the drag and flick gestures no longer scroll the content. Putting a breakpoint at the entry of every IScrollInfo method shows that drag and flick never call the interface. Removing the IScrollInfo interface declaration restores the scroll-on-drag-and-flick behavior.
Is there a simple way to restore the flick and pan gestures to ItemsControls whose panels implement IScrollInfo?
An unfortunate answer I received from Eric Sink, an MSFT forum moderator:
I believe that what is happening is that, when you inherit from IScrollInfo, your panel takes over all of the scroll functionality, but we use an internal interface, as Martin mentioned, to control the flick animation. Since your object does not inherit from this interface, the underlying code will bypass this functionality.
I think that you should still be able to override the OnManipulation* events and set up your own storyboard animation.
It sounds like if you want to do IScrollInfo, you're on your own for the manipulation.
