What methods should a TEdit descendant override in order to react to key presses itself, instead of relying on some external function?
For clarification: my TEdit should accept only certain symbols when typing, not all of them.
The simplest solution is to assign OnKeyPress in the form's code, but I want an object that takes care of this itself, rather than delegating it to the TForm descendant.
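For context: in the VCL, the OnKeyPress event is fired from the protected KeyPress method, so a TEdit descendant can override that method and filter keys itself. A minimal sketch (TDigitEdit and the digits-only filter are illustrative; CharInSet lives in System.SysUtils):

type
  TDigitEdit = class(TEdit)
  protected
    // KeyPress backs the OnKeyPress event; setting Key to #0 swallows the keystroke.
    procedure KeyPress(var Key: Char); override;
  end;

procedure TDigitEdit.KeyPress(var Key: Char);
begin
  inherited KeyPress(Key);
  // Allow only digits and Backspace (#8) in this illustrative filter.
  if not CharInSet(Key, ['0'..'9', #8]) then
    Key := #0;
end;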
Related
I need to make a TImage hide or disappear after a certain key is pressed, while the program is still running.
I know that the OnKeyPress method can't really make changes to graphic items such as TImage, so is there any alternative way I could use in my situation?
Set the form's KeyPreview property to True and write an OnKeyPress/OnKeyDown/OnKeyUp handler for the form, where you hide that TImage.
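A minimal sketch, assuming the image is named Image1, Esc is the trigger key, and the form's KeyPreview is set to True (VK_ESCAPE comes from the Winapi.Windows unit):

procedure TForm1.FormKeyDown(Sender: TObject; var Key: Word; Shift: TShiftState);
begin
  if Key = VK_ESCAPE then
    Image1.Visible := False;  // hide the TImage while the program keeps running
end;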
I am creating my own GUI in OpenTK.
I want to fire a mouse event when the cursor is, for example, inside one of the GUI controls. How can I do that? Right now I'm just iterating through a list of items in the main class, and in the OpenTK window's MouseMove event I check whether the mouse coordinates fall within the "region" of the component I'm drawing.
This works for now, but I think it could be done in a better way. As it stands, my code is disorganized and lives in the main class, and I would rather have it in the specific component class.
What I would like is to have an event attached to each component of my GUI, so that I can define many events for one component.
I mean, I would like to have, for example, a button component with a method I can override (or just use) that fires when an event occurs, the same as OpenTK's window, where you can override event methods.
This is not a complete answer, because your question is quite broad, but I hope it helps.
In order to implement such a system, here are the core components for a potential design:
UI Components: Some kind of standard interface through which different component types can define logic for interactions. Depending on the language, the most common approach is probably something like a parent class Component, with methods to be overridden (a sketch follows this list). These would probably include things like:
Mouse Hover
Mouse Click (press / release)
Click drag
It will also likely need some additional associated information:
Some way to determine the component's location. This could be a bounding box, or perhaps a method that tests whether a given point is within this component or not.
Information or functionality for drawing the component.
Display and layering settings (is it visible or hidden, should it draw on top of other components or behind).
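As a concrete illustration, here is a minimal C# sketch of such a Component parent class (all names are illustrative, not part of OpenTK):

using System.Drawing;

public abstract class Component
{
    public Rectangle Bounds { get; set; }      // location and size, used for hit testing
    public bool Visible { get; set; } = true;  // display setting
    public int ZIndex { get; set; }            // layering: higher values draw on top

    // Hit test: a bounding-box check by default; subclasses may refine it.
    public virtual bool ContainsPoint(Point p) => Bounds.Contains(p);

    // Interaction hooks with empty defaults, so subclasses override only what they need.
    public virtual void OnMouseHover(Point p) { }
    public virtual void OnClickPressed(Point p) { }
    public virtual void OnClickReleased(Point p) { }
    public virtual void OnClickDrag(Point p) { }

    // Each component knows how to draw itself.
    public abstract void Draw();
}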
UI Context: The context is a structure that defines the set of components that exist in the UI. This could be something like a list of Components. To build your UI, you add components to this context. The context defines some behaviour (a sketch follows this list):
Managing components (add / remove / modify).
How to draw the entire context (for example, looping over each component and executing the draw functionality for each).
Handling of events (see the next section).
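Continuing the sketch, the context could look like this (again, the names are illustrative):

using System.Collections.Generic;
using System.Linq;

public class UiContext
{
    private readonly List<Component> components = new List<Component>();

    public IEnumerable<Component> Components => components;

    public void Add(Component c) => components.Add(c);
    public void Remove(Component c) => components.Remove(c);

    // Draw back-to-front so components with a higher ZIndex appear on top.
    public void Draw()
    {
        foreach (var c in components.Where(c => c.Visible).OrderBy(c => c.ZIndex))
            c.Draw();
    }
}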
Event Dispatch: To make your UI usable, you can insert an "adapter" layer that handles events from your windowing library (OpenTK), translates them into usable events for your components, and dispatches them. Here is an example of how this might work for a "click" event (a simple stateless version, written in C# to match the sketches above):
void TK_Event_ClickPressed(Point point)
{
    // context is the UiContext from the sketch above; forward the press to
    // every component whose area contains the point.
    foreach (var component in context.Components)
    {
        if (component.ContainsPoint(point))
            component.OnClickPressed(point);
    }
}
This is actually the trickier part of the design, in my opinion, because there are some subtle conventions around how component-based UI works. You don't necessarily have to follow them, but they're important to be aware of, because they're probably how people expect your UI to work:
After click press, click drag continues to occur until click release, even if the cursor leaves the component area.
"Actions" occur on click release.
Click release only takes action if the corresponding click press occurred on the same component.
The click release doesn't take any action if the cursor is no longer inside the component (leaving and re-entering the component before release still does the action, though).
You can only be actively clicking one component at a time (the one shown on top), even if multiple components overlap at that spot.
Assuming that you follow these conventions, this means that dispatching events is actually a bit more complicated than just checking if the event point was in a given component or not. You need to maintain some kind of state to keep track of whether the context is currently in a click or not, and which component, if any, is "consuming" the current click. That is, which component should be given the click release and drag events if they occur.
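A minimal sketch of such stateful dispatch, continuing the classes above (the conventions are simplified and the names are illustrative):

using System.Drawing;
using System.Linq;

public class EventDispatcher
{
    private readonly UiContext context;
    private Component active;   // the component consuming the current click, if any

    public EventDispatcher(UiContext context) { this.context = context; }

    public void OnClickPressed(Point p)
    {
        // Only the topmost visible component under the cursor consumes the click.
        active = context.Components
            .Where(c => c.Visible && c.ContainsPoint(p))
            .OrderByDescending(c => c.ZIndex)
            .FirstOrDefault();
        active?.OnClickPressed(p);
    }

    public void OnMouseMoved(Point p)
    {
        // Drag events keep going to the consuming component, even outside its bounds.
        active?.OnClickDrag(p);
    }

    public void OnClickReleased(Point p)
    {
        // The release only "takes action" if it lands inside the same component.
        if (active != null && active.ContainsPoint(p))
            active.OnClickReleased(p);
        active = null;
    }
}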
With these systems in place, you just need to create a window, create a UI context, register the adapter layer with the window to act on that context, set up the window to draw the context each frame, and then use the context to add, remove, and modify components in your program.
I've created a custom UIControl. The closest comparison is a UIStepper, but it is a subclass of UIControl because it's wholly custom.
For most UIControls you can register target-actions for primaryActionTriggered to avoid needing to know which specific event matters. I want the same convenience for my custom UIControl. So how do I map UIControl.Event.valueChanged to UIControl.Event.primaryActionTriggered?
I think if you are listening for the valueChanged event, then you just need to manually call sendActions(for: .primaryActionTriggered) when that event fires.
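A minimal sketch, assuming a hypothetical CustomStepper control that updates its value internally:

import UIKit

class CustomStepper: UIControl {
    var value: Int = 0 {
        didSet {
            // Fire both events so targets registered for either one are notified.
            sendActions(for: .valueChanged)
            sendActions(for: .primaryActionTriggered)
        }
    }
}

Targets can then register with addTarget(_:action:for:) using .primaryActionTriggered, exactly as they would with a built-in control.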
GWT's HTML5 Canvas wrapper can respond to mouse and keyboard events; it binds to five or six different types of events. My question is: is it possible to define an entirely new event system, such as a CanvasEvent (and a related CanvasEventHandler extending GwtEvent, etc.), bind it to the Canvas object, and then handle all events differently through a handler interface with methods like onDraw(), onDrag(), onMove(), onSelect(), and so on?
I don't have a clear picture of the GWT event system, but I want to know whether this is possible without individually attaching separate event handlers to build the logic for my problem. Can I access all possible events as one consolidated object and fire custom events based on conditions? What would be the best way to do it? There are threads about GWT custom events, but they involve senders, whereas in my case the sender (the Canvas) is already present.
Thanks much
Certainly - remember that all of the GwtEvent objects are completely artificial and are based on events fired from the native JavaScript. That JavaScript event object (which comes in via onBrowserEvent) gets wrapped up as a ClickEvent or a KeyDownEvent based on the details of the original object, and is then fired off via the internal HandlerManager instance. In the same way, you could listen for a MouseDownEvent, set some state, then if a MouseMoveEvent occurs, fire your own CanvasDrag event. You'd probably stop listening to those move events once a MouseUpEvent happens, at which point you would issue something like a CanvasDropEvent. If instead the MouseUpEvent occurred right away with no move, you could trigger a CanvasSelectEvent (or you might have something else in mind for that select event).
Each of these new event types you declare might then contain specifics about whatever is going on. For example, while a MouseMoveEvent just has the element that the mouse is over and the x/y coords, you might be indicating what is being dragged around the page. That might be the shape that was last clicked, or even the data it represents.
Yes, the 'sender', or source, is already there, but your code will be easier to use if you expose methods to add handlers, like addCanvasDragHandler, etc. This is not required - all users of your code could just call addHandler - but it removes some ambiguity about whether or not the widget supports the event in question. You would then call fireEvent on the canvas object to notify all handlers registered for that event type.
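As a sketch, a custom event type like the CanvasDrag event described above might look like this (CanvasDragEvent and its Handler are illustrative names, not part of GWT):

import com.google.gwt.event.shared.EventHandler;
import com.google.gwt.event.shared.GwtEvent;

public class CanvasDragEvent extends GwtEvent<CanvasDragEvent.Handler> {

    public interface Handler extends EventHandler {
        void onCanvasDrag(CanvasDragEvent event);
    }

    public static final Type<Handler> TYPE = new Type<Handler>();

    private final int x;
    private final int y;   // current drag position

    public CanvasDragEvent(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override
    public Type<Handler> getAssociatedType() {
        return TYPE;
    }

    @Override
    protected void dispatch(Handler handler) {
        handler.onCanvasDrag(this);
    }
}

// Wiring, assuming an existing Canvas instance named canvas:
//
//   canvas.addMouseMoveHandler(new MouseMoveHandler() {
//       @Override
//       public void onMouseMove(MouseMoveEvent e) {
//           // while a drag is in progress, translate the native move into the custom event
//           canvas.fireEvent(new CanvasDragEvent(e.getX(), e.getY()));
//       }
//   });
//
//   canvas.addHandler(new CanvasDragEvent.Handler() {
//       @Override
//       public void onCanvasDrag(CanvasDragEvent event) {
//           // react to the drag, e.g. redraw at event.getX()/event.getY()
//       }
//   }, CanvasDragEvent.TYPE);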
Or, you could make a new class that contains an internal Canvas widget - possibly a Composite, or just something that implements IsWidget (Canvas has a private constructor, so you can't subclass it). This would let you add your own particular handler methods and keep your own HandlerManager/EventBus to track the events you are concerned with.
In a UIView I have a nav button with an IBAction & method in the top-level view controller.
In the IBAction code, I flip a boolean so that when execution returns to the UIView, there's some new setup prior to drawRect: repainting the view.
If all this were in the view controller, I could put the new setup code in something like viewDidAppear so it executes each time the button is pressed. However, there's no such method at the UIView level. There is initWithCoder:, but it only seems to execute once (when the storyboard/nib loads).
So my question is: either, is there a way to call the initWithCoder: method explicitly from my IBAction at the VC level (I've tried [self initWithCoder:nil], but the breakpoint at the UIView level doesn't trigger), or is there a method that runs when execution returns to the UIView level, a la viewDidAppear?
Thanks
Unless you really know what you're doing (I mean really know), don't call -initWithCoder: yourself. You're meant to implement it, just as you implement -drawRect:, and let the system call it. If you ever find yourself calling something like this directly and you can't explain the deep technical reason why there's no other way, then it's the wrong approach. Read and follow the documentation (not just the method's doc) to make sure you understand whatever method you're using; it'll tell you.
That said, what you're wondering is whether there's a point in a view's lifecycle where you can "do something" (check a BOOL and perform some work depending on its value) any time the view "appears". The answer is yes, and -willMoveToSuperview "can" work.
BUT
That's the "wrong" approach, IMO. The BOOL property ("draw a twiddle next time I'm asked to draw") can and probably should live in the UIView, but its state should be set by its controller, since this is specific to your app. Views are supposed to be (highly) reusable; controllers are supposed to implement your app's specific logic and drive the views according to the model state and user (or system) actions.
So: when you want to enable the "draw a twiddle" operation, your view controller should set the view instance's drawTwiddle flag then probably flag the view for drawing. Your view will then have -drawRect: called at some point you shouldn't try to control and, when it does, it sees that self.drawTwiddle == YES and draws the twiddle along with whatever other drawing it does.
At that point, you might be tempted to have the view set its own drawTwiddle flag to NO, since the behavior is intended to fire once. Don't do this. BEWARE: other user actions or system events may cause -drawRect: to be called at any time, so the twiddle may not actually be seen by the user (it may appear and disappear faster than is visible). So the right thing to do is to make the controller (via some direct action, system event, or timer) responsible for setting and unsetting the drawTwiddle flag, then flagging the view for redisplay.
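A minimal sketch of that division of labor, assuming a hypothetical TwiddleView exposed to the controller through a twiddleView outlet:

#import <UIKit/UIKit.h>

// TwiddleView.h
@interface TwiddleView : UIView
@property (nonatomic) BOOL drawTwiddle;   // the view only reads this flag
@end

// TwiddleView.m
@implementation TwiddleView
- (void)drawRect:(CGRect)rect {
    // ... normal drawing ...
    if (self.drawTwiddle) {
        // draw the twiddle here; the view never resets the flag itself
    }
}
@end

// In the view controller:
- (IBAction)navButtonTapped:(id)sender {
    self.twiddleView.drawTwiddle = YES;    // the controller owns the flag's state
    [self.twiddleView setNeedsDisplay];    // flag the view for redisplay
}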
Adding
It's also unusual to put an IBOutlet or an IBAction in a UIView. Most of the time, unless you're creating some compound control whose parts aren't intended to be accessed and managed individually, your architecture is clearer (and more closely follows the spirit of the MVC design pattern) when you let the controller manage and own the outlets and actions.