Examples of using body_entered and body_exited in Godot 3D (version 3.4.4) - collision

I am trying to make a teleport part, so I need to detect when a body collides with an area. I tried to use body_enter/body_entered and body_exit/body_exited, but I do not know exactly how they work or where I need to insert them. Can I have example code for using them?
My node tree:
RigidBody
├ CollisionShape
├ MeshInstance
└ Area
   └ CollisionShape

Scene tree structure
First of all, if you want two things to collide, you probably want them to be siblings, not one Node a child of the other. This is because Spatials are placed relative to their parent. Said another way: when a Spatial moves, its children move with it.
Enable monitoring
Second, you want to make sure that the Node that will detect the collision is able to do it. In the case of a RigidBody and an Area, the Area will detect the RigidBody, and for that its monitoring property must be true. This is the default, so unless you have been messing with it, you need to do nothing.
Layers and masks
Third, you want the layers and masks to overlap. In particular, you want the collision_mask of the Area to have bits in common with the collision_layer of the RigidBody. By default, they come with the first bit set, so this is also satisfied by default unless you have been messing with it.
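If you prefer to check or set this from code, here is a minimal sketch; the node paths are placeholders for wherever the nodes live in your scene, and the "first bit" is index 0:
# Make sure the Area's mask and the RigidBody's layer share bit 0.
$Area.set_collision_mask_bit(0, true)
$RigidBody.set_collision_layer_bit(0, true)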
Signals
Fourth, you need to connect the "body_entered" signal of the Area. You have two ways to do that:
Using the editor:
Have the Area selected in the Scene panel (on the left by default), then go to the Node panel (on the right by default) and open the Signals tab.
There you should find "body_entered(body: Node)". Double-click it, or select it and then click the "Connect…" button at the bottom, or press Enter.
Godot will ask you to select the Node you will connect it to. You can connect it to any Node in the same scene that has a script attached beforehand. I suggest connecting it to the Area itself.
Godot will also allow you to specify the name of the method that will handle it (by default the name will be _on_Area_body_entered).
Under "advanced" you can also add extra parameters to be passed to the method, whether or not the signal can/will wait for the next frame ("deferred"), and if it will disconnect itself once triggered ("oneshot").
By pressing the Connect button, Godot will connect the signal accordingly, creating a method with the provided name in the script of the target node if it does not exist.
From code:
To connect a signal from code you use the connect method. See also disconnect and is_connected.
So you can use the connect method to connect the "body_entered" signal, like this:
func _ready() -> void:
    # Connect this Area's "body_entered" signal to the handler below.
    connect("body_entered", self, "_on_Area_body_entered")

func _on_Area_body_entered(body: Node) -> void:
    pass # React to the body that entered here.
This code is intended to work in a script attached to the Area itself. It is also possible to call connect from one Node to another. Please note that we are passing the name of the method we want to connect as a parameter to the connect method, so it can be whatever you want.
The connect method has optional parameters to specify extra parameters, or if you want the connection to be deferred or oneshot.
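For example, a one-line sketch using the flag constants from the Godot 3 Object API (the empty array is the list of extra bound parameters):
# Deferred, one-shot connection with no extra bound arguments.
connect("body_entered", self, "_on_Area_body_entered", [], CONNECT_DEFERRED | CONNECT_ONESHOT)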
To reiterate: simply adding a method won't work, you need to connect the signal to it. And there is no special name for the method, it can have any name you choose.
With that done, the method should execute when the RigidBody overlaps the Area.
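Since the goal is a teleport part, here is a minimal sketch of what the handler could look like, attached to the Area. The destination variable and its value are placeholders for your own scene, and the direct transform reset is just the simplest approach:
extends Area

# Hypothetical destination point; set it in the Inspector.
export var destination = Vector3(0, 0, 10)

func _ready() -> void:
    connect("body_entered", self, "_on_Area_body_entered")

func _on_Area_body_entered(body: Node) -> void:
    if body is RigidBody:
        # Teleport by resetting the body's origin. A RigidBody keeps its
        # velocity through this, so you may also want to zero linear_velocity.
        body.global_transform.origin = destination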

Related

How do you open a GUI when you touch a brick? (with Filtering Enabled)

I'm trying to make a Shop enclosure: when you touch a brick, it'll open the Shop GUI.
Now the main problem is that I do not know how to make the GUI open, since using scripts with Filtering Enabled just won't cut it.
Does anyone have a solid explanation?
First of all, in order to execute any action when a brick is touched, you will need to use the .Touched attribute of your brick. Your brick has this attribute because it is a data type called a Part.
Second of all, I am not sure how you want the GUI to open, but the most basic way is to enable it using the .Enabled property of your GUI. This will simply make it appear on screen. Your top-level GUI container has this property because it is a ScreenGui; the elements inside it, whether a Frame, TextButton, or whatever else, are GuiObjects and are shown or hidden with .Visible instead.
The code will look something like this:
brick = path.to.part.here
gui = path.to.gui.here

function activateGui() -- shorthand for "activateGui = function()"
    gui.Enabled = true
end

brick.Touched:connect(activateGui)
Notice that .Enabled is a boolean (true or false). Also, notice that .Touched is a special object with a :connect(func) function. This is because .Touched is actually an Event. All Events have a special :connect(func) function that takes another function func as an argument, which is executed when the event occurs. In this case, we asked the brick's .Touched event to execute activateGui when it occurs.
Also, .Enabled is set to true by default, so in order for this method to work, make sure you set it to false in ROBLOX Studio by unchecking .Enabled in the Properties tab for the ScreenGui. Note that you do not have to hide every single element of the GUI; when the ScreenGui is disabled, all of its children are automatically hidden, so you only have to do it on that one parent.
Finally, you must do this in a Local Script. Because the GUI is unique for every single player, it is actually handled by each player's computer, not the ROBLOX server itself. Local Scripts are scripts that are specifically handled by a player's computer and not the server, so it is crucial that you do not try to do this with a regular Script, which is handled by the server.
For your information, the above code can be condensed to this if you would like:
brick = path.to.part.here
gui = path.to.gui.here

brick.Touched:connect(function()
    gui.Enabled = true
end)
This is because you do not have to create a function, name it, and then give that name to .Touched; instead, you can just create it right on the spot.
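To make the placeholder paths concrete, here is a sketch of how this might look inside a LocalScript; ShopGui and ShopBrick are hypothetical names, so substitute your own:
-- LocalScript, placed e.g. in StarterPlayerScripts.
local player = game.Players.LocalPlayer
-- The ScreenGui is copied from StarterGui into each player's PlayerGui.
local gui = player:WaitForChild("PlayerGui"):WaitForChild("ShopGui")
local brick = workspace:WaitForChild("ShopBrick")

brick.Touched:connect(function()
    gui.Enabled = true
end)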

How to tell NSTextView which side to extend selection on?

If I have an NSTextView which is in this state:
How can I tell the text view that if a user presses shift+right, rather than extending right towards the 'o', it instead de-selects the 'e'? I thought this had to do with the affinity, but I have tried setting it to both NSSelectionAffinityUpstream and NSSelectionAffinityDownstream via the following code:
[self setSelectedRange: NSMakeRange(9, 6)
              affinity: x
        stillSelecting: NO];
But that made no difference. Hitting shift+right still selected the 'o'.
NSTextView knows how to do this SOMEHOW, because if you place your cursor between 'w' and 'o', then hit shift+left until it matches the screenshot, then hit shift+right, it matches the behaviour I mentioned.
I'm ok to override the shift+arrow code and roll my own, but I would rather allow NSTextView to do its own thing. Anyone know if I am missing anything?
I'm not sure what you're trying to do, since this is the default text movement/selection modification behavior. Are you trying to override this to always do this one thing or are you trying to add another keyboard shortcut that overrides this behavior? In either case, some background:
Selection affinity doesn't quite work the way it sounds (this is a surprise to me after researching it just now). In fact there seems to be a disconnect between the affinity and the inherited NSResponder actions corresponding to movement and selection modification. I'll get to that in a moment.
What you want to look at are the responder actions like -moveBackwardAndModifySelection:, -moveWordBackwardAndModifySelection:, and so on. Per the documentation, the first call to such selection-modifying movement actions determines the "end" of the selection that will be modified with subsequent calls to modify selection in either direction. This means that if your first action was to select forward (-moveForwardAndModifySelection:), the "forward end" (right end in left-to-right languages; left end in right-to-left, automagically) is what will be modified from that point forward, whether you call -moveForward... or -moveBackward....
I discovered the disconnect between affinity and -move...AndModifySelection: by looking at the open-source Cocotron version of NSTextView. It appears the internal _affinity property isn't consulted in any of the move-and-select methods. It determines whether the selection range change should be "upstream" or "downstream" based on _selectionOrigin, a private property that is set when the selection ranges are modified (via the -setSelectedRange(s)... methods or a mouse down / drag). I think the affinity property is just there for you to consult; overriding it to always return one value or another doesn't change any behavior, it just misreports to outsiders. The _selectionOrigin seems only to be modified via the -setSelectedRanges... method if the selection is zero-length (i.e., just cursor placement).
So, with all this in mind, you might have to add one step to your manual selection modification: First set an empty selection with a location where you'd like the selection origin to be (forward end if you want backward affinity; backward end if you want forward affinity), then set a non-zero-length selection with the desired affinity.
Roundabout and ridiculous, I know, but I think that's how it has to be, given the Cocotron source code.
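Concretely, that two-step workaround might look like this; the ranges are placeholders for your actual selection, and the method names are the standard NSTextView ones:
// 1. Zero-length selection at the forward end: this sets the private
//    selection origin, giving later selections backward ("upstream") affinity.
[textView setSelectedRange:NSMakeRange(15, 0)];
// 2. Now select backward from that origin; shift+right should then
//    deselect the 'e' end instead of extending toward the 'o'.
[textView setSelectedRange:NSMakeRange(9, 6)
                  affinity:NSSelectionAffinityUpstream
            stillSelecting:NO];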

Looking for a specific control (sketch included)

I am looking for a control many of us probably know, but I don't know its name and don't have a real screenshot at hand, just this sketch:
In the left box one can select an operation or whatever, which is then moved to the right side. With the up/down arrows on the right, one can move this operation (or whatever the entry represents) up or down in the order of execution.
What is this kind of control called? Or is it normally built by developers out of single controls? Is this control available in JavaFX 2? If not, I don't need exactly this control, but I do need a control with the following features:
User can select multiple operations (duplicates allowed) out of all available operations
The user can arrange their order of execution
Thanks for any hint :-)
You need to use multiple controls to build up your interface. Use two ListViews with a MultipleSelectionModel for each (or at least the left one), add a couple of buttons that copy selected items from one list to the other, and another couple of buttons that move the selected items up or down in the right list by modifying the view's underlying item list.
listView.getSelectionModel().setSelectionMode(SelectionMode.MULTIPLE);
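A sketch of the rest of the wiring under those assumptions; leftListView, rightListView, and the button labels are illustrative, and bounds checks are kept minimal:
import java.util.Collections;
import javafx.scene.control.Button;

Button copyButton = new Button(">");
// Copy every selected item from the left list to the right one (duplicates allowed).
copyButton.setOnAction(e ->
    rightListView.getItems().addAll(
        leftListView.getSelectionModel().getSelectedItems()));

Button upButton = new Button("Up");
// Swap the selected right-hand item with its predecessor and keep it selected.
upButton.setOnAction(e -> {
    int i = rightListView.getSelectionModel().getSelectedIndex();
    if (i > 0) {
        Collections.swap(rightListView.getItems(), i, i - 1);
        rightListView.getSelectionModel().clearAndSelect(i - 1);
    }
});
A matching "Down" button is the same with i + 1 and an upper-bounds check.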

Detecting SetCursorPos()?

You can probably figure out why I am asking this question. Even if not, it's very simple.
My question is whether it is possible to detect the use of SetCursorPos() in one's own application, without scanning other running applications for calls to this API.
For example, if I have my cursor in a window and I call SetCursorPos(), can this window in any way know that the cursor placement did not come directly from the mouse (raw input)?
I am not oblivious to the fact that you can 'know' whether a mouse input is raw simply by checking how the position alters; for example, if the position changes from 100(X) & 100(Y) to 500(X) & 500(Y) without moving through each individual location between the two, then with certainty something has altered the mouse position.
If any of you know of a way to produce 'raw mouse input', without any application being able to tell the difference between the output of a function and that of a mouse--if there is such a difference--then that'd suffice, too.
Of course, whenever I move my mouse, the operating system I am using detects this and moves the cursor accordingly. In principle, I should be able to alter this low-level functionality at will?
There is no way for a window to directly determine how the mouse was moved. External applications could be using SetCursorPos(), but they could also be using lower-level functions like mouse_event() or SendInput() instead. By the time the notification reaches the target window, the OS has already normalized the data and any source information is lost. If you really needed to detect the use of SetCursorPos() or other functions, you would have to directly hook into those functions in every running process. Alternatively, you might try registering for "Raw Input" via RegisterRawInputDevices() and see if you get a corresponding notification from the mouse hardware directly, assuming those simulating functions do not trigger Raw Input notifications as well.
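A minimal sketch of that registration (standard Win32 calls; hwnd is assumed to be a window you already created, and the WM_INPUT handling is elided):
#include <windows.h>

// Ask Windows to deliver WM_INPUT messages for generic mouse hardware.
BOOL RegisterForRawMouseInput(HWND hwnd)
{
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;            // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;            // HID_USAGE_GENERIC_MOUSE
    rid.dwFlags     = RIDEV_INPUTSINK; // deliver input even when not focused
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// In the window procedure, compare the stream of WM_INPUT messages with
// WM_MOUSEMOVE: a cursor jump with no corresponding WM_INPUT from a real
// device suggests (but does not prove) that the movement was injected.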

How to design a user interface for high-latency conditions?

Working on an application that controls a remote robot where there is the potential for significant delay between pressing a button and that action actually happening. Furthermore, there is the potential that the command did not successfully reach the intended recipient after all (due to network unreliability, etc.). Additionally, there are variables in play whose changes are not instantaneous. For instance, there is a variable both for commanded speed as well as current speed; changing the commanded speed will not immediately make the current speed match that value.
The question is, how do I make the application reflect both the current state the remote robot is reporting and acknowledge to the user that his command was understood by the application, even though the system has not yet received confirmation of it from the robot? (Popups are an absolute no-go.)
Some ideas that have been discussed:
Disable Buttons
When a command button is pressed, start a timer for some reasonable number of seconds and disable the button during that time. Don't update the corresponding label directly, but instead wait for a response from the robot (e.g., if you press a Speed + button and to its right is a text label showing the current speed, don't immediately change the label but instead wait for the robot's response). Once this response occurs, or when the timer expires, re-enable the button.
Pros: No additional control widgets needed on page. Labels always reflect current state of the robot.
Cons: If you wanted to send two speed updates in a row, you would have to wait until the first had been received and acknowledged. It would feel sluggish and unresponsive.
Logging Info
Have a log that users can view that shows a textual representation of all the actions the user has taken, timestamped and with the history clearly visible. It could be color coded based on user preferences.
Pros: The user has immediate feedback that his command was understood, as it appears in the log.
Cons: Does not resolve the problem of what to do with button (especially radio button) behavior.
Does anyone have experience with building UIs for environments in which there is significant latency between action and response? I would appreciate any and all input.
I would not go for a log: your main focus is the widgets. There are several techniques for reporting status for different components; I will discuss a simple one here:
A button has a status icon next to it showing its status. Use different colors to denote the latency. Green means "ready". When the user clicks the button, the icon changes to orange, which indicates busy. When the user clicks again, the color changes to red, which means queued. When the queue is empty, the color changes back to orange, and once the action has been executed, it changes to green.
A slider can be used for floating-point values: use two "sliders". The first is more prominent and can be dragged. The second, on a layer below the first, shows the actual reported value, making the latency visible.
Textual input can also use the green/orange status icons. While editing, the color changes to orange. If your queue/networking protocol supports canceling editing actions, you can resend the new string every time the user presses a key. If not: change the icon to orange upon a change, send the string, and wait for a status report. The status report should contain the actual value; if this actual value is equal to the value in the component, change the icon to green. If it is not equal, resend the value in the component.
Radio buttons/checkboxes should have a double display: one editable, one uneditable. The first is for user input and the second is for the actual reported status. Same behavior as the slider component.
These require custom components, or widgets, to be made. You can extend original components or recreate them from scratch.
If your robot can also be "steered":
Create a rectangle which can be dragged upon. The rectangle has a small cross painted on it at the current value. While you drag, you see the latency of the cross. You can use interpolation and time values to smooth out robot control. A user will notice the lag because the cross "follows" his mouse pointer. (This is often used in space-shooter simulation games when controlling virtual ships, Allegiance for instance.)
It sounds like you have the following information that needs to be communicated to the user:
Current state or value of an attribute of the robot.
Target (i.e., received) value the robot is seeking.
Commanded value sent by the user to the robot.
Status of each command (pending, received, achieved, timed-out).
There are also a couple of other considerations:
Continuous or discrete feedback. Do you have continuous real-time feedback of the current value from the robot? Or is it discrete feedback, where the robot sends the current value only after achieving a target? Obviously continuous is better for the user, since it allows the user to distinguish between the robot being slow and being stuck, but if you don't have it, you have to find a way to live without it.
Synchronous or asynchronous command-sending. If the commands are sent synchronously (i.e., no new one is sent out until the last one is known to be received), then the user may need a means to (a) force out the next command without waiting for a reply from the previous (in case the reply was lost), and (b) cancel a queued command in case conditions change between when a command was created and sent.
Robot with or without conflict resolution. Does the robot have the logic to look ahead in the list of commands it has received and resolve conflicting commands? For example, if it's at a stop and its received-commands queue includes a command to go 5 m/s followed by a command to go 2 m/s, is it smart enough to delete the 5 m/s command? Or will it first attempt to accelerate to 5 m/s and then 2 m/s, possibly resulting in an overshoot? Will it wait until it achieves 5 m/s before it even "looks at" the 2 m/s command? A lack of conflict resolution complicates your UI because the users may have to track all commands sent to understand why the robot is behaving the way it is.
Integrated Information with Position-coding Controls
Let's assume that the robot has conflict resolution and asynchronous command-sending. Rather than have separate controls for commanded, target, and current values, I recommend you integrate them all in a single control to make it easy for the user to compare current, target, and commanded values and see discrepancies. Perhaps the best way to do this is by representing values by positions in the window. Such position-coding of values is unmatched in showing the relations among things. Two standard GUI controls, radio buttons and sliders, accomplish such position coding; however, you'll have to augment them for your purposes. The best usability could require custom-made position-coding controls where you schematically represent the robot and maybe the environment, and allow the user to control it through direct manipulation. However, I'm assuming you need a simple-to-develop implementation, and a well-laid-out combination of sliders and radio buttons may get you pretty close to this ideal.
Use radio buttons for setting a categorical value and a slider for setting a numeric value. The slider may include a text box to allow the user to fine-tune the value. You augment each of these controls so that they show the commanded, target, and current values at the same time. The "usual" indicator (the dot for the radio button and the handle for the slider) represents the commanded value, while separate graphic pointers indicate the current and target values. Discriminate the current from the target by making the current more prominent. I'd design them such that they merge into a single pointer when they are at the same value, in order to minimize clutter for the usual state of things. If your users are untrained on the system, you may want to include text labels on the pointers ("current", and "target" when the target is different from the current).
Using these position-coding controls makes it easy for the user to compare current, target, and commanded values and see discrepancies. The status is implicit in the relative positions of the indicators. When the target pointer moves to the commanded position, the user knows the command was received. When the current pointer is at the target, the robot has achieved the commanded value. This is especially good for continuous feedback of numeric values, because users can not only see the difference between the current and target values on a slider, they can estimate how long it will take to achieve the target by seeing how fast the pointer is closing on it. For the radio buttons, you can include "X% Done" text by the target pointer to indicate when the target will be achieved (if this information is available).
For the most responsive UI, changing a value of a slider or radio buttons should send an immediate command. There is no “Apply” button. Users can re-send a commanded value at any time by re-clicking on the appropriate slider position or radio button. I think you'll find this is a natural human tendency anyway when confronted with an apparently unresponsive control (consider elevator buttons).
The discrepancy between the target pointer and the commanded indicator may be too subtle to signal a lack of reception of a command if responses are commonly slow (over a few seconds, such that user attention has likely shifted elsewhere). If that is the case, you may want to include a modeless alert after a time-out period that almost certainly indicates the command was lost in transmission. A modeless alert may include a text annunciator beside the control and/or graphical highlighting of the commanded-target discrepancy. Depending on criticality, you may want to use an audible alert like a beep, or an animation, to speed capturing user attention. The modeless alert disappears automatically when the target value matches the commanded value, for whatever reason.
Separate Controls for Commanded and Current
If sliders and radio buttons take too much space for your purposes (or have other issues), you can go with separate non-position-coding controls for commanded and current values, as implied by your Disable Buttons design. However, overall, this is a more challenging design with more issues to resolve.
I would favor field controls like text boxes, check boxes, and dropdown menus, rather than command buttons so that the commanded value is clearly shown. Continuous numeric attributes may include spinner buttons with the text box if it doesn’t end up clogging the queue with incremental commands. As with the above option, changing a value sends an immediate command.
You're right to be concerned about using a timer and disabling. In addition to the problem of making the system sluggish, it means you gray out the commanded state. That can make it hard to read, and it also requires some mental gymnastics by the user to interpret ("it's unavailable, so that means I already selected it"). The interpretation can also be ambiguous, because disabled often means Not Applicable (e.g., the Speed control is disabled because the robot has lowered stabilizers for fixed-base operation).
The solution is to use some other graphic feedback than disabling. I’d stay away from color coding. Color coding tends to be arbitrary and thus confusing (e.g., does red mean queued or timed-out?). This may be one of the (rare) good places to use animation since animation is intuitive for representing an on-going process. A flashing or throbbing border (or other feature) for the commanded-value control can indicate a sent command is awaiting reply. A flashing/throbbing border for the current-value control indicates the command is received and the robot is seeking the target value. If animation would be distracting in this situation (like it is for most other situations), then consider a dashed border (versus solid) to indicate awaiting reply or seeking target; dashed suggests a tentative or transitory state.
The target value and status are implied by what is animated. If the commanded-value border is animated, the value inside is pending; the reply is yet to be received. If the current border alone is animated, the value inside the commanded control is the target value. If both borders are animated, then the robot is seeking one (unspecified) target value while another is pending. If you think it's problematic to leave the target unspecified in such circumstances, then you may need three controls to discriminate commanded, target, and current. However, if this is an edge case, it may be better to display the target value on mouse-over of the current-value control or with a drop-down button.
If feedback is continuous, you can also update the current value of numeric attributes at regular increments (about every 200 to 500 ms) so this animation provides additional feedback of the robot seeking the target. For any attribute, if it takes 10-15 seconds or more for the robot to reach common targets and the robot has conflict resolution, you may also want to display a progress bar within or beside the current-attribute control so the user can judge when the robot will achieve the target value.
To re-send the command, users can always re-select a value, or hit Enter while focus is on the commanded control. That's a little odd and awkward for some controls (e.g., checkboxes), so I'd also consider a modeless notification (not a popup) that appears near the control if the command times out without a reply. The notification includes a button to resend the command.
If your users are untrained on the system you may want to include redundant text under the animation (e.g., “Sending” when waiting for a reply and “Seeking” when moving towards the target value).
Log Table
The logging approach is probably best if commands are sent synchronously and/or the robot lacks conflict resolution. This way the user can track the queue of either the commands being sent or the commands received by the robot in order to predict robot behavior. However, I wouldn't make it a read-only text box, but rather a table that can be manipulated. While the table is sorted by timestamp by default, there are separate sortable fields for the attribute, the commanded value, and the status (pending, seeking, achieved, timed-out). If feedback is continuous, then the status should indicate progress towards achieving the target value (e.g., a percentage, or a progress bar).
If there is synchronous command-sending, then users can edit the commanded values of pending commands, or force forward, move, or delete pending commands. In any case, commands can be copied and re-inserted in order to resend any command from any time. Maybe even provide a means to save selected commands and retrieve and insert them later; now you have a macro facility.
If the robot tends to be especially cranky (frequent loss of communications, slow responses), then you may want to have this log table beside the controls for creating commands and viewing the current values. The controls should be set up to make the creation of a discrete command clear to the user. With a cranky computer-to-robot interface, spurious commands are costly, so each command should be well planned and deliberate. Likely this means a set of field controls like text boxes and drop-down lists to set the values of various attributes, and a button that generates the command(s) for those values. Awkward, yes, but that's an accurate representation of the communication link with the robot.
Alternatively, if the queues are typically nearly empty, then you may want to make this table available in a separate window for experts to troubleshoot problems with robot behavior. Normally, then, users use one of the other two options I gave above.
Maybe you can use a variation of the command pattern. Each action by the user generates a command, which goes into a queue. The queue is visible to the user in a printout on the screen. So you do not disable the button; you allow the user to press it multiple times, but show the user that the commands are queued. At the same time, you do not update the labels showing the current state of the robot until you receive that state from the robot.
In the queue, each command could show its status somehow, maybe with text and colour. And maybe you should allow the user to delete a command before it is processed by the robot (if that is possible).
So the queue might look like this:
Command    Status     Result of Action
speed+5    pending    speed will increase to 200    (Delete This)
speed+5    pending    speed will increase to 205    (Delete This)
speed-5    pending    speed will decrease to 200    (Delete This)
and so on.
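For illustration, here is a minimal sketch of such a command queue; the class names and statuses are invented for this example, and the actual transmission to the robot is left out:
import java.util.concurrent.ConcurrentLinkedQueue;

// One queued user action, with a status the UI can render as text or colour.
class RobotCommand {
    enum Status { PENDING, SENT, ACKNOWLEDGED, TIMED_OUT }
    final String action;      // e.g. "speed+5"
    final String predicted;   // e.g. "speed will increase to 200"
    volatile Status status = Status.PENDING;

    RobotCommand(String action, String predicted) {
        this.action = action;
        this.predicted = predicted;
    }
}

class CommandQueue {
    private final ConcurrentLinkedQueue<RobotCommand> queue = new ConcurrentLinkedQueue<>();

    // Called from the button handler: enqueue instead of disabling the button.
    void submit(RobotCommand cmd) { queue.add(cmd); }

    // "Delete This": only commands the robot has not processed can be removed.
    boolean delete(RobotCommand cmd) {
        return cmd.status == RobotCommand.Status.PENDING && queue.remove(cmd);
    }

    // The UI repaints the visible queue from this snapshot.
    Iterable<RobotCommand> commands() { return queue; }
}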
