I'm trying to implement a drag and drop system in the extension I'm developing, but I'm running into a problem.
As far as I can tell, DND is implemented by making a draggable object like this:
let draggable = DND.makeDraggable(this.actor)
where this.actor is the Clutter actor I want to drag and drop, and then implementing the necessary callbacks. However, when I do this, Gnome Shell immediately crashes when I start to drag and leaves output on stderr like this
(gnome-shell:15279): St-ERROR **: st_widget_get_theme_node called on the widget [0x2b3c000 StBoxLayout.window-list-item-box:focused ("extension.js (~/Source/js/Botto...gmail.com) - GVIM")] which is not in the stage.
However, using the Looking Glass to call the get_theme_node method on that specific widget does work perfectly!
Do I have to explicitly add actors to the stage? And how could get_theme_node fail somewhere deep inside the belly of Gnome Shell, but not from the Looking Glass?
It is also necessary to implement a getDragActor and getDragActorSource method on the delegate of the actor you're trying to drag.
Here is a simple implementation that just drags around a clone of the actor.
getDragActor: function() {
return new Clutter.Clone({source: this.actor,
reactive: false,
width: this.actor.get_width(),
height: this.actor.get_height()});
},
getDragActorSource: function() {
return this.actor;
}
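For context, here is a rough sketch of how the delegate is usually wired up (this part is an assumption based on gnome-shell's dnd.js looking the delegate up on actor._delegate; the item name is illustrative):
const Clutter = imports.gi.Clutter;
const DND = imports.ui.dnd;

function MyWindowListItem(actor) {
    this.actor = actor;
    // dnd.js finds getDragActor/getDragActorSource through actor._delegate
    this.actor._delegate = this;
    this._draggable = DND.makeDraggable(this.actor);
}

MyWindowListItem.prototype = {
    getDragActor: function() {
        return new Clutter.Clone({ source: this.actor,
                                   reactive: false,
                                   width: this.actor.get_width(),
                                   height: this.actor.get_height() });
    },

    getDragActorSource: function() {
        return this.actor;
    }
};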
I am designing a GUI using C, Glade, and Gtk.
I have some signals configured in Glade to update the labels of various widgets, mainly GtkButton and GtkLabel. The overall functionality is that when a certain radio button is clicked, all buttons and labels change in response (language selection).
I am using the function gtk_label_set_label(...) in the widget's _draw() handler, and it works as expected (the text changes, g_print occurs once).
gboolean on_lblMyLabel_draw(GtkLabel *label, cairo_t *cr, gpointer user_data) {
gtk_label_set_label(label, "custom text");
g_print("%s\n", "custom text");
return FALSE;
}
However, when I attempt the same from a button,
gboolean on_btnMyButton_draw(GtkButton *button, cairo_t *cr, gpointer user_data) {
gtk_button_set_label(button, "custom text");
g_print("%s\n", "custom text");
return FALSE;
}
The text does not update but disappears, and the g_print() statement prints forever (as if the draw handler is recursively calling itself).
Funnily, if I move the button code from _draw to _click, it works as expected. However, I need the GUI to redraw itself, so updating on click is impractical.
Is there a way, using _draw(), to prevent this?
Is there a better way to do this?
Thanks!
Is there a way, using _draw(), to prevent this?
No, and you shouldn't be using the draw signal for this either. It has an entirely different purpose and will be called each time a widget redraws itself. That's also the reason why your button is going into an infinite recursion: you changed its label, so it figures it needs to be redrawn; that redraw leads to your callback being called, which again changes the label, and so on.
Is there a better way to do this?
Yes, and you mention it yourself already: make sure you do the logic of changing the widgets in the appropriate place (for example, on a click event), and let the GTK widgets take care of redrawing themselves.
Unless you’re doing something very exotic (like not running an event loop, which you automatically get with GtkApplication), this will all work fine.
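For example, a minimal sketch of that approach, assuming the widgets are looked up from your Glade builder at startup (the widget and handler names here are illustrative):
#include <gtk/gtk.h>

/* Looked up once at startup with gtk_builder_get_object(). */
static GtkWidget *lblMyLabel;
static GtkWidget *btnMyButton;

/* Connect this to the radio button's "toggled" signal (e.g. in Glade). */
void on_radioFrench_toggled(GtkToggleButton *radio, gpointer user_data)
{
    /* "toggled" also fires for the radio button that gets deselected,
       so only react when this one becomes active. */
    if (!gtk_toggle_button_get_active(radio))
        return;

    gtk_label_set_label(GTK_LABEL(lblMyLabel), "custom text");
    gtk_button_set_label(GTK_BUTTON(btnMyButton), "custom text");
    /* GTK queues the redraws itself; no draw handler is needed. */
}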
I am trying to implement drag and drop functionality (I was able to see there is an open issue regarding this) with the currently available mouse actions, but so far I have not been able to do that. So I am looking for a workaround: is there any way we can implement drag and drop in Playwright Python? Below is the code that I am trying to use.
await page.mouse.move(472, 399)
await page.mouse.down()
await page.mouse.move(991, 313)
await page.mouse.up()
Thank you
I assume in your case the HTML5 drag and drop is not working.
Unfortunately, at the time of writing, the current Playwright Python (1.10.x) won't trigger dragstart and drop events through the mouse.down, mouse.move and mouse.up API.
The following code, however, should work (using the sync_api):
# This element should have its draggable attribute set to true
src_elem = page.query_selector('div.foo')
# This element should be the drop target
dest_elem = page.query_selector('div.bar')
# Create a data transfer JSHandle instance
data_transfer = page.evaluate_handle('() => new DataTransfer()')
src_elem.dispatch_event('dragstart', { 'dataTransfer': data_transfer })
dest_elem.dispatch_event('drop', { 'dataTransfer': data_transfer })
# Now check whether the drop effect is achieved
dest_elem.wait_for_selector('ENTER SELECTOR AFTER DROP EFFECT')
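For completeness, a sketch of the surrounding setup the snippet above assumes (the URL and the div.foo / div.bar selectors are placeholders):
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto('https://example.com/your-drag-and-drop-page')

    src_elem = page.query_selector('div.foo')
    dest_elem = page.query_selector('div.bar')

    # Simulate the HTML5 drag and drop by dispatching the events directly
    data_transfer = page.evaluate_handle('() => new DataTransfer()')
    src_elem.dispatch_event('dragstart', {'dataTransfer': data_transfer})
    dest_elem.dispatch_event('drop', {'dataTransfer': data_transfer})

    browser.close()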
While programming my own minesweeper game, I have come to a stage (kind of the final one) where I have to introduce the concept of flags. Currently, I am using mousePressed() to open up any cell that might be a mine. But I cannot figure out a way to flag a cell, as I tried to use doubleClicked() but it does not work in this case. Does anyone have any hint for this, or any built-in p5.js tool that might simply flag a cell?
EDIT:
https://github.com/abj54/minesweeper
My code is in the above repo for anyone who wants to go through it. In terms of the flag, it is a basic indicator that lets the user mark which cells they guess may be mines.
Listening to both events on the same object is problematic because of the event chain that is fired for a dblclick:
mousedown
mouseup
click
mousedown
mouseup
click
dblclick
p5.js checks the click/dblclick events on the window, so you should not use both functions (mouseClicked() and doubleClicked()).
But you can use the click event with a timeout to solve this problem.
var clicked = false, clickTimeout = 300;

function mouseClicked() {
  if (!clicked) {
    clicked = true;
    setTimeout(function() {
      if (clicked) {
        console.log("single click");
        clicked = false;
        // single click stuff
      }
    }, clickTimeout);
  } else {
    clicked = false;
    console.log("double click");
    // double click stuff
  }
}
So you wait for the amount of time defined in clickTimeout to see whether a second click comes in, and react accordingly.
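Applied to the minesweeper case, that could look roughly like this (cellAt(), reveal() and toggleFlag() are hypothetical stand-ins for whatever your grid code provides):
var clicked = false, clickTimeout = 300;

function mouseClicked() {
  var cell = cellAt(mouseX, mouseY);   // hypothetical: find the cell under the mouse
  if (!cell) return;
  if (!clicked) {
    clicked = true;
    setTimeout(function() {
      if (clicked) {
        clicked = false;
        cell.reveal();                 // single click: open the cell
      }
    }, clickTimeout);
  } else {
    clicked = false;
    cell.toggleFlag();                 // double click: place or remove a flag
  }
}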
I have been working on figuring out what is going on with my game's UI for at least two days now, with no progress.
Note that this is a mobile game, but I was asked to build for Windows for visualization and presentation purposes.
So the problem is that when I run my game on the Unity Editor, Android, iOS and Mac platforms the UI works just fine, but when I run the game on Windows the UI still works fine UNTIL I load a specific scene.
This specific scene is a loading screen (between the main menu and a level). When the level finishes async loading, a method called MoveObjects is called in a script in the loading screen to move some objects that were spawned in the loading screen scene into the level scene (this is not the issue though, since I already tried without this method and the UI problem persists).
Once the logic of this MoveObjects method is done, a start button is enabled in the loading screen for the player to click and start playing (I did try moving the start button to the level scene, since it not being a child of the currently active scene could have been the issue, but the problem still persists). It is at this point that the UI is partially broken. What I mean by this is that I can see buttons (and some other UI elements like a scrollbar) changing color/state when the mouse moves over them, but I cannot click on them anymore (a button won't even change to the pressed state).
Also note that I tried creating a development build to see if there were any errors in the console, and I noticed that this problem also affects the old UI system, so I was not able to interact with the development console anymore.
Also, note that if I grab and drag the scrollbar before this issue appears, and keep holding it until this happens, the mouse gets stuck on the scrollbar, meaning that I cannot interact with the rest of the UI anymore, but the scrollbar will still move with the mouse.
I already checked that these things are not the source of the problem:
Missing EventSystem, GraphicRaycaster or InputModule.
Another UI element blocking the rest of the UI.
Canvas is Screen Space - Overlay so there is no need for a camera reference.
I only have one EventSystem.
Time.timeScale is 1.
I am not sure what else I could try, so if anyone has any suggestions, I would appreciate it. Thanks.
P.S.: I am sorry to say that I cannot share any code, visual material or examples due to confidentiality.
A major source for a non-working UI for me has always been another (invisible) UI object blocking the raycast (a transparent Image, or a large Text object with raycast on).
Here's a snippet I put together based on info found elsewhere; I often use it to track objects that are masking the raycast in complex UI situations. Place the component on a Text object and make sure it's at least a few lines tall, as the results will be displayed one under another.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

[RequireComponent(typeof(Text))]
public class DebugShowUnderCursor : MonoBehaviour
{
    Text text;
    EventSystem eventSystem;
    List<RaycastResult> list;

    void Start()
    {
        eventSystem = EventSystem.current;
        text = GetComponent<Text>();
        // Make sure the debug text itself never blocks the raycast.
        text.raycastTarget = false;
    }

    // Raycast the UI under the current mouse position and return everything that was hit.
    public List<RaycastResult> RaycastMouse()
    {
        PointerEventData pointerData = new PointerEventData(EventSystem.current) { pointerId = -1, };
        pointerData.position = Input.mousePosition;
        List<RaycastResult> results = new List<RaycastResult>();
        EventSystem.current.RaycastAll(pointerData, results);
        return results;
    }

    void Update()
    {
        // List every object under the cursor, one per line, topmost first.
        list = RaycastMouse();
        string objects = "";
        foreach (RaycastResult result in list)
            objects += result.gameObject.name + "\n";
        text.text = objects;
    }
}
I'm kind of new to this so sorry if I'm writing in the wrong place - let me know and I'll move / delete this comment.
I'm currently having issues detecting controller input while using VRTK.
For example, when I have a collision between two objects, I want to be able to detect what buttons are being pressed on the controllers but can't seem to work out how I can do this.
Also, I have implemented the Interact Use functionality but I'm struggling to work out how to make two buttons do different actions.
For example:
Once I grab an object with the simple pointer, I want one button to bring the object closer and another to move it away, but I've only managed to implement one or the other.
Any suggestions? I've looked everywhere in the docs, examples and Google and can't seem to find anything. Any help would be MUCH appreciated! Pulling my hair out here!
You could utilise the Grabbed method on the InteractableObject: https://vrtoolkit.readme.io/docs/vrtk_interactableobject#section-grabbed-1
Or you could use the ControllerGrabInteractableObject event on the InteractGrab script (sketched at the end of this answer): https://vrtoolkit.readme.io/docs/vrtk_interactgrab#section-class-events
Or you could have an Update routine and check the grabbed status on the controller by calling GetGrabbedObject() != null (which checks whether the controller has an object grabbed; if it's null then one isn't grabbed).
Then you can use the ControllerEvents button bools to do something on a button press. So a script with this in it that sits on the controller script alias GameObject next to the InteractGrab script:
void Update() {
    if (GetComponent<VRTK_InteractGrab>().GetGrabbedObject() != null) {
        var controllerEvents = GetComponent<VRTK_ControllerEvents>();
        if (controllerEvents.IsButtonPressed(VRTK_ControllerEvents.ButtonAlias.Trigger_Press)) {
            //Do something on trigger press
        }
        if (controllerEvents.IsButtonPressed(VRTK_ControllerEvents.ButtonAlias.Grip_Press)) {
            //Do something on grip press
        }
    }
}
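For the event-based route mentioned at the top, a minimal sketch could look like this (assuming VRTK 3.x; the event-args fields vary a bit between versions). It would also sit on the controller script alias GameObject next to the VRTK_InteractGrab script:
using UnityEngine;
using VRTK;

public class GrabButtonListener : MonoBehaviour {
    void OnEnable() {
        GetComponent<VRTK_InteractGrab>().ControllerGrabInteractableObject += OnGrab;
    }

    void OnDisable() {
        GetComponent<VRTK_InteractGrab>().ControllerGrabInteractableObject -= OnGrab;
    }

    void OnGrab(object sender, ObjectInteractEventArgs e) {
        Debug.Log("Grabbed " + e.target.name);
        //Do something when the grab happens, e.g. enable the button handling above
    }
}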