I just want to know whether this is normal behaviour and whether there is anything that could prevent it.
I am using OpenGL and TaoClassic (might not be relevant to the issue, but still) on a Windows Form.
At the moment I have a very basic square that moves around the window, and when it reaches an edge of the window its velocity is inverted (x when hitting left/right, y when hitting top/bottom).
So far nothing fancy.
Where it gets fancy is that, as the object moves along, if I click on the border of the form the application stops: no new drawing happens, yet the application partially keeps updating. I say partially because, in the code below, the first two lines keep running while the "collision" detection does not:
transform.position.x += velocity.x * xSign * Time.deltaTime;
transform.position.y += velocity.y * ySign * Time.deltaTime;

if (penguin.transform.position.x + penguin.sprite.texture.width / 2f > Engine.Screen.Width ||
    penguin.transform.position.x - penguin.sprite.texture.width / 2f < 0) xSign *= -1;
if (penguin.transform.position.y + penguin.sprite.texture.height / 2f > Engine.Screen.Height ||
    penguin.transform.position.y - penguin.sprite.texture.height / 2f < 0) ySign *= -1;
No genius trick. When I release the mouse button, the object jumps to the position it should be at, meaning the update kept running in the background. But if I keep pressing long enough, the object goes out of bounds and does not bounce; when I release, my square is outside the window and starts to jitter at the borders.
I got this:
protected override void OnClientSizeChanged(EventArgs e)
{
    base.OnClientSizeChanged(e);

    Engine.Screen.Width = this.ClientSize.Width;
    Engine.Screen.Height = this.ClientSize.Height;

    // This is just to set the window size and the position of origin
    App.SetProjection2D(Engine.Screen.Width, Engine.Screen.Height, App.Projection);
    Gl.glViewport(0, 0, Engine.Screen.Width, Engine.Screen.Height);
}
And here is the PeekMessage idle loop of the application:
private void OnApplicationEnterIdle(object sender, EventArgs e)
{
    Message msg;
    while (!PeekMessage(out msg, IntPtr.Zero, 0, 0, 0))
    {
        _timer.SetTime();
        Input.Update();
        _callback();
    }
}
I would guess the problem lies in one of those. The Update and Draw methods are called from the _callback() method.
Just for info: clicking inside the form does not stop the application, so it has to be related to the events raised by clicking on the borders.
Any idea?
It is normal behaviour. Moving and resizing events are blocking (that's the way the default WndProc works), so OnApplicationEnterIdle isn't called.
AFAIK the only way to continue updating while the window is being moved or resized is to use a different thread for rendering. In that case only the main thread blocks on move and resize events, while the rendering thread keeps updating the viewport.
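For illustration, here is a minimal sketch of that idea (my assumption, not part of the original answer). The game loop runs on a worker thread so it keeps ticking while the UI thread sits in the modal move/size loop; it reuses the _timer, Input and _callback members from the OnApplicationEnterIdle loop above, and it assumes the GL context has been made current on the worker thread before _callback() draws anything (thread synchronization is left out for brevity).

// Sketch only: requires using System.Threading; and assumes the OpenGL context
// has been made current on this thread (the Tao/wglMakeCurrent equivalent)
// before any drawing happens inside _callback().
private Thread _renderThread;
private volatile bool _running;

private void StartRenderThread()
{
    _running = true;
    _renderThread = new Thread(() =>
    {
        while (_running)
        {
            _timer.SetTime();
            Input.Update();
            _callback();               // Update + Draw
        }
    });
    _renderThread.IsBackground = true; // don't keep the process alive on exit
    _renderThread.Start();
}

protected override void OnFormClosed(FormClosedEventArgs e)
{
    _running = false;                  // stop the loop before the form goes away
    base.OnFormClosed(e);
}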
In a C# Windows Forms app, I have a Form with a Panel with a PictureBox on it. The PictureBox is twice as wide as the Panel and has a graphics drawing in it. On scaling etc. I set AutoScrollPosition to keep the section of interest in the middle of the Panel: no problem. My problem is: when the app starts I want the Panel to show a section in the middle of the drawing, rather than the left hand side.
In the Form constructor I have:
panel1.AutoScroll = true;
panel1.AutoScrollPosition = new Point(100, 0);
textBox1.Text = panel1.AutoScrollPosition.ToString();
But on starting the app, the TextBox shows (0, 0) and the initial scroll position is at the left.
So, for test, I added a button which when pressed also executes:
panel1.AutoScrollPosition = new Point(100, 0);
textBox1.Text = panel1.AutoScrollPosition.ToString();
The TextBox then shows (100, 0) and the panel is scrolled as expected.
It makes no difference whether or not the AutoScrollPosition line is included in the constructor.
What must I do to initialise the scroll position without user interaction?
Finally solved it: you override OnLoad and set AutoScrollPosition there, e.g.:
protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);

    int yScroll = yToCentre - panel1.Height / 2;
    panel1.AutoScrollPosition = new Point(0, yScroll);
}
I have a task in Unity 3D: if the player doesn't move for 5 seconds, a pop-up shows in the center of the screen, and if the player moves, the pop-up disappears. How can I write the logic for this task, please?
Thanks
Here is code that checks the user's mouse position and sees whether it has moved in the last 5 seconds. If it has not, the popup window is shown. If it's hard to read here with the comments (I kind of think it is), copy and paste this code into Visual Studio so the colours help distinguish code from comments.
[SerializeField] GameObject popupWindow = null;

float totTime;
float timeBeforePause = 5f;
Vector3 updatedMousePosition;

private void Update()
{
    // Add the time delta between frames to the totTime var
    totTime += Time.deltaTime;

    // Check to see if the current mouse position is equivalent to updatedMousePosition from the previous update.
    // If they are equivalent, this means that the user hasn't moved the mouse
    if (Input.mousePosition == updatedMousePosition)
    {
        // Since the user hasn't moved the mouse, check to see if totTime is greater than timeBeforePause
        if (totTime >= timeBeforePause)
        {
            // Set the popup window to active in order to show the window (instantiate it instead if it doesn't exist already)
            popupWindow.SetActive(true);
        }
    }
    // If the user has moved the mouse, set totTime back to 0 in order to restart the idle tracking variable
    else
    {
        totTime = 0;
    }

    // Check to see if the popup window is visible (active)
    if (popupWindow.activeSelf == true)
    {
        // Check to see if the user has pressed the Esc button
        if (Input.GetKeyDown(KeyCode.Escape))
        {
            // Hide the window
            popupWindow.SetActive(false);
        }
    }

    // Update updatedMousePosition before the next frame/update loop executes
    updatedMousePosition = Input.mousePosition;
}
If you want to track different user input (key presses), you can use a similar method; a small variation is sketched below. You will also have to implement some sort of button on the popup window that lets the user close it once they return. Hope this helps!
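For example, here is a hedged variation of the idle check above (my addition, not part of the original answer) that also treats any key press as activity, using Unity's Input.anyKeyDown:

// Variation (assumption, not from the answer above): any key press or mouse
// movement counts as activity, resets the idle timer and hides the popup.
if (Input.anyKeyDown || Input.mousePosition != updatedMousePosition)
{
    totTime = 0f;                   // player is active again
    popupWindow.SetActive(false);   // hide the popup immediately
}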
My app has a main window which creates and opens an instance of a subclass of a QML Window {} using createObject(). This window has its flags: set to be a borderless window (I've added code so that it can be grabbed and dragged around).
When I attach a monitor to my laptop and set its font scale factor to 125% (or 150%), when I drag my main window over to the second monitor, you can see it suddenly "snap" to the larger size when it reaches the halfway point. Likewise, when I drag it back to my laptop screen it again "snaps" to the smaller size when I get halfway over (this behavior is what I want, so no problems here).
My problem is that when I drag my created borderless window over into the monitor, it keeps the original 100% scale factor and does not "snap" to a larger size. If I drag my main window over to the monitor, it gets larger but the borderless window remains at the smaller scale; only when I grab the borderless window and move it slightly does it suddenly "snap" to the larger scale size. The same thing happens in reverse - if I then drag the borderless window back onto the laptop, it remains at the larger size until I drag the main window back over and then move the borderless window slightly (at which point it suddenly "snaps" to the smaller size).
So it appears that this created Window uses the scale factor of the screen that its parent window (the one that created it) is currently on, even if it is on a different screen itself.
Is this happening because the Window is borderless? (I'm about to test this but my build process is incredibly slow) Or is there any way to set this borderless Window up so that it detects that it is crossing into a new screen and re-scales itself (in the same way that my main window does)?
Update: I just ran a test giving my Window a native titlebar, and with a titlebar the window instantly adopts ("snaps to") the scale factor of whichever screen it happens to be in, just like my main window (and independent of the main window's scale factor).
So is there any way to duplicate this auto-scaling window behavior with a borderless window? Some flag I need to call, or some method(s) I need to call to get the OS to rescale the window?
Update 2: I tried out Felix's SetWindowPos solution. It does move the window, but this does not fix the scaling problem - the behavior of the frameless window is the same and it still does not correctly pick up the scaling factor of the screen it is in.
I am running a test using MoveWindow instead of SetWindowPos to see if that affects anything [edit: MoveWindow does not work, either - same problem]. Then I'm going to try SendMessage or PostMessage along with NoBugz' suggestion of the WM_DPICHANGED message.
Any other suggestions are welcome.
Update 3: I just created a quick C# app (winforms) to see if the same problem occurs with that, and it doesn't - when a borderless form in the C# app is dragged over into the other monitor, it immediately picks up the scale factor change. So it appears this is a Qt problem.
Update 4: See my answer below for a working solution to this problem (if a bit of a hack).
So as far as I understand, your current goal is to move the window via the WIN-API.
You will have to do so via C++. The approach would be:
Pass the QML Window to a C++ method exposed to QML, as a QQuickWindow (the QML Window instantiates that type, as seen in the documentation)
Use QWindow::winId to get the native HWND
Call the SetWindowPos WIN-API method to move it
Code sample (C++-part):
// the method
void moveWindow(QQuickWindow *window, int x, int y) {
    HWND handle = (HWND)window->winId();
    Q_ASSERT(handle);
    ::SetWindowPos(handle, HWND_TOP,
                   x, y, 0, 0,
                   SWP_NOSIZE | SWP_NOZORDER);
}

// assuming "moveWindow" is a member of "MyClass"
qmlEngine->rootContext()->setContextProperty("mover", new MyClass(qmlEngine));
Code sample (QML-part):
// call this method as soon as the drag has finished, with the new positions
mover.moveWindow(idOfWindow, xPos, yPos);
Note: I would recommend you try out calling this only after the drag was finished (and move the window as you do right now until then). If that works, you can try out what happens if you call this during the drag instead of changing the x/y of the window.
I figured out a relatively simple way to fix this problem. Since a frameless window in Qt gets its scaling factor from the window that created it, the trick is to create another window (that has a titlebar but is not visible to the user) and create the frameless window there, and then add code to the frameless window to keep the hidden window positioned underneath it as the user drags it. When the frameless window is dragged into another screen, the hidden window goes with it, picks up the new scale factor (since it has a titlebar) and then the frameless window immediately gets the new screen's scale factor as well.
Here is sample solution code:
// HiddenWindow.qml
Window {
    id: hiddenWindow

    // note: just making the window visible: false does not work.
    opacity: 0
    visible: true

    // Qt.Tool keeps this window out of the taskbar
    flags: Qt.Tool | Qt.WindowTitleHint | Qt.WindowTransparentForInput |
           Qt.WindowStaysOnTopHint

    function createVisibleWindow() {
        var component = Qt.createComponent("VisibleWindow.qml")
        if (component.status === Component.Ready) {
            var win = component.createObject(hiddenWindow)
            return win
        }
    }
}
// VisibleWindow.qml
Window {
    id: visibleWindow

    property var creatorWindow: undefined

    flags: Qt.FramelessWindowHint

    onXChanged: {
        creatorWindow.x = x
    }
    onYChanged: {
        creatorWindow.y = y
    }
    onWidthChanged: {
        creatorWindow.width = width
    }
    onHeightChanged: {
        creatorWindow.height = height
    }
}
And then to use these classes from your main window QML:
property var hiddenWindow: undefined
property var visibleWindow: undefined

Component.onCompleted: {
    var component = Qt.createComponent("HiddenWindow.qml")
    if (component.status === Component.Ready) {
        hiddenWindow = component.createObject(null)
    }
    visibleWindow = hiddenWindow.createVisibleWindow()
    visibleWindow.creatorWindow = hiddenWindow
    visibleWindow.show()
}
You need to resize the window when it moves to the other screen:
MouseArea {
    anchors.fill: parent
    acceptedButtons: Qt.LeftButton

    onPressed: {
        movePos = Qt.point(mouse.x, mouse.y)
        isDoubleClicked = false
        lastWindowWidth = mainWindow.width
        lastWindowHeight = mainWindow.height
    }

    onPositionChanged: {
        if (!isDoubleClicked) {
            const delta = Qt.point(mouse.x - movePos.x, mouse.y - movePos.y)
            if (mainWindow.visibility !== Window.Maximized) {
                mainWindow.x = mainWindow.x + delta.x
                mainWindow.y = mainWindow.y + delta.y
                mainWindow.width = lastWindowWidth
                mainWindow.height = lastWindowHeight
            }
        }
    }
}
I am trying to write some code that will allow the user to draw on the touch screen.
When using either GestureService or ManipulationStarted/Delta, there is a "pause" when the user starts moving their finger: only once the finger has moved far enough from the point where it started do you begin to receive ManipulationDelta events (and, as I said, the same is true for GestureService).
What can I do to avoid this threshold? It really does not work well with drawing code.
I just blogged about this, as I have come across similar questions on the AppHub forum:
https://invokeit.wordpress.com/2012/04/27/high-performance-touch-interface-wpdev-wp7dev/
ManipulationDelta and the gesture services are high-level touch interfaces. If you want performance, consider using the low-level interface: the Touch class and its FrameReported event. I tend to use them (for drawing / position detection) in many of my projects.
The event you want to hook up to detect touch is:
Touch.FrameReported += Touch_FrameReported;
You can do this in the Loaded event. Here's the implementation of the Touch_FrameReported handler; workArea is a Canvas here. I have also used this in conjunction with WriteableBitmap.
private void Touch_FrameReported(object sender, TouchFrameeventArgs e)
{
    try
    {
        // Determine if finger / mouse is down
        point = e.GetPrimaryTouchPoint(this.workArea);

        if (point.Position.X < 0 || point.Position.Y < 0)
            return;
        if (point.Position.X > this.workArea.Width || point.Position.Y > this.workArea.Height)
            return;
        if (this.lbLetter.SelectedIndex == -1)
            return;

        switch (point.Action)
        {
            case TouchAction.Down:
                draw = true;
                old_point = point;
                goto default;
            case TouchAction.Up:
                draw = false;
                break;
            default:
                Draw();
                break;
        }
    }
    catch
    {
        MessageBox.Show("Application encountered error processing last request.");
    }
}
This works a lot better than the high-level interfaces.
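The Draw() helper referenced in the handler isn't shown in the answer; here is a minimal sketch of what it might look like (my assumption, not the original code), connecting the previous and current touch points with a Line added to the workArea Canvas:

// Hypothetical Draw() helper (not from the original answer): draws a short
// line segment from the previous touch point to the current one.
private void Draw()
{
    if (!draw) return;

    var segment = new System.Windows.Shapes.Line
    {
        X1 = old_point.Position.X,
        Y1 = old_point.Position.Y,
        X2 = point.Position.X,
        Y2 = point.Position.Y,
        Stroke = new System.Windows.Media.SolidColorBrush(System.Windows.Media.Colors.Black),
        StrokeThickness = 3
    };
    this.workArea.Children.Add(segment);

    old_point = point;   // remember where this segment ended
}

Adding one Line element per move event is the simplest approach; for long strokes, the WriteableBitmap route mentioned above is the usual way to keep the element count down.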
On request I have implemented support for moving an OS X window by dragging it using an area within the content part of the window, i.e. replicating the drag-and-move functionality of the title bar but in another area.
The problem I have yet to resolve is the fact that if the user drags the mouse quickly it can leave the window area and then no more mouse move events are received.
On Windows this type of problem can simply be fixed by calling the Win32 method SetCapture(); what's the corresponding OS X method?
This application is a cross-platform C++ application using Carbon for the OS X specific parts. (And yes, I know all about the Cocoa benefits, but this is an older code base and there is no time nor money for a Cocoa port at this point.)
I have found Carbon API methods such as TrackMouseLocation(), but can't really see how I could use them for this application. In listing 2-7 here http://developer.apple.com/legacy/mac/library/documentation/Carbon/Conceptual/Carbon_Event_Manager/Tasks/CarbonEventsTasks.html
the mouse is captured, but the problem is that TrackMouseLocation() blocks waiting for input. Blocking is something this application cannot do, since it also hosts a Flash player that must be called many times per second.
The prototype I have assembled while trying to figure this out basically looks like this:
switch (GetEventKind(inEvent))
{
    case kEventMouseDown:
        // A silly test to make parts of the window border "draggable"
        dragging = local_loc.v < 25 || local_loc.h < 25;
        last_screen_loc = screen_loc;
        break;

    case kEventMouseUp:
        dragging = false;
        break;

    case kEventMouseMoved:
        break;

    case kEventMouseDragged:
        if (dragging) {
            Rect rect;
            GetWindowBounds(windowRef, kWindowContentRgn, &rect);
            int dx = screen_loc.h - last_screen_loc.h;
            int dy = screen_loc.v - last_screen_loc.v;
            rect.left += dx;
            rect.right += dx;
            rect.top += dy;
            rect.bottom += dy;
            SetWindowBounds(windowRef, kWindowContentRgn, &rect);
        }
        last_screen_loc = screen_loc;
        break;
Any ideas would be appreciated.
I feel you should track the mouse both inside and outside the window. The following code should solve your problem:
EventHandlerRef m_ApplicationMouseDragEventHandlerRef;
EventHandlerRef m_MonitorMouseDragEventHandlerRef;

{
    OSStatus ErrStatus;
    static const EventTypeSpec kMouseDragEvents[] =
    {
        { kEventClassMouse, kEventMouseDragged }
    };

    ErrStatus = InstallEventHandler(GetEventMonitorTarget(), NewEventHandlerUPP(MouseHasDragged),
                                    GetEventTypeCount(kMouseDragEvents), kMouseDragEvents,
                                    this, &m_MonitorMouseDragEventHandlerRef);
    ErrStatus = InstallApplicationEventHandler(NewEventHandlerUPP(MouseHasDragged),
                                               GetEventTypeCount(kMouseDragEvents), kMouseDragEvents,
                                               this, &m_ApplicationMouseDragEventHandlerRef);

    return true;
}

// implement these functions
OSStatus MouseHasDragged(EventHandlerCallRef inCaller, EventRef inEvent, void *pUserData) {}
Hope it helps!!
I hope it helps you too:
// Get Mouse Position --> WAY 1
printf("Get Mouse Position Way 1\n");
HICoordinateSpace space = 2;
HIGetMousePosition(space, NULL, &point);
printf("Mouse Position: %.2f %.2f \n", point.x, point.y);
// Get Mouse Position --> WAY 2
printf("Get Mouse Position Way 2\n");
CGEventRef ourEvent = CGEventCreate(NULL);
point = CGEventGetLocation(ourEvent);
printf("Mouse Position: %.2f, y = %.2f \n", (float)point.x, (float)point.y);
I'm looking for a way to get a WindowPart reference at a certain location (over all windows of all applications).
Certain methods in Carbon don't work; they always return 0 as the windowRef... Any ideas?
You could also try just calling DragWindow in response to a click in your window's content area. I don't think you need to implement the dragging yourself.