I find that Microsoft's decision to provide cursor updates in the AcquireNextFrame loop causes a huge performance hit when the cursor is being moved around.
When I display a millisecond-level clock in the browser, AcquireNextFrame captures 60 frames of data and the interval between frames is very even, but if the mouse is moved frequently, fewer frames are captured and the interval between frames becomes very uneven.
I found that someone asked about this before me at https://www.reddit.com/r/VFIO/comments/f22k6u/windows_capture_performance_dxgi_bug/
Is there any way to make the function AcquireNextFrame work around this performance issue?
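One possible mitigation, sketched below as an assumption rather than a confirmed fix: when DXGI_OUTDUPL_FRAME_INFO.LastPresentTime is zero, the acquired frame contains only pointer/metadata updates, so it can be released immediately instead of triggering a full desktop copy. The CaptureLoop function name and the 16 ms timeout are illustrative; the duplication interface is assumed to be already created.

// Hedged sketch: skip cursor-only updates so pointer movement does not force
// a full frame copy. `duplication` is an already-created IDXGIOutputDuplication*.
#include <dxgi1_2.h>

void CaptureLoop(IDXGIOutputDuplication* duplication)
{
    for (;;)
    {
        DXGI_OUTDUPL_FRAME_INFO frameInfo = {};
        IDXGIResource* resource = nullptr;

        HRESULT hr = duplication->AcquireNextFrame(16, &frameInfo, &resource);
        if (hr == DXGI_ERROR_WAIT_TIMEOUT)
            continue;                       // nothing changed during this interval
        if (FAILED(hr))
            break;

        // LastPresentTime == 0 means only the mouse pointer (or metadata) changed;
        // release the frame right away instead of copying the whole desktop image.
        const bool cursorOnly = (frameInfo.LastPresentTime.QuadPart == 0);
        if (!cursorOnly)
        {
            // ... copy/process the desktop texture here ...
        }

        resource->Release();
        duplication->ReleaseFrame();
    }
}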
I'm trying to understand why there's a long green frame of 872.9ms when it appears that not much is going on. All the little tasks are around 4ms. I'm expecting to see each of those wrapped in its own frame. That last task is big, and that's what I'm working on.
I enabled the FPS meter to make sure it's not a Performance Monitor bug. The FPS drops to 1, so I think the Performance Monitor is working correctly.
What is the preferred way of synchronizing with monitor refreshes when vsync is not an option? We enable vsync; however, some users disable it in driver settings, and those settings override app preferences. We need reliable, predictable frame lengths to simulate the world correctly, do some visual effects, and synchronize audio (more precisely, we need to estimate how long a frame is going to be on screen, and when it will be on screen).
Is there any way to force drivers to enable vsync despite what the user set in the driver? Or to ask Windows when a monitor refresh is going to happen? We have issues with manual sleeping when our frame boundaries line up closely with vblank. It causes occasional missed frames, and up to 1 extra frame of input latency.
We mainly use OpenGL, but Direct3D advice is also appreciated.
You should not build your application's timing on the basis of vsync and the exact timing of frame presentation. Games don't do that these days and have not done so for quite some time. This is what allows them to keep a consistent speed even if they start dropping frames: their timing, physics computations, AI, etc. aren't based on when a frame gets displayed but on actual elapsed time.
Game frame timings are typically sufficiently small (less than 50ms) that human beings cannot detect any audio/video synchronization issues. So if you want to display an image that should have a sound played alongside it, as long as the sound starts within about 30ms or so of the image, you're fine.
Oh and don't bother trying to switch to Vulkan/D3D12 to resolve this problem. They don't. Vulkan in particular decouples presentation from other tasks, making it basically impossible to know the exact time when an image starts appearing on the screen. You give Vulkan an image, and it presents it... at whatever is the next most opportune moment. You get some control over how that moment gets chosen, but even those choices can be restricted based on factors outside of your control.
Design your program to avoid the need for rigid vsync. Use internal timings instead.
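As a rough illustration of "use internal timings instead", here is a minimal fixed-timestep sketch. UpdateSimulation, Render, and the 120 Hz step are placeholders invented for the example, not anything from the answer above.

#include <chrono>

// Placeholder hooks standing in for the real game; names are illustrative only.
static void UpdateSimulation(double dtSeconds) { (void)dtSeconds; /* physics, AI, audio scheduling */ }
static void Render()                           { /* submit the frame; vsync may or may not be on */ }

int main()
{
    using clock = std::chrono::steady_clock;
    constexpr double kStep = 1.0 / 120.0;   // fixed simulation step, independent of display rate
    double accumulator = 0.0;
    auto previous = clock::now();

    for (int frame = 0; frame < 600; ++frame)   // bounded loop for the sketch
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Advance the world in fixed increments based on measured wall-clock time,
        // not on when (or whether) the frame actually reaches the screen.
        while (accumulator >= kStep)
        {
            UpdateSimulation(kStep);
            accumulator -= kStep;
        }
        Render();
    }
    return 0;
}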
I am building an interactive application using the latest wxWidgets (3.0.1) and OpenGL (3.3) on Windows. I have gotten to the point where I have a wxGLCanvas rendering onto a wxPanel and it works fine. The rendering is done in the paint event for that wxGLCanvas. However, I now want to perform simulations with an accurate delta time between updates.
So essentially I would like some sort of functionality that would allow me to have a method like
void Update(float dt)
{
// Update simulation with accurate time-step
}
I have come across timers, but I'm not sure how I could incorporate one into my application to get an accurate dt. Some have mentioned creating a separate thread for that panel so it updates independently from the rest of the application. Is this an option, and if so, roughly how is it implemented?
Timer precision should be good enough to achieve at least ~50 FPS, so timers should be good enough for your purpose. You need to use a timer to get regular calls to your timer event handler, which can then use a more accurate method (as timers are not guaranteed to be perfectly regular) to determine whether an update is needed, e.g. using wxDateTime::UNow(), and queue a Refresh() in that case.
Notice that you still will not be able to guarantee regular window refreshes using normal GUI frameworks.
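A minimal sketch of that suggestion, assuming a wxGLCanvas-derived class; the class name SimCanvas, the Step method, and the 16 ms timer interval are illustrative only, not part of the original question.

// Sketch: wxTimer drives regular ticks, wxDateTime::UNow() measures the real dt.
#include <wx/wx.h>
#include <wx/glcanvas.h>

class SimCanvas : public wxGLCanvas
{
public:
    explicit SimCanvas(wxWindow* parent)
        : wxGLCanvas(parent, wxID_ANY),
          m_timer(this),
          m_last(wxDateTime::UNow())
    {
        Bind(wxEVT_TIMER, &SimCanvas::OnTimer, this);
        m_timer.Start(16);                       // aim for ~60 ticks/s; not guaranteed to be exact
    }

private:
    void OnTimer(wxTimerEvent&)
    {
        // Timers are not perfectly regular, so measure the real elapsed time.
        const wxDateTime now = wxDateTime::UNow();
        const float dt = (now - m_last).GetMilliseconds().ToDouble() / 1000.0f;
        m_last = now;

        Step(dt);                                // advance the simulation by the measured dt
        Refresh(false);                          // queue a repaint; drawing stays in the paint handler
    }

    void Step(float dt) { (void)dt; /* update simulation with accurate time-step */ }

    wxTimer    m_timer;
    wxDateTime m_last;
};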
What operating system are you using? There is no way in Windows to guarantee an accurate time for an event. A real-time operating system (RTOS), such as a real-time Unix variant, can guarantee event times.
What range of delta times do you care about? The distinction between an RTOS and Windows only matters for times of less than 2 to 3 milliseconds (unless Windows becomes ridiculously overloaded). Since any worthwhile 3D scene will take more than 3 milliseconds to render, this is probably a non-issue!
Of most significance in this kind of question is the response time of human vision. Watching a dynamic diagram such as a plot, people are unaware of any refresh interval shorter than 300 ms. If you want to attempt a video-realistic effect, you might need a refresh interval approaching 25 ms, but then your problem will not be the accuracy of your timer but the speed of your scene rendering. So, once again, this is a non-issue.
I'm a newbie game developer and I'm having an issue I'm trying to deal with. I am working on a game for Android using Java.
I'm using deltaTime to get smooth movement and so on across devices, but I came across a problem. At a specific moment in the game, it performs a fairly expensive operation, which inflates the deltaTime for the next iteration. Because of this, that next iteration lags a bit, and on old, slow devices it can be really bad.
To fix this, I have thought of a solution I would like to share with you to get a bit of feedback about what could happen with it. The algorithm is the following:
1) Every iteration, the deltaTime is added to an "average deltaTime variable" which keeps an average over all the iterations
2) If in an iteration the deltaTime is at least twice the value of the "average variable", then I reassign its value to the average
With this the game will adapt to the actual performance of the device and will not lag in a single iteration.
What do you think? I just made it up; I suppose other people have come across this and there is a better solution... I need tips! Thanks
There is a much simpler and more accurate method than storing averages. I don't believe your proposal will ever get you the results that you want.
Take the total span of time (including the fraction) since the previous frame began; this is your delta time. It is often in milliseconds or seconds. Multiply your move speed by the delta time before you apply it. This gives you frame-rate independence. You will want to experiment until your speeds are correct.
Let's consider the example from my comment above: if one frame takes 1ms, an object that moves 10 units per frame is moving at a speed of 10 units per millisecond. However, if a frame takes 10ms, your object slows to 1 unit per millisecond.
In the first frame, we multiply the speed (10) by 1 (the delta time). This gives us a speed of 10.
In the second frame, our delta is 10: the frame was ten times slower. If we multiply our speed (10) by the delta (10) we get 100, so the object covers 100 units over those 10ms, the same 10 units per millisecond it was moving at in the 1ms frame.
We now have consistent movement speeds in our game, regardless of how often the screen updates.
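As a concrete illustration of multiplying speed by the delta, here is a tiny sketch; the speed constant, position variable, and loop bounds are invented for the example.

// Frame-rate-independent movement: position advances by speed * elapsed time.
#include <chrono>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;
    const double speed = 10.0;          // units per millisecond, as in the example above
    double position = 0.0;
    auto previous = clock::now();

    for (int i = 0; i < 100; ++i)
    {
        auto now = clock::now();
        const double deltaMs =
            std::chrono::duration<double, std::milli>(now - previous).count();
        previous = now;

        // A 1 ms frame advances 10 units, a 10 ms frame advances 100 units:
        // the on-screen speed stays the same regardless of frame length.
        position += speed * deltaMs;
    }
    std::printf("final position: %f\n", position);
    return 0;
}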
EDIT:
In response to your comments.
A faster computer is the answer ;) There is no easy fix for framerate inconsistency, and it can manifest itself in a variety of ways, screen tearing being the grimmest.
What are you doing in the frames with wildly inconsistent deltas? Consider optimizing that code. The following operations can really kill your framerate:
AI routines like Pathing
IO operations like disk/network access
Generation of procedural resources
Physics!
Anything else that isn't rendering code...
These will all cause the delta to increase by some amount, depending on the complexity of the algorithms and the quantity of data being processed. Consider performing these long-running operations on a separate thread and acting on/displaying the results when they are ready, as sketched below.
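A sketch of that last suggestion, using std::async as one possible mechanism; ComputePath and the Point type are made-up stand-ins for whatever long-running job is blowing up the delta.

// Move the expensive work off the frame loop and poll for the result each frame.
#include <chrono>
#include <future>
#include <vector>

struct Point { float x, y; };

static std::vector<Point> ComputePath(Point from, Point to)
{
    // ... expensive pathfinding / generation work would go here ...
    return {from, to};
}

int main()
{
    std::future<std::vector<Point>> pending =
        std::async(std::launch::async, ComputePath, Point{0, 0}, Point{10, 5});

    std::vector<Point> path;
    bool running = true;
    while (running)
    {
        // Per-frame update/render stays cheap; just check whether the job finished.
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            path = pending.get();       // act on / display the result once it is ready
        }
        // ... update + render the frame as usual ...
        running = false;                // single pass for the sketch
    }
    return path.empty() ? 1 : 0;
}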
More edits:
What you are effectively doing in your solution is slowing everything back down to avoid the jump in on-screen position, regardless of the game rules.
Consider a shooter, where reflexes are everything and estimation of velocity is hugely important. What happens if the frame rate doubles and you halve the rotation speed of the player for a frame? Now the player has experienced a spike in frame rate AND their cross-hair moved slower than they thought. Worse, because you are using a running average, subsequent frames will have their movement slowed.
This seems like quite a knock on effect for one slow frame. If you had a physics engine, that slow frame may even have a very real impact on the game world.
Final thought: the idea of delta time is to disconnect the game rules from the hardware you are running on; your solution reconnects them.
Given that the standard number of ticks for a cycle in a WP7 app is 333,333 ticks (or it is if you set it as such), how much of this time slice is actually available to work in?
To put it another way, how many ticks do the standard processes eat up (drawing the screen, clearing buffers, etc.)?
I worked out a process for doing something in a spike (as I often do), but it is eating up about 14 ms right now (about half the time slice I have available), and I am concerned about what will happen if it runs past that point.
The conventional way of doing computationally intensive things is to do them on a background thread - this means that the UI thread(s) don't block while the computations are occurring - typically the UI threads are scheduled ahead of the background threads so that the screen drawing continues smoothly even though the CPU is 100% busy. This approach allows you to queue as much work as you want to.
If you need to do the computational work within the UI thread, e.g. because it's part of the game mechanics or part of the per-frame update/drawing logic, then conventionally what happens is that the game frame rate slows down a bit because the phone is waiting on your logic before it can draw.
If your question is "what is a decent frame rate?", then that depends a bit on the type of app/game, but generally (at my age...) I think anything 30 Hz and above is OK, so up to 33 ms for each frame, and it is important that the frame rate is smooth, i.e. each frame takes about the same length of time.
I hope that approximately answers your question... wasn't entirely sure I understood it!
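As a rough illustration of budgeting a 30 Hz frame (333,333 ticks of 100 ns, i.e. about 33.3 ms): a WP7 app would write this in C#, but the idea is language-agnostic, so here it is sketched in C++. DoFrameWork and the loop bounds are placeholders invented for the example.

// Measure how long the per-frame work takes and flag frames that overrun the budget.
#include <chrono>
#include <cstdio>

static void DoFrameWork() { /* game update + draw logic would go here */ }

int main()
{
    using clock = std::chrono::steady_clock;
    constexpr double kBudgetMs = 1000.0 / 30.0;   // 333,333 ticks of 100 ns = ~33.3 ms

    for (int frame = 0; frame < 300; ++frame)
    {
        const auto start = clock::now();
        DoFrameWork();
        const double elapsedMs =
            std::chrono::duration<double, std::milli>(clock::now() - start).count();

        if (elapsedMs > kBudgetMs)
            std::printf("frame %d overran its budget: %.1f ms\n", frame, elapsedMs);
    }
    return 0;
}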