How to enable vsync in DirectX10 (Windows 7)

I'm working on an app based on DirectX10 using SlimDX. I would like to enable vsync the same way I do in DirectX9, but the fps doesn't seem to lock to 60Hz (which it does when I'm using DirectX9). I'm setting vsync with this:
SwapChain.Present(1, PresentFlags.None);
Did I do something wrong?
Btw, I'm running Win7 with an ATI HD5570 video card. After some googling, I gather that ATI can force vsync for certain games, so I wonder if that's related.
Reference for code to C++ will do as well. I'll translate it myself.
Thanks

The first argument of SwapChain.Present is syncInterval. 0 indicates that presentation should occur immediately, without synchronization. Any other value indicates that presentation should be synchronized with that many vertical blanks.
So to enable vsync, use it like this:
SwapChain.Present(1, PresentFlags.None);
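Since you mention a C++ reference is fine: the native DXGI equivalent is IDXGISwapChain::Present with SyncInterval set to 1. A minimal sketch (pSwapChain is a placeholder for whatever IDXGISwapChain* your D3D10 setup created):
// pSwapChain: placeholder for your IDXGISwapChain*.
// SyncInterval = 1 waits for the next vertical blank; 0 would present immediately (no vsync).
HRESULT hr = pSwapChain->Present(1, 0);   // second argument is the DXGI_PRESENT flags; none needed here
if (FAILED(hr))
{
    // handle DXGI_ERROR_DEVICE_REMOVED etc. here if you need to
}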

You can try to force vsync using the Catalyst Control Center.

Related

SDL-OpenGL Unable to disable V-Sync on Samsung Galaxy Note 4

I'm using double buffering, ending the draw loop with the buffer-swap function SDL_GL_SwapWindow. When I turn VSync off with:
SDL_GL_SetSwapInterval(0); // returns 0, so the vsync option is set correctly
it looks like VSync is still on on this device.
I've tested the same code on iOS, on other Android devices including tablets, and on PCs and Macs with a very simple scene, and all of them go from about 60 fps with VSync to 400+ without it.
The only device that seems to keep VSync on is the Note 4, because the fps stays the same.
This is why I'm asking if there is any reason for this. I've looked at the device specifications and checked the display and developer options in case there was some kind of VSync lock option there, but I found nothing related to this.
EDIT:
Same behaviour with a Samsung Galaxy S4 (VSync won't turn off)
As clarified in the comments and documentation, there are drivers and hardware setups that limit the framerate regardless of the vsync configuration or any frame-rate-specific management.
In particular, the framerate is capped this way on most newer Android devices.
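If you want to confirm this from code rather than by eyeballing an fps counter, one option is to check the return value of SDL_GL_SetSwapInterval and then time a batch of swaps. A rough sketch (window size, title and frame count are arbitrary choices for the test):
#include <SDL.h>
#include <SDL_opengl.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("vsync test", SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(window);

    // Ask for no vsync; some drivers/compositors ignore this.
    if (SDL_GL_SetSwapInterval(0) != 0)
        SDL_Log("Swap interval 0 rejected: %s", SDL_GetError());

    const int frames = 300;
    Uint32 start = SDL_GetTicks();
    for (int i = 0; i < frames; ++i)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapWindow(window);
    }
    double fps = frames * 1000.0 / (SDL_GetTicks() - start);
    SDL_Log("Measured %.1f fps", fps);
    // If this stays pinned near the display refresh rate even though
    // SDL_GL_SetSwapInterval(0) succeeded, the driver/compositor is
    // enforcing vsync regardless of the requested swap interval.

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}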

Video Mixing Renderer 9 quality issue

I have a fairly simple question. Or at least I thought it would be simple to solve, but I couldn't find any answers online.
Anyway,
I used this example from MSDN to play a file using DirectShow:
How To Play a File.
It's really simple with only a few lines of code and it works.
Then after some research I managed to create a VMR9 filter and add it to the graph. This also worked.
There's just one thing: when I play a video file using the VMR9 filter, the quality looks noticeably worse.
I tried changing it with IVMRMixerControl9::SetMixingPrefs, but nothing seems to change, even though SetMixingPrefs actually returns S_OK.
pMixerControl->GetMixingPrefs(&dwPrefs);   // pMixerControl: the IVMRMixerControl9 queried from the VMR9 filter
dwPrefs &= ~MixerPref9_FilteringMask;
dwPrefs |= MixerPref9_BiLinearFiltering;
pMixerControl->SetMixingPrefs(dwPrefs);
Or am I using the wrong filter?
edit: problem solved
I just did a comparison with Media Player Classic by putting it in VMR9 (windowed) mode, and it gave me the same quality. So if I want better quality, I'm going to have to use the EVR (Enhanced Video Renderer) instead of the VMR9 (Video Mixing Renderer 9).
VMR-7/VMR-9 quality issues are a long-standing problem:
Poor Picture Quality with VMR9 as renderer
VMR9 scaling issues on Vista
Using the EVR instead is the suggested way to get proper scaling and better visual picture quality.
In Windows Vista and later, applications should use the EVR if the hardware supports it. Otherwise, fall back to the VMR-9 or VMR-7. The EVR offers better performance and better video quality than previous renderers. Also, it is designed to work with the Desktop Window Manager (DWM).
The "better performance" part is questionable, and the EVR sadly has issues of its own, but when output quality is the question, the EVR is the answer.
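If it helps, here is a rough sketch of switching the graph over to the EVR, assuming pGraph is the IGraphBuilder* from the MSDN "How To Play a File" sample, and hwnd/rcDest are your video window and destination rectangle (error handling omitted):
#include <dshow.h>
#include <evr.h>     // CLSID_EnhancedVideoRenderer, IMFVideoDisplayControl
#include <mfidl.h>   // IMFGetService

// Create the EVR and add it to the graph before calling RenderFile,
// so Intelligent Connect uses it instead of the default renderer.
IBaseFilter* pEVR = NULL;
CoCreateInstance(CLSID_EnhancedVideoRenderer, NULL, CLSCTX_INPROC_SERVER,
                 IID_PPV_ARGS(&pEVR));
pGraph->AddFilter(pEVR, L"EVR");

// Unlike the VMR filters, the EVR has no windowed mode, so tell it where to draw.
IMFGetService* pGetService = NULL;
IMFVideoDisplayControl* pDisplay = NULL;
pEVR->QueryInterface(IID_PPV_ARGS(&pGetService));
pGetService->GetService(MR_VIDEO_RENDER_SERVICE, IID_PPV_ARGS(&pDisplay));
pDisplay->SetVideoWindow(hwnd);               // hwnd: the window to render into
pDisplay->SetVideoPosition(NULL, &rcDest);    // rcDest: destination RECT inside that window

// Now pGraph->RenderFile(...) and run the graph as in the original sample.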

How to detect hardware acceleration for Core Image?

In my app I'm using a "CIMotionBlur" CIFilter during CALayer animation. The problem is that the filter does not work properly when hardware acceleration is not available:
In OS X Safe Mode the layer becomes invisible during animation;
When using VMWare Fusion the animation is unbearably slow and makes testing the app harder;
Animation works fine without the filter. I'd like to apply the filter only when hardware acceleration is available.
What's the highest level API that would let me know when to disable the filter?
I'm going to look for clues in IOKit.
I found the answer in Technical Q&A QA1218
"How do I tell if a particular display is being hardware accelerated by Quartz Extreme?"
It's as simple as this:
NSNumber* curScreenNum = [self.window.screen.deviceDescription objectForKey:@"NSScreenNumber"];
if (CGDisplayUsesOpenGLAcceleration(curScreenNum.integerValue))
{
// Do accelerated stuff
}
Works as expected in my cases.
https://developer.apple.com/library/ios/documentation/graphicsimaging/Conceptual/CoreImaging/ci_concepts/ci_concepts.html#//apple_ref/doc/uid/TP30001185-CH2-TPXREF101
I guess you could go with this to determine whether your desired filters are available. They probably shouldn't be available if Core Image is not available.
If filterNamesInCategory returns the CIMotionBlur filter's name even when it is not available then you should file a bug report with Apple's bug reporter.
As a further test, you could call filterWithName: and check for a non-nil result. If you get back nil, you know the filter is not available.
Or are you saying that on platforms without hardware acceleration, you get back a filter but it doesn't work? If that's the case then again, time to file a bug report.
Or, are you saying that on non-hardware-accelerated platforms, you get back a filter that works, but it is too slow to be useful?
Poking around in the docs, I don't see any way to tell whether Core Image is hardware-accelerated for a given platform or not.
However, CI filters are built on top of OpenGL, and you CAN query the OpenGL driver to see if it's hardware-accelerated or not. My guess is that the result of such a query would also tell you whether CIFilters are hardware-accelerated or not.
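For what it's worth, one way to do that OpenGL-level query from C/C++ is the CGL renderer-info API; a rough sketch of a helper along these lines (it checks whether any renderer attached to a given display reports hardware acceleration, which is only a proxy for Core Image being accelerated):
#include <OpenGL/OpenGL.h>                            // CGL renderer-info API
#include <ApplicationServices/ApplicationServices.h>  // CGDirectDisplayID helpers

// Returns true if any renderer for the given display is hardware accelerated.
static bool displayHasAcceleratedRenderer(CGDirectDisplayID display)
{
    CGLRendererInfoObj info = NULL;
    GLint rendererCount = 0;
    if (CGLQueryRendererInfo(CGDisplayIDToOpenGLDisplayMask(display),
                             &info, &rendererCount) != kCGLNoError)
        return false;

    bool accelerated = false;
    for (GLint i = 0; i < rendererCount && !accelerated; ++i)
    {
        GLint value = 0;
        if (CGLDescribeRenderer(info, i, kCGLRPAccelerated, &value) == kCGLNoError)
            accelerated = (value != 0);
    }
    CGLDestroyRendererInfo(info);
    return accelerated;
}
// Usage: displayHasAcceleratedRenderer(CGMainDisplayID())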

d3d11 alt-tab on fullscreen gives strange result

First of all, after 4 hours of debugging, there is no problem with my code. But I'm curious why I had the issue that I had.
I created a fullscreen window with D3D11 rendering. The problem occurred when I alt-tabbed the window while I didn't yet have Present() in my loop (I simply hit this issue before implementing the rendering function). In that case, after minimizing the window, the red and blue channels on my screen were swapped (yes, literally).
It took me a long time to find because I suspected my swap chain or the window itself (SDL). Can you help me find the reason for this bug, for educational purposes?
This is usually due to a graphics driver bug with RGBA swap chains. You can try updating your driver (run Windows Update), but to improve compatibility you can change your swap chain surface format to BGRA (specifically, DXGI_FORMAT_B8G8R8A8_UNORM). As long as you are just doing normal rendering (and not doing anything fancy like UpdateSubresource directly to the back buffer), you should be able to leave everything else as-is and it will render correctly.
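For example, with a plain DXGI_SWAP_CHAIN_DESC the change is only the Format field; a sketch (width, height and hwnd stand for whatever your SDL window already gives you):
// Only the Format line is the actual fix; the rest is a typical fullscreen setup.
DXGI_SWAP_CHAIN_DESC desc = {};
desc.BufferDesc.Width  = width;                       // your window/display width
desc.BufferDesc.Height = height;                      // your window/display height
desc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;  // BGRA instead of R8G8B8A8_UNORM
desc.SampleDesc.Count  = 1;
desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount       = 2;
desc.OutputWindow      = hwnd;                        // HWND, e.g. obtained via SDL_GetWindowWMInfo
desc.Windowed          = FALSE;                       // fullscreen, as in the question
desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD;
// Pass desc to D3D11CreateDeviceAndSwapChain (or IDXGIFactory::CreateSwapChain) as before.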

Resetting GPU and driver after CUDA error

Sometimes, bugs in my CUDA programs cause the desktop graphics to break (in Windows). Typically, the screen remains somewhat readable, but when graphics change, such as when dragging a window, lots of semi-random colored pixels and small blocks appear.
I have tried to reset the GPU and driver by changing the desktop resolution, but that doesn't help. The only fix I have found is to reboot the computer.
Is there a program out there or some trick I can use to get the driver and GPU to reset without rebooting?
Because the same problem sometimes occurs on Unix and Google forwarded me to this thread, I hope this helps somebody else.
On ubuntu unloading and reloading the nvidia kernel module solved the problem for me:
sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm
Edit:
If you are on Tesla hardware on Linux and can run nvidia-smi, then you can reset the GPU using
nvidia-smi -r
or
nvidia-smi --gpu-reset
Here is the man output for this switch:
Resets GPU state. Can be used to clear double bit ECC errors or
recover hung GPU. Requires -i switch to target specific device.
Available on Linux only.
Otherwise...
The way to truly reset the hardware is to reboot.
What you describe shouldn't happen. I recommend testing with different hardware and letting us know if it still occurs.
To reset the graphics stack in Windows, press Win+Ctrl+Shift+B.
I have a GeForce GTX 260 with the NVIDIA GPU SDK 4.2 and I am experiencing the same problems.
Sometimes while developing I have bugs in my programs. This causes the screen to show the random colored pixels described in this post.
As stated here, if I change the resolution they do not disappear. Moreover, if I only change the colour depth from 32 to 16 bits, the random colored pixels disappear, but going back to 32 bits (without rebooting) makes them appear again.
The last bug that caused this behaviour was using __constant__ memory but passing it to the kernel as a pointer argument:
test<<<grid, threadsPerBlock>>>( cuda_malloc_data, cuda_constant_data );
If I do not pass cuda_constant_data, then there is no bug (and consequently, the random coloured pixels do not appear).
from "device manager", under Display adapters tab, find the driver
disable it
press win + ctrl +shift + B (monitor will blink)
enable the driver
there you go.
On Linux, another thing worth checking is whether a stuck process is still holding the GPU. Find it with:
ps -ef
look for something like root 4066644 1 99 08:56 ? 04:32:25 /opt/conda/bin/python /data/
and kill it:
kill 4066644
