I'm using double buffering and ending the draw loop with the buffer swap function, SDL_GL_SwapWindow. When I turn VSync off with:
SDL_GL_SetSwapInterval(0); // returns 0, so the swap interval is set correctly
VSync still appears to be enabled on this device.
I've tested the same code on iOS, on other Android devices (including tablets), and on PCs and a Mac with a very simple scene, and all of them go from about 60 fps with VSync to 400+ fps without it.
The only device that seems to keep VSync enabled is the Note 4, since the fps stay the same.
That's why I'm asking whether there is any reason for this. I've looked up the device specifications and checked the display and developer options in case there was some kind of locked-VSync setting there, but I found nothing related to it.
EDIT:
Same behaviour with a Samsung Galaxy S4 (VSync won't turn off)
As clarified in the comments and the documentation, some drivers and hardware setups cap the framerate regardless of the application's vsync configuration or frame-rate management.
In particular, most newer Android devices limit the framerate this way.
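To check whether the cap is coming from the driver/compositor rather than from SDL, a minimal sketch is shown below (not from the original post; it assumes an SDL_Window* named window and an active GL context, and uses only standard SDL2 calls): read the swap interval back and time the swap itself.

// Ask for no vsync and report what SDL thinks is in effect.
if (SDL_GL_SetSwapInterval(0) != 0) {
    SDL_Log("SDL_GL_SetSwapInterval failed: %s", SDL_GetError());
}
SDL_Log("Swap interval reported by SDL: %d", SDL_GL_GetSwapInterval());

Uint64 freq = SDL_GetPerformanceFrequency();
for (int frame = 0; frame < 600; ++frame) {
    // ... draw the scene ...
    Uint64 before = SDL_GetPerformanceCounter();
    SDL_GL_SwapWindow(window);
    Uint64 after = SDL_GetPerformanceCounter();
    double swapMs = 1000.0 * (double)(after - before) / (double)freq;
    // If this hovers around 16.6 ms even with a swap interval of 0,
    // the driver/compositor is enforcing vsync regardless of the request.
    SDL_Log("swap took %.2f ms", swapMs);
}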
I'm attempting to mute the mic on my Logitech C920 webcam (OSX 10.8.5), but neither of the two approaches I have tried works 100%. I would really appreciate it if someone who has experience with Apple's CoreAudio could take a look.
Here is what I have tried:
Setting mute via AudioObjectSetPropertyData() using:
address.mScope = kAudioDevicePropertyScopeInput;
address.mElement = kAudioObjectPropertyElementMaster;
address.mSelector = kAudioDevicePropertyMute;
This works: I can successfully mute and unmute, but eventually I can get into a state where the mic no longer receives audio. It seems to be triggered by switching the default input to the internal mic while the C920 is muted and then switching back to the C920 mic. The only way I have found to get the C920 mic back into a good state is to pull the USB cable and plug it back in.
Setting the volume to 0.0f via AudioObjectSetPropertyData() using:
address.mScope = kAudioDevicePropertyScopeInput;
address.mElement = kAudioObjectPropertyElementMaster;
address.mSelector = kAudioDevicePropertyVolumeScalar;
This almost works. The OSX UI input volume slider moves all the way to the left, but the mic is still picking up a little bit of audio. Uhhhhggg so close!
Opening the "Audio MIDI setup" app shows the C920 mic. When the volume value is set to zero, the dB value is 20. When the volume value is set to 1, the db value is set to 50. This is different from the built in mic that looks like it has a dB range of -12 to 12. Not sure if this matters.
When setting mute or the volume, I've tried fetching the individual channels and setting them as well. Doesn't seem to have an impact. I think with both input devices setting the Master channel is working fine.
I was wondering if maybe this is a hardware issue. I should note that the Logitech C920 isn't officially supported on the Mac (although a ton of people use it). I'm able to control the internal mic without any issues. Hopefully I'm just overlooking something :-)
When setting the volume (kAudioDevicePropertyVolumeScalar) to zero, you should also set mute to true using kAudioDevicePropertyMute. And when adjusting the volume of an input, be careful to check its mute status.
For example:
Set the input volume to 0 in the system sound panel.
Set the input volume to 100 using kAudioDevicePropertyVolumeScalar.
Look at the dB indicator: it stays at zero even though the slider is at 100, and no sound from the microphone is captured. If you read the mute value back via kAudioDevicePropertyMute, you get mute = 1.
Conclusion:
There is a "Mute" state for the input in the system, but it is not shown in the sound settings panel. The system automatically sets mute to true when the volume is adjusted to zero, and clears it when the volume is set above zero.
I need to play a short sound repeatedly (simulating a metronome) while recording sound.
For the metronome I basically set up a DispatcherTimer with a specific Interval and fire a SoundEffect on every tick. For the recorder I call XNA's FrameworkDispatcher.Update method every 33 milliseconds (also using a DispatcherTimer for that).
I run the metronome and it works fine, but when I begin to record there's a short break in the playback (hard to say whether it delays the Interval or just mutes the sound), and after a while (while already recording) the metronome continues to tick, but with a more 'flattened' sound.
Is this a hardware limitation, or am I doing something wrong?
I think this is hardware-related. I was making an app that modifies sound as it is captured. When I used a headset (with a mic) connected to the device there was a big echo on playback; when I used only headphones (and the device mic) everything was OK. I tested it on HTC and Nokia devices with the same results, although the HTC was a little better :)
Sometimes, bugs in my CUDA programs cause the desktop graphics to break (in Windows). Typically, the screen remains somewhat readable, but when graphics change, such as when dragging a window, lots of semi-random colored pixels and small blocks appear.
I have tried to reset the GPU and driver by changing the desktop resolution, but that doesn't help. The only fix I have found is to reboot the computer.
Is there a program out there or some trick I can use to get the driver and GPU to reset without rebooting?
Because the same problem sometimes occurs on Unix and Google forwarded me to this thread, I hope this helps somebody else.
On Ubuntu, unloading and reloading the NVIDIA kernel module solved the problem for me:
sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm
Edit:
If you are on Tesla hardware on Linux and can run nvidia-smi, then you can reset the GPU using
nvidia-smi -r
or
nvidia-smi --gpu-reset
Here is the man output for this switch:
Resets GPU state. Can be used to clear double bit ECC errors or
recover hung GPU. Requires -i switch to target specific device.
Available on Linux only.
Otherwise...
The way to truly reset the hardware is to reboot.
What you describe shouldn't happen. I recommend testing with different hardware and letting us know if it still occurs.
To reset the graphics stack in Windows, press Win+Ctrl+Shift+B.
I have a GeForce GTX 260 with the NVIDIA GPU SDK 4.2 and I am experiencing the same problems.
Sometimes, while developing, I have bugs in my programs, and this causes the screen to show the random colored pixels described in this post.
As stated here, changing the resolution does not make them disappear. However, if I only change the COLOUR DEPTH from 32 to 16 bits, the random colored pixels disappear, but going back to 32 bits (without rebooting) makes them appear again.
The last bug that caused this behaviour was using __constant__ memory but passing it to the kernel as if it were a device pointer:
test<<<grid, threadsPerBlock>>>( cuda_malloc_data, cuda_constant_data );
If I do not pass cuda_constant_data, there is no bug (and consequently the random coloured pixels do not appear).
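For reference, __constant__ data is not meant to be passed to a kernel as an ordinary pointer; the symbol is filled with cudaMemcpyToSymbol and read directly by name inside the kernel. A minimal sketch of the correct pattern (the names coeffs and test here are made up for illustration, not taken from the code above):

#include <cuda_runtime.h>

__constant__ float coeffs[16];           // lives in constant memory

__global__ void test(float *data)        // only the regular device buffer is a parameter
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= coeffs[i % 16];           // constant memory is read directly by symbol name
}

int main()
{
    float host_coeffs[16] = {1.0f};
    float *cuda_malloc_data = NULL;
    cudaMalloc(&cuda_malloc_data, 1024 * sizeof(float));
    cudaMemset(cuda_malloc_data, 0, 1024 * sizeof(float));

    // Copy to the __constant__ symbol instead of passing its address to the kernel.
    cudaMemcpyToSymbol(coeffs, host_coeffs, sizeof(host_coeffs));

    test<<<4, 256>>>(cuda_malloc_data);
    cudaDeviceSynchronize();
    cudaFree(cuda_malloc_data);
    return 0;
}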
from "device manager", under Display adapters tab, find the driver
disable it
press win + ctrl +shift + B (monitor will blink)
enable the driver
there you go.
You can also look for the process that is still holding the GPU and kill it:
ps -ef
Find something like root 4066644 1 99 08:56 ? 04:32:25 /opt/conda/bin/python /data/ and then:
kill 4066644
I am attempting to grab frames and preview the video from a Bodelin Proscope HR USB microscope. I have a simple Cocoa app using an AVCaptureSession with an AVCaptureDeviceInput for the Proscope HR and an AVCaptureVideoPreviewLayer displaying the output.
All of this works fine with the built-in iSight camera, but the output from the Proscope HR is garbled beyond recognition.
Using the bundled Proscope software, I sometimes see the same garbling when trying to use the higher resolutions. My suspicion is that the hardware is rather under-spec'd; this is bolstered by the fact that at the lowest resolution, 320x200, the bundled software grabs at 30 fps, but as you bump up the resolution the frame rate drops dramatically: down to 15 fps at 640x480 and all the way down to 3.75 fps at the maximum resolution of 1600x1200.
EDIT: I originally thought that perhaps the frame rate being attempted by the AVCaptureSession was too high, but I have since confirmed that (at least in theory) the capture session is requesting the frame rate advertised by the AVCaptureDevice.
I should note that I have already tried all of the standard AVCaptureSessionPreset* constant presets defined in the headers, and none of them improved the results from the Proscope HR. (They did however appear to affect the built-in iSight in approximately the expected manner.)
Here is a screen capture showing the garbled output from the ProScope HR:
And just for comparison, the output from a generic WebCam:
According to the documentation, you should configure the AVCaptureDevice rather than the AVCaptureSession.
EDIT:
AVFoundation is built on top of IOKit and relies on the hardware behaving correctly. In your case it looks like the root of the problem is hardware-related, so you should consider using IOKit directly.
I'm working on an app based on DirectX 10 using SlimDX. I would like to enable vsync as in DirectX 9, but the fps doesn't seem to lock to 60 Hz (which does happen when I'm using DirectX 9). I'm setting vsync using this:
SwapChain.Present(1, PresentFlags.None);
Did I do something wrong?
Btw, I'm running Win7 with an ATI HD 5570 video card. After some googling I gather that ATI can force vsync for certain games, so I wonder if that's related.
A reference to C++ code will do as well; I'll translate it myself.
Thanks
The first argument of SwapChain.Present is the sync interval. 0 indicates that presentation should occur immediately, without synchronization; any other value indicates that presentation should be synchronized with the specified vertical blank.
So to enable vsync, use it like this:
SwapChain.Present(1, PresentFlags.None);
You can also try to force vsync using the Catalyst Control Center.
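Since you mentioned that a C++ reference would do: the equivalent native call is IDXGISwapChain::Present with a sync interval of 1. A minimal sketch of the present call at the end of a frame (the swap chain itself is assumed to have been created elsewhere, e.g. with D3D10CreateDeviceAndSwapChain):

#include <dxgi.h>
#include <d3d10.h>

// swapChain is assumed to be the IDXGISwapChain* created alongside the D3D10 device.
void PresentFrame(IDXGISwapChain *swapChain)
{
    // SyncInterval = 1: wait for one vertical blank before presenting (vsync on).
    // SyncInterval = 0: present immediately (vsync off).
    HRESULT hr = swapChain->Present(1, 0);
    if (FAILED(hr))
    {
        // Handle DXGI_ERROR_DEVICE_REMOVED / device reset here if needed.
    }
}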