Increase accelerometer sampling rate in smartwatch by modifying device driver - linux-kernel

The normal accelerometer sampling rate available through the API is 100 Hz, but I want a higher rate. I found a research paper in which the Linux kernel is modified to get a 4000 Hz sampling rate. This link to an image gives an idea of the implementation.
As I have limited experience with writing/modifying kernel drivers, can someone provide any help/guidance/resources?
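For context, if the watch's accelerometer is exposed through the Linux IIO subsystem, the rates the stock driver supports can be inspected and requested from user space; the device path and attribute names below are illustrative and vary per device, and the driver still caps what can actually be set, which is why the paper patches the driver itself.

```cpp
// Sketch: query and request an IIO accelerometer sampling rate from user space.
// The device index and attribute names are illustrative and vary by driver.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    const std::string dev = "/sys/bus/iio/devices/iio:device0/";  // hypothetical device node

    // Rates the stock driver advertises (if it exposes this attribute at all).
    std::ifstream avail(dev + "sampling_frequency_available");
    std::string rates;
    std::getline(avail, rates);
    std::cout << "Advertised rates: " << rates << "\n";

    // Request the highest rate the unmodified driver allows (100 Hz in this case);
    // anything beyond this means changing the driver's rate table / ODR register setup.
    std::ofstream freq(dev + "sampling_frequency");
    freq << 100 << std::flush;
    return freq.good() ? 0 : 1;
}
```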

Related

Hardware H264 encoding ID3D11Texture2D with Media Foundation

I am working on a project which captures the screen and encodes it. I can already capture the screen using the Desktop Duplication API (Win8+). Using the API I can get ID3D11Texture2D textures, transfer them from GPU to CPU and then use libx264 to encode them.
However, pulling textures from GPU to CPU can be a bottleneck which can potentially decrease the fps. libx264 also takes up CPU cycles (depending on quality) to encode frames. As an optimization, I am looking to encode the ID3D11Texture2D textures on the GPU itself instead of using the CPU.
I have already checked the documentation and some sample code but have had no success. I would appreciate it if someone could point me to a resource that does exactly what I want reliably.
Video encoders, hardware and software, may be available in different form factors. Windows itself offers extensible APIs with a choice of encoders, and additional encoders may be available as libraries and SDKs. You are already using one such library (x264). Hardware encoders are typically vendor-specific and depend on available hardware, which is involved directly in the encoding process. If you are interested in a solution for specific hardware, it might make sense to check the respective SDK.
Otherwise, the typical generic interface for hardware-backed video encoding in Windows is the Media Foundation Transform (MFT). Microsoft provides a stock software-only H.264 video encoder, which is unlikely to give any advantage over x264 except that it shares the MFT interface with the other options. Video hardware drivers, however, often install additional MFTs backed by hardware implementations. Examples of such are:
Intel® Quick Sync Video H.264 Encoder MFT
NVIDIA H.264 Encoder MFT
AMDh264Encoder
Offered by different vendors, these MFTs provide similar functionality, and using them to encode H.264 is a good way to take advantage of hardware video encoding across a wide range of hardware.
See also:
Registering and Enumerating MFTs
Basic MFT Processing Model
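A minimal sketch of enumerating these hardware encoder MFTs at runtime (error handling trimmed; assumes a Windows 7+ system with Media Foundation available):

```cpp
// Sketch: enumerate hardware H.264 encoder MFTs (Quick Sync, NVENC, AMD, ...).
// Error handling is trimmed for brevity.
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mftransform.h>
#include <iostream>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfuuid.lib")
#pragma comment(lib, "ole32.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    MFT_REGISTER_TYPE_INFO outType = { MFMediaType_Video, MFVideoFormat_H264 };
    IMFActivate **activates = nullptr;
    UINT32 count = 0;

    // Ask only for hardware MFTs in the video encoder category.
    MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
              MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
              nullptr,      // any input type
              &outType,     // H.264 output
              &activates, &count);

    for (UINT32 i = 0; i < count; ++i) {
        WCHAR *name = nullptr;
        UINT32 len = 0;
        if (SUCCEEDED(activates[i]->GetAllocatedString(
                MFT_FRIENDLY_NAME_Attribute, &name, &len))) {
            std::wcout << name << L"\n";   // e.g. "NVIDIA H.264 Encoder MFT"
            CoTaskMemFree(name);
        }
        activates[i]->Release();
    }
    CoTaskMemFree(activates);
    MFShutdown();
    CoUninitialize();
}
```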
You have to check whether sharing a texture between the GPU encoder and DirectX is possible.
I know it's possible to share a texture between the Nvidia decoder and DirectX, because I've done it successfully. Nvidia has some interop capability, so first check whether you can share the texture and keep everything on the GPU.
With Nvidia you can do this: Nvidia decoding -> DirectX display, entirely on the GPU.
Check whether DirectX display -> Nvidia encoding is possible (knowing Nvidia offers interop).
For Intel and ATI, I don't know whether they provide interop with DirectX.
The main thing is to check whether you can interop your DirectX texture with the GPU encoder's texture.
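For the Media Foundation route specifically, the usual way to avoid the GPU-to-CPU copy is to hand the encoder MFT your D3D11 device through IMFDXGIDeviceManager and wrap each captured texture in an IMFSample. A rough sketch, with error handling and the rest of the encoder configuration omitted:

```cpp
// Sketch: wrap a desktop-duplication ID3D11Texture2D in an IMFSample so it can be
// fed to a D3D11-aware hardware encoder MFT without copying through the CPU.
// Error handling and the encoder setup (media types, etc.) are omitted.
#include <windows.h>
#include <d3d11.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mftransform.h>
#pragma comment(lib, "mfplat.lib")

// 'encoder' is the hardware MFT, 'device' the D3D11 device used for capture.
void SetupD3DManager(IMFTransform *encoder, ID3D11Device *device)
{
    UINT token = 0;
    IMFDXGIDeviceManager *manager = nullptr;
    MFCreateDXGIDeviceManager(&token, &manager);
    manager->ResetDevice(device, token);
    // Tell the MFT to use this device for its input frames.
    encoder->ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER,
                            reinterpret_cast<ULONG_PTR>(manager));
}

IMFSample *WrapTexture(ID3D11Texture2D *texture)
{
    IMFMediaBuffer *buffer = nullptr;
    // Wrap the GPU surface; no readback to system memory happens here.
    MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), texture,
                              0 /*subresource*/, FALSE, &buffer);

    IMFSample *sample = nullptr;
    MFCreateSample(&sample);
    sample->AddBuffer(buffer);
    buffer->Release();
    return sample;   // set sample time/duration, then ProcessInput() on the MFT
}
```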

Compression Algorithm for Guaranteed Compression Ratio?

Can anyone offer me pointers or tips on finding/creating a data compression algorithm that has a guaranteed compression ratio? Obviously this couldn't be a lossless algorithm.
My question is similar to the one here, but there was no suitable answer:
Shrink string encoding algorithm
What I am trying to do is stream live audio over a wireless network (my own spec, not WiFi) with a tight bandwidth restriction. Let's say I have packets 60 bytes in size. I need an algorithm to compress these to, say, 35 bytes every time without fail. Reliable guaranteed compression to a fixed size is key. Audio quality is less of a priority.
Any suggestions or pointers? I may end up creating my own algorithm from scratch, so even if you don't know of any libraries or standard algorithms, I would be grateful for brilliant ideas of any kind!
It is good that you mentioned your use case: live audio.
There are many audio CODECs (COder-DECoders) that work exactly this way (constant bitrate). For example, take a look at Opus. You can select bitrates from 6 kb/s to 510 kb/s, and frame sizes from 2.5 ms to 60 ms.
I used it in the past to transfer audio over RF datalinks. You'll probably need to implement a de-jitter buffer as well (see more here). Also note that the internal clock of many sound cards is not accurate, so there may be a drift between the source and target rate (e.g. a 30 ms buffer may be played in 29.9 ms or 30.1 ms), and you may need to compensate for this as well.
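A rough sketch of what this looks like with Opus, with numbers picked to match a 35-byte packet budget (8 kHz mono, 20 ms frames, so the CBR bitrate is 35 bytes × 8 bits / 0.020 s = 14000 b/s); adjust to your own sample rate and budget:

```cpp
// Sketch: constant-bitrate Opus encoding so every packet has a fixed, known size.
// Assumes 8 kHz mono input and a 35-byte packet budget per 20 ms frame.
// Build with: pkg-config --cflags --libs opus. Error handling trimmed.
#include <opus/opus.h>
#include <cstdio>

int main() {
    int err = 0;
    OpusEncoder *enc = opus_encoder_create(8000, 1, OPUS_APPLICATION_VOIP, &err);

    opus_encoder_ctl(enc, OPUS_SET_BITRATE(14000));  // -> ~35 bytes per 20 ms frame
    opus_encoder_ctl(enc, OPUS_SET_VBR(0));          // hard CBR: same size every packet

    const int frame_size = 160;           // 20 ms at 8 kHz
    opus_int16 pcm[frame_size] = {0};     // your captured audio samples go here
    unsigned char packet[35];

    opus_int32 n = opus_encode(enc, pcm, frame_size, packet, sizeof(packet));
    std::printf("packet size: %d bytes\n", n);        // stays fixed in CBR mode

    opus_encoder_destroy(enc);
    return 0;
}
```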

Saving highest quality video from video-capture card

I have a machine with 2x 3 GHz dual-core Xeons and 4x 10k rpm SCSI-320 disks in RAID 0.
The capture card is an Osprey 560 64-bit PCI card.
The operating system is currently Windows Server 2003.
The video stream that I can open with VLC using DirectShow is of rather nice quality.
However, saving this video stream without loss of quality has proven quite difficult.
Using the H.264 codec I am able to achieve satisfying quality, but all 4 cores jump to 100% load after a few seconds and it starts dropping frames; the machine is not powerful enough for realtime encoding. I've not been able to achieve satisfying MPEG-1 or MPEG-4 quality, no matter which bitrate I set.
The thing is, the disks in this machine are pretty fast even by today's standards, and they are bored. I don't care about disk usage, I want quality.
I have searched in vain for a way to pump that beautiful video stream that I see in VLC onto the disk for later encoding. I reckon the disks would be fast enough, or maybe something could apply a light compression, enough that the disks can keep up, but not so much as to lose visible quality.
I have tried FFmpeg, as it seems capable of streaming a yuv4 stream down to the disk, but of course FFmpeg is unable to open the dshow device (same error as this guy: Ffmpeg streaming from capturing device Osprey 450e fails).
Please recommend capable software which can do this.
I finally figured it out; it is deceptively simple:
Just uncheck the "transcode" option in the window where you select an encoding preset.
2 gigabytes per minute is a low price to pay for finally getting those precious memories off of old videotapes in the quality they deserve.

GPU benchmark for nvidia Optimus cards

I need a GPGPU benchmark which will load the GPU so that I can measure parameters like temperature rise, amount of battery drain, etc. Basically I want to alert the user when the GPU is using much more power than in normal use. Hence I need to decide on threshold values of GPU temperature, clock frequency and battery drain rate above which the GPU is considered to be using more power than normal. I have tried several graphics benchmarks, but most of them don't use GPU resources to the fullest.
Please provide me a link to such a GPGPU benchmark.

What is the latency (or delay) time for callbacks from the waveOutWrite API method?

I'm having a debate with some developers on another forum about accurately generating MIDI events (Note On messages and so forth). The human ear is pretty sensitive to slight timing inaccuracies, and I think their main problem comes from their use of relatively low-resolution timers which quantize their events around 15 millisecond intervals (which is large enough to cause perceptible inaccuracies).
About 10 years ago, I wrote a sample application (Visual Basic 5 on Windows 95) that was a combined software synthesizer and MIDI player. The basic premise was a leapfrog-buffer playback system, with each buffer being the duration of a sixteenth note (example: with 120 quarter-notes per minute, each quarter-note was 500 ms and thus each sixteenth-note was 125 ms, so each buffer was 5513 samples at 44.1 kHz). Each buffer was played via the waveOutWrite method, and the callback function from this method was used to queue up the next buffer and also to send MIDI messages. This kept the WAV-based audio and the MIDI audio synchronized.
To my ear, this method worked perfectly - the MIDI notes did not sound even slightly out of step (whereas if you use an ordinary timer accurate to 15 ms to play MIDI notes, they will sound noticeably out of step).
In theory, this method would produce MIDI timing accurate to the sample, or 0.0227 milliseconds (since there are 44.1 samples per millisecond). I doubt that this is the true latency of this approach, since there is presumably some slight delay between when a buffer finishes and when the waveOutWrite callback is notified. Does anyone know how big this delay would actually be?
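For reference, the leapfrog pattern described above looks roughly like this with the waveOut API (a modern C++ sketch, not the original VB5 code; the commented-out helpers are hypothetical placeholders, and the callback only signals an event because the waveOut documentation forbids calling most waveOut functions from inside the callback):

```cpp
// Sketch of the double-buffered waveOutWrite pattern: the callback sets an event,
// and the feeder loop requeues buffers and fires MIDI messages. Error handling omitted.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

static HANDLE g_bufferDone;

static void CALLBACK WaveCallback(HWAVEOUT, UINT msg, DWORD_PTR, DWORD_PTR, DWORD_PTR)
{
    if (msg == WOM_DONE)
        SetEvent(g_bufferDone);          // a buffer finished playing
}

int main()
{
    const int kSamplesPerBuffer = 5513;  // one sixteenth note at 120 BPM, 44.1 kHz
    g_bufferDone = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    WAVEFORMATEX fmt = {};
    fmt.wFormatTag = WAVE_FORMAT_PCM;
    fmt.nChannels = 1;
    fmt.nSamplesPerSec = 44100;
    fmt.wBitsPerSample = 16;
    fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEOUT hwo = nullptr;
    waveOutOpen(&hwo, WAVE_MAPPER, &fmt,
                (DWORD_PTR)WaveCallback, 0, CALLBACK_FUNCTION);

    short pcm[2][kSamplesPerBuffer] = {};
    WAVEHDR hdr[2] = {};
    for (int i = 0; i < 2; ++i) {
        hdr[i].lpData = (LPSTR)pcm[i];
        hdr[i].dwBufferLength = kSamplesPerBuffer * sizeof(short);
        waveOutPrepareHeader(hwo, &hdr[i], sizeof(WAVEHDR));
        // FillNextSixteenth(pcm[i]);        // hypothetical: synthesize next sixteenth note
        waveOutWrite(hwo, &hdr[i], sizeof(WAVEHDR));
    }

    // Leapfrog: whenever one buffer finishes, refill and requeue it while the other plays.
    for (int next = 0; ; next = 1 - next) {
        WaitForSingleObject(g_bufferDone, INFINITE);
        // SendMidiForThisSixteenth();        // hypothetical: keeps MIDI in step with audio
        // FillNextSixteenth(pcm[next]);
        waveOutWrite(hwo, &hdr[next], sizeof(WAVEHDR));
    }
}
```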
The Windows scheduler runs at either 10 ms or 16 ms intervals by default, depending on the processor. If you use the timeBeginPeriod() API you can change this interval (at a fairly significant power consumption cost).
In Windows XP and Windows 7, the wave APIs run with a latency of about 30 ms; for Windows Vista the wave APIs have a latency of about 50 ms. You then need to add in the audio engine latency.
Unfortunately I don't have numbers for the engine latency in one direction, but we do have some numbers regarding engine latency: we ran a test that played a tone looped back through a USB audio device and measured the round-trip latency (render to capture). On Vista the round-trip latency was about 80 ms with a variation of about 10 ms. On Win7 the round-trip latency was about 40 ms with a variation of about 5 ms. YMMV, however, since the amount of latency introduced by the audio hardware is different for each piece of hardware.
I have absolutely no idea what the latency was for the XP audio engine or the Win9x audio stack.
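A tiny sketch of the timeBeginPeriod() call mentioned above; raise the timer resolution only around the timing-sensitive work, since it costs power:

```cpp
// Sketch: temporarily raise the system timer resolution around latency-sensitive work.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(tc));   // query the minimum supported period (often 1 ms)

    timeBeginPeriod(tc.wPeriodMin);    // scheduler/timer interval drops from ~10-16 ms
    // ... run the timing-sensitive audio/MIDI work here ...
    timeEndPeriod(tc.wPeriodMin);      // always pair with timeBeginPeriod
    return 0;
}
```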
At the very basic level, Windows is a multithreaded OS, and it schedules threads with 100 ms time slices.
This means that, if there is no CPU contention, the delay between the end of the buffer and the waveOutWrite callback could be arbitrarily short. Or, if there are other busy threads, you have to wait up to 100 ms per thread.
In the best case, however, CPU speeds are clocked in GHz now, which puts an absolute lower bound on how fast the callback can be called on the order of 10⁻¹⁰ seconds.
Unless you can figure out the maximum number of waveOutWrite callbacks you can process in a single second, which would imply the latency of each call, I think the latency is really going to be orders of magnitude below perception most of the time, unless there are too many busy threads, in which case it's going to go horribly, horribly wrong.
To add to the great answers above:
Your question is about a latency that Windows neither promised nor cared about. As such, it might be quite different depending on OS version, hardware and other factors. The waveOut API, and DirectSound too (not sure about WASAPI, but I guess it is also true for this latest Vista+ audio API), are all set up for buffered audio output. Specific callback accuracy is not required as long as you are on time queuing the next buffer while the current one is still being played.
When you start audio playback, you make a few assumptions, such as no underflows during playback, all output being continuous, and the audio clock rate being exactly what you expect, e.g. precisely 44,100 Hz. Then you do simple math to schedule your wave output in time, converting time to samples and then to bytes.
Sadly, the effective playback rate is not precise: imagine the real hardware sampling rate is 44,100 Hz -3%, and in the long run the time-to-byte math might let you down. There have been attempts to compensate for this effect, such as making the audio hardware the playback clock and synchronizing video to it (this is how players work), and rate-matching techniques to match the incoming data rate to the actual playback rate on the hardware. Both of these make absolute time measurements and the latency in question quite speculative knowledge.
On top of this come the API latencies of 20 ms, 30 ms, 50 ms and so on. The waveOut API has long been a layer on top of other APIs. This means that some processing takes place before data actually reaches the hardware, and this processing requires that you take your hands off the queued data well in advance, or the data won't reach the hardware. Let's say you attempt to queue your data in 10 ms buffers right before playback time: the API will accept this data, but it will itself be late passing this data downstream, and there will be silence or comfort noise on the speakers.
Now this is also related to the callbacks that you receive. You could say that you don't care about the latency of buffers and what is important to you is precise callback time. However, since the API is layered, you receive callbacks at the accuracy of the inner layers' synchronization: e.g. the second inner layer notifies on a free buffer, and the first inner layer updates its records and checks whether it can release your buffer too (hey, those buffers don't have to match either). This makes callback accuracy expectations really weak and unreliable.
Given that I have not touched the waveOut API for quite some time, if such a question of synchronization accuracy came up, I would probably think of two things first:
Windows provides access to the audio hardware clock (I am aware of the IReferenceClock interface available through DirectShow, and it probably comes from another lower-level thing which is also accessible), and having that available, I would try to synchronize with it.
The latest audio API from Microsoft, WASAPI, provides special support for low-latency audio, with new cool stuff like better media thread scheduling, exclusive-mode streams and <10 ms latency for PCM - this is where better sync is to be looked at (see the sketch below).
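On the WASAPI side, a natural first step for that second item is to query the endpoint's default and minimum (exclusive-mode) device periods, roughly like this (sketch, error handling omitted):

```cpp
// Sketch: ask WASAPI for the default and minimum (exclusive-mode) device periods,
// the first step toward the low-latency / exclusive-mode path mentioned above.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <cstdio>
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator *enumr = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumr);

    IMMDevice *device = nullptr;
    enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient *client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);

    REFERENCE_TIME defaultPeriod = 0, minPeriod = 0;   // in 100 ns units
    client->GetDevicePeriod(&defaultPeriod, &minPeriod);
    std::printf("default period: %.2f ms, minimum (exclusive): %.2f ms\n",
                defaultPeriod / 10000.0, minPeriod / 10000.0);

    client->Release();
    device->Release();
    enumr->Release();
    CoUninitialize();
    return 0;
}
```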
