I am prototyping the driver for an 8-bit parallel image sensor on an ARM device with a built-in ISP. I will spare the details, as I am looking for general guidance on how to approach this particular problem.
Simply put, when I load the ISP driver (not my prototype camera driver) with the dyndbg=+pt flag, the camera driver usually grabs images (about 8 out of 10 attempts). If I remove the flag and load the ISP driver without any options, my camera driver rarely finishes its job (about 1 in ~100 attempts); the system gets stuck reporting that the device has timed out.
I suspect that loading the driver with the debug flag somehow alters the timing, resulting in a more stable interaction between the ISP and the image sensor. I mostly spend my hours debugging the electrical aspects of embedded boards and rarely delve into a deep software stack such as an ISP or Video4Linux, so my conjecture may be completely off.
Any pointers would therefore be much appreciated. The kernel is 3.18.
You haven't provided a lot of details for us to work with here, but if enabling debug output makes your device work, my suspicion would be that the debug output is introducing a delay that your device needs in order to work properly. I'd read through your device's datasheets carefully to see whether there are any timing requirements you might not be respecting.
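If the datasheet does turn up such a requirement, the usual fix is an explicit delay in the driver rather than relying on the side effect of debug prints. Here is a minimal sketch under that assumption; the my_sensor_* helpers and the 5 ms settling time are made up for illustration, and the real numbers have to come from the sensor and ISP datasheets:

```c
#include <linux/delay.h>
#include <linux/types.h>

struct my_sensor;                                   /* hypothetical driver state */
int my_sensor_enable_clocks(struct my_sensor *s);   /* hypothetical helpers */
int my_sensor_start_capture(struct my_sensor *s);

static int my_sensor_stream_on(struct my_sensor *sensor)
{
	int ret;

	ret = my_sensor_enable_clocks(sensor);
	if (ret)
		return ret;

	/*
	 * Datasheet-driven settling time between enabling the clock and
	 * requesting the first frame; usleep_range() is preferred over
	 * udelay() for millisecond-scale waits in process context.
	 */
	usleep_range(5000, 6000);

	return my_sensor_start_capture(sensor);
}
```

If a delay like this fixes the timeouts, it also confirms that the dyndbg output was only masking a missing wait.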
Related
Please clarify once more the technical difference between these three things on MS Windows systems. The first is the timer resolution, which you can set and query via the undocumented ntdll.dll functions NtSetTimerResolution and NtQueryTimerResolution, or inspect with the Sysinternals clockres.exe tool.
This is one of the notorious tricks the Chrome browser used some time ago to perform better across the web (at the moment they have kept the high-resolution trick for the Flash plugin only). https://bugs.chromium.org/p/chromium/issues/detail?id=153139
https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
In fact, Visual Studio and SQL Server do the same trick in some cases. I personally feel it makes the whole system perform better and feel crisper, rather than slowing it down as many people out there warn.
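For reference, here is a minimal sketch of how that first mechanism is usually driven from user mode. The prototypes below are the commonly published ones (values in 100-ns units); since the functions are undocumented, treat them as an assumption rather than a guaranteed contract:

```c
#include <windows.h>
#include <stdio.h>

/* Commonly published prototypes for the undocumented ntdll exports. */
typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG Min, PULONG Max, PULONG Current);
typedef LONG (NTAPI *NtSetTimerResolution_t)(ULONG Desired, BOOLEAN Set, PULONG Current);

int main(void)
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");   /* always loaded */
    NtQueryTimerResolution_t query =
        (NtQueryTimerResolution_t)GetProcAddress(ntdll, "NtQueryTimerResolution");
    NtSetTimerResolution_t set =
        (NtSetTimerResolution_t)GetProcAddress(ntdll, "NtSetTimerResolution");
    ULONG min, max, cur;

    if (!query || !set)
        return 1;

    query(&min, &max, &cur);          /* all values in 100-ns units */
    printf("min %lu  max %lu  current %lu (100-ns units)\n", min, max, cur);

    set(max, TRUE, &cur);             /* request the finest resolution */
    printf("current is now %lu (100-ns units)\n", cur);
    return 0;
}
```

clockres.exe reports the same three numbers, just converted to milliseconds.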
What is the difference between the timer resolution and the application I/O and memory priority (realtime/high/above normal/normal/low/background/etc.) you can set via Task Manager, apart from the fact that the timer resolution applies to the whole system rather than a single application?
What is the difference between them and the Processor scheduling option you can adjust from CMD > SystemPropertiesPerformance.exe -> Advanced tab? By default, the client OS versions (XP/Vista/7/8/8.1/10) favor the performance of programs, while the server versions (2k3/2k8/2k12/2k16) favor background services. How does this option interact with the two above?
timeBeginPeriod() is the documented API for this. It is documented to affect the accuracy of Sleep(). Dave Cutler probably did not enjoy implementing it, but allowing Win 3.1 code to port made it necessary. The multimedia API back then was necessary to keep anemic hardware with small buffers going without stuttering.
It is very crude, but there is no other good way to do it in the kernel. The normal state for a processor core is to be stopped on a HLT instruction, consuming (almost) no power; the only way to revive it is a hardware interrupt. Which is what this does: it cranks up the clock interrupt rate. The clock normally ticks 64 times per second; you can jack it up to 1000 per second with timeBeginPeriod(), or 2000 with the native API.
And yes, that's pretty bad for power consumption. The clock interrupt handler also activates the thread scheduler, a fairly unsubtle chunk of code, which is why a Sleep() call can now wake up at (almost) the clock interrupt rate. This was tinkered with in Win 8.1, by the way; the only change I noticed is that it is not quite as responsive anymore, and a 1 msec rate can cause up to 2 msec delays.
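A minimal sketch of what that looks like from user mode, using the documented pair and timing Sleep(1) before and after the change (the 1 ms request and the loop count are just for illustration; link against winmm.lib):

```c
#include <windows.h>
#include <stdio.h>

/* Time 20 calls to Sleep(1) and report the total in milliseconds. */
static void measure(const char *label)
{
    LARGE_INTEGER freq, t0, t1;
    int i;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (i = 0; i < 20; i++)
        Sleep(1);
    QueryPerformanceCounter(&t1);
    printf("%s: 20 x Sleep(1) took %.1f ms\n", label,
           (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);
}

int main(void)
{
    measure("default resolution");   /* typically ~15.6 ms per tick */

    timeBeginPeriod(1);              /* raise the interrupt rate to ~1000/s */
    measure("1 ms resolution");      /* Sleep(1) now wakes close to every 1 ms */
    timeEndPeriod(1);                /* always pair with timeEndPeriod() */

    return 0;
}
```

On a machine still at the default 64 Hz tick, the first measurement lands somewhere around 300 ms and the second closer to 20-30 ms, which is the Sleep() accuracy effect described above.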
Chrome is indeed notorious for abusing the heck out of it. I always assumed that it provided a competitive edge for a company that does big business in mobile operating systems and battery-powered devices. The guy who started this web site noticed something was wrong. The more responsible thing for a browser to do is to bump the rate up to 10 msec, which is necessary to get accurate GIF animation. Multimedia playback does not need it anymore.
This otherwise has no effect at all on scheduling priorities. One detail I did not check is whether the thread quantum changes correspondingly (the number of ticks a thread may own a core before being evicted; 3 for a workstation). I suspect it does.
I am hoping this has a relatively simple answer. I've always been interested in AR, and I've been debating tinkering with a possibly AR-driven UI for mobile.
I guess the only real question is: with the camera continuously turned on, how much battery would that use? That is, would it be too much for something like this to be worth doing?
Battery drain is one of the biggest issues with smartphones nowadays. I'm not a specialist in power consumption or battery life, but anyone who owns and uses a smartphone (not only for calls, of course) can confirm this. There are many tips on the internet teaching you how to increase battery life. Ultimately, every process running on your device needs energy, and that energy comes from the battery.
To answer your question: I've been using smartphone cameras for AR applications for quite a long time now. It's a heavy process, and it does drain the battery faster than other processes. On the other hand, you also have to consider the other processes running on your device while your AR application is in use. For example, your app might use the device's sensors (gyroscope, GPS, etc.); these drain the battery as well. A simple test you can do is to charge your device, start the camera, and leave it running until the battery dies. That is exactly how much the camera drains the battery (you can even measure the time). Of course, you may want to turn off everything else running on the device first.
To answer your second question: it depends on how the application is built (many things can be optimized a lot!) and how it's going to be used. If the application is meant to run continuously for hours and hours, then you will need to wait for some other kind of technology to be discovered (joking... I hope) or attach an extra power supply to your device. I think it's worth building the application, optimizing it as you go, and again at the end when everything is up and running. If the camera is the only issue, then I'm sure it's worth trying!
I am trying to write a Linux driver for a PCIe device - the Adlink PCIe 7300A high-speed digital I/O card.
The driver works fine for normal memory transfers, but attempting to use the card's bus-mastering capability to initiate a DMA transfer of a buffer from CPU memory to the device's output FIFO simply does not work.
I have been trying to solve this problem on the order of weeks, not on the order of days.
Any insight at all would really really be appreciated.
Driver code -- https://github.com/sbrookes/timing_driver_sdarn/blob/master/kernel_land/timing.c
Device Datasheet -- http://www.acceed.com/manuals/adlink/P7300A%20Manual.PDF
PLX 9080 PCI Interface chip Datasheet -- http://www.der-ingo.de/bin/milanhelp/PLX9080.pdf
I cannot explain how much I would appreciate any bit of insight.
Thank you,
Scott
I seem to have solved the problem. There was an incorrect condition in the interrupt handler that aborted the DMA transfer at the wrong time, so the transfer never even got started.
A serious "duh" moment, but it took serious struggle to find it.
As per the comments, sorry if I polluted SO with my desperation. Still learning how to be a good citizen.
I'm not sure whether the code linked above will remain static as my project changes, or whether the link will always reflect the most current version. The takeaway: be careful not to abort your transfer at the wrong time.
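For anyone hitting the same symptom, here is a hypothetical sketch of the shape of the fix rather than the actual driver code; the register offsets and bit names below are invented, and the real ones are in the PLX 9080 datasheet linked above:

```c
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/types.h>

#define INT_STATUS   0x68          /* hypothetical register offsets and bits */
#define DMA_CMD      0xa8
#define DMA_DONE     (1 << 4)
#define DMA_ERROR    (1 << 5)
#define DMA_ABORT    (1 << 2)

static irqreturn_t dma_isr(int irq, void *dev_id)
{
	void __iomem *regs = dev_id;   /* assume the BAR mapping is passed as dev_id */
	u32 status = readl(regs + INT_STATUS);

	if (!(status & (DMA_DONE | DMA_ERROR)))
		return IRQ_NONE;       /* not our interrupt, nothing to do */

	if (status & DMA_ERROR) {
		/* Abort only on a genuine error, not on every interrupt. */
		writel(DMA_ABORT, regs + DMA_CMD);
		return IRQ_HANDLED;
	}

	/* DMA_DONE: acknowledge and let the waiting code continue. */
	writel(status, regs + INT_STATUS);
	return IRQ_HANDLED;
}
```

My bug was essentially of this kind: an abort condition fired at the wrong time and killed the DMA before it got going.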
I'm developing an application in LabVIEW on Windows. Starting a week ago, one test machine (a ToughBook, no less) was freezing up completely once every couple of days: no mouse cursor, taskbar clock frozen. So yesterday it was retired. But just now I've seen the same thing on another machine, also a laptop.
This is a pretty uncommon failure mode for PCs. I don't know much about Windows, but I'd expect it to indicate that the software stopped running so completely and suddenly that the kernel was unable to panic.
Is this an accurate assessment? Where do I begin to debug this problem? What controls the cursor in the Windows architecture — is it all kernel mode or is there a window server that might be getting choked by something? Would an unstable third-party hardware driver cause this, rather than a blue screen?
EDIT: I should add that the freezes don't necessarily happen while the code is running.
I'd certainly consider hardware and/or drivers as a possibility - perhaps you could say what hardware is involved?
You could test this by adding a 'debug mode' for each piece of hardware your LabVIEW code talks to, where you use, for example, a case structure to skip the actual I/O calls and return dummy data to the rest of the application. Make sure it's a similar amount of data to what the real device returns. You'll find this much easier if you've modularised your code into subVIs with clearly defined functions! If disabling the I/O calls to a particular piece of hardware stops the freezes, that would suggest the problem lies with that hardware or its driver.
It's hard to say what the problem is. Based on the symptoms, I would check for a possible memory leak (see whether your LabVIEW app's memory usage grows over time using Windows Task Manager).
I'd like to write a packet sniffer and editor for Windows. I want to be able to see the contents of all packets entering and leaving my system, and possibly modify them. Any language is fine, but I'd like it to run fast enough that it won't burden the system.
I've read a little about WinPcap but the documentation claims that you can't use WinPcap to create a firewall because it can't drop packets. What tools will help me write this software?
Been there, done that :-) Back in 2000, my first Windows program ever was a filter-hook driver.
What I did was implement the filter-hook driver and write a userspace application that prepared a filter table describing what to allow and what to disallow. Once you get past your initial set of blue screens (see below for my kernel-mode debugging tip), the filter-hook driver is quite easy to use: it hands each packet to a function you wrote and, depending on the return code, drops it or lets it pass.
Unfortunately, packets at that level are QUITE raw: fragments are not reassembled, and it looks more like the "network card" end of things (though without the Ethernet headers). So you'll have quite a hard time decoding the packets you want to filter with that solution.
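To give a feel for what that per-packet function ends up looking like, here is a rough sketch; the callback name, the FILTER_* return codes and the rule_allows() lookup are hypothetical stand-ins, not the real filter-hook prototype from the DDK:

```c
#include <stddef.h>

enum filter_action { FILTER_PASS, FILTER_DROP };

/* Minimal view of an IPv4 header; roughly what you get at this level. */
struct ipv4_hdr {
    unsigned char  ver_ihl;        /* version (4 bits) + header length (4 bits) */
    unsigned char  tos;
    unsigned short total_len;
    unsigned short id;
    unsigned short flags_frag;     /* fragment flags + fragment offset */
    unsigned char  ttl;
    unsigned char  protocol;       /* 6 = TCP, 17 = UDP */
    unsigned short checksum;
    unsigned int   src_addr;
    unsigned int   dst_addr;
};

/* Hypothetical lookup into the table prepared by the userspace app. */
int rule_allows(unsigned int src, unsigned int dst, unsigned char proto);

enum filter_action my_packet_filter(const unsigned char *pkt, size_t len)
{
    const struct ipv4_hdr *ip = (const struct ipv4_hdr *)pkt;

    if (len < sizeof(*ip))
        return FILTER_DROP;        /* too short to even carry an IP header */

    /*
     * Non-first fragments do not carry TCP/UDP ports, which is exactly
     * the "packets are QUITE raw" pain mentioned above: port-based
     * rules can only look at the first fragment.
     */
    return rule_allows(ip->src_addr, ip->dst_addr, ip->protocol)
               ? FILTER_PASS : FILTER_DROP;
}
```

The real interface has its own action codes and extra context parameters, but the core of it is that per-packet pass/drop decision on a raw IP packet.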
There is also the firewall-hook driver, as discussed in this CodeProject article.
If you are on Vista or Server 2008, you'd better have a look at the Windows Filtering Platform (WFP) instead; that seems to be the mandated API of the day for writing firewalls.
I don't know anything about it other than Google turning it up a few minutes ago when I searched for the filter-hook driver.
Update: I forgot the debug tip:
Sysinternals DbgView shows kernel-mode DbgPrint output and, more importantly, it can also read that output from the dump file your last blue screen produced. So sprinkle your code with DbgPrint calls, and if it bluescreens, just load the dump into DbgView to see what happened before it died... VERY useful. Using this, I managed without a kernel debugger.
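For example, breadcrumbs as simple as this are usually enough to reconstruct what the driver did right before the crash (the driver name and message format are placeholders):

```c
#include <ntddk.h>

/* Log each filtering decision so DbgView (live, or read back from the
 * crash dump) shows the last few packets handled before a bluescreen. */
VOID LogPacketDecision(ULONG length, BOOLEAN dropped)
{
    DbgPrint("myfilter: packet len=%lu -> %s\n",
             length, dropped ? "DROP" : "PASS");
}
```

Just keep in mind that heavy DbgPrint traffic has its own timing cost, so strip it out or compile it out once the driver is stable.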
I'm pretty sure you'd need to write a filter driver. http://en.wikipedia.org/wiki/Filter_driver I don't know much more than that :). It would definitely be a C/C++ Win32 app, and you'd likely be doing some kernel-side work. Start by downloading the DDK and finding some of the sample filter drivers.
If you just want to monitor what goes in and out of IIS, consider an ISAPI filter. Still C/C++ in Win32, but relatively easier than writing a device driver.
C# code to do this is here
I actually did this several years ago. I'm hazy on the details at this point, but I had to develop a filter/pass-through/intermediate driver using the Windows DDK. I got a lot of good information from pcausa. Here's a URL that points to their product which does this: http://www.pcausa.com/pcasim/Default.htm
If you're doing this for practical reasons, and not just for fun, then you should take a look at Microsoft Network Monitor. The home page talks about the version 3.3 beta, but you can download version 3.2 from the Downloads page. There is also an SDK for NM, and the ability to write parsers for your own network protocols.
There's a question you need to ask that you don't yet know you need to ask: do you want to know which applications the sockets belong to, or are you happy to be restricted to the IP:port quad for a connection?
If you want to know the applications, you need to write a TDI filter driver, but that makes handling the receive path almost impossible, since you can't block on receive.
If you're happy with IP:port, go in at the NDIS level, and I believe you can block on receive to your heart's content.
A word of warning: if you have no prior kernel experience, writing either of these drivers (although TDI is significantly harder) will take about two years, full time.
This may help you: TdiFw is a simple TDI-based open source personal firewall for Windows NT4/2000/XP/2003.
http://tdifw.sourceforge.net/