Framebuffer objects are not supported by this hardware (or driver) - processing

I'm trying to run OpenKinect using Processing. I'm pretty new to Processing and I'm not really sure what is happening. I followed the code from this page, but I seem to be getting the error below.
java.lang.RuntimeException: Framebuffer objects are not supported by this hardware (or driver) Read http://wiki.processing.org/w/OpenGL_Issues for help.
at processing.opengl.PJOGL.init(PJOGL.java:428)
at processing.opengl.PSurfaceJOGL$DrawListener.init(PSurfaceJOGL.java:889)
at jogamp.opengl.GLDrawableHelper.init(GLDrawableHelper.java:644)
at jogamp.opengl.GLDrawableHelper.init(GLDrawableHelper.java:667)
at jogamp.opengl.GLAutoDrawableBase$1.run(GLAutoDrawableBase.java:431)
at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1291)
at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:1147)
at com.jogamp.newt.opengl.GLWindow.display(GLWindow.java:759)
at com.jogamp.opengl.util.AWTAnimatorImpl.display(AWTAnimatorImpl.java:81)
at com.jogamp.opengl.util.AnimatorBase.display(AnimatorBase.java:452)
at com.jogamp.opengl.util.FPSAnimator$MainTask.run(FPSAnimator.java:178)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
How do I know which driver I need to install (if there is one I need to install at all)?

Related

Making a virtual IOPCIDevice with IOKit

I have managed to create a virtual IOPCIDevice which attaches to IOResources and basically does nothing. I'm able to get existing drivers to register and match against it.
However, when it comes to I/O handling, I have some trouble. I/O access through the functions described in the IOPCIDevice class (e.g. configRead, ioRead, configWrite, ioWrite) can be handled by my own code, but drivers that use memory mapping and IODMACommand are the problem.
There seem to be two things that I need to manage: IODeviceMemory (described in IOPCIDevice) and DMA transfers.
How could I create an IODeviceMemory that ultimately points at ordinary memory/RAM, so that when a driver tries to communicate with the PCI device, it either does nothing or just moves the data to RAM, where my userspace client can handle it and act as an emulated PCI device?
And could DMA commands also be directed to my userspace client without interfering with the source code of existing drivers that use IODMACommand?
Thanks!
Trapping memory accesses
So in theory, to achieve what you want, you would need to allocate a memory region, set its protection bits to read-only (or possibly neither read nor write, if a read of the device you're simulating has side effects), and then trap any writes into your own handler function, where you'd then simulate the device register writes.
As far as I'm aware, you can do this sort of thing in macOS userspace using Mach exception handling. You'd need to set things up so that page protection fault exceptions from the process you're controlling get sent to a Mach port you control. In that port's message handler, you'd:
Check where the access was going.
If it's the device memory, suspend all the threads of the process.
Switch the thread the write is coming from to single-step, and temporarily allow writes to the memory region.
Resume the writer thread.
Trap the single-step message. Your "device memory" now contains the written value.
Perform your "device's" side effects.
Turn off single-step in the writer thread.
Resume all threads.
As I said, I believe this can be done in user space processes. It's not easy, and you can cobble together the Mach calls you need to use from various obscure examples across the web. I got something similar working once, but can't seem to find that code anymore, sorry.
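As a rough illustration of the setup only (a minimal userspace sketch, shown in-process for simplicity and assuming macOS; the exception-message loop and the suspend/single-step/re-protect dance from the list above are left out, and would live in a handler driven by mach_msg() and the mach_exc MIG interface):

    #include <mach/mach.h>
    #include <mach/mach_vm.h>
    #include <mach/vm_statistics.h>
    #include <cstdio>

    static mach_vm_address_t g_device_mem = 0;
    static const mach_vm_size_t kRegionSize = 0x1000;

    int main() {
        // 1. Allocate the fake "device memory" and make it read-only so that
        //    writes to it raise EXC_BAD_ACCESS.
        kern_return_t kr = mach_vm_allocate(mach_task_self(), &g_device_mem,
                                            kRegionSize, VM_FLAGS_ANYWHERE);
        if (kr != KERN_SUCCESS) return 1;
        kr = mach_vm_protect(mach_task_self(), g_device_mem, kRegionSize,
                             FALSE, VM_PROT_READ);
        if (kr != KERN_SUCCESS) return 1;

        // 2. Create a Mach port and register it as the task's handler for
        //    bad-access exceptions.
        mach_port_t exc_port = MACH_PORT_NULL;
        kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &exc_port);
        if (kr != KERN_SUCCESS) return 1;
        kr = mach_port_insert_right(mach_task_self(), exc_port, exc_port,
                                    MACH_MSG_TYPE_MAKE_SEND);
        if (kr != KERN_SUCCESS) return 1;
        kr = task_set_exception_ports(mach_task_self(), EXC_MASK_BAD_ACCESS,
                                      exc_port, EXCEPTION_DEFAULT,
                                      MACHINE_THREAD_STATE);
        if (kr != KERN_SUCCESS) return 1;

        // 3. A real implementation would now run a mach_msg() receive loop on
        //    exc_port and perform the suspend / single-step / re-protect steps
        //    inside the exception handler.
        std::printf("fake device memory at 0x%llx, exception port %u ready\n",
                    (unsigned long long)g_device_mem, exc_port);
        return 0;
    }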
… in the kernel
Now, the other problem is you're trying to do this in the kernel. I'm not aware of any public KPIs that let you do anything like what I've described above. You could start looking for hacks in the following places:
You can quite easily make IOMemoryDescriptors backed by system memory. Don't worry about the IODeviceMemory terminology: these are just IOMemoryDescriptor objects; the IODeviceMemory class is a lie. Trapping accesses is another matter entirely. In principle, you can find out what virtual memory mappings of a particular MD exist using the "reference" flag to the createMappingInTask() function, and then call the redirect() method on the returned IOMemoryMap with a NULL backing memory argument. Unfortunately, this will merely suspend any thread attempting to access the mapping. You don't get a callback when this happens.
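For the first of these, a rough kernel-side sketch of what that looks like (untested; clientTask is a hypothetical task_t for whichever process has the mapping, and error paths are trimmed):

    #include <IOKit/IOBufferMemoryDescriptor.h>
    #include <IOKit/IOMemoryDescriptor.h>

    static IOMemoryMap *createFakeBar(task_t clientTask)
    {
        // Plain wired RAM standing in for a PCI BAR; no IODeviceMemory required.
        IOBufferMemoryDescriptor *bar0 =
            IOBufferMemoryDescriptor::withCapacity(PAGE_SIZE, kIODirectionInOut);
        if (bar0 == NULL) {
            return NULL;
        }

        // Hand the client its "device memory" mapping.
        IOMemoryMap *clientMap = bar0->createMappingInTask(clientTask, 0, kIOMapAnywhere);
        if (clientMap == NULL) {
            return NULL;
        }

        // Later, look the same mapping up again via the "reference" flag...
        IOMemoryMap *refMap = bar0->createMappingInTask(clientTask,
                                                        clientMap->getAddress(),
                                                        kIOMapReference);

        // ...and pull the backing memory out from under it. Threads touching the
        // mapping now simply block; there is no callback, which is exactly the
        // limitation described above.
        if (refMap != NULL) {
            IOMemoryDescriptor *noBacking = NULL;
            refMap->redirect(noBacking, 0);
        }
        return clientMap;
    }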
You could dig into the guts of the Mach VM memory subsystem, which mostly lives in the osfmk/vm/ directory of the xnu source. Perhaps there's a way to set custom fault handlers for a VM region there. You're probably going to have to get dirty with private kernel APIs though.
Why?
Finally, why are you trying to do this? Take a step back: what is it you're ultimately trying to achieve? Simulating a PCI device in this way doesn't seem like an end in itself, so is it really the only way to reach the greater goal you're ultimately after? See: XY problem

How does FILE_FLAG_NO_BUFFERING interact with handles opened to communication devices?

Just as the title says, I am writing a networking program where I open a handle to a network driver using CreateFile, and I have been experimenting with the NO_BUFFERING flag.
Most documentation won't even mention using it with communication devices, and the sources that do (i.e. the MSDN reference, etc.) simply mention that you can.
Does anyone have any idea how this may affect communication with the device?
It is a device driver implementation detail: the options you specify in the CreateFile() call are passed to the driver in the IRP_MJ_CREATE request. The one I linked is the one for file systems; it is a very fancy one. Click through the IrpSp->Parameters.Create.Options link to the Options argument of IoCreateFileSpecifyDeviceObjectHint() to see FILE_NO_INTERMEDIATE_BUFFERING.
The documentation for IRP_MJ_CREATE for serial ports is here. A very simple one, no arguments at all :) In general, the winapi-to-device-driver interface for communication ports is very straightforward. There's an (almost) direct mapping between a documented winapi function and its underlying IOCTL; the winapi function doesn't do much beyond basic error checking, then quickly passes the job to the driver.
So there isn't any way to pass along the FILE_FLAG_NO_BUFFERING option you specify; it simply doesn't get used.
That is also the logical conclusion: serial port I/O is interrupt driven, so the driver must buffer in order not to lose bytes and to keep an acceptable transfer rate. You can technically tinker with the buffer sizes through SetupComm(), but, as documented, it is only a recommendation, with pretty high odds that the driver simply ignores very low values.
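A minimal Win32 sketch of what that means in practice (COM3 and the queue sizes are arbitrary assumptions): CreateFile() accepts the flag but, per the above, it never reaches the serial driver, while SetupComm() is the documented, advisory way to influence the driver's buffering.

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Open a serial port with FILE_FLAG_NO_BUFFERING; the flag is accepted
        // here but is not passed on to the serial driver, so it changes nothing.
        HANDLE h = CreateFileW(L"\\\\.\\COM3",
                               GENERIC_READ | GENERIC_WRITE,
                               0,                       // serial ports are not shareable
                               nullptr,
                               OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING,
                               nullptr);
        if (h == INVALID_HANDLE_VALUE) {
            std::printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        // The supported knob: recommend receive/transmit queue sizes to the driver.
        // As documented, this is advisory; very small values may simply be ignored.
        if (!SetupComm(h, 4096, 4096)) {
            std::printf("SetupComm failed: %lu\n", GetLastError());
        }

        CloseHandle(h);
        return 0;
    }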

Event using FTD2XX_NET.DLL

I am using an FT232RL chip with FTD2XX_NET.dll. I've made a program which writes data to and reads data from an AVR ATmega32 MCU: it first writes data, then reads the answer.
Now I want an event which tells me when there is unread data available, only when the AVR sends data to the FTDI buffer and ONLY then, without forcing my program to loop and poll for available data. For my purpose, the MCU should send data only when it wants to, and the PC must know when there is new data in the FTDI chip's buffer.
I know it's impossible for the PC to know the moment the AVR is sending data to the FTDI. What I mean is that I need some way for my program to know when the FTDI has new unread data in its own buffer.
I don't want to run the read operation over and over in an infinite loop as I do now.
You should create a read thread which does your reading in the background. Then, from that thread, you can signal an event to notify another part of your application when you have data. I'm not sure what language you are using, but you should easily be able to find an example of threading and event notification with a Google search.
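As a language-agnostic illustration of that pattern (the question uses the .NET wrapper, but the structure is the same in any language), here is a C++ sketch; readFtdi() is a hypothetical stand-in for the blocking read call into the FTDI library.

    #include <atomic>
    #include <condition_variable>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex bufferMutex;
    std::condition_variable dataReady;
    std::vector<unsigned char> rxBuffer;
    std::atomic<bool> running{true};

    // Hypothetical: blocks until the FTDI chip has bytes, then returns them.
    std::vector<unsigned char> readFtdi();

    void readerThread() {
        while (running) {
            std::vector<unsigned char> chunk = readFtdi();  // block in the background
            {
                std::lock_guard<std::mutex> lock(bufferMutex);
                rxBuffer.insert(rxBuffer.end(), chunk.begin(), chunk.end());
            }
            dataReady.notify_one();  // the "event": wake whoever is waiting for data
        }
    }

    void waitForData() {
        std::unique_lock<std::mutex> lock(bufferMutex);
        dataReady.wait(lock, [] { return !rxBuffer.empty(); });
        // rxBuffer now holds the unread bytes from the AVR; process and clear it here.
        rxBuffer.clear();
    }

If you would rather let the driver do the waking, the D2XX API also offers FT_SetEventNotification (exposed by the .NET wrapper as SetEventNotification), which signals an event handle when receive data arrives.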

irp processing and windows message generating

I'm new to drivers, so excuse any possible inaccuracies.
MSDN, as well as some books about driver design, gives directions on how to use the WDM API, but I can't find literature or resources with a solid description of how an ISR ends up as a final Windows message.
For example, take the keyboard: a device interrupt is raised, the I/O manager creates an IRP and starts passing it down the driver stack, and every filter or function driver can modify the IRP it has just received. But what is supposed to happen at the end of this process? Which layer or driver takes the parsed IRP, transforms it into a Windows message, and puts it into the OS input queue?
The data completed by the keyboard driver stack is picked up by the window manager's raw input thread, which translates it and posts the resulting keyboard messages to the input queue of the focused thread. The relevant documentation:
Raw input thread (data received from the driver)
Overview of how Windows processes keyboard input
Keyboard Input Model

Converting glReadBuffer() / glDrawBuffer() calls into OpenGL ES

I'm having trouble understanding how to port glReadBuffer() and glDrawBuffer() calls to OpenGL ES 1.1. Various forum posts on the internet just say "use VBOs," without going into more depth.
Can you please help me understand an appropriate conversion? Say I have:
glReadBuffer(GL_FRONT);
followed by
glDrawBuffer(GL_BACK_LEFT);
state->paint(state_id, f);
How can I write the pixels out?
glReadBuffer and glDrawBuffer just select the source and target buffers for subsequent read and draw operations. Assuming you're targeting a monoscopic device, such as an iPhone or an Android device, and have requested two buffers, you're already set up for drawing to the back buffer. The only means of reading the colour buffer in GL ES is glReadPixels, which will read from the same buffer that you're drawing to.
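So the glReadBuffer/glDrawBuffer pair simply drops out. Getting the pixels back in ES 1.1 would look something like this (a minimal sketch, with width and height assumed to be the drawable's dimensions):

    #include <GLES/gl.h>
    #include <vector>

    // Read back the colour buffer that was just drawn to.
    std::vector<GLubyte> readColourBuffer(GLint width, GLint height) {
        std::vector<GLubyte> pixels(static_cast<size_t>(width) * height * 4);
        // GL_RGBA + GL_UNSIGNED_BYTE is the read format every ES implementation
        // is required to support.
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        return pixels;  // rows come back bottom-to-top, as in desktop GL
    }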
All of these are completely unrelated to VBOs, which pass off management of arrays of data to the driver, often implicitly allowing them to be put into the GPU's direct address space.
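For comparison, and only because those forum posts keep mentioning them, a minimal ES 1.1 VBO sketch looks like this; the triangle data is just an example:

    #include <GLES/gl.h>

    static const GLfloat triangle[] = {
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.0f,  0.5f,
    };

    GLuint createTriangleVbo() {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Hand the vertex array to the driver so it can live in GPU-addressable memory.
        glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
        return vbo;
    }

    void drawTriangle(GLuint vbo) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, (const GLvoid *)0);  // offset into the bound VBO
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }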
