I would like to write a 3D program on XCB, and I would like to call my "render_frame" function from the frame throttle callback. However, I don't know what the frame throttle callback is in XCB. Is there any equivalent of “WM_PAINT” (Win32) / “wl_surface::frame” (Wayland) in XCB?
You might be looking for the Present extension. Google just gave me https://cgit.freedesktop.org/xorg/proto/presentproto/tree/presentproto.txt, but that describes version 1.0 of the extension. The current version is 1.2, so there should be newer documentation somewhere. I do not know what has been added since version 1.0 or whether you need those additions.
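I have not tried this myself, but in rough outline the flow with XCB looks something like the untested sketch below: select CompleteNotify events on the window, present once to get things going, and draw the next frame whenever a CompleteNotify arrives. render_frame() stands in for your own renderer, the connection/window setup is omitted, and the Present calls come from <xcb/present.h> (link against xcb-present).

#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>
#include <xcb/present.h>

extern void render_frame(void); /* your renderer (placeholder) */

static void present_loop(xcb_connection_t *conn, xcb_window_t win)
{
    const xcb_query_extension_reply_t *ext =
        xcb_get_extension_data(conn, &xcb_present_id);
    if (!ext || !ext->present) {
        fprintf(stderr, "Present extension not available\n");
        return;
    }

    /* Ask the server for CompleteNotify events on this window. */
    xcb_present_event_t eid = xcb_generate_id(conn);
    xcb_present_select_input(conn, eid, win,
                             XCB_PRESENT_EVENT_MASK_COMPLETE_NOTIFY);

    /* CompleteNotify is only delivered in response to your own PresentPixmap
     * or PresentNotifyMSC requests, so kick the loop off with one request. */
    xcb_present_notify_msc(conn, win, 0, 0, 0, 0);
    xcb_flush(conn);

    xcb_generic_event_t *ev;
    while ((ev = xcb_wait_for_event(conn))) {
        if ((ev->response_type & 0x7f) == XCB_GE_GENERIC) {
            xcb_ge_generic_event_t *ge = (xcb_ge_generic_event_t *)ev;
            if (ge->extension == ext->major_opcode &&
                ge->event_type == XCB_PRESENT_COMPLETE_NOTIFY) {
                /* The previous presentation has completed: draw and present
                 * the next frame (xcb_present_pixmap, or glXSwapBuffers if
                 * you render with OpenGL), which yields the next notify. */
                render_frame();
            }
        }
        free(ev);
    }
}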
I have an event from the realtime world which generates an interrupt. I need to register this event against one of the Linux kernel timescales, such as CLOCK_MONOTONIC or CLOCK_REALTIME, with the goal of establishing when the event occurred in real calendar time. What is the currently recommended way to do this? My Google search found some patches submitted back in 2011 to support it, but the interrupt-handling code has been heavily revised since then and I don't see a reference to timestamps anymore.
For my intended application the accuracy requirements are low (1 ms). Still, I would like to know how to do this properly. I should think it's possible to get into the microsecond range, if one can exclude the possibility of higher-priority interrupts.
If you need only low precision, you could get away with reading jiffies.
However, if CONFIG_HZ is less than 1000, you will not even get 1 ms resolution.
For a high-resolution timestamp, see how firewire-cdev.c does it:
case CLOCK_REALTIME: getnstimeofday(&ts); break;
case CLOCK_MONOTONIC: ktime_get_ts(&ts); break;
case CLOCK_MONOTONIC_RAW: getrawmonotonic(&ts); break;
If I understood your needs correctly, you can use the getnstimeofday() function for this purpose.
If you need a high-precision monotonic clock value (which is usually a good idea), you should look at the ktime_get_ts() function (defined in linux/ktime.h). getnstimeofday(), suggested in the other answer, returns the "wall" time, which may actually appear to go backward on occasion, resulting in unexpected behavior for some applications.
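To make that concrete, here is a rough, untested sketch of taking the timestamp directly in the interrupt handler. Names like my_irq_handler and my_dev are placeholders, and it assumes a kernel of that era where ktime_get_ts() / getnstimeofday() still take a struct timespec:

#include <linux/interrupt.h>
#include <linux/ktime.h>
#include <linux/time.h>

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    struct timespec ts;

    ktime_get_ts(&ts);  /* monotonic; use getnstimeofday(&ts) for wall-clock time */

    /* Stash the timestamp somewhere for later consumption, e.g. in a
     * per-device structure read out by the bottom half or by userspace. */
    /* my_dev->last_event_ns = timespec_to_ns(&ts); */

    return IRQ_HANDLED;
}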
Maybe I'm getting something wrong, but if I send the RollCommand it only moves a little; after a second it will stop.
The QuickStart guide says that it will roll until it hits something or gets out of range.
What am I not getting here?
Do I have to repeat my function every second?
- (void)rollforward {
    [RKRollCommand sendCommandWithHeading:90 velocity:0.5];
}
This is called Motion Timeout. It's one of five option flags that can be set as of firmware 1.20. These flags keep their state even after the Sphero is switched off.
More information on how to set and get the flags can be found in these sample projects:
iOS: https://github.com/orbotix/Sphero-iOS-SDK/tree/master/samples/OptionFlags
Android: https://github.com/orbotix/Sphero-Android-SDK/tree/master/samples/OptionFlags
The documentation says this flag is off by default, but maybe they changed this in a prior firmware update.
This wiki page on the OpenGL website claims that OpenGL 1.1 functions should NOT be loaded via wglGetProcAddress, and the wording seems to imply that some systems will by design return NULL if you try:
http://www.opengl.org/wiki/Platform_specifics:_Windows#wglGetProcAddress
(The idea being that only 1.2+ functions deserve loading by way of wglGetProcAddress).
The page does not tell us who reported these failed wglGetProcAddress calls on 1.1 functions, which I've never personally seen. And Google searches show next to no information on the issue either.
Would wglGetProcAddress() actually return NULL for 1.1 functions for enough users such that I should actually care? Or does it just fail for a select few unlucky users with really broken GPU drivers (in which case I don't much care).
Has anybody else come across this?
The question you should be asking yourself is whether it matters to you at all and whether you should care.
Loading the OpenGL 1.1 functions manually would mean that you have to use different function names, or they will collide with the declarations in GL/gl.h. Alternatively, you must define GL_NO_PROTOTYPES, but in that case you will not have OpenGL 1.0 functionality either.
So, in any case, doing this would mean extra trouble for no gain; you can simply use 1.1 functionality without doing anything.
Having said that, I tried this once because I thought it would be an ingenious idea to load everything dynamically (when I sobered up, I wondered what gave me that idea), and I can confirm that it does not (or at least, did not, two years ago) work with nVidia drivers.
Though, thinking about it, it's entirely justifiable, and even a good thing, that something that doesn't make sense doesn't work.
I technically answered this on the discussion page of that Wiki article, but:
Would wglGetProcAddress() actually return NULL for 1.1 functions for enough users such that I should actually care?
It will return NULL for all users. I have tried it on NVIDIA and ATI platforms (recent drivers and DX10 hardware), and all of them do it.
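For completeness, the usual way around this is to take the 1.0/1.1 entry points straight from opengl32.dll with GetProcAddress and use wglGetProcAddress only for everything newer. A rough, untested sketch (get_gl_proc is just a name I made up); note that it needs a current GL context on the calling thread:

#include <windows.h>
#include <GL/gl.h>

static void *get_gl_proc(const char *name)
{
    void *p = (void *)wglGetProcAddress(name);
    /* Some implementations reportedly return 1, 2, 3 or -1 instead of NULL on failure. */
    if (p == NULL || p == (void *)1 || p == (void *)2 ||
        p == (void *)3 || p == (void *)-1) {
        HMODULE gl32 = GetModuleHandleA("opengl32.dll");
        p = (void *)GetProcAddress(gl32, name);
    }
    return p;
}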
How does one go about setting the GLSL version on Mac? Is this even possible? I'm running a fragment shader and would like to create an array of vec3s, but the shader compiler is producing an error indicating that I need to use a higher GLSL version. The specific error is
'array of 3-component vector of float' : array type not supported here in glsl < 120
Thanks for the help.
Although I have no Mac experience, you can specify the minimum GLSL version required by your shader (which is 1.10 by default, I think) by using something like
#version 120 //shader requires version 1.20
as the first line of your shader. But of course the specified version also has to be supported by your hardware and driver, which you can check with glGetString(GL_SHADING_LANGUAGE_VERSION).
EDIT: I confirmed this with a look into the GLSL spec, which also says that all shaders that are linked together should target the same version. I'm fairly sure I once successfully violated this, but that may have been down to my forgiving nVidia driver. So if it still complains when linking, add the same #version tag to the vertex shader, too.
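If you want to check what the driver reports before picking a #version, a small untested C snippet along these lines works. It assumes a current GL context; on the Mac the header is <OpenGL/gl.h>, elsewhere it is <GL/gl.h> (older Windows headers may need glext.h for the enum):

#include <stdio.h>
#include <OpenGL/gl.h>

void print_glsl_version(void)
{
    const GLubyte *gl   = glGetString(GL_VERSION);
    const GLubyte *glsl = glGetString(GL_SHADING_LANGUAGE_VERSION);
    printf("GL %s, GLSL %s\n",
           gl   ? (const char *)gl   : "(unavailable)",
           glsl ? (const char *)glsl : "(unavailable)");
}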
I'm looking for an OSX (or Linux?) application that can receive data from a webcam/video input and let you do some image processing on the pixels in something similar to C, Python, or Perl; I'm not that bothered about the processing language.
I was considering throwing one together, but figured I'd try to find one that already exists before I start reinventing the wheel.
I want to do some experiments with object detection and with reading dials and numbers.
If you're willing to do a little coding, you want to take a look at QTKit, the QuickTime framework for Cocoa. QTKit will let you easily set up an input source from the webcam (intro here). You can also apply Core Image filters to the stream (demo code here). If you want to use OpenGL to render or apply filters to the movie, check out Core Video (examples here).
Using the MyMovieFilter demo should get you up and running very quickly.
Found a cross-platform tool called 'Processing'; I actually ran the Windows version to avoid further complications getting the webcams to work.
Had to install QuickTime and something called gVid to get it to work, but after the initial hurdle the coding seems like C (I think it gets "compiled" into Java), and it runs quite fast, even scanning pixels from the webcam in real time.
Still have to get it working on OSX.
Depending on what processing you want to do (i.e. if it's a filter that's available in Apple's Core Image filter library), the built-in Photo Booth app may be all you need. There's a commercial set of add-on filters available from the Apple store as well (http://www.apple.com/downloads/macosx/imaging_3d/composerfxeffectsforphotobooth.html).