SPI_GETMOUSE
Retrieves the two mouse threshold values and the mouse acceleration. The pvParam parameter must point to an array of three integers that receives these values.
MSDN gives this additional information about the two mouse thresholds:
The system applies two tests to the specified relative mouse motion when applying acceleration. If the specified distance along either the x or y axis is greater than the first mouse threshold value, and the mouse acceleration level is not zero, the operating system doubles the distance. If the specified distance along either the x- or y-axis is greater than the second mouse threshold value, and the mouse acceleration level is equal to two, the operating system doubles the distance that resulted from applying the first threshold test. It is thus possible for the operating system to multiply relatively-specified mouse motion along the x- or y-axis by up to four times.
What are the "specified relative mouse motion", "specified distance", and "relatively-specified mouse motion" here, given that no distance or motion is specified before the mouse moves? And how are they determined?
The mouse_event function that you linked to is used to simulate mouse motion and button clicks. So you can call this in a program to move the mouse and/or click mouse buttons without the user doing so.
There is a set of flags passed to the mouse_event function, along with x and y values (and other stuff not relevant to your question). One of the flag values - MOUSEEVENTF_ABSOLUTE - specifies that the x and y values you pass to mouse_event are absolute. If that flag is not set, then the x and y are relative values.
So the terms you're asking about - the specified relative mouse motion, for example - just refer to the x and y values passed to mouse_event when the MOUSEEVENTF_ABSOLUTE flag is not set.
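Here is a minimal Win32 C++ sketch that ties the two together: it reads the three SPI_GETMOUSE values with SystemParametersInfo and then injects a relative move with mouse_event. The dx/dy in that last call are exactly the "specified relative mouse motion" that the thresholds are tested against. The concrete values are just for illustration, and mouse_event is used here only because it is what the question refers to (SendInput is the newer API).
#include <windows.h>
#include <cstdio>

int main()
{
    // [0] = first threshold, [1] = second threshold, [2] = acceleration level
    int mouseParams[3] = { 0, 0, 0 };

    if (SystemParametersInfo(SPI_GETMOUSE, 0, mouseParams, 0))
    {
        std::printf("threshold1=%d threshold2=%d acceleration=%d\n",
                    mouseParams[0], mouseParams[1], mouseParams[2]);
    }

    // Relative motion: MOUSEEVENTF_ABSOLUTE is not set, so dx=10, dy=5 are
    // relative values - the distances the two thresholds are compared against.
    mouse_event(MOUSEEVENTF_MOVE, 10, 5, 0, 0);

    return 0;
}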
I am using two cameras, without lenses or any other special settings, in Webots to measure the position of an object. To do the localization, I need to know the focal length, which is the distance from the camera center to the center of the imaging plane, namely f. I see the focus parameter in the Camera node, but when I leave it at its default of NULL the image is still normal, so I assume this parameter is unrelated to f. In addition, I need to know the width and height of a pixel in the image, namely dx and dy respectively, but I have no idea how to get this information.
This is the calibration model I used, where c denotes the camera coordinate frame and w the world coordinate frame. I need to calculate xw, yw, zw from u, v. For an ideal camera, gamma is 0 and u0, v0 are just half of the resolution, so my problem comes down to finding fx and fy.
The first important thing to know is that in Webots pixels are square, so dx and dy are equal.
Then, in the Camera node you will find a 'fieldOfView' field which gives you the horizontal field of view; using the resolution of the camera you can then compute the vertical field of view too:
2 * atan(tan(fieldOfView * 0.5) / (resolutionX / resolutionY))
Finally, you can also get the near projection plane from the 'near' field of the Camera node.
Note also that Webots cameras are regular OpenGL cameras; you can therefore find more information about the OpenGL projection matrix here, for example: http://www.songho.ca/opengl/gl_projectionmatrix.html
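To make the link to fx and fy explicit, here is a small sketch in plain C++, assuming the usual pinhole model with square pixels and the principal point at the image center; the variable names and example values are illustrative, not Webots API calls:
#include <cmath>
#include <cstdio>

int main()
{
    const double fieldOfView = 0.785398;  // horizontal FOV in radians (example value)
    const double resolutionX = 640.0;     // image width in pixels
    const double resolutionY = 480.0;     // image height in pixels

    // Vertical field of view, using the formula above.
    const double verticalFOV =
        2.0 * std::atan(std::tan(fieldOfView * 0.5) / (resolutionX / resolutionY));

    // Focal lengths in pixel units: half the image size over tan(half the FOV).
    // With square pixels fx == fy, so dx and dy drop out of the model.
    const double fx = (resolutionX * 0.5) / std::tan(fieldOfView * 0.5);
    const double fy = (resolutionY * 0.5) / std::tan(verticalFOV * 0.5);

    // Principal point of an ideal camera: the image center.
    const double u0 = resolutionX * 0.5;
    const double v0 = resolutionY * 0.5;

    std::printf("vFOV=%f fx=%f fy=%f u0=%f v0=%f\n", verticalFOV, fx, fy, u0, v0);
    return 0;
}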
Is it possible to get the position of an ltk (a basic Common Lisp GUI library) window (one of its corners), in pixels from the top left corner of the screen?
I'm trying to use mouse movement to control an applet I'm making (details here), but I can only find the mouse's position relative to the window, and I can only set it relative to the screen itself. I want to hide the cursor and return it to a fixed point after every move, noting how it has moved. I need to know the window position to correct for the different measurements.
The manual gives several options for manipulating the toplevel, such as moving the window around or finding its position and dimensions. The particular expression needed here is (geometry *tk*), which returns a list of the window's x and y position and size.
I have two IMUs (inertial measurement units) and I want to calculate their relative rotation. Unfortunately, the IMUs output both quaternions relative to the global frame (I'm assuming that's how quaternions work), whereas I need the rotation of one of the sensors relative to the other, all the while both sensors have been rotated away from their initial orientation in the global axes.
For example: I have one sensor attached to the chest and the other attached to the arm. Both sensors are calibrated to the global axes. If I maintain that orientation, I can calculate the rotations just fine. However, when I rotate my body to a different orientation (90 degrees to the right) and perform the same movement, the sensors are rotating around their local axes but outputting quaternions relative to the global axes (a rotation about the sensor's y-axis is output as a rotation around the global x-axis).
I want the same movements to produce the same quaternions (and thus show the same rotations) regardless of my orientation (lying down, facing left, right, front or backwards).
Basically, I want one sensor to act as the rotating "reference" frame, and I want to measure rotational changes with the other sensor relative to that reference sensor (the rotating reference axes).
The transformation from one frame Q1 to another frame Q2 is simply:
Q1intoQ2 = q1.inversed() * q2;
// check: q1 * Q1intoQ2 == q1 * q1.inversed() * q2 == q2
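As a self-contained illustration, here is the same computation with a hand-rolled unit-quaternion type in C++ (not tied to any particular IMU library; the example values are placeholders, and for unit quaternions the inverse is just the conjugate):
#include <cstdio>

struct Quat { double w, x, y, z; };

// Hamilton product a * b
Quat multiply(const Quat& a, const Quat& b)
{
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

// Conjugate == inverse for a unit quaternion
Quat conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

int main()
{
    // q1: chest (reference) sensor, q2: arm sensor, both relative to the
    // global frame (example values; normally read from the IMUs).
    const Quat q1 = { 0.7071, 0.0, 0.7071, 0.0 };
    const Quat q2 = { 1.0,    0.0, 0.0,    0.0 };

    // Rotation of the arm expressed in the chest sensor's frame.
    const Quat q1intoQ2 = multiply(conjugate(q1), q2);

    std::printf("w=%f x=%f y=%f z=%f\n", q1intoQ2.w, q1intoQ2.x, q1intoQ2.y, q1intoQ2.z);
    return 0;
}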
I'm trying to build a stock chart with zooming functionality using D3.js
I'm looking to start with this example here and attempt to make the zoom feel more natural for a stock chart. A perfect example is this. The difference, as far as I understand it, is that zooming and panning are both locked on the Y-axis, and the only way the Y-axis moves is to automatically fit the price range of the currently visible data.
Another noticeable difference is that zooming does not zoom into the current position of the mouse like it does in the first example.
How can the example be adjusted to work more like the other chart? What is the pertinent code, and how should it be changed?
Setting the zoom behaviour to not affect the y-axis is simple: just don't attach your y-scale to the zoom behaviour.
In the sample code you linked to, the zoom functionality is added in this line:
this.plot.call(d3.behavior.zoom()
.x(this.x)
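// leaving out this next .y(this.y) call keeps zooming/panning from changing the y domain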
.y(this.y)
.on("zoom", this.redraw() )
);
That creates a zoom behaviour object/function, links it to the graph's x and y scales, and tells it to call the function returned by this.redraw() after every zoom event. The zoom behaviour automatically changes the domain of the scales on zoom, and the redraw function then redraws the chart from the modified scales. If you don't give it a y scale to modify, zooming won't affect the y domain.
Getting the y scale to automatically adjust to the extent of the visible data is a little trickier. However, remember that the zoom behaviour will have automatically adjusted the domain of the x scale to represent the extent of visible data horizontally. You'll then have to get the corresponding slice of your data array, figure out the extent of y values in it, set your y domain accordingly, and then call the redraw function (remembering that this.redraw() just returns the redraw function; to call it within another function you'll need to use this.redraw()() ).
To have the zoom be independent of the mouse position, set a center point for the zoom behaviour.
I have a profile of a mountain in my game, and I need Corona to be able to discern between the user pressing (a touch event) on the mountain and pressing on the valley in between the peaks (an alpha channel is used to create the shape). It seems that Corona treats a display object as a rectangle for this purpose, so my need cannot be satisfied by any means I have found.
Corona's physics functionality allows you to create complex polygons to mimic arbitrary shapes for collision handling, but I have found no similar method for buttons.
Any ideas?
It's not automatic, but here's a solution you can try that involves a little setup and code. Shouldn't be too difficult.
Test the location of the touch in your event listener by inspecting the event.x and event.y parameters. You could make this efficient by creating a table that holds the left-most and right-most x values for each strip of, say, 10 pixels from the top to the bottom of your object. For example, consider this mountain:
Use the y coordinate of the bottom of each light blue rectangle as the index into the table, and load the left x and right x values into that entry, e.g.:
hitTable[120] = {245,260}
hitTable[130] = {230,275}
and so on...
Then, in the touch event listener, snap the event.y parameter to one of your table indices, either with a function or by just testing which index it's closest to. Then, using that table entry, see if event.x is between the x coordinates you've specified for that y coordinate. If not, just ignore the touch.
You can even build the table and assign it as a property of the image itself like this:
hitTable = {}
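-- key: bottom y of each 10 px strip; value: { left-most x, right-most x } for that strip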
hitTable[120] = {245,260}
hitTable[130] = {230,275}
... and so on, then ...
myMountain.hitTable = hitTable
Once you've done that, you can access the table in the touch event listener as event.target.hitTable.
Could you create the mountain peaks with a 90-degree tip? Then, if you split the peaks up and rotated them 45 degrees, they would fit into a square shape. Once you exported each of them, you could import them into Corona and rotate them back 45 degrees. I haven't tested this, but I'm imagining it might work :)