I am currently coding a little AR-Game for myself on the iPhone 3GS ;-)
I want to use the accelerometer & compass data to rotate my camera in OpenGL. The camera has a fixed position and can only rotate based on the accelerometer. The iPhone is initially rotated 90° for a bigger widescreen view ;-) So the axes are switched...
When I hold the iPhone straight in front of me I get these values:
accel.x = 1
accel.y = 0
accel.z = 0
When I move the iPhone straight above me (over my head) I get these values:
accel.x = 0
accel.y = 0
accel.z = 1
So the values are between :
x: 1 straight ahead and 0 over my head
y: 0 straight ahead and 0 over my head
z: 0 straight ahead and 1 over my head
I want to use x, y, z for my camera world coordinates. E.g. accel.x = 0.5 and accel.z = 0.5
The camera should change the centerX, centerY and centerZ values based on the values I get from the accelerometer.
How can I manage this?
Thanks ;-)
If you're getting X=1 with the phone straight ahead of you, it sounds like you're holding the phone in landscape, with the button to the right, is that correct?
The accelerometer reports gravitational force, so X=1 means that gravity is pulling straight DOWN the X-axis of the screen.
This means that your +X direction (pixels) on the screen is really the +Y direction (gl drawing) in your OpenGL world, which corresponds to a -X accelerometer reading.
In this orientation, moving the phone to your left & right will give +Y and -Y accelerometer readings, respectively. How strong the readings are depends on how much force you move the phone with. If you move slowly, the reading may be indistinguishable from normal noise in the data. If you move at about the speed the phone would fall if you dropped it, your reading will be close to "1" ("1 G-force" or "1 gravity"). If you move it faster than that, you will get higher readings.
NOTE: if you move the phone and then stop moving, that part of the reading will revert to 0. The accelerometer measures acceleration ("the acceleration due to gravity"!), not facing. This is why the compass becomes important.
If you tilt the phone screen-down or screen-up, the accelerometer's Z-reading will change, as gravity pulls "into" or "away from" the screen. With the screen perfectly vertical (as when holding the phone straight in front of your face, in any orientation), the accelerometer Z-reading should be 0.
How you translate that orientation data to your camera depends on your app. Just keep in mind that "steering" the camera by moving the phone will be tricky to get right, as you have to account for both acceleration AND deceleration of the user's movements!
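As one possible starting point, here is a minimal sketch (written in TypeScript for readability; the question's Objective-C/OpenGL ES code would do the same math). It assumes the question's landscape axis convention (x = 1 straight ahead, z = 1 overhead) and uses a made-up low-pass filter constant to separate gravity from hand jitter:

// Low-pass filter the raw samples so only the gravity component remains
const kFilter = 0.1; // smoothing factor: an assumption to tune
let gx = 0, gy = 0, gz = 0; // filtered gravity estimate

function onAccelerometer(ax: number, ay: number, az: number): void {
  gx = ax * kFilter + gx * (1 - kFilter);
  gy = ay * kFilter + gy * (1 - kFilter);
  gz = az * kFilter + gz * (1 - kFilter);
}

// Derive a look-at point one unit from the eye, using the question's
// convention that x points ahead and z points up.
function lookAtPoint(eyeX: number, eyeY: number, eyeZ: number) {
  const len = Math.sqrt(gx * gx + gy * gy + gz * gz) || 1;
  return {
    centerX: eyeX + gx / len,
    centerY: eyeY + gy / len,
    centerZ: eyeZ + gz / len,
  };
}

Without the filter, every hand tremor shows up directly in the camera direction; with it, the camera settles smoothly.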
To help you visualize this, check out Xcode's sample project "Accelerometer." Just search for that word in the documentation, and you'll find a sample project you can build and play with to get an idea of what each axis means.
Good luck!
Related
I am building an application in AFrame and I want to constrain the viewer's movement, that is, I want to limit where the camera can go in the scene. For example, I have an a-plane that is the floor, and I want the camera to stop moving when it reaches 0 on the Z axis (to stop it from going through the floor), or to stop again if it reaches 20 on the Z axis. I also wish to limit the movement in the x and y directions. There are no obstacles in the scene besides the a-plane. Is creating a navigation mesh my only option, or is there an easier way to constrain movement? Thanks!
I don't know of built-in tools to do this, but you could do it with programming (this sounds pretty easy). You could create a custom component, attached to the camera, with a tick handler that records the camera's world-space position each frame and stores it in a variable (camPosPrevFrame). Then create a function to test whether the current position is outside the bounds. If so, set the camera's coordinate on the axis that has exceeded its limit back to the previously recorded position (camPosPrevFrame). If you are simply testing whether the camera is on one side of an orthogonal plane (say the world-space xy plane), that is pretty simple math (camera.getWorldPosition.x > someAmount). If you have a more complex situation, there are ways to test if a point is on either side of any arbitrary plane (it involves the dot product).
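For illustration, here is a minimal sketch of such a component in TypeScript (a simpler variant that clamps the position directly each tick instead of restoring the previous frame's position); the bounds in the schema defaults are placeholder values:

// Clamp the camera's position to an axis-aligned box every frame
declare const AFRAME: any; // provided by the aframe script tag

AFRAME.registerComponent('clamp-position', {
  schema: {
    min: { type: 'vec3', default: { x: -20, y: 0, z: 0 } },  // placeholder bounds
    max: { type: 'vec3', default: { x: 20, y: 10, z: 20 } }, // placeholder bounds
  },
  tick: function () {
    const p = this.el.object3D.position;
    // pull each axis back inside the box if it has exceeded its limit
    p.x = Math.min(Math.max(p.x, this.data.min.x), this.data.max.x);
    p.y = Math.min(Math.max(p.y, this.data.min.y), this.data.max.y);
    p.z = Math.min(Math.max(p.z, this.data.min.z), this.data.max.z);
  },
});

You would attach it to the camera entity, e.g. <a-entity camera look-controls wasd-controls clamp-position="min: -20 0 0; max: 20 10 20"></a-entity>.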
I'm using Tango motion tracking, and it is very easy to get the pose of the device relative to the TANGO_START_OF_SERVICE frame. The translation works fine for me, but I'd like the orientation to be aligned with gravity, so that the yaw and roll angles are measured relative to gravity rather than to the arbitrary pose at which the Tango service started. I'm fine with an arbitrary azimuth angle.
I can do this by using the accelerometer data to get the absolute orientation at one point in time and then use that going forward, but is there an easier way?
I think the Z axis of TANGO_COORDINATE_FRAME_CAMERA_DEPTH frame is always aligned with gravity.
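If you do go the accelerometer route, the math is framework-independent (this is plain trigonometry, not Tango API); a sketch in TypeScript, assuming (ax, ay, az) is a sample taken while the device is at rest:

// Pitch and roll that align the device frame with gravity, from one
// accelerometer sample at rest; yaw (azimuth) stays arbitrary, which
// matches the question's requirements. Angles are in radians.
function gravityAlignedAngles(ax: number, ay: number, az: number) {
  const pitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az));
  const roll = Math.atan2(ay, az);
  return { pitch, roll };
}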
I am making a simple JavaScript game using Phaser and it is going pretty well. I am currently having a few issues with the rotation of the tank, however.
Issue 1:
When shooting the bullet (left click) the speed is dependent on the distance from the tank to the mouse. The code I am currently using to get the rotation of the tank involves subtracting the mouse x and y from the tank x and y. The bullet receives the velocity from these values. How would I go about keeping the velocity consistent no matter what the distance of the mouse is?
Phaser has a native method for this, which I did not know about.
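For anyone else hitting this: the usual fix is to normalize the tank-to-mouse vector and scale it by a fixed speed; the native method mentioned above is presumably Phaser 2's arcade-physics moveToPointer(bullet, speed), which does the same thing. A hand-rolled sketch in TypeScript, where the object shapes and BULLET_SPEED are illustrative:

const BULLET_SPEED = 400; // pixels per second, tune to taste

function fire(tank: { x: number; y: number },
              pointer: { x: number; y: number },
              bullet: { body: { velocity: { x: number; y: number } } }): void {
  const dx = pointer.x - tank.x;
  const dy = pointer.y - tank.y;
  const len = Math.sqrt(dx * dx + dy * dy) || 1; // guard against divide-by-zero
  // same direction regardless of distance, constant magnitude
  bullet.body.velocity.x = (dx / len) * BULLET_SPEED;
  bullet.body.velocity.y = (dy / len) * BULLET_SPEED;
}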
Issue 2:
If you click to shoot a bullet and turn quickly, you can see the bullet under the barrel of the tank. This is more of a visual "glitch", but I would like to have the bullet spawn right at the edge of the tank's barrel. Is there a method to get the correct spawn location based on both the rotation and the barrel position?
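A common fix for the second issue is to compute the spawn point at the barrel tip by offsetting from the tank's center along its current rotation. A sketch in TypeScript, where BARREL_LENGTH is a placeholder for your sprite's actual barrel length:

const BARREL_LENGTH = 40; // pixels from tank center to barrel tip (placeholder)

// Phaser's rotation of 0 points right; if your sprite art points up,
// offset the angle accordingly.
function barrelTip(tank: { x: number; y: number; rotation: number }) {
  return {
    x: tank.x + Math.cos(tank.rotation) * BARREL_LENGTH,
    y: tank.y + Math.sin(tank.rotation) * BARREL_LENGTH,
  };
}

Spawning the bullet at barrelTip(tank) instead of at (tank.x, tank.y) keeps it from appearing underneath the tank while turning.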
Currently, I'm taking each corner of my object's bounding box and converting it to Normalized Device Coordinates (NDC) and I keep track of the maximum and minimum NDC. I then calculate the middle of the NDC, find it in the world and have my camera look at it.
<Determine max and minimum NDCs>
// midpoint of the object's bounding box in NDC
centerX = (maxX + minX) / 2;
centerY = (maxY + minY) / 2;
// unproject the NDC midpoint back into world space
point.set(centerX, centerY, 0);
projector.unprojectVector(point, camera);
// step from the camera toward that point by the object's distance
direction = point.sub(camera.position).normalize();
point = camera.position.clone().add(direction.multiplyScalar(distance));
// aim the camera at the recovered world-space point
camera.lookAt(point);
camera.updateMatrixWorld();
This is an approximate method, correct? I have seen it suggested in a few places. I ask because every time I center my object, the min and max NDCs should be equal when they are calculated again (before any other change is made), but they are not. I get close but not equal numbers (ignoring the negative sign), and as I step closer and closer the 'error' between the numbers grows bigger and bigger. I.e., the errors for the first few centers are: 0.0022566539084770687, 0.00541687811360958, 0.011035676399427596, 0.025670088917273515, 0.06396864345885889, and so on.
Is there a step I'm missing that would cause this?
I'm using this code as part of a while loop to maximize and center the object on screen. (I'm programming it so that the user can enter a heading and an elevation, and the camera will be positioned so that it's viewing the object at that heading and elevation. After a few weeks I've determined that (for now) it's easier to do it this way.)
However, this seems to start falling apart the closer I move the camera to my object. For example, after a few iterations my max X NDC is 0.9989318709122867 and my min X NDC is -0.9552042384799428. When I look at the calculated point though, I look too far right and on my next iteration my max X NDC is 0.9420058636660581 and my min X NDC is 1.0128126740876888.
Your approach to this problem is incorrect. Rather than thinking about this in terms of screen coordinates, think about it in terms of the scene.
You need to work out how much the camera needs to move so that a ray from it hits the centre of the object. Imagine you are standing in a field, and opposite you are two people, Alex and Burt; Burt is standing 2 meters to the right of Alex. You are currently looking directly at Alex but want to look at Burt without turning. If you know the distance and direction between them (2 meters, to the right), you merely need to move that distance in that direction, i.e. 2 meters to the right.
In a mathematical context you need to do the following:
Get the centre of the object you are focusing on in 3d space, and then construct a plane through that point that is parallel to your camera's image plane, i.e. perpendicular to the direction the camera is facing.
Next, raycast from your camera to that plane in the direction the camera is facing; the difference between the centre point of the object and the point where the ray hits the plane is the amount you need to move the camera. This should work irrespective of the direction or position of the camera and object.
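In three.js terms (which the question appears to be using, albeit with the older projector API), that recipe looks roughly like this; objectCenter is assumed to come from your existing bounding-box code:

import * as THREE from 'three';

// Slide the camera sideways so its view ray passes through the
// object's center, without changing where it is looking.
function recenterCamera(camera: THREE.PerspectiveCamera, objectCenter: THREE.Vector3): void {
  const viewDir = new THREE.Vector3();
  camera.getWorldDirection(viewDir);

  // plane through the object's center, perpendicular to the view direction
  const plane = new THREE.Plane().setFromNormalAndCoplanarPoint(viewDir, objectCenter);

  // where the camera's current view ray hits that plane
  const hit = new THREE.Vector3();
  if (new THREE.Ray(camera.position.clone(), viewDir).intersectPlane(plane, hit)) {
    // the gap between hit point and center is exactly how far to move
    camera.position.add(objectCenter.clone().sub(hit));
  }
}

This moves the camera once by the exact offset, rather than re-aiming and re-measuring in NDC on every iteration.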
You are facing a chicken-and-egg problem: every time you change the camera attributes, you are effectively changing where your object is projected in NDC space. So even though you think you are getting close, you will never get there.
Look at the problem from a different angle. Place your camera somewhere and try to make it as canonical as possible (i.e. give it an aspect ratio of 1), and place your object along the camera's z-axis. Is this not possible?
We are developing a real-time strategy variant for WP7. At the moment, we need some direction/instruction on how to build an effective camera system. In other words, we would like a camera that can pan around a flat surface (a 2d or 3d level map). We have been experimenting with a 2d tile map, while our units/characters are all 3d models.
At first glance, it appears we need to figure out how to calculate a bounding box around the camera and its entire view perspective, or to restrict the camera's movement, based on what it can see, to the bounds of the 2d map.
Any help would be greatly appreciated!!
Cheers
If you're doing true 2D scrolling, it's pretty simple:
Scroll.X must be between 0 and level.width - screen.width
Scroll.Y must be between 0 and level.height - screen.height
(use MathHelper.Clamp to help you with this)
As for 3D it's a little trickier but almost the same principle.
All you really need is to define TWO Vector3 points: one is the bottom-left-front corner and the other the top-right-back (or you could do upper left front / lower right back, etc., up to you). These would be your bounding values.
The first one you could define as a readonly with just constant values that you tweak the camera bounds exactly as you wish for that corner. There IS a way of computing this, but honestly I prefer to have more control so I typically choose the route of value tweaking instead.
The second one you could start off with a "base" that you could manually tweak (or compute) just like before but this time you have to add the map width and length (to X and Z) so you know the true bounds depending on the map you have loaded.
Once you have these values, clamp them just as before:
//pans the camera but caps at the bounds
//bottomLeftFront holds the minimum X/Y/Z of the box and topRightBack the
//maximum, so MathHelper.Clamp always receives (value, min, max) in that order
public void ScrollByCheckBounds(Vector3 scroll, Vector3 bottomLeftFront, Vector3 topRightBack)
{
    Vector3 newScroll = Scroll + scroll;

    //clamp each dimension
    newScroll.X = MathHelper.Clamp(newScroll.X, bottomLeftFront.X, topRightBack.X);
    newScroll.Y = MathHelper.Clamp(newScroll.Y, bottomLeftFront.Y, topRightBack.Y);
    newScroll.Z = MathHelper.Clamp(newScroll.Z, bottomLeftFront.Z, topRightBack.Z);

    Scroll = newScroll;
}