XNA Game Studio 4.0 using C# - xna-4.0

// index of the tile under the camera's top-left corner
Vector2 firstSquare = new Vector2(camera.location.X / Tile.tilewidth, camera.location.Y / Tile.tileheight);
int firstX = (int)firstSquare.X;
int firstY = (int)firstSquare.Y;
// how far (in pixels) the camera has scrolled into that first tile
Vector2 squareOffset = new Vector2(camera.location.X % Tile.tilewidth, camera.location.Y % Tile.tileheight);
int offsetX = (int)squareOffset.X;
int offsetY = (int)squareOffset.Y;
This code is from the tile engine tutorial on the xnaresources.com site.
In this code, how do I find out the camera location, and what do these vector values represent?
I also don't have any knowledge of the camera view and the world view with respect to 2D games.

Imagine you are handling a real camera and you are aiming it at a door.
If you move the camera to the right, the camera shows you the door moving to the left.
The door is not moving; you are only moving the camera.
This effect is handled by the view transform of a camera.
XNA provides a way to create this view transform through Matrix.CreateLookAt, though for a 2D camera something like this is used instead:
View = Matrix.CreateTranslation( new Vector3( -_position, 0 ) )
* Matrix.CreateRotationZ( _rotation )
* Matrix.CreateScale( new Vector3( _scale, _scale, 1 ) )
* Matrix.CreateTranslation( new Vector3( ViewportScreen.X + ViewportScreen.Width * 0.5f, ViewportScreen.Y + ViewportScreen.Height * 0.5f, 0 ) );
This view handles camera rotation and zoom, and it is centered on the camera position.


Bounding box that is parallel to the camera

My problem is how to define the camera location, given a lookAt vector, when the camera is not on the z axis, so that it captures all objects according to its fov and aspect.
I think I need to get a bounding box of my objects that is perpendicular to the camera's lookAt, with its top, bottom, front and back edges parallel to the xz plane. Then the back of the bounding box is the 'far' plane, and I can calculate the distance from it (or from the fov) and set the camera accordingly.
My question is, how to get such a bounding box (Box3 instance), given some objects on the scene and the lookAt vector ?
Instances of THREE.Box3 are axis-aligned bounding boxes. No matter how the camera is rotated, the box computed for a given set of 3D objects is always the same; THREE.Box3 cannot produce a bounding box aligned to the camera's view.
Maybe you can use a quite common approach of 3D viewers, which ensures that an imported 3D object is always displayed in the viewport. Example code from the open-source glTF viewer looks like this:
const aabb = new THREE.Box3().setFromObject( object );
const center = aabb.getCenter( new THREE.Vector3() );
const size = aabb.getSize( new THREE.Vector3() ).length();
// centering object
object.position.x += ( object.position.x - center.x );
object.position.y += ( object.position.y - center.y );
object.position.z += ( object.position.z - center.z );
// update camera
camera.near = size / 100;
camera.far = size * 100;
camera.updateProjectionMatrix();
camera.position.copy( center );
camera.position.x += size / 2.0;
camera.position.y += size / 5.0;
camera.position.z += size / 2.0;
camera.lookAt( center );
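If you want to derive the camera distance from the fov instead of the fixed offsets above, a rough sketch of the usual fit-to-view calculation could look like the following. It reuses size and center from the snippet above and is an approximation based on the bounding sphere, not code from the viewer:
// assumption: fit the box's bounding sphere (radius is roughly half the diagonal) into the vertical fov
const radius = size / 2;
const distance = radius / Math.tan( ( camera.fov * Math.PI / 180 ) / 2 );
camera.position.copy( center ).add( new THREE.Vector3( 0, 0, distance ) );
camera.lookAt( center );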

How to convert world rotation to screen rotation?

I need to convert the position and rotation of a 3D object to a screen position and rotation. I can convert the position easily, but not the rotation. I've attempted to convert the rotation of the camera, but it does not match up.
Attached is an example Plunker & the conversion code.
The white Facebook button should line up with the red plane.
https://plnkr.co/edit/0MOKrc1lc2Bqw1MMZnZV?p=preview
function toScreenPosition(position, camera, width, height) {
    var p = new THREE.Vector3(position.x, position.y, position.z);
    var vector = p.project(camera);
    vector.x = (vector.x + 1) / 2 * width;
    vector.y = -(vector.y - 1) / 2 * height;
    return vector;
}
function updateScreenElements() {
    var btn = document.querySelector('#btn-share');
    var pos = plane.getWorldPosition();
    var vec = toScreenPosition(pos, camera, canvas.width, canvas.height);
    var translate = "translate3d(" + vec.x + "px," + vec.y + "px," + vec.z + "px)";
    var euler = camera.getWorldRotation();
    var rotate = "rotateX(" + euler.x + "rad)" +
        " rotateY(" + euler.y + "rad)" +
        " rotateY(" + euler.z + "rad)";
    btn.style.transform = translate + " " + rotate;
}
... And a screenshot of the issue.
I would highly recommend not trying to match this to the camera space, but instead applying the image as a texture map to the red plane and then using a raycast to see whether a click hits the plane. You'll save yourself the headache of translating and rotating, and of hiding the symbol when it's behind the cube, etc.
Check out the THREE.js examples to see how to use the Raycaster; it's a lot more flexible and easier than trying to do rotations and matching. Then, whatever the 'btn' onclick function is, you just call it when you detect a raycast collision with the plane, as in the sketch below.
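A minimal sketch of that idea might look like this (the click handler is illustrative and assumes the plane, camera and #btn-share element from the question):
var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
document.addEventListener('click', function (event) {
    // convert the click position to normalized device coordinates (-1..+1)
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
    raycaster.setFromCamera(mouse, camera);
    // test only the red plane from the question
    var intersects = raycaster.intersectObject(plane);
    if (intersects.length > 0) {
        // forward the hit to whatever the button would normally do
        document.querySelector('#btn-share').click();
    }
});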

What's wrong with it - or how to find the correct THREE.PerspectiveCamera settings

I have a simple THREE.Scene where the main content is a THREE.Line mesh that visualizes the keyframe based path that the camera will follow for some scripted animation. There is then one THREE.SphereGeometry based mesh that is always repositioned to the current camera location.
The currently WRONG result looks like this (the fractal background is rendered independently but using the same keyframe input - and ultimately the idea is that the "camera path" visualization ends up in the same scale/projection as the respective fractal background...):
The base is an array of keyframes, each of which represents the modelViewMatrix for a specific camera position/orientation and is directly used to drive the vertex shader for the background, e.g.:
varying vec3 eye, dir;
void main() {
    gl_Position = vec4(position, 1.0);
    // "eye": translation part of the matrix, used as the camera position
    eye = vec3(modelViewMatrix[3]);
    // "dir": per-vertex ray direction derived from the camera orientation
    dir = vec3(modelViewMatrix * vec4(position.x, position.y, 1, 0));
}
(It is my understanding that "eye" is basically the camera position, while "dir" reflects the orientation of the camera; the way it is used during the ray marching implicitly leads to a perspective projection.)
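For comparison (an assumption, not code from the project), the same "eye" can be read on the JavaScript side from the translation component of a keyframe matrix "m":
// matches modelViewMatrix[3] in the shader above
var eye = new THREE.Vector3().setFromMatrixPosition( m );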
The respective mesh objects are created like this:
visualizeCameraPath: function(scene) {
    // debug: visualize the camera path
    var n = this.getNumberOfKeyFrames();
    var material = new THREE.LineBasicMaterial({
        color: 0xffffff
    });
    var geometry = new THREE.Geometry();
    for (var i = 0; i < n; i++) {
        var m = this.getKeyFrameMatrix(true, i);
        var pos = new THREE.Vector3();
        var q = new THREE.Quaternion();
        var scale = new THREE.Vector3();
        m.decompose(pos, q, scale);
        geometry.vertices.push( new THREE.Vector3( pos.x, pos.y, pos.z ) );
    }
    this.camPath = new THREE.Line( geometry, material );
    this.camPath.frustumCulled = false; // Avoid getting clipped - does not seem to help one little bit
    scene.add( this.camPath );
    var radius = 0.04;
    var g = new THREE.SphereGeometry(radius, 10, 10, 0, Math.PI * 2, 0, Math.PI * 2);
    this.marker = new THREE.Mesh(g, new THREE.MeshNormalMaterial());
    scene.add(this.marker);
}
In order to play the animation I update the camera and the marker position like this (I guess it is already wrong how I use the input matrix "m" directly on the "shadowCamera" - even though I think that it contains the correct position):
syncShadowCamera(m) {
    var pos = new THREE.Vector3();
    var q = new THREE.Quaternion();
    var scale = new THREE.Vector3();
    m.decompose(pos, q, scale);
    this.applyMatrix(m, this.shadowCamera); // also sets camera position to "pos"
    // highlight current camera-position on the camera-path-line
    if (this.marker != null) this.marker.position.set(pos.x, pos.y, pos.z);
},
applyMatrix: function(m, targetObj3d) {
    var pos = new THREE.Vector3();
    var q = new THREE.Quaternion();
    var scale = new THREE.Vector3();
    m.decompose(pos, q, scale);
    targetObj3d.position.set(pos.x, pos.y, pos.z);
    targetObj3d.quaternion.set(q.x, q.y, q.z, q.w);
    targetObj3d.scale = scale;
    targetObj3d.updateMatrix(); // this.matrix.compose( this.position, this.quaternion, this.scale );
    targetObj3d.updateMatrixWorld(true);
},
I've tried multiple things with regard to the camera, and the screenshot reflects the output with "this.projectionMatrix" disabled (see the code below).
createShadowCamera: function() {
    var speed = 0.00039507;
    var z_near = Math.abs(speed);
    var z_far = speed * 65535.0;
    var fH = Math.tan( this.FOV_Y * Math.PI / 360.0 ) * z_near;
    var fW = Math.tan( this.FOV_X * Math.PI / 360.0 ) * z_near;
    // orig opengl used: glFrustum(-fW, fW, -fH, fH, z_near, z_far);
    var camera = new THREE.PerspectiveCamera();
    camera.updateProjectionMatrix = function() {
        // this.projectionMatrix.makePerspective( -fW, fW, fH, -fH, z_near, z_far );
        this.projectionMatrix = new THREE.Matrix4(); // hack: fallback to no projection
    };
    camera.updateProjectionMatrix();
    return camera;
},
My initial attempt had been to use the same kind of settings that the OpenGL shader for the fractal background had been using (see glFrustum above). Unfortunately, it seems that I have not yet managed to correctly map the input "modelViewMatrix" (and the projection implicitly performed by the ray marching in the shader) to equivalent THREE.PerspectiveCamera settings (orientation/projectionMatrix).
Is there any matrix calculation expert here, that knows how to obtain the correct transformations?
Finally I have found one hack that works.
Actually the problem was made up of two parts:
1) Row- vs column-major order of the modelViewMatrix: the order expected by the vertex shader is the opposite of what the rest of THREE.js expects.
2) Object3D hierarchy (i.e. Scene, Mesh, Geometry, Line vertices + Camera): where to put the modelViewMatrix data so that it produces the desired result (i.e. the same result that the old bloody OpenGL application produced). I am not happy with the hack that I found here - but so far it is the only one that seems to work:
I DO NOT touch the Camera.. it stays at 0/0/0
I directly move all the vertices of my "line"-Geometry relative to the real camera position (see "position" from the modelViewMatrix)
I then disable "matrixAutoUpdate" on the Mesh that contains my "line" Geometry and copy the modelViewMatrix (in which I first zeroed out the "position") into the "matrix" field.
BINGO.. then it works. (All of my attempts to achieve the same result by rotating/displacing the Camera or by displacing/rotating any of the Object3Ds involved have failed miserably..)
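For part 1, one way to flip the ordering when loading a keyframe into THREE.js is a plain transpose; a minimal sketch, where "keyframeArray" is a hypothetical flat 16-element array in the shader's layout:
// assumption: keyframeArray holds the matrix in the order the shader uses
var m = new THREE.Matrix4().fromArray(keyframeArray);
m.transpose(); // flip the row-/column-major interpretation for THREE.js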
EDIT: I found a better way than updating the vertices, one that at least keeps all the manipulations on the Mesh level (I am still moving the world around - like the old OpenGL app would have done..). To get the right sequence of translation/rotation, one can also use the following ("m" is still the original OpenGL modelViewMatrix - with 0/0/0 position info):
var t= new THREE.Matrix4().makeTranslation(-cameraPos.x, -cameraPos.y, -cameraPos.z);
t.premultiply ( m );
obj3d.matrixAutoUpdate=false;
obj3d.matrix.copy(t);
If somebody knows a better way that also works (one where the Camera is updated without having to directly manipulate object matrices) I'd certainly be interested to hear it.

three.js: Limiting camera's rotation

I'm working with three.js, attempting to model a real-world camera. As such, I'd like to limit its rotation to 90 degrees about the x and y axes.
Is there a simple way to do this? My current code isn't working particularly well (and goes crazy when you attempt to move the camera past the X and Y boundaries simultaneously):
if (xRot != null && xRot != undefined) {
    camera.rotateX(xRot);
}
if (yRot != null && yRot != undefined) {
    camera.rotateY(yRot);
}
if (camera.rotation.x < minCameraRotX) {
    camera.rotation.x = minCameraRotX;
} else if (camera.rotation.x > maxCameraRotX) {
    camera.rotation.x = maxCameraRotX;
}
if (camera.rotation.y < minCameraRotY) {
    camera.rotation.y = minCameraRotY;
} else if (camera.rotation.y > maxCameraRotY) {
    camera.rotation.y = maxCameraRotY;
}
Any advice would be greatly appreciated!!!
I actually managed to find a solution by checking some of the existing code in a Three.js demo for a library called PointerLock. The idea is to actually stack multiple objects inside each other: start with an object that moves horizontally (the yaw object), place another object inside the yaw object that moves vertically (the pitch object), and then place the actual camera inside the pitch object.
Then, you only rotate the outside objects (yaw and pitch) along their respective axes, so if you rotate both, they'll self-correct. For example, if you rotate the yaw 45 degrees along the y-axis (making it turn to the right) and then rotate the pitch 45 degrees (making it turn downward), the pitch will go 45 degrees downward from the yaw's already rotated position.
Given that the camera is inside both, it just points wherever the yaw and pitch direct it.
Here is the code
/*
* CAMERA SETUP
*
* Root object is a Yaw object (which controls horizontal movements)
* Yaw object contains a Pitch object (which controls vertical movement)
* Pitch object contains camera (which allows scene to be viewed)
*
* Entire setup works like an airplane with a
* camera embedded in the propellor...
*
*/
// Yaw Object
var yawObject = new THREE.Object3D();
// Pitch Object
var pitchObject = new THREE.Object3D();
// Camera Object
var camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
// Max camera angles (in radians); the minimums must be negative so the clamps below allow movement in both directions
var minCameraRotX = -0.5;
var maxCameraRotX = 0.5;
var minCameraRotY = -1;
var maxCameraRotY = 1;
// Setup
yawObject.add( pitchObject );
pitchObject.add( camera );
scene.add(yawObject);
...
var rotateCamera = function(xRot, yRot, zRot) {
    yawObject.rotation.y += yRot;
    pitchObject.rotation.x += xRot;
    // Enforce horizontal boundaries (yaw rotates around the y-axis)
    yawObject.rotation.y = Math.max( minCameraRotY, Math.min( maxCameraRotY, yawObject.rotation.y ) );
    // Enforce vertical boundaries (pitch rotates around the x-axis)
    pitchObject.rotation.x = Math.max( minCameraRotX, Math.min( maxCameraRotX, pitchObject.rotation.x ) );
}
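As a usage illustration (not part of the original answer), rotateCamera could be driven from pointer-movement deltas, for example:
// hypothetical wiring: feed mouse deltas into the clamped yaw/pitch rotation
var sensitivity = 0.002; // assumed tuning value
document.addEventListener('mousemove', function (event) {
    rotateCamera(-event.movementY * sensitivity, -event.movementX * sensitivity, 0);
});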
Here is the source code I referenced: https://github.com/mrdoob/three.js/blob/acda8a7c8f90ce9b71088e903d8dd029e229678e/examples/js/controls/PointerLockControls.js
Also, this is sort of cheesy, but this little plane cartoon helped me visualize exactly what was going on in my setup.

LIBGDX / OpenGL : Reducing the size of everything

This could be the worst question ever asked; however, that would be a cool achievement.
I have created a 3D world made of cubes that are 1x1x1 (think Minecraft), and all the maths works great, etc. However, a 1x1x1 cube nearly fills the whole screen (viewable area).
Is there a way I can change the ViewPort or something so that 1x1x1 is half the size it currently is?
Code for setting up camera
float aspectRatio = (float) Gdx.graphics.getWidth() / Gdx.graphics.getHeight(); // cast to avoid integer division
camera = new PerspectiveCamera(67, 1.0f * aspectRatio, 1.0f);
camera.near = 0.1f; // 0.5 //todo find out what this is again
camera.far = 1000;
fps = new ControlsController(camera , this, stage);
I am using the FirstPersonCameraController and PerspectiveCamera to try and make a first person game
I guess the problem is:
camera = new PerspectiveCamera(67, 1.0f * aspectRatio, 1.0f);
A standard initialization of your camera could be (based on this tutorial):
camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
// ...
Note how the width and height of the camera are nearly (if not exactly) the width and height of the native gdx window. In your case you set this size to 1 (the same size as your mesh). Try a bigger viewport dimension so that your mesh appears smaller (in perspective), something like:
/** Not too sure since it is a perspective view, but play with these values **/
float multiplier = 2; // <- to allow your mesh to be a fraction
                      //    of the size of the camera's viewport
camera = new PerspectiveCamera(67, multiplier * aspectRatio, multiplier );
