How to zoom in AR - google-project-tango

I am using Unity to learn AR, and I am trying to zoom in the camera view so that I can more easily select points that are farther away in space.
Is there an easy way to do this? I noticed that Google's measure-it app does not support zooming.
The answer provided by gtp works wonderfully.
Thank you.

In a Tango Unity project, you can zoom the camera view by modifying the UVs of the camera feed with VideoOverlayProvider.SetARScreenUVs and adjusting your camera frustum to match.
As an example, start from the Unity Tango Examples and modify TangoARScreen.cs so that, in _SetRenderAndCamera, the UVs are adjusted before they are passed to _MaterialUpdateForIntrinsics (which in turn passes them through to VideoOverlayProvider.SetARScreenUVs). This is the snippet I added before the call to _MaterialUpdateForIntrinsics to verify that this works in practice:
// m_uOffset and m_vOffset already hold the crop computed from the camera intrinsics;
// shrink the remaining UV rectangle around its center by m_zoomLevel.
float width = 1.0f - 2.0f * m_uOffset;
float height = 1.0f - 2.0f * m_vOffset;
float newWidth = width / m_zoomLevel;
float newHeight = height / m_zoomLevel;
m_uOffset = (1.0f - newWidth) / 2.0f;
m_vOffset = (1.0f - newHeight) / 2.0f;
If you build and run the AugmentedReality scene on your Tango device with these changes, you should see the same pin-placement sample, but with the scene zoomed according to m_zoomLevel.
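To keep virtual objects registered with the zoomed video, the camera frustum has to be narrowed by the same factor. Below is only a minimal sketch of that idea, assuming a m_zoomLevel field like the one above and the standard Unity Camera component; the class and field names are illustrative and not part of the Tango samples.

using UnityEngine;

// Illustrative helper (not from the Tango examples): narrows the camera's vertical
// field of view by the same factor used to crop the video UVs, so rendered 3D
// content stays aligned with the zoomed camera feed.
public class ZoomedFrustum : MonoBehaviour
{
    public float m_zoomLevel = 2.0f;   // same zoom factor applied to the UV crop

    private float m_baseFovDegrees;    // vertical FOV before any zoom is applied

    void Start()
    {
        m_baseFovDegrees = GetComponent<Camera>().fieldOfView;
    }

    void Update()
    {
        // tan(fov / 2) scales linearly with the visible image height, so divide it
        // by the zoom level and convert the result back to an angle in degrees.
        float halfFovRad = m_baseFovDegrees * 0.5f * Mathf.Deg2Rad;
        float zoomedFovRad = 2.0f * Mathf.Atan(Mathf.Tan(halfFovRad) / m_zoomLevel);
        GetComponent<Camera>().fieldOfView = zoomedFovRad * Mathf.Rad2Deg;
    }
}

Note that if the sample sets the camera's projection matrix directly from the intrinsics in _SetRenderAndCamera, the equivalent scaling would need to be applied to that matrix instead of to fieldOfView.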

Related

Unity3d UI issue with Xiaomi

On Xiaomi devices, an image is drawn outside of the camera's letterbox.
On other devices everything renders correctly.
I have attached screenshots from both a Samsung and a Xiaomi device: the one that looks wrong is the Xiaomi, and the Samsung one looks fine. This is the letterboxing code I use:
float targetaspect = 750f / 1334f;
// determine the game window's current aspect ratio
float windowaspect = (float)Screen.width / (float)Screen.height;
// current viewport height should be scaled by this amount
float scaleheight = windowaspect / targetaspect;
// obtain camera component so we can modify its viewport
Camera camera = GetComponent<Camera>();
// if scaled height is less than current height, add letterbox
if (scaleheight < 1.0f)
{
Rect rect = camera.rect;
rect.width = 1.0f;
rect.height = scaleheight;
rect.x = 0;
rect.y = (1.0f - scaleheight) / 2.0f;
camera.rect = rect;
}
Try setting the image's wrap mode to Clamp instead of Repeat.
This will give you black borders instead, but you won't get that weird stretched texture.
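For reference, a minimal sketch of what that looks like if done from code (normally you would simply change the Wrap Mode in the texture's import settings; the field name here is hypothetical):

using UnityEngine;

public class ClampBackgroundTexture : MonoBehaviour
{
    public Texture2D backgroundTexture;   // the texture that bleeds outside the letterbox

    void Start()
    {
        // With Repeat, UVs outside [0, 1] tile the image; Clamp repeats the edge
        // pixels instead, so nothing extra is drawn beyond the intended area.
        backgroundTexture.wrapMode = TextureWrapMode.Clamp;
    }
}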
I don't know what caused the problem, but I solved it in a tricky way: I added a second camera that only displays a black background. My main camera's viewport is letterboxed, but the second camera's is not, so the display ends up looking correct.
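A rough sketch of that workaround, assuming the second camera is created from code (it could just as well be set up in the scene; the names here are illustrative):

using UnityEngine;

public class BlackBackgroundCamera : MonoBehaviour
{
    void Start()
    {
        // Second camera: renders before the main camera, covers the full screen,
        // and draws nothing but a solid black clear color.
        Camera backgroundCam = new GameObject("BackgroundCamera").AddComponent<Camera>();
        backgroundCam.depth = -100;                       // render first
        backgroundCam.clearFlags = CameraClearFlags.SolidColor;
        backgroundCam.backgroundColor = Color.black;
        backgroundCam.cullingMask = 0;                    // cull everything; only the clear color shows
        backgroundCam.rect = new Rect(0f, 0f, 1f, 1f);    // full screen, no letterbox

        // The main (letterboxed) camera keeps its own rect and renders on top of
        // this black background.
    }
}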

Standalone Player Looks Different than Game View

I've built this little dodger game and everything is perfect, except that the standalone player doesn't visually match the Game view. Pics for reference. Please let me know how to stop this issue.
I want the standalone player to look the same as the Game view.
Changing the resolution in the player settings hasn't worked so far.
Unity GameView
Standalone Player
Looks to me like it's your scale setting in the Game window; it's set to 0.33 in the picture you've posted.
Try changing your view to Free Aspect, then adjust your camera GameObject to tighten in on your gameplay area. Or just reset your layout; sometimes changing the aspect ratio while the Game view is small makes it difficult to restore the aspect you are looking for.
Reset your layout here:
Window > Layouts > Default (or whatever layout you prefer)
I used the following code, with two different cameras rendering:
void Update ()
{
float targetaspect = 4f / 3f; // set the desired aspect ratio
float windowaspect = (float)Screen.width / (float)Screen.height; // determine the current aspect ratio
float scaleheight = windowaspect / targetaspect; // current viewport height should be scaled by this amount
// obtain camera component so we can modify its viewport
Camera camera = GetComponent<Camera>();
// if scaled height is less than current height, add letterbox
if (scaleheight < 1.0f)
{
Rect rect = camera.rect;
rect.width = 1.0f;
rect.height = scaleheight;
rect.x = 0;
rect.y = (1.0f - scaleheight) / 2.0f;
camera.rect = rect;
}
else // add pillarbox
{
float scalewidth = 1.0f / scaleheight;
Rect rect = camera.rect;
rect.width = scalewidth;
rect.height = 1.0f;
rect.x = (1.0f - scalewidth) / 2.0f;
rect.y = 0;
camera.rect = rect;
}
}

LIBGDX / OpenGL : Reducing the size of everything

This could be the worst question ever asked, but that would be a cool achievement.
I have created a 3D world made of 1x1x1 cubes (think Minecraft), and all the maths works great. However, a single 1x1x1 cube nearly fills the whole screen (viewable area).
Is there a way I can change the viewport or something so that 1x1x1 is half the size it currently is?
Code for setting up the camera:
float aspectRatio = Gdx.graphics.getWidth() / Gdx.graphics.getHeight();
camera = new PerspectiveCamera(67, 1.0f * aspectRatio, 1.0f);
camera.near = 0.1f; // 0.5 //todo find out what this is again
camera.far = 1000;
fps = new ControlsController(camera , this, stage);
I am using the FirstPersonCameraController and PerspectiveCamera to try and make a first person game
I guess the problem is:
camera = new PerspectiveCamera(67, 1.0f * aspectRatio, 1.0f);
A standard initialization of your camera could be (based on this tutorial):
camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
// ...
Note how the width and height of the camera are nearly (if not exactly) the width and height of the native gdx window. In your case you set this size to 1, the same size as your mesh. Try a bigger viewport dimension so that your mesh appears smaller (in perspective), something like:
/** Not too sure since this is a perspective view, but play with these values **/
float multiplier = 2; // <- to allow your mesh to be a fraction
                      //    of the size of the camera's viewport
camera = new PerspectiveCamera(67, multiplier * aspectRatio, multiplier);

Convert from 1280x720 to UIView dimensions

I have an app that finds QR finder marks from the AVCaptureDevice sample buffer delegate. Now I am trying to put boxes on the screen to cover the QR marks. However, I am having trouble converting between 1280x720 pixel resolution and the cameraView size.
I'm trying to write a method that converts a CGRect based on these parameters (and the fact that the view coordinate system is upside-down), but it isn't working. It might also have something to do with the orientation of the camera device. Here is my code for the conversion method:
-(CGRect) convertRect:(CGRect) oldRect From90DegreeRotatedCameraSize1280x720ToUIViewSize:(CGSize) viewSize
{
//remember that we are using a gravity of resize aspect fill
double xScale = viewSize.width / 720;
double yScale = viewSize.height / 1280;
if (xScale < yScale)
return CGRectMake(oldRect.origin.x * xScale, viewSize.height - oldRect.origin.y * xScale, oldRect.size.height * xScale, oldRect.size.width * xScale);
else
return CGRectMake(oldRect.origin.x * yScale, viewSize.height - oldRect.origin.y * yScale, oldRect.size.height * yScale, oldRect.size.width * yScale);
}
Does anyone have an elegant solution to this relatively simple spatial coordinate problem?
EDIT -
I did an NSLog of the connection's video orientation, and it turns out it never changes from 1 (which I think is the enum value for portrait orientation). Given that, it should be easier to find a solution, because the coordinates do not change across orientations.
Any ideas, coder community?
Figured it out:
When iOS aspect-fills a view, it centers the content so that both sides are cut off. To convert to view coordinates, you need to scale and THEN translate in the direction that was cropped:
return CGRectMake(oldRect.origin.x * xScale, oldRect.origin.y * xScale - (1280 * xScale - viewSize.height) / 2, oldRect.size.height * xScale, oldRect.size.width * xScale);

How do I set up the viewable area in a scene within OpenGL ES 2.0?

I have done programming in OpenGL and I know how to set the viewable area in it with gluOrtho(), but a function like this does not exist in OpenGL ES 2.0.
How would I do this in OpenGL ES 2.0?
P.S : I am doing my OpenGL ES 2.0 development in Ubuntu 10.10 with the PowerVR SDK emulator.
As Nicol suggests, you'll want to set up an orthographic projection matrix. For example, an Objective-C method I use to do this is as follows:
- (void)loadOrthoMatrix:(GLfloat *)matrix left:(GLfloat)left right:(GLfloat)right bottom:(GLfloat)bottom top:(GLfloat)top near:(GLfloat)near far:(GLfloat)far;
{
GLfloat r_l = right - left;
GLfloat t_b = top - bottom;
GLfloat f_n = far - near;
GLfloat tx = - (right + left) / (right - left);
GLfloat ty = - (top + bottom) / (top - bottom);
GLfloat tz = - (far + near) / (far - near);
matrix[0] = 2.0f / r_l;
matrix[1] = 0.0f;
matrix[2] = 0.0f;
matrix[3] = tx;
matrix[4] = 0.0f;
matrix[5] = 2.0f / t_b;
matrix[6] = 0.0f;
matrix[7] = ty;
matrix[8] = 0.0f;
matrix[9] = 0.0f;
matrix[10] = 2.0f / f_n;
matrix[11] = tz;
matrix[12] = 0.0f;
matrix[13] = 0.0f;
matrix[14] = 0.0f;
matrix[15] = 1.0f;
}
Even if you're not familiar with Objective-C method syntax, the C body of this code should be easy to follow. The matrix is defined as
GLfloat orthographicMatrix[16];
You would then apply this within your vertex shader to adjust the locations of your vertices, using code like the following:
gl_Position = modelViewProjMatrix * position * orthographicMatrix;
Based on this, you should be able to set the various limits of your display space to accommodate your geometry.
There is no function called gluOrtho. There are gluOrtho2D and glOrtho, both of which do very similar things, but neither of them sets up the viewport.
The viewport transform of the OpenGL pipeline is controlled by glViewport and glDepthRange. What you are talking about is an orthographic projection matrix, which is what glOrtho and gluOrtho2D both compute.
OpenGL ES 2.0 does not have many of the fixed-function conveniences of desktop OpenGL pre-3.1. Therefore, you will have to create them yourself. The creation of an orthographic matrix is very easy; the docs for glOrtho and gluOrtho2D both state how they create matrices.
You will need to pass this matrix to your shader via a uniform. Then you will need to use this matrix to transform vertex positions from eye space (defined with the eye position at the origin, +X to the right, +Y up, and +Z towards the eye) into clip space.
You can also use the following convenience function defined in GLKit:
GLKMatrix4 GLKMatrix4MakeOrtho (
float left,
float right,
float bottom,
float top,
float nearZ,
float farZ
);
Just pass -1 and 1 for the near and far Z values.
