Unity3d UI issue with Xiaomi - user-interface

On Xiaomi devices, an image is drawn outside of the camera's letterbox; on other devices everything is correct.
I attached screenshots from both a Samsung and a Xiaomi device: the screenshot that looks ugly is from the Xiaomi, and the good-looking one is from the Samsung. Here is the letterboxing code I'm using:
float targetaspect = 750f / 1334f;
// determine the game window's current aspect ratio
float windowaspect = (float)Screen.width / (float)Screen.height;
// current viewport height should be scaled by this amount
float scaleheight = windowaspect / targetaspect;
// obtain camera component so we can modify its viewport
Camera camera = GetComponent<Camera>();
// if scaled height is less than current height, add letterbox
if (scaleheight < 1.0f)
{
    Rect rect = camera.rect;
    rect.width = 1.0f;
    rect.height = scaleheight;
    rect.x = 0;
    rect.y = (1.0f - scaleheight) / 2.0f;
    camera.rect = rect;
}

Try setting the image's texture wrap mode to Clamp instead of Repeat.
You will still end up with black borders, but you won't get that weird repeated texture outside the letterbox.
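As a rough sketch of that suggestion (assuming the artifact comes from a texture set to Repeat; the class and field names below are mine, and normally you would simply change Wrap Mode to Clamp in the texture's import settings):

using UnityEngine;

// Minimal sketch of the suggestion above, not the original poster's code.
// "uiTexture" is a hypothetical reference to the texture that shows the artifact;
// assign it in the Inspector.
public class ClampWrapExample : MonoBehaviour
{
    public Texture2D uiTexture;

    void Awake()
    {
        // Clamp stretches the texture's edge pixels instead of tiling the whole image,
        // so nothing is drawn repeatedly outside the intended UV range.
        uiTexture.wrapMode = TextureWrapMode.Clamp;
    }
}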

I don't know what caused the problem, but I solved it in a somewhat hacky way: I added a second camera that only displays a black background. Only my main camera's viewport is letterboxed; the second camera is not, so the display ends up looking correct.
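For reference, a sketch of what that background-camera setup might look like (this is my assumed configuration, not the poster's exact code):

using UnityEngine;

// Assumed setup for the workaround described above: a full-screen camera that
// only clears to black, rendered behind the letterboxed main camera.
public class BlackBackgroundCamera : MonoBehaviour
{
    void Awake()
    {
        Camera bg = GetComponent<Camera>();
        bg.clearFlags = CameraClearFlags.SolidColor; // clear to a solid color only
        bg.backgroundColor = Color.black;
        bg.cullingMask = 0;                          // render no scene objects
        bg.depth = -100;                             // draw before (behind) the main camera
        bg.rect = new Rect(0f, 0f, 1f, 1f);          // cover the whole screen, no letterbox
    }
}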

Related

Standalone Player Looks Different than Game View

Okay, so I've built this little dodger game and everything is perfect, except that the standalone player doesn't match the Game view visually. Pics for reference. Please let me know how to stop this issue.
I want the standalone player to look the same as the Game view.
Changing the resolution in the Player Settings hasn't worked so far.
Unity GameView
Standalone Player
Looks to me like it's your scale setting in the Game window. It's set to 0.33 in the picture you've posted.
Try changing your view to Free Aspect, then adjust your camera GameObject to tighten in on your gameplay area. Or just reset your layout: sometimes changing the aspect ratio while the Game view is small makes it difficult to restore the aspect you are looking for.
Reset your layout here:
Window\Layouts\Default (or whatever you prefer)
I used this code, with two different cameras rendering.
void Update()
{
    float targetaspect = 4f / 3f; // set the desired aspect ratio
    float windowaspect = (float)Screen.width / (float)Screen.height; // determine the current aspect ratio
    float scaleheight = windowaspect / targetaspect; // current viewport height should be scaled by this amount

    // obtain camera component so we can modify its viewport
    Camera camera = GetComponent<Camera>();

    // if scaled height is less than current height, add letterbox
    if (scaleheight < 1.0f)
    {
        Rect rect = camera.rect;
        rect.width = 1.0f;
        rect.height = scaleheight;
        rect.x = 0;
        rect.y = (1.0f - scaleheight) / 2.0f;
        camera.rect = rect;
    }
    else // add pillarbox
    {
        float scalewidth = 1.0f / scaleheight;
        Rect rect = camera.rect;
        rect.width = scalewidth;
        rect.height = 1.0f;
        rect.x = (1.0f - scalewidth) / 2.0f;
        rect.y = 0;
        camera.rect = rect;
    }
}

How to zoom in AR

I am using Unity and learning AR, and I am trying to zoom the view so that I can select more distant points in space more accurately.
Is there an easy way to do this? I noticed that the measure-it app from Google does not support zooming.
The answer provided by gtp works wonderfully.
Thank you.
In a Tango Unity project, you can make the camera view zoom by modifying the UVs on the camera feed using VideoOverlayProvider.SetARScreenUVs and adjusting your camera frustum in a corresponding way.
As an example, start from the Unity Tango Examples and modify TangoARScreen.cs so that in _SetRenderAndCamera the UVs are adjusted before they are passed to _MaterialUpdateForIntrinsics (which in turn passes them through to VideoOverlayProvider.SetARScreenUVs). This is the snippet I added before the call to _MaterialUpdateForIntrinsics to verify that this works in practice:
float width = 1.0f - 2.0f * m_uOffset;
float height = 1.0f - 2.0f * m_vOffset;
float newWidth = width / m_zoomLevel;
float newHeight = height / m_zoomLevel;
m_uOffset = (1.0f - newWidth) / 2.0f;
m_vOffset = (1.0f - newHeight) / 2.0f;
If you build and run the AugmentedReality scene on your Tango device with these changes you should see the same pin-placement sample, but with the scene zoomed according to m_zoomLevel.
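The corresponding frustum change isn't shown above; one hedged way to do it (the helper below and its parameters are my own placeholders, meant to live alongside the snippet in TangoARScreen.cs, not part of the original answer) is to narrow the render camera's vertical field of view by the same zoom factor:

// Assumed companion change, not from the original answer: shrink the Unity camera's
// vertical field of view by the same zoom factor so that virtual objects stay
// registered with the zoomed video feed.
void ApplyZoomToCamera(Camera renderCamera, float baseFovDegrees, float zoomLevel)
{
    float baseFovRad = baseFovDegrees * Mathf.Deg2Rad; // unzoomed vertical FOV in radians
    float zoomedFovRad = 2.0f * Mathf.Atan(Mathf.Tan(baseFovRad * 0.5f) / zoomLevel);
    renderCamera.fieldOfView = zoomedFovRad * Mathf.Rad2Deg; // back to degrees for Unity
}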

Flip image with different size width smooth transition

I'm trying to flip some animations in LibGDX, but because the frames have different widths, the animation plays back oddly. Here's the problem:
(the red dot marks the X/Y coordinate {0,0})
As you can see, when the punch animation plays facing left, the feet start way behind where they were; but when you punch facing right, the animation plays fine, because the origin of both animations is the left corner, so the transition is smooth.
The only way I can think of to achieve what I want is to check which animation is playing and adjust the coordinates accordingly.
This is the code:
public static float draw(Batch batch, Animation animation, float animationState,
                         float delta, int posX, int posY, boolean flip) {
    animationState += delta;
    TextureRegion r = animation.getKeyFrame(animationState, true);
    float width = r.getRegionWidth() * SCALE;
    float height = r.getRegionHeight() * SCALE;
    if (flip) {
        batch.draw(r, posX + width, posY, -width, height);
    } else {
        batch.draw(r, posX, posY, width, height);
    }
    return animationState;
}
Any suggestion on how to approach this is welcome.
Use one of the other batch.draw overloads (the ones with more parameters). You can set the "origin" parameters; the origin is like a hot spot, e.g. the center of the image, so if you rotate, for example, the rotation is performed around that hot spot.
https://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g2d/Batch.html
I haven't used it for flipping, but it should work the same way. If it doesn't, you'll have to adjust the coordinates on your own: keep a list with an X offset for every frame and add that offset when drawing flipped images.
Another solution would be to use wider frame images and keep the center of the character always matching the center of the image. That way your images will be wider than they need to be and you'll have some empty space, but for a sane number of frames that's acceptable.

LIBGDX / OpenGL : Reducing the size of everything

This could be the worst question ever asked; then again, that would be a cool achievement.
I have created a 3D world made of 1x1x1 cubes (think Minecraft), and all the maths works great. However, a single 1x1x1 cube nearly fills the whole screen (viewable area).
Is there a way I can change the viewport or something so that a 1x1x1 cube is half the size it currently is?
Code for setting up the camera:
float aspectRatio = (float) Gdx.graphics.getWidth() / Gdx.graphics.getHeight(); // cast to avoid integer division
camera = new PerspectiveCamera(67, 1.0f * aspectRatio, 1.0f);
camera.near = 0.1f; // 0.5 //todo find out what this is again
camera.far = 1000;
fps = new ControlsController(camera, this, stage);
I am using the FirstPersonCameraController and PerspectiveCamera to try and make a first person game
I guess the problem is:
camera = new PerspectiveCamera(67, 1.0f * aspectRatio, 1.0f);
A standard initialization of your camera could be (based on this tutorial):
camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
// ...
Note how the width and height of the camera are nearly (if not exactly) the width and height of the native gdx window. In your case you set this size to 1 (the same size as your mesh). Try a bigger viewport dimension so that your mesh appears smaller (in perspective), something like:
/** Not too sure since this is a perspective view, but play with these values **/
float multiplier = 2; // <- to make your mesh a fraction of the size
                      //    of the camera's viewport
camera = new PerspectiveCamera(67, multiplier * aspectRatio, multiplier);

Example of OpenGL game coordinates system - done right?

Well, it is no surprise that the default OpenGL screen coordinate system is quite hard to work with: x-axis from -1.0 to 1.0, y-axis from -1.0 to 1.0, and (0.0, 0.0) in the center of the screen.
So I decided to write a wrapper for local game coordinates with the following main ideas:
Screen coords will be 0..100.0 on the x-axis and 0..100.0 on the y-axis, with (0.0, 0.0) in the bottom-left corner of the screen.
There are different screens with different aspect ratios.
If we draw a quad, it must stay a quad, not become a squashed rectangle.
By a quad I mean:
quad_vert[0].x = -0.5f;
quad_vert[0].y = -0.5f;
quad_vert[0].z = 0.0f;
quad_vert[1].x = 0.5f;
quad_vert[1].y = -0.5f;
quad_vert[1].z = 0.0f;
quad_vert[2].x = -0.5f;
quad_vert[2].y = 0.5f;
quad_vert[2].z = 0.0f;
quad_vert[3].x = 0.5f;
quad_vert[3].y = 0.5f;
quad_vert[3].z = 0.0f;
I will use glm::ortho and glm::mat4 to achieve this:
#define LOC_SCR_SIZE 100.0f

typedef struct coords_manager
{
    float SCREEN_ASPECT;
    mat4 ORTHO_MATRIX; // glm 4*4 matrix
} coords_manager;
glViewport(0, 0, screen_width, screen_height);
coords_manager CM;
CM.SCREEN_ASPECT = (float) screen_width / screen_height;
For example, our aspect ratio will be 1.7.
CM.ORTHO_MATRIX = ortho(0.0f, LOC_SCR_SIZE, 0.0f, LOC_SCR_SIZE);
Now the bottom left is (0, 0) and the top right is (100.0, 100.0).
And it works, well, mostly: we can translate our quad to (25.0, 25.0), scale it to (50.0, 50.0), and it will sit in the bottom-left corner at 50% of the screen size.
But the problem is that it isn't a quad anymore; it looks like a rectangle, because our screen width does not equal its height.
So we use our screen aspect ratio:
CM.ORTHO_MATRIX = ortho(0.0f, LOC_SCR_SIZE * CM.SCREEN_ASPECT, 0.0f, LOC_SCR_SIZE);
Yeah, we get the right shape, but there's another problem: if we position the quad at (50, 25), it ends up to the left of the screen center, because our local system is no longer 0..100 on the x-axis; it is now 0..170 (since we multiplied by our aspect ratio of 1.7). So we use the following function before setting our quad's translation:
void loc_pos_to_gl_pos(vec2* pos)
{
    pos->x = pos->x * CM.SCREEN_ASPECT;
}
And voilà, we get the right shape in the right place.
But the question is: am I doing this right?
The OpenGL screen coordinate system is quite hard to work with: x-axis from -1.0 to 1.0, y-axis from -1.0 to 1.0, and (0.0, 0.0) in the center of the screen.
Yes, but you will rarely use those coordinates directly. There is almost always a projection matrix that transforms your coordinates into that space.
we get it kinda left of the center of the screen, because our local system is not 0..100 on the x-axis anymore
That's why OpenGL maps NDC (0, 0, 0) to the screen center: if you draw a quad with coordinates placed symmetrically around the origin, it will stay in the center.
But the question is: am I doing this right?
That depends on what you want to achieve.
