Convert from 1280x720 to UIView dimensions - xcode

I have an app that finds QR finder marks from the AVCaptureDevice sample buffer delegate. Now I am trying to put boxes on the screen to cover the QR marks. However, I am having trouble converting between 1280x720 pixel resolution and the cameraView size.
I'm trying to write a method that converts a CGRect based on these parameters (and the fact that the view coordinate system is upside-down), but it won't work. It might have something to do with the orientation of the camera device as well. Here is my code for the converting method:
-(CGRect) convertRect:(CGRect) oldRect From90DegreeRotatedCameraSize1280x720ToUIViewSize:(CGSize) viewSize
{
    //remember that we are using a gravity of resize aspect fill
    double xScale = viewSize.width / 720;
    double yScale = viewSize.height / 1280;
    if (xScale < yScale)
        return CGRectMake(oldRect.origin.x * xScale, viewSize.height - oldRect.origin.y * xScale, oldRect.size.height * xScale, oldRect.size.width * xScale);
    else
        return CGRectMake(oldRect.origin.x * yScale, viewSize.height - oldRect.origin.y * yScale, oldRect.size.height * yScale, oldRect.size.width * yScale);
}
Anyone got an elegant solution to this relatively simple spatial coordinate problem?
EDIT -
I did an NSLog of the connection's video orientation and it turns out that it never changes from 1 (which I think is enum'ed to portrait orientation). From this, it should be easier to find a solution because the coordinates do not change across device orientations.
Any ideas, coder community?

Figured it out:
When iOS aspect-fills a view, it centers the content so that the overflowing sides are cropped equally. To convert to view coordinates, you need to scale and THEN translate in the direction that was cropped:
return CGRectMake(oldRect.origin.x * xScale, oldRect.origin.y * xScale - (1280 * xScale - viewSize.height) / 2, oldRect.size.height * xScale, oldRect.size.width * xScale);
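For completeness, here is a sketch of what the full conversion method might look like with that translation folded in (the method name is mine, not from the original post). It assumes the preview layer uses resize-aspect-fill and that the 1280x720 buffer is rotated 90 degrees, so it is presented as 720 wide by 1280 tall before scaling:
//Sketch: full aspect-fill conversion for the rotated 1280x720 buffer
-(CGRect) convertRectToAspectFilledView:(CGRect) oldRect viewSize:(CGSize) viewSize
{
    double xScale = viewSize.width / 720.0;
    double yScale = viewSize.height / 1280.0;
    //aspect fill scales by the larger of the two factors...
    double scale = MAX(xScale, yScale);
    //...and then crops the overflow equally on both sides
    double cropX = (720.0 * scale - viewSize.width) / 2.0;
    double cropY = (1280.0 * scale - viewSize.height) / 2.0;
    //width and height are swapped because the buffer is rotated 90 degrees
    return CGRectMake(oldRect.origin.x * scale - cropX,
                      oldRect.origin.y * scale - cropY,
                      oldRect.size.height * scale,
                      oldRect.size.width * scale);
}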

Related

Unity3d UI issue with Xiaomi

On Xiaomi devices, an image is drawn outside of the camera's letterbox.
On other devices everything is correct.
I attached both Samsung and Xiaomi screenshots; the one that looks wrong is the Xiaomi, and the Samsung looks fine.
float targetaspect = 750f / 1334f;
// determine the game window's current aspect ratio
float windowaspect = (float)Screen.width / (float)Screen.height;
// current viewport height should be scaled by this amount
float scaleheight = windowaspect / targetaspect;
// obtain camera component so we can modify its viewport
Camera camera = GetComponent<Camera>();
// if scaled height is less than current height, add letterbox
if (scaleheight < 1.0f)
{
    Rect rect = camera.rect;
    rect.width = 1.0f;
    rect.height = scaleheight;
    rect.x = 0;
    rect.y = (1.0f - scaleheight) / 2.0f;
    camera.rect = rect;
}
Try setting the image's wrap mode to clamp instead of repeat.
This will give you black borders, but you won't have that weird texture.
I don't know what caused the problem, but I solved it in a roundabout way: I just added a second camera that displays only a black background. Only my main camera's viewport is letterboxed, not the second camera, so the display ends up looking correct.
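For illustration, a minimal sketch of that second-camera trick (the component name LetterboxBackground is mine, not from the answer): the extra camera just clears the full screen to black at a lower depth than the letterboxed main camera, so the area outside the letterbox stays black.
using UnityEngine;

// Attach to the main (letterboxed) camera; adds a background camera
// that only clears the whole screen to black behind it.
public class LetterboxBackground : MonoBehaviour
{
    void Start()
    {
        Camera bg = new GameObject("LetterboxBackgroundCamera").AddComponent<Camera>();
        bg.clearFlags = CameraClearFlags.SolidColor; // only clear, draw nothing
        bg.backgroundColor = Color.black;
        bg.cullingMask = 0;                          // render no layers
        bg.rect = new Rect(0f, 0f, 1f, 1f);          // full screen, not letterboxed
        bg.depth = GetComponent<Camera>().depth - 1; // render before the main camera
    }
}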

Fit Object3D inside Variable Width Canvas Three.js

I have an object3D of width 100 in my scene centred at the origin. The camera has an FOV of 50 and I would like this to remain constant. I am currently positioning the camera with
var camDistance = (100/2)/Math.tan(50/2 * Math.PI/180);
var camHeight = camDistance * (6/25);
camera.position.set(0,camHeight,camDistance);
camera.lookAt(0,0,0);
This looks good on larger displays, but on mobile the object extends past the edges of the screen. I want to vary the distance from the camera to the object so that the object always occupies the same percentage of the screen horizontally, no matter what size viewport it is loaded on. What I thought should work is
var camDistance = (100/2)/Math.tan(50/2 * Math.PI/180) * (1700/window.innerWidth);
since the object occupies about 1700px at this FOV. This sort of works, except the object is now too far away on very small screen widths and too close on very large screen widths.
Is there a way to actually make the object occupy the same horizontal percentage of the viewport instead of the poor approximation that I have come up with? Preferably a solution that avoids the magical-ness of 1700px.
So if I understand you correctly, you are only interested in fitting the width, not the height, of the object and screen.
It would have helped if you had added an HTML snippet of the problem in question, so I could try solutions against your application, but this is what I could come up with:
let dz = objectWidth / (2 * Math.tan((camera.fov * Math.PI / 180) / 2) * camera.aspect); // camera.fov is in degrees
camera.position.set(0, camHeight, margin + dz);
Here, margin is some z value that you can specify. You also need to make sure that camera.aspect corresponds to the actual aspect ratio of the window (below is how I would dynamically update it for a fullscreen application):
function onResize() {
    let width = window.innerWidth;
    let height = window.innerHeight;
    camera.aspect = width / height;
    renderer.setSize(width, height);
    camera.updateProjectionMatrix();
}
This works in a sandbox I set up for myself, but please let me know if it can be applied to your application too or if there is something I haven't taken into account.
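Putting the two pieces together, here is a rough sketch of a helper (fitCameraToWidth is my name, not from the answer) that can be called from the resize handler, so the object keeps the same horizontal share of the viewport whenever the window changes:
// Positions the camera so that objectWidth spans the full viewport width,
// assuming the object is centred at the origin.
function fitCameraToWidth(camera, objectWidth, camHeight, margin) {
    let vFov = camera.fov * Math.PI / 180;              // camera.fov is in degrees
    let dz = objectWidth / (2 * Math.tan(vFov / 2) * camera.aspect);
    camera.position.set(0, camHeight, (margin || 0) + dz);
    camera.lookAt(0, 0, 0);
}

function onResize() {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(window.innerWidth, window.innerHeight);
    fitCameraToWidth(camera, 100, camHeight);           // the object is 100 wide
}
window.addEventListener('resize', onResize);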

How to zoom in AR

I am using Unity to learn AR, and I am trying to zoom in the camera view so that I can select more distant points in space more accurately.
Is there an easy way to do this? I noticed that the measure-it app from Google does not support zooming.
The answer provided by gtp works wonderfully.
Thank you.
In a Tango Unity project, you can make the camera view zoom by modifying the UVs on the camera feed using VideoOverlayProvider.SetARScreenUVs and adjusting your camera frustum in a corresponding way.
As an example, start from the Unity Tango Examples and modify TangoARScreen.cs so that in _SetRenderAndCamera the UVs are adjusted before they are passed to _MaterialUpdateForIntrinsics (which in turn passes them through to VideoOverlayProvider.SetARScreenUVs). This is the snippet I added before the call to _MaterialUpdateForIntrinsics to verify that this works in practice:
// Current visible UV extents (the offsets crop equally from both sides)
float width = 1.0f - 2.0f * m_uOffset;
float height = 1.0f - 2.0f * m_vOffset;
// Shrink the visible region by the zoom factor...
float newWidth = width / m_zoomLevel;
float newHeight = height / m_zoomLevel;
// ...and recentre it so the zoom happens around the image centre
m_uOffset = (1.0f - newWidth) / 2.0f;
m_vOffset = (1.0f - newHeight) / 2.0f;
If you build and run the AugmentedReality scene on your Tango device with these changes you should see the same pin-placement sample, but with the scene zoomed according to m_zoomLevel.

Parallax scrolling in Sprite LibGDX

I want to parallax scroll a Texture behind a Sprite with fixed width and height.
The problem is I need to scroll the Texture only within a given width and height, not all the way across the screen. I need something like a window view onto this texture.
I could overlay the rest of the screen with black areas, but I guess there has to be a better solution ;-)
Currently I'm doing this:
sprite.setX(sprite.getX() + (OVERLAY_ANIMATION_SPEED * delta));
sprite2.setX(sprite2.getX() + (OVERLAY_ANIMATION_SPEED * delta));
and resetting the sprite once x is bigger than the screen width. But I have a smaller area inside the screen in which the scrolling should appear, not from one edge of the screen to the other.
I hope somebody has a hint for me on how to achieve this.
I'm using a glViewport to achieve something similar:
public void setViewPort(float dx, float dy, float sx, float sy)
{
    Gdx.gl.glViewport((int) (screenWidth * dx), (int) (screenHeight * dy),
                      (int) (screenWidth * sx), (int) (screenHeight * sy));
}
So:
setViewPort(0, 0, 1, 1);
would render fullscreen and:
setViewPort(0.2f, 0.2f, 0.6f, 0.6f);
would render a 60%-sized 'sub-window' viewport at the 20% position
(thus centered); nothing is rendered outside that window (it is clipped by OpenGL). Hope this helps someone!
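For illustration, a rough sketch of how this might be used in the render loop from the question (batch is assumed to be the SpriteBatch the sprites are drawn with): draw the scrolling sprites only inside the sub-window, then restore the full-screen viewport for everything else.
// Inside render(): restrict the viewport while drawing the parallax layers
setViewPort(0.2f, 0.2f, 0.6f, 0.6f);    // everything outside this rect is clipped
batch.begin();
sprite.draw(batch);                      // the parallax-scrolled sprites, as before
sprite2.draw(batch);
batch.end();
setViewPort(0f, 0f, 1f, 1f);             // back to full screen for the rest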

Calculating frame and aspect ratio guides to match cameras

I'm trying to visualize film camera crop and aspect ratio in Three.js. Please bear with me, it's a math problem, and I can't describe it in fewer words...
Instead of just using CameraHelper, I'm using three slightly modified CameraHelper objects for each camera. The helper lines can be seen when looking at a camera (the cone), and when looking through a camera they effectively create guide lines for the current camera.
Frame helper (bluish one with sides rendered). This is configured and supposed to be what an actual camera sees given its focal length and sensor or film dimensions. Calculated in getFOVFrame.
Monitor helper (white). Our frame aspect ratio here is 1.5. For example, if we plan to shoot a 2.35 (cinemascope) aspect ratio film with a camera of aspect ratio 1.5, this shows the crop area of the frame. So it needs to exactly fit the frame, with extra space either above and below or at the sides, but not both. Calculated in getFOVMonitor.
Screen helper (purple). We want the full thing visible in the browser, and if the browser window dimensions/aspect ratio are different, we adjust the actual rendered Three.js camera so that it fits into the browser window and dimensions. So this helper always has the aspect ratio of the current browser window, and a focal length such that it fits the frame and monitor helpers. Calculated in getFOVScreen.
So based on our actual preferred camera (the frame helper), we need to calculate the monitor camera and adjust its fov so that it exactly fits inside the frame camera. Then we also need to calculate the screen camera and adjust its fov so that the frame camera exactly fits inside it.
My current solution appears almost correct, but there is something wrong. With long lenses (small fov, big focal length) it seems correct:
Looking through, looks correct:
Both the current camera, and the camera in front look about correct:
Looking through, looks correct:
But with wide lenses (big fov, small focal length) the solution starts to break; there is extra space around the white monitor helper, for example:
Looking through, the white box should touch the bluish one from the sides:
Both the current camera, and the camera in front look wrong, the white boxes should touch the sides of blue box (both have very wide lens):
Looking through (very wide lens), looks wrong, white box should touch blue box and blue box should touch purple box:
So I think I'm calculating the various cameras wrong, although the result seems almost "close enough".
Here's the code that returns the vertical FOV, the horizontal FOV and the aspect ratio, which are then used to configure the cameras and helpers:
// BLUE camera fov, based on physical camera settings (sensor dimensions and focal length)
var getFOVFrame = function() {
    var fov = 2 * Math.atan( sensor_height / ( focal_length * 2 ) ) * ( 180 / Math.PI );
    return fov;
}
var getHFOVFrame = function() {
    return getFOVFrame() * getAspectFrame();
}
// PURPLE screen fov, should be able to contain the frame
var getFOVScreen = function() {
    var fov = getFOVFrame();
    var hfov = fov * getAspectScreen();
    if (hfov < getHFOVFrame()) {
        hfov = getHFOVFrame();
        fov = hfov / getAspectScreen();
    }
    return fov;
}
var getHFOVScreen = function() {
    return getFOVScreen() * getAspectScreen();
}
// WHITE crop area fov, should fit inside blue frame camera
var getFOVMonitor = function() {
    var fov = getFOVFrame();
    var hfov = fov * getAspectMonitor();
    if (hfov > getHFOVFrame()) {
        hfov = getHFOVFrame();
        fov = hfov / getAspectMonitor();
    }
    return fov;
}
var getHFOVMonitor = function() {
    return getFOVMonitor() * getAspectMonitor();
}
var getAspectScreen = function() {
    return screen_width / screen_height;
}
var getAspectFrame = function() {
    return sensor_width / sensor_height;
}
var getAspectMonitor = function() {
    return monitor_aspect;
}
Why does this produce incorrect results when using large FOV / wide lenses? getFOVScreen and especially getFOVMonitor are the suspects.
Your equation var hfov = fov * getAspectScreen(); is not correct.
The relationship between the vertical FOV (vFOV) and the horizontal FOV (hFOV) is given by the following equations:
hFOV = 2 * Math.atan( Math.tan( vFOV / 2 ) * aspectRatio );
and likewise,
vFOV = 2 * Math.atan( Math.tan( hFOV / 2 ) / aspectRatio );
In these equations, vFOV and hFOV are in radians; aspectRatio = width / height.
In three.js, the PerspectiveCamera.fov is the vertical one, and is in degrees.
three.js r.59
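Applied to the helper functions from the question, here is a sketch of what the corrected conversions might look like (vToH and hToV are helper names I introduced; the degree/radian conversions are explicit because PerspectiveCamera.fov is in degrees). getHFOVScreen and getHFOVMonitor would use vToH in the same way.
var DEG2RAD = Math.PI / 180;
var RAD2DEG = 180 / Math.PI;

// vertical fov (degrees) -> horizontal fov (degrees); a tan relation, not a linear scale
var vToH = function( vfov, aspect ) {
    return 2 * Math.atan( Math.tan( vfov * DEG2RAD / 2 ) * aspect ) * RAD2DEG;
}
// horizontal fov (degrees) -> vertical fov (degrees)
var hToV = function( hfov, aspect ) {
    return 2 * Math.atan( Math.tan( hfov * DEG2RAD / 2 ) / aspect ) * RAD2DEG;
}

var getHFOVFrame = function() {
    return vToH( getFOVFrame(), getAspectFrame() );
}
// PURPLE screen fov: widen the frame fov, if needed, so the frame fits inside the screen
var getFOVScreen = function() {
    var fov = getFOVFrame();
    if ( vToH( fov, getAspectScreen() ) < getHFOVFrame() ) {
        fov = hToV( getHFOVFrame(), getAspectScreen() );
    }
    return fov;
}
// WHITE monitor fov: narrow the frame fov, if needed, so the crop fits inside the frame
var getFOVMonitor = function() {
    var fov = getFOVFrame();
    if ( vToH( fov, getAspectMonitor() ) > getHFOVFrame() ) {
        fov = hToV( getHFOVFrame(), getAspectMonitor() );
    }
    return fov;
}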
