PTZ for ONVIF IP Camera

I have an Android application that connects to and views an IP camera.
Now I'm trying to add PTZ support to the application.
Below are the requirements for PTZ.
1) When the user drags the screen to any coordinates in the application, the IP camera should move (pan/tilt) accordingly.
2) When the user zooms in/out on a particular area of the application, the IP camera should zoom to that area.
The problems I'm facing are:
1) I'm not able to map the x,y coordinates of the user action to the ONVIF pan/tilt vector, because the ONVIF pan/tilt vector's X and Y values range from -1 to 1, and I don't know how to cover the camera's full range of motion (360 degrees horizontal, 70 degrees vertical). One possible mapping is sketched after this list.
2) How do I zoom in/out on a particular area, given that the ONVIF zoom vector does not provide a way to specify the area (X and Y coordinates) to zoom to?
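Here is a rough sketch of one way both problems could be handled, as plain JavaScript math. Everything in it is an assumption on my part: the function names are made up, the generic pan space is assumed to map -1..1 onto -180..180 degrees (and tilt onto -35..35 degrees), and the zoom shortcut treats the normalised zoom position as proportional to magnification. The real ranges come from the camera's GetConfigurationOptions response.

// Hypothetical mapping from a drag gesture to an ONVIF AbsoluteMove
// pan/tilt vector, assuming -1..1 pan corresponds to -180..180 degrees
// and -1..1 tilt to -35..35 degrees.
function dragToPanTilt(dragX, dragY, viewWidth, viewHeight,
                       panDeg, tiltDeg, fovHDeg, fovVDeg) {
  // A drag across the full view should rotate by one field of view.
  const newPanDeg  = clamp(panDeg  + (dragX / viewWidth)  * fovHDeg, -180, 180);
  const newTiltDeg = clamp(tiltDeg + (dragY / viewHeight) * fovVDeg, -35, 35);
  // Normalise into the ONVIF generic position space (-1..1).
  return { x: newPanDeg / 180, y: newTiltDeg / 35 };
}

// For area zoom: first pan/tilt so the selected rectangle is centred
// (reusing the mapping above), then zoom in by the view-to-selection
// ratio, clamped to the 0..1 generic zoom space. If currentZoom is 0,
// seed it with a small value first.
function regionToZoom(selectionWidth, viewWidth, currentZoom) {
  return clamp(currentZoom * (viewWidth / selectionWidth), 0, 1);
}

function clamp(v, lo, hi) { return Math.min(hi, Math.max(lo, v)); }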

Related

Using ARCore for Measurement

I would like to know if it is possible to measure the dimensions of an object by just pointing the camera at it, without moving the camera from left to right like we do in Google Measurement.
A depth map cannot be calculated from a single 2D camera image. A smartphone does not have a distance sensor, but it does have motion sensors, so by combining the movement of the device with changes in the input from the camera(s), ARCore can calculate depth. To put it simply, objects close to the camera move around on screen more than objects further away.
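As a back-of-the-envelope illustration of that parallax principle (plain math, not the ARCore API; the function name and example numbers are invented):

// Depth from parallax by similar triangles: a feature that shifts by
// disparityPx on screen while the camera translates baselineMeters
// sideways lies at roughly focalLengthPx * baselineMeters / disparityPx.
function depthFromParallax(focalLengthPx, baselineMeters, disparityPx) {
  if (disparityPx === 0) return Infinity; // no apparent motion: effectively at infinity
  return (focalLengthPx * baselineMeters) / disparityPx;
}

// e.g. a 500 px focal length, 5 cm of device movement and a 10 px shift
// put the feature at about (500 * 0.05) / 10 = 2.5 m.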
To get depth data from a fixed position would require technologies different from those found on current phones, such as LiDAR or an infrared beam projector paired with an infrared camera.

SceneKit - Fix screen frame to camera edges

I'm working on an app that renders a 3D scene simulating a real space on an iPhone, making its screen appear to be a hollow box, as seen in the sketch below:
(note the camera position order in the sketch)
The problem is how to calculate the camera parameters so that the box looks genuinely fixed to the screen edges.
Is this feasible through SceneKit?
In this configuration the camera's zNear plane corresponds to the screen of the iPhone. From that you can derive a z position for the camera from its field of view and the screen's width (see here).
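A minimal sketch of that derivation, as plain JavaScript math rather than SceneKit API (the screen width and field of view in the example are assumed values):

// With a horizontal field of view of fovDeg, the view frustum spans the
// full screen width exactly when the camera sits at distance
// z = (width / 2) / tan(fovDeg / 2) behind the screen plane.
function cameraDistanceForScreen(screenWidth, fovDeg) {
  const halfFov = (fovDeg * Math.PI / 180) / 2;
  return (screenWidth / 2) / Math.tan(halfFov);
}

// e.g. a 0.062 m wide screen and a 60 degree horizontal FOV give
// z = 0.031 / tan(30 deg) ≈ 0.0537 m; setting zNear to the same value
// makes the near plane coincide with the screen.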

Project Tango - AD to SOS frame pair - unexpected rotation?

I'm trying to use the coordinate frame pair START_OF_SERVICE to AREA_DEFINITION, post-localisation to the AD. I'd expect this to allow me to reconcile the original SOS origin with a proper location, and I'd like to use the incoming data in the pose data.
My test process is to create the ADF by centering my device in my area at a known position and orientation in the world, then creating the ADF file. When I run my test app, if I provide a Unity world-space offset that matches my ADF origin, everything looks exactly as expected. E.g. if I create my ADF origin at (0, 1, 0) in Unity world-space coordinates, by centering the physical device at (0, 0) on my ground plane and 1 m up the Y axis, it matches what I'd expect in my Unity scene given a starting AD frame offset of (0, 1, 0).
If I then start the device at exactly the same real-world position, so that the SOS frame should exactly match the AD frame, then when the app localises to the AD I get a translation of (close to) zero but a rotation of 90 degrees around the Z axis in the quaternion.
As both the base and target frame share the same coordinate space, I'd expect the translation and rotation, given an SOS origin that matches the AD origin pretty accurately, to be a zero offset and an identity matrix.
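For concreteness, here is what the expected and observed rotations look like as quaternions; this is generic (x, y, z, w) math, nothing Tango-specific:

// Identity rotation (what is expected): (x, y, z, w) = (0, 0, 0, 1).
// A rotation of `deg` degrees about the Z axis (what is reported):
function quatAboutZ(deg) {
  const half = deg * Math.PI / 360; // half-angle, in radians
  return { x: 0, y: 0, z: Math.sin(half), w: Math.cos(half) };
}
// quatAboutZ(90) ≈ { x: 0, y: 0, z: 0.707, w: 0.707 }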
Can anyone shed any light on what I'm doing wrong here? Thanks in advance!

MKMapView fix user at bottom of map

I'm creating a navigation app and want the map to show the user's location at the bottom of the map, like the Maps app does when routing/navigating. However, you can only determine which coordinate to focus on by setting the centre coordinate.
Therefore I need to pick an offset distance and bearing, and from them calculate the coordinate that, used as the centre coordinate, will display the user's location at the bottom of the map.
This is all complicated by the 3D view, where the camera pitch and altitude affect the apparent distance between the offset coordinate and the user's location.
Is there an easier way to do this or does someone know how to calculate this?
Thanks (a lot!)
D
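For the flat (2D) part of the calculation, the standard destination-point formula gives the offset centre. This is a plain-math sketch rather than MapKit API, and it ignores the pitched-camera complication described above:

// Destination point a given distance and bearing from the user's location,
// on a spherical Earth. Centring the map there pushes the user's location
// toward the bottom edge while travelling along bearingDeg.
function offsetCentre(latDeg, lonDeg, bearingDeg, distanceMeters) {
  const R = 6371000;                      // mean Earth radius, metres
  const d = distanceMeters / R;           // angular distance
  const br = bearingDeg * Math.PI / 180;
  const lat1 = latDeg * Math.PI / 180;
  const lon1 = lonDeg * Math.PI / 180;
  const lat2 = Math.asin(Math.sin(lat1) * Math.cos(d) +
                         Math.cos(lat1) * Math.sin(d) * Math.cos(br));
  const lon2 = lon1 + Math.atan2(Math.sin(br) * Math.sin(d) * Math.cos(lat1),
                                 Math.cos(d) - Math.sin(lat1) * Math.sin(lat2));
  return { latitude: lat2 * 180 / Math.PI, longitude: lon2 * 180 / Math.PI };
}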

WebGL/OpenGL: Rotate camera according to device orientation

I have a web application in which I am trying to show a plane of map image tiles in 3D space.
I want the plane to stay horizontal however the device is rotated; the final effect is similar to this marine compass demo.
I can now capture the device orientation through the W3C device orientation API on mobile devices, and I have successfully rendered the map image tiles.
My problem is that I lack the essential math knowledge of how to rotate the camera correctly according to the device orientation.
I am using the Three.js library. I tried to set the rotation of the camera object directly from alpha/beta/gamma (converted to radians), but it's not working, since the camera seems to always rotate around the world axes used by OpenGL/WebGL, not around its local axes.
I came across the idea of taking a point 100 units in front of the camera and rotating that point, relative to the camera position, by the angles supplied by the device orientation API. But I don't know how to implement this either.
Can anyone help me with what I want to achieve?
EDIT:
MISTAKE CORRECTION:
For anyone interested in implementing similar things: I found out that Three.js objects use local-space axes by default, not world-space axes; I was wrong. The official documentation states that by setting "object.matrixAutoUpdate = false", then modifying "object.matrixWorld" and calling "object.updateWorldMatrix()", you can manually move/rotate/scale the object along the world axes. However, this does not work when the object has a parent: the local-axis matrix will always be used when the object has a parent.
According to the W3C Device Orientation Event Specification, the angles alpha, beta and gamma form a set of intrinsic Tait-Bryan angles of type Z-X'-Y''.
The Three.js camera also rotates according to intrinsic angles. However the default order in which the rotations are applied is:
camera.rotation.order = 'XYZ'.
What you need to do, then, is to set:
camera.rotation.order = 'ZXY'; // or whatever order is appropriate for your device
You then set the camera rotation like so:
camera.rotation.x = beta * Math.PI / 180;
camera.rotation.y = gamma * Math.PI / 180;
camera.rotation.z = alpha * Math.PI / 180;
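For completeness, wiring this to the browser event might look like the following (a minimal sketch assuming the camera set up above; the alpha/beta/gamma field names come from the W3C spec):

window.addEventListener('deviceorientation', function (event) {
  // Apply the intrinsic Z-X'-Y'' angles, converting degrees to radians.
  camera.rotation.z = event.alpha * Math.PI / 180;
  camera.rotation.x = event.beta  * Math.PI / 180;
  camera.rotation.y = event.gamma * Math.PI / 180;
});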
Disclaimer: I do not have your device type. This is an educated guess based on my knowledge of three.js.
EDIT: Updated for three.js r.65