How to read iSight camera current focus distance (focal length)?

Let's say you're holding your hand two feet away from a Mac's iSight camera and it's the thing in focus. I would like to be able to read this distance (either directly or by getting some other focus data that allows me to calculate the distance) from the iSight through some API. Anybody know if this is possible? I looked through the QTKit documentation and couldn't find anything about this.

It turns out that the iSight is a fixed-focus camera, which means there is no focus setting to read out.

Related

Is there any difference between HoloLens' viewport and UE4's viewport?

Our main idea is that we take a picture with the HoloLens and get the 2D coordinates of two objects (a thermos and a printer) in this picture. We then deproject these two objects' 2D screenshot coordinates back to 3D coordinates in the Unreal world and draw a box at each position.
However, as you can see, we marked the thermos (the first picture) and the printer (the second picture) with a static mesh placed at the 3D coordinates we calculated from their 2D screenshot coordinates, but both have an obvious offset down and to the left. We speculate that this problem may come from our camera center being wrong.
Have you encountered or solved this kind of problem? Can you give me some advice? Thanks a lot.
We noticed that you already have a newer question with more information on the Microsoft Q&A community platform. What GetActorLocation returns is the headset position, which has an offset from the location of the PV camera. It is therefore recommended that you use the GetPVCameraToWorldTransform API, found in the HoloLensARFunctionLibrary.h header file, to find the camera position in world space. Then GetWorldSpaceRayFromCameraPoint can help you find what exists in world space at a particular pixel coordinate. For more detail about how to implement this solution, please go through this section: Find Camera Positions in World Space

Using ARCore for Measurement

I would like to know if it is possible to measure the dimensions of an object by just pointing the camera at the object without moving the camera from left to right like we do in Google Measurement.
A depth map cannot be calculated from just a 2D camera image. A smartphone does not have a distance sensor, but it does have motion sensors, so by combining the movement of the device with changes in the input from the camera(s), ARCore can calculate depth. To put it simply, objects close to the camera appear to move around on screen more than objects further away.
To get depth data from a fixed position would require different technologies than found on current phones, such as LiDAR or an infrared beam projector and infrared camera.
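As a rough illustration of that principle (a toy triangulation formula, not ARCore's actual implementation; the function name and numbers below are made up), depth falls out of how far a point shifts on screen for a known camera movement:

```typescript
// Toy depth-from-motion sketch: nearby points shift more on screen than far
// ones for the same camera movement, so depth is inversely proportional to
// the measured shift (disparity). NOT ARCore's real algorithm.
function depthFromParallax(
  focalLengthPixels: number,  // camera focal length expressed in pixels
  baselineMeters: number,     // how far the camera moved between two frames
  disparityPixels: number     // how far the tracked feature moved on screen
): number {
  // Classic triangulation: depth = f * baseline / disparity
  return (focalLengthPixels * baselineMeters) / disparityPixels;
}

// Example: f = 1500 px, camera moved 5 cm, feature shifted 30 px -> 2.5 m away
console.log(depthFromParallax(1500, 0.05, 30));
```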

Limiting camera azimuth angle using OrbitControls

So, I'm building an architectural visualization with Three.js, and one of the things the user should be able to do is click on things and orbit around them. The problem is that the camera is able to clip through walls. I fixed that by assigning each clickable object its own limiting azimuth and polar angles. Now the problem is that azimuth angles go from -PI to +PI, and it's impossible to limit between, for example, 1.5 and -2.4, because the limit wraps the "wrong" way. I hope this graphic explains that a little better:
Heres a link to the live version:
(You control by clicking on the ground)
https://jim-fx.com/modern/
As you can see, on objects on the right side of the room the limiting works flawlessly, but on the cabinet and the vases the camera clips through the wall.
If anyone could help me, that would be amazing. Any other tips are welcome as well.
Greetings, Max
There are several solutions to your problem. One is to implement a kind of collision detection between the camera and some real or virtual walls, which stops the rotation. However, I guess you are looking for something simpler to implement.
As I don't know Three.js very well, I will give you a generic solution, which should be easy to adapt to Three.js.
The first thing is to not use the built-in Three.js orbit control, but to implement your own, where you control all the transformations. This is in fact very easy.
To create an orbitable camera, you simply have to create:
A "null" transformable object, meaning a simple transformable entity that does not embed any shape (it is not rendered and is invisible, but it exists). I hope Three.js provides such an elementary thing.
A camera, which is itself another transformable object.
Once you have this, you simply parent the camera to the "null" object. Now that the camera is parented to the "null" object, rotating the "null" object rotates the camera with it. Then, to orbit, you move the camera back away from its parent:
Null                Camera
+ - - - - - - - - - |>
Like this, the "null" object becomes the camera "look at point", and if you rotate the "null" object around Y (I believe Three.js use Y up), you controls the camera azimuth. If you rotate the "null" object in X or Z (depending coordinate system), you will control the camera altitude. Then, you even can control the camera forward-backward to close up to the "look at point" by moving your camera in its local Z axis..
Well, you now have an orbit-camera easy to control. But your problem is not yet solved: How to make this control Pi / -Pi possible in every camera initial orientation ?
Simple: You create second "null" transform object, name it "the socle", and you parent the first one to this last one: Like this, the rotation of the camera "look at point" is always local, and you can now rotate "the socle" to give your "Orbital camera" group, an initial orientation in the world space.
In fact, it is pretty like creating virtual gimbals. I hope I was clear, with pictures this would be more easy to visualize...
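Here is a minimal sketch of that rig in Three.js with TypeScript, assuming THREE.Object3D plays the role of the "null" objects (it exists for exactly this purpose); the positions and angle limits are placeholders:

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 100);

// "Socle": gives the whole rig its initial orientation in world space.
const socle = new THREE.Object3D();
// "Null" pivot: the camera's look-at point; its local rotation is the orbit.
const pivot = new THREE.Object3D();
pivot.rotation.order = 'YXZ'; // yaw (azimuth) first, then pitch (altitude)

socle.add(pivot);
pivot.add(camera);
scene.add(socle);

// Place the rig on the clicked object and aim its "zero azimuth" away from
// the wall (placeholder values).
socle.position.set(2, 1, -3);
socle.rotation.y = Math.PI / 4;

// Orbit distance: move the camera back along its local Z so it looks at the pivot.
camera.position.set(0, 0, 2.5);

// Per-drag update: clamp the pivot's *local* angles (placeholder limits).
function orbit(azimuth: number, altitude: number): void {
  const minAz = -1.2, maxAz = 1.2;    // azimuth limits in radians
  const minAlt = -0.3, maxAlt = 0.9;  // altitude limits in radians
  pivot.rotation.y = THREE.MathUtils.clamp(azimuth, minAz, maxAz);
  pivot.rotation.x = THREE.MathUtils.clamp(altitude, minAlt, maxAlt);
}
```

Because the limits are clamped on the pivot's local rotation, they keep their meaning no matter how the socle is oriented toward each clickable object, which is the point of the two-level rig.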

Unity and Infrared

I would like to make a game where I use a camera with infrared tracking, so that I can track people's heads (from a top view). For example, each player will get a helmet so that the camera or infrared sensor can track him/her.
After that I need to know the exact position of each person in Unity, to place a 3D GameObject at the player's position.
Maybe there is another workaround to get people's positions into Unity. I know I could use a Kinect, but I need to track at least 10 people at the same time.
Thanks
Note: This is not really a conclusive answer, just a collection of my thoughts regarding your question on how to transfer recorded positions into Unity.
If you really need full 3D positions, I believe you won't be happy when using only one sensor. In order to obtain depth information, which can further be used to calculate 3D positions in a reference coordinate system, you would have to use at least 2 sensors.
Another thing you could do is fix the camera position and assume that all persons are moving in the same plane (e.g. a fixed y-component), which would allow you to determine 3D positions using the projection formula, given the camera parameters (so the camera has to be calibrated).
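A minimal sketch of that idea, using generic pinhole-camera math in TypeScript (the function name is made up, and the intrinsics, camera pose, and plane height are placeholders you would obtain from calibration):

```typescript
// Back-project a pixel through a calibrated pinhole camera and intersect the
// resulting ray with the ground plane all players are assumed to move in.
type Vec3 = { x: number; y: number; z: number };

interface Intrinsics { fx: number; fy: number; cx: number; cy: number }

function pixelToGround(
  u: number, v: number,      // pixel coordinates of the detected person
  K: Intrinsics,             // camera intrinsics from calibration
  camPos: Vec3,              // camera position in world coordinates
  camRot: number[][],        // 3x3 rotation matrix: camera -> world
  groundY = 0                // assumed height of the plane everyone moves in
): Vec3 | null {
  // Ray direction in camera coordinates (z points forward).
  const dCam: Vec3 = { x: (u - K.cx) / K.fx, y: (v - K.cy) / K.fy, z: 1 };
  // Rotate the ray into world coordinates.
  const dWorld: Vec3 = {
    x: camRot[0][0] * dCam.x + camRot[0][1] * dCam.y + camRot[0][2] * dCam.z,
    y: camRot[1][0] * dCam.x + camRot[1][1] * dCam.y + camRot[1][2] * dCam.z,
    z: camRot[2][0] * dCam.x + camRot[2][1] * dCam.y + camRot[2][2] * dCam.z,
  };
  // Intersect camPos + t * dWorld with the plane y = groundY.
  if (Math.abs(dWorld.y) < 1e-9) return null; // ray parallel to the plane
  const t = (groundY - camPos.y) / dWorld.y;
  if (t < 0) return null;                     // plane is behind the camera
  return { x: camPos.x + t * dWorld.x, y: groundY, z: camPos.z + t * dWorld.z };
}
```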
What also comes to my mind is: you could try to simulate your real camera with a virtual camera in Unity. This way you can use the virtual camera to project image coordinates (coming from the real camera) into Unity's 3D world. I haven't tried this myself, but there was someone who tried it; you can have a look at that: https://community.unity.com/t5/Editor/How-to-simulate-Unity-Pinhole-Camera-from-its-intrinsic/td-p/1922835
Edit given your comment:
Okay, sticking to your soccer example, you could proceed as follows:
Setup: Say you define your playing area to be rectangular with its origin in the bottom left corner (think of UVs). You set these points in the real world (and in Unity's representation of it) as (0,0) (bottom left) and (width, height) (top right), choosing whichever unit you like (e.g. meters, as this is Unity's default unit). As your camera is stationary, you can assign the corresponding corner points in image coordinates (pixel coordinates) as well. To make things easier, work with normalized coordinates instead of pixels, so bottom left is (0,0) and top right is (1,1).
Tracking: When tracking persons in the image, you can calculate their normalized position (x,y) (with x and y in [0,1]). These normalized positions can be transferred into Unity's 3D space (in Unity you will have a playable area of the same width and height) by simply calculating a Vector3 as (x*width, 0, y*height) (in Unity, x points right, y points up and z points forward).
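A minimal sketch of that mapping, written in TypeScript for illustration (in Unity the same line would produce a C# Vector3); the function name is made up:

```typescript
interface Vec3 { x: number; y: number; z: number }

// Map a normalized image position onto the playing area on the ground plane.
function normalizedToWorld(
  u: number,        // normalized image x in [0, 1], 0 = left edge of the area
  v: number,        // normalized image y in [0, 1], 0 = bottom edge of the area
  width: number,    // playing-area width in meters
  height: number    // playing-area depth in meters
): Vec3 {
  // x points right, z points forward, y stays 0 because everyone is on the ground.
  return { x: u * width, y: 0, z: v * height };
}

// Example: a player detected at the centre of the image on a 20 m x 10 m area
// ends up at world position (10, 0, 5).
console.log(normalizedToWorld(0.5, 0.5, 20, 10));
```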
Edit on Tracking:
For top-view tracking in a game, I would say you are on the right track with using some sort of helmet, which enables you to use marker-based tracking (in my opinion, markerless multi-target tracking is not reliable enough for use in a video game). If you want to learn more about object tracking, there are lots of resources in the field of computer vision.
Independent of the sensor you are using (IR or camera), you would create a unique marker for each helmet, enabling you to identify each helmet (and thus each player). A marker in this case is some unique pattern that can be recognized by an algorithm in each recorded frame. In IR you can arrange square IR markers to form a specific pattern, and for normal cameras you can use markers like QR codes (there are also augmented-reality libraries that offer functionality for creating and recognizing markers, e.g. ArUco or ARToolKit, although I don't know if they offer C# bindings; I have only used ArUco with C++ a while ago).
When you have your markers of choice, the tracking procedure is pretty straightforward (a rough sketch follows the list below). For each recorded image:
- detect all markers in the current image (these correspond to all players currently visible)
- follow the steps from my last edit using the detected positions
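A hypothetical per-frame loop tying those two steps together (detectMarkers stands in for whatever marker library you end up using; its name, input type, and return shape are made up for this sketch, and the playing-area dimensions are placeholders):

```typescript
// One detection = one visible helmet, reported in normalized image coordinates.
interface Detection { markerId: number; u: number; v: number } // u, v in [0, 1]

// Placeholder for the marker library's detection call; raw frame bytes come
// from your capture pipeline.
declare function detectMarkers(frame: Uint8Array): Detection[];

const AREA_WIDTH = 20;   // playing-area width in meters (placeholder)
const AREA_HEIGHT = 10;  // playing-area depth in meters (placeholder)

function processFrame(frame: Uint8Array): Map<number, [number, number, number]> {
  const positions = new Map<number, [number, number, number]>();
  for (const d of detectMarkers(frame)) {
    // Same mapping as in the setup/tracking steps above: normalized image
    // coordinates scaled to the playing area, with y = 0 on the ground plane.
    positions.set(d.markerId, [d.u * AREA_WIDTH, 0, d.v * AREA_HEIGHT]);
  }
  return positions; // marker id -> world position to hand to the game
}
```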
I hope that helps, feel free to contact me again.

First person movement tracking using simple camera

The situation: I need to be able to track a hovering drone's translation (not height) and rotation over the ground using a downwards-facing camera. I don't know where to start looking. Can anyone with experience point me to some theory or resources? I'm looking for the type of algorithm an optical mouse would use, but am not having much luck so far. Most results describe tracking an object in a fixed frame; in my case the environment is relatively static and the camera moves.
