I would like to make a game where I use a camera with infrared tracking to track people's heads from a top view. For example, each player would wear a helmet so that the camera or infrared sensor can track him/her.
After that, I need to know the exact position of each person in Unity, so that I can place a 3D GameObject at the player's position.
Maybe there is another workaround to get people's positions into Unity. I know I could use a Kinect, but I need to track at least 10 people at the same time.
Thanks
Note: This is not really a complete answer, just a collection of my thoughts regarding your question on how to transfer recorded positions into Unity.
If you really need full 3D positions, I believe you won't be happy using only one sensor. To obtain depth information, which can then be used to calculate 3D positions in a reference coordinate system, you would have to use at least two sensors.
Another option is to fix the camera position and assume that all persons move in the same plane (i.e. a fixed y-component), which would allow you to determine 3D positions using the projection formula, given the camera parameters (so the camera has to be calibrated).
What also comes to mind: you could try to simulate your real camera with a virtual camera in Unity. This way you can use the virtual camera to project image coordinates (coming from the real camera) into Unity's 3D world. I haven't tried this myself, but someone else has; you can have a look here: https://community.unity.com/t5/Editor/How-to-simulate-Unity-Pinhole-Camera-from-its-intrinsic/td-p/1922835
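As a rough illustration of that idea, here is a sketch (untested) that assumes the virtual camera has already been configured to match the real camera's pose and intrinsics, and that players move on the ground plane y = 0:

```csharp
using UnityEngine;

public class ImageToWorld : MonoBehaviour
{
    public Camera virtualCamera; // set up to match the real camera

    // Projects a normalized image coordinate (x, y in [0,1]) onto the
    // ground plane y = 0 and returns the resulting world position.
    public Vector3 Project(Vector2 imagePoint)
    {
        Ray ray = virtualCamera.ViewportPointToRay(new Vector3(imagePoint.x, imagePoint.y, 0f));
        Plane ground = new Plane(Vector3.up, Vector3.zero);
        float distance;
        if (ground.Raycast(ray, out distance))
            return ray.GetPoint(distance);
        return Vector3.zero; // ray is parallel to the ground; should not happen here
    }
}
```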
Edit given your comment:
Okay, sticking with your soccer example, you could proceed as follows:
Setup: Say you define your playing area to be rectangular, with its origin in the bottom-left corner (think of UVs). You set these points in the real world (and in Unity's representation of it) as (0, 0) (bottom left) and (width, height) (top right), using whatever unit you like (e.g. meters, which is Unity's default unit). As your camera is stationary, you can assign the corresponding corner points in image coordinates (pixel coordinates) as well. To make things easier, work with normalized coordinates instead of pixels, so the bottom left is (0, 0) and the top right is (1, 1).
Tracking: When tracking persons in the image, you can calculate their normalized position (x, y) (with x and y in [0, 1]). These normalized positions can be transferred into Unity's 3D space (in Unity you will have a playable area of the same width and height) by simply calculating a Vector3 as (x * width, 0, y * height) (in Unity, x points right, y points up and z points forward).
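In C#, that mapping is a one-liner (width and height being whatever you measured for your playing area; the values below are just examples):

```csharp
using UnityEngine;

public class PlayAreaMapper : MonoBehaviour
{
    // Real-world size of the playing area in meters (example values).
    public float width = 20f;
    public float height = 10f;

    // Converts a normalized tracking position (x, y in [0,1]) into a
    // world position on Unity's ground plane.
    public Vector3 ToWorld(Vector2 normalized)
    {
        return new Vector3(normalized.x * width, 0f, normalized.y * height);
    }
}
```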
Edit on Tracking:
For top-view tracking in a game, I would say you are on the right track with using some sort of helmet, which enables marker-based tracking (in my opinion, markerless multi-target tracking is not reliable enough for use in a video game). If you want to learn more about object tracking, there are lots of resources in the field of computer vision.
Independent of the sensor you are using (IR or camera), you would create a unique marker for each helmet, enabling you to identify each helmet (and thus each player). A marker in this case is a unique pattern that can be recognized by an algorithm in each recorded frame. With IR you can arrange square IR markers to form a specific pattern, and with normal cameras you can use markers like QR codes. (There are also augmented-reality libraries that offer functionality for creating and recognizing markers, e.g. ArUco or ARToolKit, although I don't know whether they offer C# bindings; I have only used ArUco with C++ a while ago.)
Once you have your markers of choice, the tracking procedure is pretty straightforward. For each recorded image (see the sketch after this list):
- detect all markers in the current image (these correspond to all players currently visible)
- follow the steps from my last edit using the detected positions
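Sketched in C#, with DetectMarkers, marker.Id and marker.Position standing in for whatever your tracking library provides, 'mapper' being the PlayAreaMapper from above, and 'players' mapping marker IDs to the GameObjects representing the players:

```csharp
// Hypothetical per-frame loop; all detection-related names are placeholders.
void Update()
{
    foreach (var marker in DetectMarkers()) // all players currently visible
    {
        // marker.Position is the normalized (x, y) image position.
        players[marker.Id].transform.position = mapper.ToWorld(marker.Position);
    }
}
```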
I hope that helps; feel free to contact me again.
I have a custom Photoshop image of an area.
It is mostly rectangular, so I have taken the coordinates of the 4 corners using Google Maps.
I would like to translate these 4 corners to x and y coordinates on that image alone, and then use Core Location to display the user's location within those 4 corners (anything outside of that is not necessary).
Can I do this without using a map view? I really don't need any other functionality outside that small area. Or do I have to overlay a custom tile on MapKit, limit the view and zoom to that area, and work from there? (That seems more resource-demanding than a Mercator-coordinate way of doing this.)
Any good tutorials appreciated!
While it's certainly possible to translate latitude and longitude to x,y coordinates on an image, it's usually more beneficial to use an MKOverlayView (as you described in your question) due to the free functionality you get: current position, annotations, zoom, etc.
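For reference, the direct translation boils down to linear interpolation between the corner coordinates. Here is a minimal sketch (in C#-style code; the arithmetic ports directly to Swift or Objective-C), assuming the image is north-aligned and the area is small enough to ignore Mercator distortion:

```csharp
// Illustrative helper, not an existing API: maps a lat/lon pair to pixel
// coordinates on the image, given the lat/lon of the image's edges.
static (double x, double y) GeoToImage(
    double lat, double lon,
    double latTop, double latBottom,   // latitudes of the top and bottom edges
    double lonLeft, double lonRight,   // longitudes of the left and right edges
    double imageWidth, double imageHeight)
{
    double x = (lon - lonLeft) / (lonRight - lonLeft) * imageWidth;
    // Image y grows downward while latitude grows upward, hence the flip.
    double y = (latTop - lat) / (latTop - latBottom) * imageHeight;
    return (x, y);
}
```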
One of the projects I've worked on in recent years benefited from having our custom map graphic surrounded by the Apple (or Google) map tiles -- it ended up giving everything a little bit more geographical context.
I'm creating a navigation app and want the map to show the user's location at the bottom of the map, like in the Maps app when routing/navigating. However, you can only determine which coordinate to focus on by setting the centre coordinate.
Therefore I need to take an offset distance and bearing and calculate the coordinate that, used as the centre coordinate, will place the user's location at the bottom of the map.
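For the flat (2D) case, the standard destination-point formula on a sphere gives the coordinate at a given distance and bearing from the user's location; a sketch (in C# for illustration, assuming a spherical Earth):

```csharp
using System;

// Destination-point formula: the coordinate 'distance' metres from
// (lat, lon) along 'bearing' degrees, on a spherical Earth.
static (double lat, double lon) Offset(double lat, double lon,
                                       double bearing, double distance)
{
    const double R = 6371000.0;      // mean Earth radius in metres
    double d = distance / R;         // angular distance
    double brng = bearing * Math.PI / 180.0;
    double lat1 = lat * Math.PI / 180.0;
    double lon1 = lon * Math.PI / 180.0;

    double lat2 = Math.Asin(Math.Sin(lat1) * Math.Cos(d) +
                            Math.Cos(lat1) * Math.Sin(d) * Math.Cos(brng));
    double lon2 = lon1 + Math.Atan2(Math.Sin(brng) * Math.Sin(d) * Math.Cos(lat1),
                                    Math.Cos(d) - Math.Sin(lat1) * Math.Sin(lat2));

    return (lat2 * 180.0 / Math.PI, lon2 * 180.0 / Math.PI);
}
```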
This is all complicated by the 3D view, where the camera pitch and altitude affect the distance relationship between the offset coordinate and the user's location.
Is there an easier way to do this or does someone know how to calculate this?
Thanks (a lot!)
D
Is it possible in THREE JS to re-position a texture in real time?
I have a model of a heart, and I'm projecting "color maps"/"textures with colors" onto the model, but the position of the maps can be a little different each time.
UPDATE
More info:
I have about 20 color maps. They are 80 by 160 pixels. I need to position them on the model. The position of the color maps may differ slightly. Currently I add all the color maps to a big texture and then I load the texture onto the model. That all works just fine.
But sometimes a surgeon feels like a color map needs to be moved over or rotated a little. I can't expect him to change the hard-coded locations in the code. I want him to be able to drag the color map to the right location.
I studied the THREE JS documentation and examples but I haven't found anything yet.
I am trying to solve a problem using the Bing Maps v7 JS API. My problem is that I need a custom callout or bubble with the beak pointing to a specific position (lat/lon) and the bubble portion containing 2-4 short lines of text that describe that point. The bubble portion should be movable (along with the text within it).
Our first try at this used a pushpin and an infobox. This functioned as desired but was not aesthetically pleasing. It was set up so that the pushpin could be moved and the infobox followed. The only option to mark the original location of the pushpin once it was moved was to extend a polyline from the pushpin tip to the original point. This really wasn't intuitive when looking at the map, because the actual location was not at the pushpin but at the end of the polyline connecting the pushpin to the original location. Also, three different objects were used for each point, which detracted from the presentation of the map.
Now, alternatively, I have created a polygon resembling a callout whose bubble portion can be moved around while the beak keeps pointing at the desired location (this is a requirement as well). What I can't figure out is how to place text within the polygon. Moving the text with the polygon would be the next hurdle, but something I am pretty sure I can handle once I figure out how to place it. The text needs to be, say, white, with a transparent background, so that it and the polygon appear to be one.
In this application there will be 3-8 of these callouts, and each may have its bubble repositioned so that they all fit in the screen view.
How can I place text within a polygon using the v7 JS API?
Thanks!
Any ideas?