How can I assess the visibility of a building's roof from an observer on the ground?

I would like to check in ArcGIS whether the roofs of buildings can be seen from points on the ground. I have tried the Skyline and Line of Sight tools, but I did not get the results I need; the attached image illustrates the problem: analyzing the visibility of the roofs from the points.
I am using 3D buildings (LoD2) with a roof layer, and I simply expect to select all the roofs that can be seen from the points.
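One approach that may fit, sketched below in Python with arcpy (an assumption on my part, not something stated in the question: the ConstructSightLines and Intervisibility tools come from the 3D Analyst toolbox, and all layer names are placeholders, so verify against your ArcGIS version), is to build 3D sight lines from the ground points to points sampled on the roofs and test them against the building multipatches:

    # Hedged sketch: flag which roof points are visible from ground observers
    # using ArcGIS 3D Analyst (tool availability may differ by version).
    import arcpy

    arcpy.CheckOutExtension("3D")
    arcpy.env.workspace = r"C:\data\visibility.gdb"  # placeholder workspace

    observers = "ground_points"    # placeholder observer point layer
    roof_targets = "roof_points"   # placeholder points sampled on the roofs
    buildings = "buildings_lod2"   # placeholder LoD2 multipatch layer

    # Build 3D sight lines from every observer to every roof target.
    arcpy.ddd.ConstructSightLines(observers, roof_targets, "sight_lines")

    # Mark each sight line as visible (1) or blocked (0) by the buildings.
    arcpy.ddd.Intervisibility("sight_lines", buildings, "VISIBLE")

Roof targets reached by at least one unblocked sight line can then be joined back to the roof polygons to make the selection of visible roofs.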

Related

Drawing Optical Lenses for Ray Tracing

I am really new to optical system engineering and I am trying to do some very basic ray tracing through biconvex and biconcave lenses to understand how it all works. I know there are tools such as OpticStudio, but I just want to code something really simple to get my hands dirty.
Here is the issue I am running into. Consider a biconvex lens whose left and right surfaces have radii of 1.628 cm and 27.57 cm respectively (-27.57 cm if we follow the sign convention). The thickness of the lens at the center is 0.357 cm.
I am trying to draw this lens on a plot using MATLAB. Since these are spherical surfaces, I am basically drawing the lens as the intersection of two circles.
The question I have is: do the radii of the two surfaces and the thickness automatically constrain the height of the lens? I ask because I am getting a maximum lens height of ~1.2 cm.
Is this really how tall a lens with these parameters can be? Am I missing something really fundamental?
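They do: once the two radii and the center thickness are fixed, the height at which the two spherical surfaces meet is fully determined, and the semi-aperture can never exceed the smaller radius (the left surface is at most a hemisphere of radius 1.628 cm). A quick numeric check of the two-circle intersection, in plain Python with the values from the question (the question mentions MATLAB, but the arithmetic is identical):

    # Height at which the two spherical lens surfaces intersect.
    # Left vertex at x = 0, optical axis along x; values from the question.
    R1 = 1.628   # left surface radius (cm); center of curvature at x = R1
    R2 = 27.57   # magnitude of right surface radius (cm); negative by convention
    t  = 0.357   # center thickness (cm)

    c1 = R1       # x of the left-surface circle center
    c2 = t - R2   # x of the right-surface circle center (far behind the lens)

    # Subtracting the two circle equations (x - c)^2 + y^2 = R^2 gives x:
    x = (R1**2 - R2**2 + c2**2 - c1**2) / (2.0 * (c2 - c1))
    y = (R1**2 - (x - c1)**2) ** 0.5   # semi-aperture where the surfaces meet

    print(f"edge at x = {x:.4f} cm, semi-aperture = {y:.4f} cm")
    print(f"maximum full lens height = {2.0 * y:.4f} cm")

So the strongly curved left surface is what caps the height; real lens designs usually choose an aperture below this geometric maximum and add a flat cylindrical rim at the edge.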

algorithm to detect "standable" places in a mesh?

I'm making a VR application that involves a user dynamically loading 3d models.
The user should be able to teleport to any place that resembles a floor, platform, or other kind of horizontal surface. For that, I need to find the coordinates where those planes lie; they aren't necessarily whole meshes, but perhaps parts of one (consider a single mesh of a room where the walls can't be teleported onto but the floor can).
Is there a known algorithm that can be used for this use case?
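A common starting point, independent of any particular engine (sketched here in Python with NumPy; the slope threshold is an assumption you would tune), is to classify triangles by how closely their face normal aligns with the world up vector and keep the near-horizontal ones as teleport candidates:

    import numpy as np

    def standable_triangles(vertices, faces, max_slope_deg=30.0):
        """Return indices of triangles flat enough to stand on.

        vertices: (N, 3) float array; faces: (M, 3) int array of vertex indices.
        max_slope_deg is an assumed tolerance for what counts as 'floor'.
        """
        v0 = vertices[faces[:, 0]]
        v1 = vertices[faces[:, 1]]
        v2 = vertices[faces[:, 2]]

        # Face normals via the cross product of two triangle edges.
        normals = np.cross(v1 - v0, v2 - v0)
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)

        # A triangle is standable if its normal is within max_slope_deg of +Y.
        up = np.array([0.0, 1.0, 0.0])
        cos_limit = np.cos(np.radians(max_slope_deg))
        return np.nonzero(normals @ up >= cos_limit)[0]

Adjacent standable triangles can then be grouped into connected regions (e.g. with a union-find over shared edges) and filtered by area and by the clearance above them, so that small or obstructed surfaces such as table tops are excluded.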

Unity and Infrared

I would like to make a game where I use a camera with infrared tracking, so that I can track people's heads (from a top view). For example, each player will get a helmet so that the camera or infrared sensor can track him/her.
After that I need to know the exact position of each person in Unity, to place a 3D GameObject at the player's position.
Maybe there is another workaround to get people's positions into Unity. I know I could use a Kinect, but I need to track at least 10 people at the same time.
Thanks
Note: This is not really a complete answer, just a collection of my thoughts regarding your question of how to transfer recorded positions into Unity.
If you really need full 3D positions, I believe you won't be happy using only one sensor. To obtain depth information, which can then be used to calculate 3D positions in a reference coordinate system, you would have to use at least two sensors.
Another thing you could do is fix the camera position and assume that all persons move in the same plane (e.g. a fixed y-component), which would allow you to determine 3D positions using the projection formula, given the camera parameters (so the camera has to be calibrated).
What also comes to mind: you could try to simulate your real camera with a virtual camera in Unity. This way you can use the virtual camera to project image coordinates (coming from the real camera) into Unity's 3D world. I haven't tried this myself, but someone else has; you can have a look here: https://community.unity.com/t5/Editor/How-to-simulate-Unity-Pinhole-Camera-from-its-intrinsic/td-p/1922835
Edit given your comment:
Okay, sticking to your soccer example, you could proceed as follows:
Setup: Say you define your playing area to be rectangular with its origin in the bottom-left corner (think of UVs). You set these points in the real world (and in Unity's representation of it) as (0,0) (bottom left) and (width, height) (top right), choosing whichever unit you like (e.g. meters, as this is Unity's default unit). As your camera is stationary, you can assign the corresponding corner points in image coordinates (pixel coordinates) as well. To make things easier, work with normalized coordinates instead of pixels, so the bottom left is (0,0) and the top right is (1,1).
Tracking: When tracking persons in the image, you can calculate their normalized position (x, y) (with x and y in [0,1]). These normalized positions can be transferred into Unity's 3D space (in Unity you will have a playable area of the same width and height) by simply calculating a Vector3 as (x*width, 0, y*height) (in Unity, x points right, y points up, and z points forward).
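As a plain numeric sketch of that mapping (written in Python rather than Unity C# to keep it self-contained; the field dimensions are placeholders):

    # Sketch of the image-to-world mapping described above.
    # Normalized image coords (x, y) in [0,1] -> Unity-style (x, 0, z) position.
    FIELD_WIDTH = 10.0    # placeholder playing-area width in meters
    FIELD_HEIGHT = 6.0    # placeholder playing-area depth in meters

    def image_to_world(x_norm, y_norm):
        # Unity: x points right, y points up, z points forward, so the ground
        # plane is spanned by x and z and the player's height stays 0.
        return (x_norm * FIELD_WIDTH, 0.0, y_norm * FIELD_HEIGHT)

    # A player detected at the middle of the image lands mid-field:
    print(image_to_world(0.5, 0.5))   # -> (5.0, 0.0, 3.0)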
Edit on Tracking:
For top-view tracking in a game, I would say you are on the right track with using some sort of helmet, which enables marker-based tracking (in my opinion, markerless multi-target tracking is not reliable enough for use in a video game). If you want to learn more about object tracking, there are lots of resources in the field of computer vision.
Independent of the sensor you are using (IR or camera), you would create a unique marker for each helmet, enabling you to identify each helmet (and thus each player). A marker in this case is some sort of unique pattern that can be recognized by an algorithm in each recorded frame. With IR you can arrange quadratic IR markers to form a specific pattern, and for normal cameras you can use markers like QR codes (there are also augmented-reality libraries that offer functionality for creating and recognizing markers, e.g. ArUco or ARToolKit, although I don't know whether they offer C# libraries; I have only used ArUco with C++ a while ago).
When you have your markers of choice, the tracking procedure is pretty straightforward; for each recorded image (a minimal detection sketch follows the list):
- detect all markers in the current image (these correspond to all players currently visible)
- follow the steps from my last edit using the detected positions
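For a normal camera, the per-frame detection could look roughly like this (Python with OpenCV's aruco module; the ArucoDetector class assumes OpenCV 4.7 or newer, while older versions use cv2.aruco.detectMarkers instead, so treat the details as a sketch rather than a drop-in solution):

    import cv2

    # Sketch: detect ArUco markers in one frame and return their image centers,
    # keyed by marker id (each helmet would carry a unique id).
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    def detect_players(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = detector.detectMarkers(gray)
        positions = {}
        if ids is not None:
            for marker_id, quad in zip(ids.flatten(), corners):
                # quad has shape (1, 4, 2): the marker's four corner pixels.
                cx, cy = quad[0].mean(axis=0)
                positions[int(marker_id)] = (float(cx), float(cy))
        return positions  # normalize by frame size, then map to world as above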
I hope that helps, feel free to contact me again.

Projecting feature locations onto D3.js geo projections

I'm trying to add some dots representing the locations of various features onto some of D3.js's geographic projections. I would like the dots to rotate and get clipped the same as the country paths, but I'm having some difficulty getting this to work.
For a given projection, I can obtain the updated coordinates by updating the projection according to the drag as in the demo, then calling projection() on the coordinates I want to update, but this does not clip the circles correctly (you can see the circles on the opposite side of the globe). Would it be possible to get an example of this? To recap, I'd like to draw a circle around, say, New York City, then be able to rotate the globe and have the circle "set" behind the horizon like the country paths do.
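Two thoughts that may help (my assumptions, not from the original thread): in D3 itself, feeding the dots through the same geoPath generator as the countries, as GeoJSON Point features, should give the same clipping for free; and the underlying test is simple spherical geometry, since a point is hidden exactly when its great-circle distance from the projection center exceeds 90 degrees. The Python below just sketches that math (New York's coordinates and the sample rotation are placeholder values):

    from math import radians, degrees, sin, cos, acos

    def great_circle_deg(lon1, lat1, lon2, lat2):
        """Angular distance in degrees between two lon/lat points."""
        l1, p1, l2, p2 = map(radians, (lon1, lat1, lon2, lat2))
        c = sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(l2 - l1)
        return degrees(acos(max(-1.0, min(1.0, c))))

    def visible(point_lon, point_lat, rotate_lambda, rotate_phi):
        # With projection.rotate([lambda, phi]), the visible hemisphere is
        # centered on (-lambda, -phi); beyond 90 degrees the point is hidden.
        return great_circle_deg(point_lon, point_lat,
                                -rotate_lambda, -rotate_phi) <= 90.0

    # New York with the globe rotated towards Asia: the dot should be hidden.
    print(visible(-74.0, 40.7, -100.0, 0.0))   # -> False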
Thanks!
Lauren

OpenGL: Line jittering with a large scene and small values

I'm currently drawing a 3D solar system and I'm trying to draw the paths of the planets' orbits. The calculated data is correct in 3D space, but when I move the camera out towards Pluto, the orbit line shakes all over the place until the camera comes to a complete stop. I don't think this is unique to this particular planet, but given the distance the camera has to travel, I think it's just more visible at this range.
I suspect it's something to do with the frustum, but I've been plugging values into each of its components and I can't seem to find a solution. To see anything, I'm having to use very small numbers (around 1e-5 in magnitude) for the planet and nearby orbit points, but then up to about 1e+2 in magnitude for the farther regions (maybe I need to draw the scene twice with different frustums?).
Any help greatly appreciated...
Thanks all for answering, but my solution was to draw the orbit line with the same matrices that were drawing the planet, since the planet wasn't bouncing around. So the real fix was simply to structure the rendering code better; sorry.
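For anyone who lands here with the same symptom: this kind of shaking is the classic sign of 32-bit float precision running out at large coordinate magnitudes, with vertices snapping between representable values as the view matrix changes. A small demonstration and the usual camera-relative workaround, sketched in Python with NumPy (the distances are rough placeholder values):

    import numpy as np

    # At Pluto-like magnitudes, adjacent float32 values are far apart, so
    # transformed vertices visibly snap as the camera moves.
    pluto_x = 5.9e12                        # ~Pluto's distance in meters
    print(np.spacing(np.float32(pluto_x)))  # gap to the next float32: ~524288 m

    # Common fix: keep positions in float64 on the CPU and subtract the camera
    # position *before* casting to float32 for the GPU ("camera-relative" or
    # "floating origin" rendering). Values near the camera stay small and precise.
    camera = np.array([5.9e12, 0.0, 0.0], dtype=np.float64)
    vertex = np.array([5.9e12 + 1000.0, 2.0, 0.0], dtype=np.float64)

    relative = (vertex - camera).astype(np.float32)
    print(relative)   # -> [1000.    2.    0.], exactly representable in float32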
