PyEphem Algorithms Reference

I have never used PyEphem before, and I'm not an expert in satellite positioning.
I'd like to use PyEphem to calculate the position of a satellite from a TLE.
I need to do something very simple, like this:
import ephem

tle = ["ISS (ZARYA)",
       "1 25544U 98067A 03097.78853147 .00021906 00000-0 28403-3 0 8652",
       "2 25544 51.6361 13.7980 0004256 35.6671 59.2566 15.58778559250029"]
iss = ephem.readtle(*tle)
observer = ephem.Observer()
observer.lon, observer.lat = ('-84.39733', '33.775867')
observer.date = ephem.Date('2002/4/23 10:10:00.000')
iss.compute(observer)
print(iss.alt, iss.az, iss.range)
# -40:06:46.3 199:08:24.3 8834968.0
These three values give the position of the satellite in the horizon (alt/az) reference system.
It's not clear to me how PyEphem calculates these values. I've read the reference guide: http://rhodesmill.org/pyephem/radec
Reading that document, it seems that PyEphem applies precession and nutation, but the last two lines of the document say:
"Note that no precession was applied to either of the final two sets of coordinates, but only to the first. This means that only the “Astrometric” position will correspond to the lines in your star atlas. The other positions are what are called “epoch-of-date” coordinates, and are measured off of the orientation of the celestial pole and the celestial equator for the very day of the observation itself."
Is Earth's precession applied to alt and az?
Moreover, I'd like to know what kind of model PyEphem uses for precession and nutation (I really need a reference). There is a link to XEphem and libastro, but I can't find anything about the algorithms.
Do you have any suggestions?
Thank you very much!

You can find the various algorithms that PyEphem uses by looking through the various C language files in its libastro directory:
https://github.com/brandon-rhodes/pyephem/tree/master/libastro-3.7.5
But to answer your specific question: precession, aberration, and nutation are effects that are generally only computed for objects outside of the Earth's moving reference frame, like the Sun, the planets, and the distant stars. Since Earth satellites are travelling in our own reference frame, however, I think that libastro generally does a direct comparison between the position of the satellite above the Earth and the position of the observer on the Earth, since these are already coordinates in the same local reference frame.
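If you want to check what the library reports, PyEphem exposes several coordinate sets on a computed body, so you can compare them directly. A quick sketch, assuming the snippet from the question has just been run (a_ra/a_dec are astrometric geocentric coordinates, ra/dec are apparent topocentric ones, and alt/az are measured in the observer's local horizon frame):

iss.compute(observer)
print("astrometric geocentric:", iss.a_ra, iss.a_dec)  # atlas-style coordinates
print("apparent topocentric: ", iss.ra, iss.dec)       # epoch-of-date coordinates
print("local horizon:        ", iss.alt, iss.az)       # what the question prints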

Bidirectional path tracing, algorithm explanation

I'm trying to understand path tracing. So far I have only dealt with the very basics: a ray is launched from each intersection point in a random direction within the hemisphere, then again, and so on recursively, until the ray hits the light source. As a result, with small light sources this approach produces an extremely noisy image.
The following images show the noise level depending on the number of samples (rays) per pixel.
I am also not sure that I am doing everything correctly, because the "Monte Carlo" method, as far as I understand it, implies that several rays are launched from each intersection point and their results are then summed and averaged. But that approach makes the number of rays grow exponentially, reaching unreasonable values after 6 bounces, so I decided it is better to launch several rays per pixel initially (slightly shifted from the center of the pixel in a random direction) while generating only 1 ray at each intersection. I do not know whether this still counts as "Monte Carlo" or not, but at least this way the rendering does not last forever.
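To make that concrete, here is roughly the structure of my current loop, sketched in Python with the scene and camera abstracted away (scene.intersect, scene.make_ray and camera.generate_ray are placeholders for my own code, so treat this as an illustration of the approach rather than a working renderer):

import math
import random

def sample_hemisphere(normal):
    # uniform random unit direction in the hemisphere around `normal` (a unit 3-tuple)
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        length = math.sqrt(sum(c * c for c in d))
        if 0.0 < length <= 1.0:
            d = tuple(c / length for c in d)
            if sum(a * b for a, b in zip(d, normal)) < 0.0:
                d = tuple(-c for c in d)  # flip into the hemisphere above the surface
            return d

def trace(ray, scene, max_bounces=6):
    # one camera sample: a single path with ONE random ray per bounce
    radiance, throughput = 0.0, 1.0
    for _ in range(max_bounces):
        hit = scene.intersect(ray)                # placeholder scene interface
        if hit is None:
            break
        if hit.is_light:
            radiance += throughput * hit.emission
            break
        new_dir = sample_hemisphere(hit.normal)
        throughput *= hit.reflectance             # simplified grayscale BRDF term
        ray = scene.make_ray(hit.point, new_dir)  # placeholder ray constructor
    return radiance

def render_pixel(scene, camera, x, y, samples_per_pixel=64):
    # the Monte Carlo averaging happens here, across many independent paths
    total = 0.0
    for _ in range(samples_per_pixel):
        ray = camera.generate_ray(x + random.random(), y + random.random())
        total += trace(ray, scene)
    return total / samples_per_pixel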
Bidirectional path tracing
I started looking for ways to reduce the amount of noise, and came across bidirectional path tracing. But unfortunately, i couldn't find a detailed explanation of this algorithm in simple words. All I understood is that the rays are generated from both the camera and the light sources, and then there is a check on the possibility of connecting the endpoints of these paths.
As you can see, if the intersection points of the blue ray from the camera and the white ray from the light source can be freely connected (there are no obstacles in the connection path), then we can assume that the ray from the camera can pass through the points y1, y0 directly to the light source.
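My rough understanding of the connection step, in Python-style pseudocode (scene.visible and geometry_term are placeholders for a shadow-ray visibility test and the usual geometry factor; whether all vertex pairs or only the endpoints should be connected is exactly one of the things I'm unsure about):

def connect_paths(camera_path, light_path, scene):
    # camera_path and light_path are lists of path vertices (hit points),
    # each carrying the throughput accumulated along its sub-path
    contribution = 0.0
    for cv in camera_path:
        for lv in light_path:
            if scene.visible(cv.point, lv.point):         # placeholder shadow-ray test
                contribution += (cv.throughput * lv.throughput
                                 * geometry_term(cv, lv))  # placeholder geometry factor
    return contribution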
But there are a lot of questions:
If the light source is not a point but has some shape, must the point from which the ray is launched be randomly selected on the surface of that shape? If you take only the center, there will be no difference from a point light source, right?
Do I need to build a path from the light source for each path from the camera, or should there be only one path from the light source while several paths (samples) are built from the camera for one pixel?
Should the number of bounces/reflections/refractions be the same for the path from the camera and the path from the light source? Or not?
But the questions don't end there. I have heard that the bidirectional method models caustics well (in comparison with regular path tracing), but I completely fail to see how bidirectional path tracing can help with this.
Example 1
Here the path will eventually be built, but the number of bounces will be extremely large, so no caustics will appear, even though the ray from the camera is directed almost at the same point where the path of the ray from the light source ends.
Example 2
Here the path will not be built, because there is an obstacle between the endpoints of the paths, although it could be built if point x3 were connected to point y1; but according to the algorithm (if I understand it correctly), only the last points of the paths are connected.
Question:
What is the use of such an algorithm if, in a significant number of cases, the paths either cannot be built or are unnecessarily long? Maybe I misunderstand something? I came across many articles and documents where this algorithm was described in some way, but mostly it was described mathematically (using all sorts of magical terms like biased/unbiased, PDF, BSDF, and so on) rather than algorithmically. I am not that strong in mathematics and mathematical notation; I would just like to understand WHAT TO DO, how to implement it correctly in code, how these paths are connected, in what order, and so on. This can be explained in simple words or pseudocode, right? I would be extremely grateful if someone would finally shed some light on all this.
Some references that helped me understand path tracing properly:
https://www.scratchapixel.com/ (every rendering student should begin with this)
https://en.wikipedia.org/wiki/Path_tracing
If you're looking for more references: path tracing is used for "global illumination", which is the opposite of "direct illumination", where only a straight line from the point to the light is considered.
What's more, caustics are well known to be a hard problem, so don't begin with them! The Monte Carlo method is a good, straightforward method to begin with, but it has its limitations (e.g. caustics and tiny lights).
Some advice for rendering newbies
Mathematical notation is surely not the coolest thing, and everyone would of course prefer ready-to-go code. But maths is the most rigorous way to describe the world; it lets you model a whole physical interaction in a small formula instead of plenty of lines of code that never quite fit the real problem. I suggest you keep trying to read the formulas, since a good mathematical formula is always fully detailed. If some variables are not specified, don't lose your time and look for another reference.

How to detect similar objects in this picture?

I want to find patterns in an image. By "finding patterns" I mean "detecting similar objects", so these patterns shouldn't be high-frequency information like noise.
For example, in this image I'd like to get the pattern "window", with an ROI/ellipse for each object:
I've read advice to use autocorrelation, FFT or DCT for this problem. As far as I understand, autocorrelation and FFT are alternatives, not complementary.
First, I don't know whether it is even possible to get such high-level information in the frequency domain.
As I have an FFT implemented, I tried to use it. This is the spectrogram:
Could you suggest how to further analyze this spectrogram to detect the "window" objects and their spatial locations?
Do I need to find the brightest points/lines on the spectrogram?
Should the FFT be done on image chunks instead of the whole image?
If it's not possible to find such objects with this approach, what would you advise?
Thanks in advance.
P.S. Sorry for large image size.
Beware, this is not my cup of tea, so read with extreme prejudice. IIRC, SIFT/SURF + RANSAC methods are usually used for such tasks.
Identify key points of interest in the image (SIFT/SURF)
This will get you a list of 2D locations in your image with specific features (which you can handle as an integer hash code). I think SIFT (Scale Invariant Feature Transform) is ideal for this. It works similarly to human vision (identify a specific change in some feature and "ignore" the rest of the image). So instead of matching all the pixels of the image, we cross-match only a few of them.
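For illustration, this step with OpenCV's Python bindings might look like the sketch below (assuming an OpenCV build that ships SIFT, e.g. a recent opencv-python, and a hypothetical input file facade.png):

import cv2

# load the image and detect SIFT key points + descriptors
img = cv2.imread("facade.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# each key point has a position, scale and orientation; each descriptor is a
# 128-dimensional vector used to decide which points "look the same"
print(len(keypoints), "key points found")
out = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("keypoints.png", out)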
Sort by occurrence
Each of the found SIFT points has a feature list. If we build a histogram of these features (count how many similar or identical feature points there are), we can group points by the same occurrence count. The idea is that if there are n placings of an object in the image, each of its key points should be duplicated n times.
So if we have many points that each occur some n times, it hints that we have n similar objects in the image. From these we select just those key points for the next step.
Find object placings
Each object can have a different scale, position and orientation. Let's assume they have the same aspect ratio. Then the corresponding key points in each object should have the same relative properties between the objects (like the relative angle between key points, normalized distance, etc.).
So the task is to regroup our key points into objects so that all the objects have the same key points and the same relative properties.
This can be done by brute force (testing all the combinations and checking the properties), by RANSAC, or by any other method.
Usually we select one first key point (no matter which) and find 2 others that form the same angle and relative distance ratio in all of the objects,
so the angle is the same and |p1-p0| / |p2-p0| is also the same or close. While grouping, keep in mind that key points within an object are more likely to be close to each other, so we can order the search by distance from the first selected key point to decide which object a candidate key point probably belongs to (if we try the closest ones first, there is a high probability we find our combination fast). All the other points pi can then be added similarly, one by one (using p0, p1, pi).
So I would start with the closest 2 key points (this can sometimes be fooled by overlapping or touching mirrored images), since a key point from a neighbouring object can sometimes be closer than one from the object itself.
After such regrouping, just check whether all the found objects have the same properties (aspect ratio). To visualize them you can find the OBB (Oriented Bounding Box) of the key points (which can also be used for the check).
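A small sketch of the angle/ratio invariants used above, in plain Python (the points are hypothetical (x, y) key-point coordinates): for a triple (p0, p1, p2) we compare the angle at p0 and the distance ratio |p1-p0| / |p2-p0|, which stay the same across objects that differ only by position, rotation and uniform scale.

import math

def relative_signature(p0, p1, p2):
    # angle at p0 between (p1-p0) and (p2-p0), plus the distance ratio
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p0[0], p2[1] - p0[1])
    angle = (math.atan2(v1[1], v1[0]) - math.atan2(v2[1], v2[0])) % (2 * math.pi)
    ratio = math.hypot(*v1) / math.hypot(*v2)
    return angle, ratio

def same_layout(triple_a, triple_b, angle_tol=0.05, ratio_tol=0.05):
    ang_a, rat_a = relative_signature(*triple_a)
    ang_b, rat_b = relative_signature(*triple_b)
    d_ang = abs(ang_a - ang_b)
    d_ang = min(d_ang, 2 * math.pi - d_ang)  # handle angle wrap-around
    return d_ang < angle_tol and abs(rat_a - rat_b) < ratio_tol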

How to align "tracks" or modular objects in Unity?

I'm developing a simple game where the user can place different but modular objects (for instance: tracks, roads, etc.).
My question is: how do I match and place different objects when one is placed near another?
My first approach is to create a hidden child object (a box) for each modular object and put it at the border where another object can be placed (see my image example), so I can use those coordinates (x, y, z) to align the other object.
But I don't know if that is the best approach.
Thanks
Summary:
1. Define what a "snapping point" is
2. Define what your threshold is
3. Update the new game object's position
Little Explanation
1.
So I suppose that you need a way to define which parts of the object are the "snapping points".
They can be obvious in some examples, like a cube, where every vertex could be a snapping point, but it's hard to define that for every vertex of an amorphous object.
A simple solution could be the one exposed by #PierreBaret, which consists of defining on your transform component which points are the "snapping points".
The other one is the one you propose: creating empty game objects that act as snapping-point locations on the game object.
2. Once you have those snapping points, when you drop your new gameObject you need to define a threshold, since you don't want every object to always snap to the nearest game object.
3. So you define a minimum distance between snapping points: if your snapping point is under that threshold, you will need to update its position to adjust it to the snapped point.
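The check in step 3 is just a distance comparison; here is the idea sketched in plain Python rather than Unity C#, with made-up point tuples standing in for the snapping-point transforms:

import math

SNAP_THRESHOLD = 0.5  # assumed world units; tune for your game

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def snap_offset(dropped_points, placed_points):
    # dropped_points: snapping points of the object being placed
    # placed_points: snapping points of all objects already in the scene
    best = None
    for dp in dropped_points:
        for pp in placed_points:
            d = distance(dp, pp)
            if d < SNAP_THRESHOLD and (best is None or d < best[0]):
                best = (d, tuple(p - q for p, q in zip(pp, dp)))
    return None if best is None else best[1]  # move the whole object by this offset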
Visual Representation:
Note: the threshold distance is shown for just ONE of the 4 threshold checks on the 4 vertices of the square, but this dark blue circle should be replicated 3 more times, one for each green snapping point of the red square.
Of course this method seems expensive; you can make some improvements, like setting a first coarse threshold between gameObjects and only checking the snapping threshold distance if the gameObject is inside that first threshold.
Hope it helps!
Approach for arbitrary objects/models and deformable models.
[A] A physical approach would consider all the surfaces of the 2 objects, and you might need to check that the objects don't overlap, using dot products between surfaces. That's a bit more expensive to compute, but nothing nasty. There is no matching involved here, but you'll be able to add matching features (see [B]). However, it's the only way to work with non-predefined models or deformable models.
Approaches for matching simple and complex models
[B] Snapping points are a good thing, but they are not sufficient alone. I think you need to give an object:
a sparse representation (e.g., reducing a complex oriented shape to a sphere or a cube),
and place key snapping points on it,
tagged by polarity or color, and possibly orientation (that is, oriented snapping points); e.g., in the case of rails, you'll want rails to snap {+} with {+} and forbid {+} with {-}. In the case of a more complex object, or when you have several orientations (e.g., 2 faces of a surface, but only one is a candidate for matching a pair of objects), you'll need more than 2 polarities: 3 different ones per matching candidate surface or feature, hence the colors (or any enumeration). You need 3 different colors to make sure there is a unique 3D spatial configuration; otherwise you create what in chemistry is called an enantiomer (a mirror-image configuration).
You can also use point pair features, which describe the relative position and orientation of two oriented points, when an oriented surface is not appropriate.
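A tiny sketch of the polarity idea in [B], again in plain Python with purely illustrative tags: each snapping point carries a tag, and two points may only connect if the pair of tags is declared compatible.

# compatibility table for snapping-point tags; as in the rail example,
# {+} may meet {+} but {+} with {-} is forbidden; any enumeration of
# "colors" works the same way
COMPATIBLE = {
    ("+", "+"): True,
    ("+", "-"): False,
    ("-", "+"): False,
    ("-", "-"): False,
}

def can_snap(tag_a, tag_b):
    return COMPATIBLE.get((tag_a, tag_b), False)

print(can_snap("+", "+"))  # True
print(can_snap("+", "-"))  # False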
References
Some are computer vision papers or book extracts, but they expose the algorithms and concepts needed to achieve what I described in my answer.
Model Globally, Match Locally: Efficient and Robust 3D Object Recognition, Drost et al.
3D Models and Matching

Raytracing via diffusion algorithm

Many resources about raytracing say things like:
"shoot rays, find the first obstacle to cut it"
"shoot secondary rays..."
"or, do it reverse and approximate/interpolate"
I didn't see any algorithm that uses a diffusion approach. Let's assume a point light is a cell that has more density than the other cells (all of space is divided into cells); every step/iteration of lighting/tracing makes that source point diffuse into its neighbours using a velocity field, then into their neighbours, and so on. After a satisfactory number of iterations (such as 30-40), the density value of each cell is used to illuminate the objects in that cell.
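A minimal sketch of the kind of grid step I mean, in NumPy, with plain isotropic diffusion only (no velocity field or obstacles yet, which would need extra terms):

import numpy as np

N = 64                                 # grid resolution, kept small here
density = np.zeros((N, N, N))
density[N // 2, N // 2, N // 2] = 1.0  # the point light as a single dense cell

def diffuse_step(d, rate=0.1):
    # spread a fraction of each cell's density equally to its 6 neighbours
    spread = np.zeros_like(d)
    for axis in range(3):
        spread += np.roll(d, +1, axis=axis) + np.roll(d, -1, axis=axis)
    return (1.0 - rate) * d + rate * spread / 6.0

for _ in range(30):                    # the 30-40 iterations mentioned above
    density = diffuse_step(density)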
Point light and velocity field:
But the grid would have to be something like 1000x1000x1000 in size, and this would take too much time and memory to compute. Maybe computing just 10x10x10 and, when an obstacle is found, partitioning that area into 100x100x100 (in a dynamic kd-tree fashion) could help generate lighting/shadows at an acceptable resolution? Especially for per-vertex illumination rather than per-triangle.
Has anyone tried this approach?
Note: the velocity field is there to make light diffuse mostly outwards (not 100% but 99%, so there is still some global illumination). A finite-element method can make this embarrassingly parallel.
Edit: any object that is hit by a positive density becomes an obstacle and generates a new velocity field around its surface. So light cannot go through that object but can be mirrored in another direction (if it is a lens object, the light diffuses through it with more difficulty). The reflected light can then affect other objects, given a higher iteration limit.
The same kd-tree can be used in object-collision algorithms :)
Just to take with a grain of salt: a neural network could be trained for advection & diffusion on a 30x30x30 grid, and that could be used in a "GPU (OpenCL/CUDA) --> neural network --> finite element method --> shadows" pipeline.
There are a couple of problems with this as it stands.
The first problem is that, fundamentally, a photon in the Newtonian sense doesn't react or change based on the density of the other photons around it. So using a density field and trying to make light follow classic Navier-Stokes style solutions (which is what you're trying to do, based on the density-field explanation you gave) would produce incorrect results. It would also, given enough iterations, result in complete entropy over the scene, which is also not what happens to light.
Even if you were to get rid of the density problem, you're still left with the problem of multiple photons going in different directions within the same cell, which is required for global illumination and diffuse lighting.
So, stripping away the problem portions of your idea, what you're left with is a particle system for photons :P
Now, to be fair, pseudo-particle systems are currently used for global illumination solutions. This type of thing is called photon mapping, but it's only simple to implement a direct lighting solution with it :P

Algorithm, tool or technique to represent 3D probability density functions in space

I'm working on a computer vision project (OpenCV 2.4 in C++). In this project I'm trying to detect certain features to build a map (an internal representation) of the world around the camera.
The information I have available is the camera pose (a 6D vector with 3 position and 3 angular values), the calibration values (focal length, distortion, etc.) and the features detected on the object being tracked (these features are basically the contour of the object, but that doesn't really matter).
Since the camera pose, the positions of the features and other variables are subject to errors, I want to model the object as a 3D probability density function (the probability of finding the "object" at a given 3D point in space). This matters because each contour has an associated probability of being an actual object contour rather than a noise contour (bear with me).
Example:
If the object were a sphere, I would detect a circle (contour). Since I know the camera pose but have no depth information, the internal representation of that object should be a fuzzy cylinder (or a cone, if the camera's perspective is included, but that's not relevant). If new information becomes available (new images from a different location), a new contour is detected and its own fuzzy cylinder is merged with the previous data. We then have a region where the probability of finding the object is greater in some areas and weaker elsewhere. As more information arrives, the model should converge to the original object's shape.
I hope the idea is clear now.
This model should be able to:
Grow dynamically if needed.
Update efficiently as new observations are made (strengthening the probability in areas observed multiple times and weakening it otherwise). Ideally the system should be able to update in real time.
Now the question:
How can I computationally represent this kind of fuzzy information in such a way that I can perform these tasks on it?
Any suitable algorithm, data structure, C++ library or tool would help.
I'll answer with the computer vision equivalent of Monty Python: "SLAM, SLAM, SLAM, SLAM!" :-) I'd suggest starting with Sebastian Thrun's tome.
However, there is older work on the Bayesian side of active computer vision that is directly relevant to your question of geometry estimation, e.g. Whaite and Ferrie's seminal IEEE paper on uncertainty modeling (Whaite, P. and Ferrie, F. (1991). From uncertainty to visual exploration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(10):1038–1049). For a more general (and perhaps mathematically neater) view of this subject, see also chapter 4 of D.J.C. MacKay's Ph.D. thesis.
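If it helps to make the Bayesian/SLAM suggestion concrete: a common data structure for exactly this kind of fuzzy volume is a probabilistic occupancy grid (or an octree over one, as in OctoMap), where each voxel stores a log-odds occupancy value updated with every observation; the fuzzy cylinder from one view is simply the set of voxels that project into the detected contour, and intersecting several views is what sharpens the distribution. A minimal sketch of the update rule, with NumPy and made-up sensor-model constants:

import numpy as np

N = 128
log_odds = np.zeros((N, N, N))  # 0 everywhere = probability 0.5 (unknown)

L_HIT = 0.85                    # increment for voxels consistent with the observation (assumed)
L_MISS = -0.4                   # decrement for voxels observed to be empty (assumed)
L_MIN, L_MAX = -4.0, 4.0        # clamping keeps cells able to change later

def update(observed_occupied, observed_free):
    # both arguments are boolean masks of shape (N, N, N) for one observation
    global log_odds
    log_odds[observed_occupied] += L_HIT
    log_odds[observed_free] += L_MISS
    np.clip(log_odds, L_MIN, L_MAX, out=log_odds)

def probability():
    # convert back to probabilities whenever the map needs to be queried
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))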
