What unit is the area function for geopandas?

I'm using the .area attribute on a GeoSeries, which returns a number, but I have no idea what this number means. I've looked in the geopandas documentation and can't seem to find the answer to this question. I found one answer on Stack Exchange that said to do this:
print df.crs['units']
but that was from 2015, and I get the error 'TypeError: 'CRS' object is not subscriptable' when I try that for my geodataframe. I added the area as a column in my gdf and randomly tried
gp['area'].unit
but I got the error 'AttributeError: 'Series' object has no attribute 'unit''. Are the units in meters^2? Does it depend on the file? Please let me know!

Starting with the length of a geometry: its unit is expressed in the units of the CRS (see the GeoSeries.length reference in the geopandas docs).
The length may be invalid for a geographic CRS that uses degrees as units.
The area of each geometry is likewise expressed in the units of the CRS, so square metres for a metric projected CRS and square degrees for a geographic CRS.
The unsatisfactory part is the computation method: planar (Euclidean) geometry is used to compute the areas, so for a geographic CRS you should reproject to a projected CRS before calling .area.
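A minimal sketch of how to check this and reproject before measuring; the file name is a placeholder and EPSG:3857 is only an illustration (pick a CRS suited to your region, e.g. a local UTM zone):

import geopandas as gpd

gdf = gpd.read_file("my_data.shp")         # hypothetical input file

# Modern replacement for the old df.crs['units']: ask the pyproj CRS object.
print(gdf.crs.axis_info[0].unit_name)      # e.g. 'degree' or 'metre'
print(gdf.crs.is_geographic)               # True means .area would be in square degrees

# Reproject to a projected CRS before measuring areas.
projected = gdf.to_crs(epsg=3857)
print(projected.area)                      # now in square metres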

3D triangulation using HALCON

My aim is to calibrate a pair of cameras and use them for simple measurement purposes. For this purpose, I have already calibrated them using HALCON and have all the necessary intrinsic and extrinsic camera parameters. The next step for me is to measure known lengths to verify my calibration accuracy. So far I have been using the method intersect_lines_of_sight to achieve this. This has given me unfavourable results, as the lengths are off by a couple of centimeters. Is there any other method that triangulates and gives me the 3D coordinates of a point in HALCON? Or are there any leads as to how this can be done? Any help will be greatly appreciated.
Kindly let me know in case this post needs to be updated with code samples.
In HALCON there is also the operator reconstruct_points_stereo, with which you can reconstruct 3D points given the row and column coordinates of corresponding pixels. For this you will need to generate a StereoModel from your calibration data, which is then used in the operator reconstruct_points_stereo.
In your HALCON installation there is a standard HDevelop example that shows the use of this operator. The example is called reconstruct_points_stereo.hdev and can be found in the example browser of HDevelop.

how do you set projections for city level geographies

I've asked similar questions before and I'm still struggling.
I want to create geo based info graphics at the level of a city.
I need to be able to take some latitude/longitude values and project them such that they are centered and appropriately zoomed.
It would help me a great deal to see an example that plots a small number of points.
37.781040, -122.497681
37.720504, -122.495622
37.723220, -122.395028
This is roughly an L shape and all three points should be in San Francisco.
It could be as simple as 3 black dots on a white background. I hope to learn:
which projection?
how do you adjust the projection so that an area the size of San Francisco is on the canvas?
how do you translate those coordinates and position them on that canvas?
Could someone create such an example?
Thanks.
-Kelly
I created a simple example that works.
https://gist.github.com/kellyfelkins/9741723
I think I was making multiple mistakes that made it really difficult to correct.
In case others have troubles too, here are some things to watch out for:
The projection method expects an array. For a while I was passing it 2 arguments, but it needs a single argument that is an array.
The projection expects the values in longitude, latitude order.
-Kelly
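As an aside for anyone more comfortable outside d3, here is a rough Python sketch of the same idea, assuming pyproj and matplotlib are installed; EPSG:32610 (the UTM zone covering San Francisco) is just one reasonable choice of projected CRS.

from pyproj import Transformer
import matplotlib.pyplot as plt

# The three San Francisco points from the question, in longitude, latitude order.
points_lonlat = [(-122.497681, 37.781040),
                 (-122.495622, 37.720504),
                 (-122.395028, 37.723220)]

# Project geographic coordinates onto a plane suited to the area (UTM zone 10N);
# always_xy=True keeps the longitude, latitude argument order.
to_plane = Transformer.from_crs("EPSG:4326", "EPSG:32610", always_xy=True)
xs, ys = zip(*(to_plane.transform(lon, lat) for lon, lat in points_lonlat))

# Three black dots on a white background; equal aspect keeps the city-scale
# geometry undistorted, and matplotlib autoscales the canvas to the points.
plt.scatter(xs, ys, color="black")
plt.gca().set_aspect("equal")
plt.show()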

What does RiBasis which is described in RenderMan mean?

I'm working on a plugin for 3ds Max. In this plugin, I export the geometry information into a .rib file which can be rendered by a RenderMan renderer. When I export a NURBS curve's data into the .rib file, it is described by RiBasis and RiCurve. I use the RtBsplineBasis in RiBasis, but I get the wrong result: the rendered curve is shorter than the result of 3ds Max's renderer. Then I repeat the first and the last control vertex, and the curve is long enough, but its shape is a little different. Who can tell me how I got the wrong result, or what RiBasis means? How can I get the correct RiBasis? Thank you very much!
RiCurve draws a cubic spline. The control points do not uniquely determine the curve; you also need the basis, which is expressed as a 4x4 matrix -- one matrix gives the coefficients you need for a B-spline, another for Bezier, Catmull-Rom, and so on, and of course you can also supply the matrix yourself for some kind of hybrid interpolant that isn't quite one of the standard three or four. The basis determines the character of the spline -- whether the curve is guaranteed to go through the control points or is merely approximating, the degree of continuity, the "tension", and so on.
There is a great discussion in one of the appendices of "The RenderMan Companion," including numeric examples of how different basis matrices affect the interpolation.
It sounds like you requested a B-spline basis, which is approximating (not interpolating) and continuous in both 1st and 2nd derivatives. Maybe that's not what you had in mind. It's hard to tell, since you didn't describe the properties of the spline that you were hoping for.
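To make the interpolating-versus-approximating distinction concrete, here is a small numpy sketch; the two matrices are the standard Bezier and uniform B-spline bases, and the control points are made up purely for illustration.

import numpy as np

# Standard cubic basis matrices, for C(t) = [t^3 t^2 t 1] * B * [P0 P1 P2 P3]^T.
bezier = np.array([[-1,  3, -3, 1],
                   [ 3, -6,  3, 0],
                   [-3,  3,  0, 0],
                   [ 1,  0,  0, 0]], dtype=float)

bspline = np.array([[-1,  3, -3, 1],
                    [ 3, -6,  3, 0],
                    [-3,  0,  3, 0],
                    [ 1,  4,  1, 0]], dtype=float) / 6.0

def eval_segment(basis, ctrl, t):
    """Evaluate one cubic segment at parameter t in [0, 1]."""
    T = np.array([t**3, t**2, t, 1.0])
    return T @ basis @ ctrl

ctrl = np.array([[0, 0], [1, 2], [2, 2], [3, 0]], dtype=float)  # made-up 2D points

# The Bezier segment starts and ends exactly on the first and last control points...
print(eval_segment(bezier, ctrl, 0.0), eval_segment(bezier, ctrl, 1.0))
# ...while the B-spline segment only approximates them, which is one reason a
# curve rendered with a B-spline basis can look shorter than its control hull.
print(eval_segment(bspline, ctrl, 0.0), eval_segment(bspline, ctrl, 1.0))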
As an aside, approximating an arbitrary NURBS curve with a nonrational cubic is not always going to give you an exact match. Something else to keep in mind.

Algorithms for finding a look alike face?

I'm doing a personal project of trying to find a person's look-alike given a database of photographs of other people all taken in a consistent manner - people looking directly into the camera, neutral expression and no tilt to the head (think passport photo).
I have a system for placing markers at 2D coordinates on the faces, and I was wondering if there are any known approaches for finding a look-alike of a face given these landmarks?
I found the following facial recognition algorithms:
http://www.face-rec.org/algorithms/
But none deal with the specific task of finding a look-alike.
Thanks for your time.
I believe you can also try searching for "Face Verification" rather than just "Face Recognition". This might give you more relevant results.
Strictly speaking, the 2 are actually different things in scientific literature but are sometimes lumped under face recognition. For details on their differences and some sample code, take a look here: http://www.idiap.ch/~marcel/labs/faceverif.php
However, for your purposes, what others such as Edvard and Ari have kindly suggested would work too. Basically they are suggesting a K-nearest-neighbor style face recognition classifier.
As a start, you can probably try that. First, compute a feature vector for each of the face images in your database. One possible feature to use is the Local Binary Pattern (LBP); you can find the code by googling it. Do the same for your query image. Now, loop through all the feature vectors and compare them to that of your query image using Euclidean distance, and return the K nearest ones.
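A rough sketch of that pipeline, assuming scikit-image and numpy are available and that the database and query images are hypothetical, already-aligned grayscale face arrays:

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, points=8, radius=1):
    """Compute a uniform LBP histogram to use as the feature vector."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def k_nearest_faces(query, database, k=5):
    """Return indices of the k database faces closest to the query (Euclidean)."""
    q = lbp_histogram(query)
    distances = [np.linalg.norm(q - lbp_histogram(face)) for face in database]
    return np.argsort(distances)[:k]

# Example usage with placeholder arrays:
# nearest = k_nearest_faces(query_image, face_images, k=5)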
While the above method is easy to code, it will generally not be as robust as some of the more sophisticated ones, because simple approaches fail badly when faces are not aligned (the unconstrained-pose setting; search for "Labeled Faces in the Wild" to see the state of the art for this problem) or are taken under different environmental conditions. But if the faces in your database are aligned and taken under similar conditions, as you mentioned, then it might just work. If they are not aligned, you can use the face key points, which you mentioned you are able to compute, to align the faces. In general, comparing faces which are not aligned is a very difficult problem in computer vision and is still a very active area of research. But if you only consider faces that look alike and are in the same pose to be similar (i.e. similar in pose as well as looks), then this shouldn't be a problem.
The website you gave has links to the code for Eigenfaces and Fisherfaces. These are essentially two methods for computing feature vectors for your face images. Faces are identified by doing a K-nearest-neighbor search for faces in the database with feature vectors (computed using PCA and LDA respectively) closest to that of the query image.
I should probably also mention that in the Fisherfaces method, you will need to have "labels" for the faces in your database to identify the faces. This is because Linear Discriminant Analysis (LDA), the classification method used in Fisherfaces, needs this information to compute a projection matrix that will project feature vectors for similar faces close together and dissimilar ones far apart. Comparison is then performed on these projected vectors. Here lies the difference between face recognition and face verification: for recognition, you need to have "labels" for your training images in your database, i.e. you need to identify them.
For verification, you are only trying to tell whether any 2 given faces are of the same person. Often, you don't need the "labelled" data in the traditional sense (although some methods might make use of auxiliary training data to help in the face verification).
The code for computing Eigenfaces and Fisherfaces is available in OpenCV, in case you use it.
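If you do go the OpenCV route, a minimal sketch might look like this; it assumes the contrib module cv2.face is installed, and faces, labels, and query_face are placeholders for your own equally sized grayscale images and integer identities.

import cv2
import numpy as np

# faces: list of equally sized grayscale arrays; labels: one integer per face.
recognizer = cv2.face.EigenFaceRecognizer_create()   # or FisherFaceRecognizer_create()
recognizer.train(faces, np.array(labels))

# predict() returns the label of the nearest database identity and a
# distance-like confidence value (lower means closer).
label, confidence = recognizer.predict(query_face)
print(label, confidence)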
As a side note:
A feature vector is actually just a vector in the linear algebra sense. It is simply n numbers packed together. The word "feature" refers to something like a "statistic", i.e. a feature vector is a vector containing statistics that characterize the object it represents. For example, for the task of face recognition, the simplest feature vector would be the intensity values of the grayscale image of the face. In that case, I just reshape the 2D array of numbers into an n-by-1 column vector, each entry containing the value of one pixel. The pixel value here is the "feature", and the n x 1 vector of pixel values is the feature vector. In the LBP case, roughly speaking, it computes a histogram over small patches of pixels in the image and joins these histograms together into one histogram, which is then used as the feature vector. So the Local Binary Pattern is the statistic, and the histograms joined together form the feature vector. Together they describe the "texture" and facial patterns of your face.
Hope this helps.
These would seem like equivalent problems, but I do not work in the field. You essentially have the following two problems:
Face recognition: Take a face and try to match it to a person.
Find similar faces: Take a face and try to find similar faces.
Aren't these equivalent? In (1) you start with a picture that you want to match to the owner and you compare it to a database of reference pictures for each person you know. In (2) you pick a picture in your reference database and run (1) for that picture against the other pictures in the database.
Since the algorithms seem to give you a measure of how likely two pictures belong to the same person, in (2) you just sort the measures in decreasing order and pick the top hits.
I assume you should first analyze all the pictures in your database with whatever approach you are using. You should then have a set of metrics for each picture, against which you can compare a specific picture and statistically find the closest match.
For example, if you can measure the distance between the eyes, you can find faces that have the same distance. You can then find the face that has the overall closest match and return that.

Pyephem Algorithms Reference

I have never used pyephem before, and I'm not an expert in satellite positioning.
I'd like to exploit pyephem to calculate the position of a satellite using TLE.
I have to do something very easy, like this:
import ephem

tle = ["ISS (ZARYA)",
       "1 25544U 98067A 03097.78853147 .00021906 00000-0 28403-3 0 8652",
       "2 25544 51.6361 13.7980 0004256 35.6671 59.2566 15.58778559250029"]
iss = ephem.readtle(*tle)
observer = ephem.Observer()
observer.lon, observer.lat = ('-84.39733', '33.775867')
observer.date = ephem.Date('2002/4/23 10:10:00.000')
iss.compute(observer)
print(iss.alt, iss.az, iss.range)
-40:06:46.3 199:08:24.3 8834968.0
These three variables give the position of the satellite in the horizon reference system.
It's not clear to me how pyephem calculates these values. I've read the reference guide: http://rhodesmill.org/pyephem/radec
Reading that document, it seems that pyephem applies precession and nutation, but the last two lines of the document say:
"Note that no precession was applied to either of the final two sets of coordinates, but only to the first. This means that only the “Astrometric” position will correspond to the lines in your star atlas. The other positions are what are called “epoch-of-date” coordinates, and are measured off of the orientation of the celestial pole and the celestial equator for the very day of the observation itself."
Is the Earth's precession applied to az and alt?
Moreover, I'd like to know what kind of model pyephem uses for precession and nutation (I really need a reference). There are links to XEphem and libastro, but I can't find anything about the algorithms.
Do you have any suggestions?
Thank you very much!
You can find the various algorithms that PyEphem uses by looking through the various C language files in its libastro directory:
https://github.com/brandon-rhodes/pyephem/tree/master/libastro-3.7.5
But to answer your specific question: precession, aberration, and nutation are effects that are generally only computed for objects outside of the Earth's moving reference frame, such as the Sun, planets, and the distant stars. Since Earth satellites are travelling in our own reference frame, however, I think that libastro generally does a direct comparison between the position of a satellite above the Earth and the position of the observer on the Earth, since these are already coordinates in the same local reference frame.
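As a hedged illustration of that point, the satellite object from the question also exposes its sub-point and height directly, so the alt/az/range values are essentially plane geometry between two positions in the Earth's own frame; this reuses the iss and observer objects defined above.

iss.compute(observer)
print(iss.sublat, iss.sublong)       # latitude and longitude of the point beneath the ISS
print(iss.elevation)                 # height of the satellite above sea level, in metres
print(observer.lat, observer.lon)    # the observer's own coordinates
print(iss.alt, iss.az, iss.range)    # the topocentric quantities printed in the question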
