Geopandas: locate which area contains a point - geopandas

So I have one GeoDataFrame containing all the points in the city and another GeoDataFrame containing polygons. Is there any other way for me to determine whether a point is within a polygon without using sjoin? Whenever I use that function, this error appears:
AttributeError: 'BlockManager' object has no attribute 'dtype'

You can check if your point is within a polygon:
point.within(polygon)
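A minimal sketch using Shapely geometries (which GeoPandas uses under the hood); the coordinates here are made up:

```python
from shapely.geometry import Point, Polygon

# A hypothetical square polygon and two test points
polygon = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
inside = Point(2, 2)
outside = Point(10, 10)

print(inside.within(polygon))   # True
print(outside.within(polygon))  # False
```

On a whole GeoDataFrame you can also call gdf.within(polygon) element-wise. The BlockManager error itself is commonly reported as a pandas/geopandas version incompatibility, so upgrading both packages in step may restore sjoin as well.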

Related

Convert point cloud to spline path

Trying to convert a .csv file which is a point cloud, but as a sequential plot: it has all three coordinates and a date-time stamp per point.
What I'm trying to do is connect the dots and create a spline for other objects to attach to.
Any help would be appreciated.
I was trying to follow the Python script in another thread:
Fit Curve-Spline to 3D Point Cloud
I was trying this in Cinema 4D, but it doesn't seem to handle this level of work; this might be one for a Python patch in Blender.
MATLAB or Maple would also be good alternatives; if anyone out there can help, let me know.
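If you end up in Python, SciPy can fit a parametric B-spline through an ordered sequence of 3D points. A minimal sketch with made-up coordinates (splprep and splev are real SciPy functions; the data and sample count are placeholders):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical sequential 3D points (replace with columns read from the CSV)
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 1.0, 0.5],
    [2.0, 0.5, 1.0],
    [3.0, 1.5, 1.5],
    [4.0, 1.0, 2.0],
])

# Fit a cubic B-spline through the points (s=0 forces exact interpolation)
tck, u = splprep(points.T, s=0)

# Sample 100 evenly spaced parameter values along the fitted spline
xs, ys, zs = splev(np.linspace(0, 1, 100), tck)
```

The resampled xs/ys/zs columns can then be written back out to CSV and imported as a curve in Blender or Cinema 4D.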

SRTM height reference is unclear

After having some trouble understanding the height reference in SRTM data cropped by the elevation Python package, I checked the SRTM guide, which states the following:
The unit of elevation is meters as referenced to the WGS84/EGM96 geoid
I am having difficulties with this sentence:
Assuming I read it as "WGS84 geoid OR EGM96 geoid", then why is WGS84 called a "geoid"? AFAIK it is NOT a geoid but rather an ellipsoid that approximates the geoid.
If the data actually contains two sets of data, one referencing WGS84 and the other referencing EGM96, then why does the elevation library only produce tiff files referenced to EPSG:4326 (WGS84)? How do I create a tif in the EGM96 reference frame?
How do I understand this sentence?
SRTM provides data in BOTH references: WGS
So after a lot of hair-pulling, I can only come to the conclusion that elevation writes the wrong CRS inside the file. We've checked the ellipsoid height at that location with two other systems and confirmed that while the .tiff file's CRS is labeled EPSG:4326 with no transformations/translations, it in fact contains EGM96 elevations.
So, to get a "real" EPSG:4326 tiff file, I first had to force it to believe its current CRS is EGM96 (the -s_srs argument) and then convert it to real EPSG:4326 (the -t_srs argument) using gdalwarp:
gdalwarp my.tiff my.wgs84.tif -s_srs EPSG:4326+5773 -t_srs EPSG:4326
After that, the CRS label of the new tif remains the same as before, but querying for altitudes now yields the right WGS84 ellipsoid values.

Is there a way to change the projection in a topojson file?

I am trying to create a topojson file projected using geoAlbersUsa, originating from the US Census's ZCTA (Zip Codes, essentially) shapefile. I was able to successfully get through the examples in the excellent https://medium.com/@mbostock/command-line-cartography-part-1-897aa8f8ca2c using the specified maps, and now I'm trying to get the same result using the Zip Code-level shapefiles.
I keep running into various issues due to the size of the file and the length of the strings within the file. While I have been able to create a geojson file and a topojson file, I haven't been able to give it the geoAlbersUsa projection I want. I was hoping to find something to convert the current topojson file into a topojson file with a geoAlbersUsa projection but I haven't been able to find any way.
I know this can be done programmatically in the browser, but everything I've read indicates that performance will be significantly better if as much as possible can be done in the files themselves first.
Attempt 1: I was able to convert the ZCTA-level shapefile to a geojson file successfully using shp2json (as in Mike Bostock's example) but when I try to run geoproject (from d3-geo-projection) I get errors related to excessive string length. In node (using npm) I installed d3-geo-projection (npm install -g d3-geo-projection) then ran the following:
geoproject "d3.geoAlbersUsa()" < us_zips.geojson > us_zips_albersUsa.json
I get errors stating "Error: Cannot create a string longer than 0x3fffffe7 characters"
Attempt 2: I used ogr2ogr (https://gdal.org/programs/ogr2ogr.html) to create the geojson file (instead of shp2json), then tried to run the same geoproject command as above and got the same error.
Attempt 3: I used ogr2ogr to create a geojson sequence file (instead of a geojson file), then ran geo2topo to create the topojson file from the geojson sequence. While this succeeded in creating the topojson file, it still doesn't include the geoAlbersUsa projection in the resulting topojson file.
I gather from the rather obtuse documentation of ogr2ogr that an output projection can be specified using -a_srs, but I can't for the life of me figure out how to specify something that would get me the geoAlbersUsa projection. I found this reference, https://spatialreference.org/ref/sr-org/44/, but I think that would get me plain Albers, and it may chop off Alaska and Hawaii, which is not what I want.
Any suggestions here? I was hoping I'd find a way to change the projection in the topojson file itself since that would avoid the excessively-long-string issue I seem to run into whenever I try to do anything in node that requires the use of the geojson file. It seems like possibly that was something that could be done in earlier versions of topojson (see Ways to project topojson?) but I don't see any way to do it now.
Not quite an answer, but more than a comment...
So, I Googled just "0x3fffffe7" and found this comment on a random GitHub/Node.js project, and based on reading it, my gut feeling is that the Node stuff and/or the D3 stuff you're using is reducing your entire ZCTA-level shapefile down to... a single string stored in memory! That's not good for a continent-scale map with such granular detail.
Moreover, the person who left that comment suggested that the OP in that case would need a different approach to introduce their dataset to the client. (Which I suppose is a browser?) In your case, might it work if you query out each state's collection of zips into a single shapefile (ogr2ogr can do this using OGR_SQL), which would give you 50 different shapefiles? Then for each of these, run them through your conversions to get json/geoAlbers. To test this concept, try exporting just one state and see if everything else works as expected.
That being said, I'm concerned that your approach to this project has an unworkable UI/architectural expectation: I just don't think you can put that much geodata in a browser DIV! How big is the DIV, full screen I hope?!?
My advice would be to think of a different way to present the data. For example, an inset DIV to "select your state"; clicking the state zooms the main DIV to a larger view of that state and simultaneously pulls down and renders that state-specific ZCTA-level data, using the 50 files you prepped with the strategy I mentioned above. Does that make sense?
Here's a quick example for how I expect you can apply the OGR_SQL to your scenario, adapt to fit:
ogr2ogr idaho_zcta.shp USA_zcta.shp -sql "SELECT * FROM USA_zcta WHERE STATE_NAME = 'ID'"
Parameters as follows:
idaho_zcta.shp < this is your new file
USA_zcta.shp < this is your source shapefile
-sql < this signals the OGR_SQL query expression
As for the query itself, a couple tips. First, wrap the whole query string in double-quotes. If something weird happens, try adding leading and trailing spaces to the start and end of your query, like..
" SELECT ... 'ID' "
It's odd I know, but I once had a situation where it only worked that way.
Second, relative to the query, the table name is the same as the shapefile name, only without the ".shp" file extension. I can't remember whether or not there is case-sensitivity between the shapefile name and the query string's table name. If you run into a problem, give the shapefile an all-lowercase name and use lowercase in the SQL, too.
As for your projection conversion, you're on your own there. That geoAlbersUsa looks like it's not an industry-standard (i.e. EPSG-coded) projection; it's D3-specific, intended exclusively for a browser. So ogr2ogr isn't going to handle it. But I agree with the strategy of converting the data in advance. Hopefully the conversion pipeline you already researched will work if you just have much smaller (i.e. state-scale) datasets to put through it.
Good luck.

Forward Kinematics Skeleton Programming

I am putting together a project where I need to be able to source outside data as a means of inputting skeleton joint positions into Maya. Basically I have a spreadsheet of sequential joint positions for the skeleton which I would like to load into Maya and then link to the skin. Does anyone know a way to upload or reference these positions (as FK into Maya)?
Probably the easiest thing to do is to convert your spreadsheet data to the ATOM format.
ATOM is a JSON-based format for exchanging animation data, and since it's JSON-based, you should be able to concoct a CSV-to-ATOM translator using Python's built-in csv and json modules.
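As a rough illustration of that idea, here is a generic CSV-to-JSON translator using only the standard library; the column names and output structure are hypothetical, so adapt them to the actual ATOM schema:

```python
import csv
import io
import json

# Hypothetical CSV of sequential joint positions: frame, joint, x, y, z
csv_text = """frame,joint,x,y,z
1,hip,0.0,1.0,0.0
1,knee,0.0,0.5,0.1
2,hip,0.1,1.0,0.0
"""

# Group rows by frame number into a JSON-friendly structure
frames = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    frames.setdefault(row["frame"], {})[row["joint"]] = [
        float(row["x"]), float(row["y"]), float(row["z"])
    ]

atom_like = json.dumps(frames, indent=2)
print(atom_like)
```

The same loop works unchanged if you swap the embedded string for open("joints.csv") and write the result to a file instead of printing it.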

Extracting x, y and z coordinates from .las dataset (lidar)

I was trying to extract the bare coordinates of the points in a point cloud from a classified (vegetation) .las dataset.
Using ArcGIS and looking into the attribute table is not what I'm looking for, so I was asking myself whether I can "de-convert" the .las to ASCII or similar to get those coordinates. I hope someone understands my question.
EDIT: I managed to get what I want with a simple ArcMap 10.2 tool called Feature Class Z to ASCII (3D Analyst).
Though you got what you wanted in ArcMap, there is a free and open-source way. To extract XYZ values from a .las file, check out libLAS, specifically the las2txt command:
$ las2txt mylasfile.las mytextfile.txt
