I was trying to extract the bare coordinates of the points in a point cloud from a classified (vegetation) .las dataset.
Using ArcGIS and looking into the attribute table is not what I'm looking for, so I was wondering whether I can "de-convert" the .las to ASCII or similar to get those coordinates. I hope someone understands my question.
EDIT: I managed to get what I want with a simple ArcMap 10.2 tool called Feature Class Z To ASCII (3D Analyst).
Though you got what you want in ArcMap, there is a free and open-source way. To extract XYZ values from a .las file, check out libLAS, specifically the las2txt command:
$ las2txt mylasfile.las mytextfile.txt
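Once las2txt has written the text file, loading the coordinates back into Python is straightforward. A minimal sketch, assuming one point per line; the delimiter varies between las2txt versions (comma or space), so both are handled here:

```python
# Read an XYZ text file such as the one las2txt produces.
# Handles both comma- and space-separated values; extra columns
# (intensity, classification, ...) after x, y, z are ignored.
def read_xyz(path):
    points = []
    with open(path) as f:
        for line in f:
            parts = line.replace(',', ' ').split()
            if len(parts) >= 3:
                x, y, z = map(float, parts[:3])
                points.append((x, y, z))
    return points
```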
Related
I'm trying to convert a .csv file that is a point-cloud plot in sequential order; it has all three coordinates and a date/time stamp per point.
What I'm trying to do is connect the dots and create a spline for other objects to attach to.
Any help would be appreciated.
I was trying to follow the Python script in another thread.
Fit Curve-Spline to 3D Point Cloud
I was trying this in Cinema 4D but it doesn't seem to handle this level of work; this might be a job for a Python patch in Blender.
MATLAB or Maple would also be a good alternative, so if anyone out there can help, let me know.
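Since the question is tool-agnostic, here is a hedged sketch of the usual SciPy approach: fit a parametric smoothing B-spline through the ordered points with `splprep`, then sample it densely with `splev`. The helix data below is a hypothetical stand-in for the CSV point cloud, which the question says is already in sequential (time-stamp) order, as `splprep` requires:

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Hypothetical stand-in for the CSV point cloud: noisy samples
# along a helix, already in sequential order.
t = np.linspace(0, 4 * np.pi, 200)
x = np.cos(t) + np.random.normal(0, 0.02, t.size)
y = np.sin(t) + np.random.normal(0, 0.02, t.size)
z = t / (4 * np.pi)

# Fit a smoothing B-spline through the ordered 3D points;
# larger s gives a smoother curve, s=0 interpolates every point.
tck, u = splprep([x, y, z], s=0.5)

# Sample the fitted curve densely for export or drawing.
u_fine = np.linspace(0, 1, 1000)
x_s, y_s, z_s = splev(u_fine, tck)
```

The sampled `(x_s, y_s, z_s)` arrays can then be written out as a polyline for whichever DCC tool (Blender, Cinema 4D) does the rendering.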
I'm currently working on lattices. To solve some problems, I have to generate a large number of bases of the same lattice, which takes a lot of time. For example, to generate 10,000 bases, I have to launch the code when I go to bed and retrieve the list of bases in the morning. The problem is that I can't do it every day.
So I'd like to save my list of matrices once and for all in a text file. The problem is that when I do, I get strings back.
The matrix list is named BB.
import csv

# write the matrix list
with open('yourfile.csv', 'w') as f1:
    writefile = csv.writer(f1)
    writefile.writerows(BB)

# read it back -- every value comes back as a string
with open('yourfile.csv', 'rU') as f1:
    data = list(csv.reader(f1))
Do you know how I could find a way to save the matrix list and then, directly recover a list? I'm working on the Sage notebook.
The correct and easiest way to save and load SageMath objects via a file is:
save(your_list_of_matrix, 'filename.sobj')
your_list_of_matrix = load('filename.sobj')
Saving SageMath objects to CSV requires converting the values to strings and can lose precision.
Refer to the official documentation for more detail.
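If you also want a route that works in plain Python outside the Sage notebook, the standard-library pickle module round-trips object lists the same way. A minimal sketch (the nested lists here are just a stand-in for your list of basis matrices):

```python
import pickle

# BB is a stand-in here for your list of basis matrices.
BB = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

with open('matrices.pkl', 'wb') as f:
    pickle.dump(BB, f)          # save once

with open('matrices.pkl', 'rb') as f:
    BB_loaded = pickle.load(f)  # recover a real list, not strings

assert BB_loaded == BB
```

Unlike the CSV route, nothing is converted to strings, so you get your objects back exactly.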
I am trying to create a topojson file projected using geoAlbersUsa, originating from the US Census's ZCTA (essentially Zip Codes) shapefile. I was able to work through the examples in the excellent https://medium.com/@mbostock/command-line-cartography-part-1-897aa8f8ca2c using the specified maps, and now I'm trying to get the same result using the Zip Code-level shapefiles.
I keep running into various issues due to the size of the file and the length of the strings within the file. While I have been able to create a geojson file and a topojson file, I haven't been able to give it the geoAlbersUsa projection I want. I was hoping to find something to convert the current topojson file into a topojson file with a geoAlbersUsa projection but I haven't been able to find any way.
I know this can be done programmatically in the browser, but everything I've read indicates that performance will be significantly better if as much as possible can be done in the files themselves first.
Attempt 1: I was able to convert the ZCTA-level shapefile to a geojson file successfully using shp2json (as in Mike Bostock's example) but when I try to run geoproject (from d3-geo-projection) I get errors related to excessive string length. In node (using npm) I installed d3-geo-projection (npm install -g d3-geo-projection) then ran the following:
geoproject "d3.geoAlbersUsa()" < us_zips.geojson > us_zips_albersUsa.json
I get errors stating "Error: Cannot create a string longer than 0x3fffffe7 characters"
Attempt 2: I used ogr2ogr (https://gdal.org/programs/ogr2ogr.html) to create the geojson file (instead of shp2json), then tried to run the same geoproject command as above and got the same error.
Attempt 3: I used ogr2ogr to create a geojson sequence file (instead of a geojson file), then ran geo2topo to create the topojson file from the geojson sequence file. While this succeeded in creating the topojson file, the result still doesn't use the geoAlbersUsa projection.
I get from the rather obtuse documentation of ogr2ogr that an output projection can be specified using -a_srs but I can't for the life of me figure out how to specify something that would get me the geoAlbersUsa projection. I found this reference https://spatialreference.org/ref/sr-org/44/ but I think that would get me the Albers and it may chop off Alaska and Hawaii, which is not what I want.
Any suggestions here? I was hoping I'd find a way to change the projection in the topojson file itself since that would avoid the excessively-long-string issue I seem to run into whenever I try to do anything in node that requires the use of the geojson file. It seems like possibly that was something that could be done in earlier versions of topojson (see Ways to project topojson?) but I don't see any way to do it now.
Not quite an answer, but more than a comment.
So, I Googled just "0x3fffffe7" and found this comment on a random GitHub/Node.js project. Based on reading it, my gut feeling is that the Node and/or D3 tooling you're using is reducing your entire ZCTA-level shapefile down to a single string stored in memory! That's not good for a continent-scale map with such granular detail.
Moreover, the person who left that comment suggested that the OP in that case would need a different approach to introduce their dataset to the client. (Which I suppose is a browser?) In your case, might it work if you query out each state's collection of zips into its own shapefile (ogr2ogr can do this using OGR SQL), which would give you about 50 smaller shapefiles? Then, for each of these, run them through your conversions to get json/geoAlbers. To test this concept, try exporting just one state and see if everything else works as expected.
That being said, I'm concerned that your approach to this project has an unworkable UI/architectural expectation: I just don't think you can put that much geodata in a browser DIV! How big is the DIV, full screen I hope?!?
My advice would be to think of a different way to present the data. For example, an inset DIV to "select your state"; clicking a state zooms the main DIV to a larger view of that state and simultaneously pulls down and renders that state's ZCTA-level data from the ~50 files you prepped using the strategy I mentioned above. Does that make sense?
Here's a quick example for how I expect you can apply the OGR_SQL to your scenario, adapt to fit:
ogr2ogr idaho_zcta.shp USA_zcta.shp -sql "SELECT * FROM USA_zcta WHERE STATE_NAME = 'ID'"
Parameters as follows:
idaho_zcta.shp < this is your new file
USA_zcta.shp < this is your source shapefile
-sql < this signals the OGR_SQL query expression
As for the query itself, a couple of tips. First, wrap the whole query string in double quotes. If something weird happens, try adding leading and trailing spaces to the start and end of your query, like:
" SELECT ... 'ID' "
It's odd I know, but I once had a situation where it only worked that way.
Second, relative to the query, the table name is the same as the shapefile name, only without the ".shp" file extension. I can't remember whether or not there is case-sensitivity between the shapefile name and the query string's table name. If you run into a problem, give the shapefile an all-lowercase name and use lowercase in the SQL, too.
As for your projection conversion, you're on your own there. geoAlbersUsa doesn't look like an industry-standard (i.e. EPSG-coded) projection; it's D3-specific and intended exclusively for the browser, so ogr2ogr isn't going to handle it. But I agree with the strategy of converting the data in advance. Hopefully the conversion pipeline you already researched will work once you feed it much smaller (i.e. state-scale) datasets.
Good luck.
I am putting together a project where I need to be able to source outside data as a means of inputting skeleton joint positions into Maya. Basically I have a spreadsheet of sequential joint positions for the skeleton which I would like to load into Maya and then link to the skin. Does anyone know a way to upload or reference these positions (as FK into Maya)?
Probably the easiest thing to do is to convert your spreadsheet data to the ATOM format.
ATOM is a JSON-based format for exchanging animation data, and since it's JSON-based you should be able to concoct a CSV-to-ATOM translator using Python's built-in csv and json modules.
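The csv-to-json plumbing that translator would need can be sketched with the two standard-library modules alone. Note that the "frames" dictionary below is a placeholder structure, not the real ATOM schema; consult the Maya ATOM documentation for the actual layout your version expects:

```python
import csv
import json

# Sketch of the csv -> json plumbing only. The output layout here
# ({"frames": [...]}) is a hypothetical placeholder, NOT the real
# ATOM schema; adapt it to the structure Maya actually expects.
def csv_to_json(csv_path, json_path):
    with open(csv_path, newline='') as f:
        rows = list(csv.DictReader(f))  # one dict per spreadsheet row
    with open(json_path, 'w') as f:
        json.dump({'frames': rows}, f, indent=2)
```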
There is a table in a geoDB with fields in Well-Known Binary (WKB) format. I would like to get latitude and longitude from this binary data in plain decimal form, independent of the DB (with Java, for example). Does a library or code example exist for such a transformation?
Thanks in advance.
Not sure if you found a solution yet, but I found a free Java library. Info Here and download page Here.
From what I can tell, it supports WKB and WKT as well as a few other formats.
Enjoy!
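For a sense of what such a library does under the hood, the WKB layout for a simple point is small enough to decode by hand: a byte-order flag, a uint32 geometry type, then two doubles. A minimal sketch in Python (the question asks about Java, where a library like the one above would do this for you; this just illustrates the format, and only handles 2D Point geometries):

```python
import struct

def parse_wkb_point(wkb):
    # Byte 0 is the byte-order flag: 0 = big-endian, 1 = little-endian.
    endian = '<' if wkb[0] == 1 else '>'
    # Bytes 1-4 hold the geometry type as a uint32; 1 means Point.
    (geom_type,) = struct.unpack(endian + 'I', wkb[1:5])
    if geom_type != 1:
        raise ValueError('expected a Point geometry (type 1), got %d' % geom_type)
    # Bytes 5-20 are two IEEE-754 doubles: x (longitude), y (latitude).
    x, y = struct.unpack(endian + 'dd', wkb[5:21])
    return x, y

# Build a sample little-endian WKB blob for POINT(30.5 50.25) and decode it.
sample = b'\x01' + struct.pack('<I', 1) + struct.pack('<dd', 30.5, 50.25)
print(parse_wkb_point(sample))  # (30.5, 50.25)
```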
Select the data from the geometry field using the AsText(Geometry) function; this gives you the geometry object in a textual representation. If you have to draw on a web page, use AsSVG(Geometry) and you get the data as SVG.