I'm working with shapefiles (.shp) and I've run into problems with the projections.
My code is below.
import pandas as pd
import geopandas as gpd
from geopandas import GeoSeries, GeoDataFrame
import os
Aelly = gpd.read_file(r'C:\Users\Hyun Mo\Downloads\조인 (1)\after_join.shp', encoding = 'utf-8')
base_map = gpd.read_file(r'C:\Users\Hyun Mo\Downloads\11000 (3)\TL_SCCO_SIG.shp', encoding = 'ANSI')
Aelly_to_crs = Aelly.to_crs(base_map.crs)
Aelly_to_crs.plot(ax=base_map.plot())
And here is the structure of my data:
print(base_map.head())
print(Aelly.head())
When I executed print(base_map.crs) and print(Aelly_to_crs.crs), I got the results below.
Aelly_to_crs.plot(ax=base_map.plot())
The picture above is the result of executing Aelly.plot(ax=base_map.plot()).
As you can see, the two pictures don't match each other.
How can I solve this problem?
-----------edit
My desired output is the picture below.
Here are my data links:
http://blog.naver.com/khm2963/220929301892
The pictures below show the procedure for downloading my file.
From the data that you have printed it looks like everything is working as it should! The coordinates in the two shapefiles are very different, but the CRS is the same, so the plot makes total sense.
GeoPandas cannot tell you whether the data and the numbers make sense in the real world. You gave it two shapefiles with a well-defined projection (EPSG:32652) and hardcoded coordinates, and GeoPandas is happy with that.
If you know that in reality both shapefiles represent the same area, then you are the one who has to realize that one of the data sources is somehow corrupt. I think one of the shapefiles accidentally got the wrong CRS definition stored as metadata (imagine it as a wrong text encoding, for instance).
The easiest way to figure that out and correct it is with ArcGIS or QGIS, where you can play with different projections to work out what the correct projection was. Then you can save the shapefile with the new projection metadata and the rest will work out of the box.
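If you prefer to stay in GeoPandas once you have worked out what the true projection is, you can also override the CRS metadata there instead of resaving the file. A minimal sketch (geopandas 0.7 or newer; EPSG:5186 is only a placeholder for whatever the correct projection turns out to be):

# Sketch: overwrite the wrong CRS metadata without touching the coordinates,
# then reproject into the base map's CRS as before.
Aelly = Aelly.set_crs(epsg=5186, allow_override=True)  # placeholder EPSG code
Aelly_to_crs = Aelly.to_crs(base_map.crs)
Aelly_to_crs.plot(ax=base_map.plot())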
Related
Can someone guide me on how best to add a background map to two seaborn jointplots I created?
To give context, I am currently analyzing a dataset from Austin Police Dept's Crime Reports database. What I am attempting to do is visualize the density of murders and capital murders in Austin, TX. The dataset extends from the beginning of 2003 to the present.
The notebook can be located at: https://github.com/rgrantham82/Hate_Crimes_Analysis/blob/master/Austin%20Crimes%20Report%20Analysis.ipynb
So far, I visualized both data frames using the seaborn jointplot method, using latitude and longitude.
I BELIEVE this is a good method to plot the density of murders judging by the dataset but if someone has a better idea, I am open to instruction on that as well.
So, if it is even possible, how do I add a basemap to both plots?
So far, I have attempted the contextily method and the geopandas method. Admittedly, this is my first attempt (outside of a DataCamp class) at using either. To date, I have been unsuccessful with both.
The contextily add_basemap() method did not produce a map (I guess it does not have one for the Austin area?), and I could not get the geopandas method to produce a viable basemap either. I seem to be simply swamped by it.
Murders Plot
Capital Murders Plot
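For what it's worth, the pattern that usually makes contextily's add_basemap() work is to put the points into a GeoDataFrame and reproject them to Web Mercator before adding the tiles. A rough sketch, where the small murders DataFrame is only a stand-in for the real crime-report data and the latitude/longitude column names are assumptions:

import pandas as pd
import geopandas as gpd
import contextily as ctx
import matplotlib.pyplot as plt

# Stand-in for the real crime-report DataFrame; replace with your own data.
murders = pd.DataFrame({
    'latitude': [30.2672, 30.2849, 30.2500],
    'longitude': [-97.7431, -97.7341, -97.7600],
})

gdf = gpd.GeoDataFrame(
    murders,
    geometry=gpd.points_from_xy(murders['longitude'], murders['latitude']),
    crs='EPSG:4326',             # plain lat/lon
)
gdf = gdf.to_crs(epsg=3857)      # Web Mercator, the CRS web tile providers use

ax = gdf.plot(figsize=(10, 10), markersize=20, alpha=0.7)
ctx.add_basemap(ax)              # pulls OpenStreetMap tiles in under the points
plt.show()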
I am trying to create a topojson file projected with geoAlbersUsa, originating from the US Census's ZCTA (Zip Codes, essentially) shapefile. I was able to work through the examples in the excellent https://medium.com/#mbostock/command-line-cartography-part-1-897aa8f8ca2c using the maps it specifies, and now I'm trying to get the same result using the Zip Code-level shapefiles.
I keep running into various issues due to the size of the file and the length of the strings within the file. While I have been able to create a geojson file and a topojson file, I haven't been able to give it the geoAlbersUsa projection I want. I was hoping to find something to convert the current topojson file into a topojson file with a geoAlbersUsa projection but I haven't been able to find any way.
I know this can be done programmatically in the browser, but everything I've read indicates that performance will be significantly better if as much as possible can be done in the files themselves first.
Attempt 1: I was able to convert the ZCTA-level shapefile to a geojson file successfully using shp2json (as in Mike Bostock's example) but when I try to run geoproject (from d3-geo-projection) I get errors related to excessive string length. In node (using npm) I installed d3-geo-projection (npm install -g d3-geo-projection) then ran the following:
geoproject "d3.geoAlbersUsa()" < us_zips.geojson > us_zips_albersUsa.json
I get errors stating "Error: Cannot create a string longer than 0x3fffffe7 characters"
Attempt 2: I used ogr2ogr (https://gdal.org/programs/ogr2ogr.html) to create the geojson file (instead of shp2json), then tried to run the same geoproject command as above and got the same error.
Attempt 3: I used ogr2ogr to create a geojson sequence file (instead of a geojson file), then ran geo2topo to create the topojson file from that sequence. While this succeeded in creating the topojson file, it still doesn't include the geoAlbersUsa projection in the resulting topojson file.
I get from the rather obtuse documentation of ogr2ogr that an output projection can be specified using -a_srs but I can't for the life of me figure out how to specify something that would get me the geoAlbersUsa projection. I found this reference https://spatialreference.org/ref/sr-org/44/ but I think that would get me the Albers and it may chop off Alaska and Hawaii, which is not what I want.
Any suggestions here? I was hoping I'd find a way to change the projection in the topojson file itself since that would avoid the excessively-long-string issue I seem to run into whenever I try to do anything in node that requires the use of the geojson file. It seems like possibly that was something that could be done in earlier versions of topojson (see Ways to project topojson?) but I don't see any way to do it now.
Not quite an answer, but more than a comment..
So, I Googled just "0x3fffffe7" and found this comment on a random GitHub/Node.js project, and based on reading it, my gut feeling is that the Node and/or D3 tooling you're using is reducing your entire ZCTA-level shapefile down to... a single string stored in memory!! That's not good for a continent-scale map with such granular detail.
Moreover, the person who left that comment suggested that the OP in that case would need a different approach to introduce their dataset to the client. (Which I suppose is a browser?) In your case, might it work if you query out each state's collection of zips into its own shapefile (ogr2ogr can do this using OGR-SQL), which would give you 50 different shapefiles? Then for each of these, run them through your conversions to get json/geoalbers. To test this concept, try exporting just one state and see if everything else works as expected.
That being said, I'm concerned that your approach to this project has an unworkable UI/architectural expectation: I just don't think you can put that much geodata in a browser DIV! How big is the DIV, full screen I hope?!?
My advice would be to think of a different way to present the data. For example, an inset-DIV to "select your state"; clicking a state zooms the main DIV to a larger view of that state and simultaneously pulls down and renders that state's ZCTA-level data using the 50 files you prepped using the strategy I mentioned above. Does that make sense?
Here's a quick example for how I expect you can apply the OGR_SQL to your scenario, adapt to fit:
ogr2ogr idaho_zcta.shp USA_zcta.shp -sql "SELECT * FROM USA_zcta WHERE STATE_NAME = 'ID'"
Parameters as follows:
idaho_zcta.shp < this is your new file
USA_zcta.shp < this is your source shapefile
-sql < this signals the OGR_SQL query expression
As for the query itself, a couple tips. First, wrap the whole query string in double-quotes. If something weird happens, try adding leading and trailing spaces to the start and end of your query, like..
" SELECT ... 'ID' "
It's odd I know, but I once had a situation where it only worked that way.
Second, relative to the query, the table name is the same as the shapefile name, only without the ".shp" file extension. I can't remember whether or not there is case-sensitivity between the shapefile name and the query string's table name. If you run into a problem, give the shapefile an all-lowercase name and use lowercase in the SQL, too.
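By the way, if you would rather do the per-state split in Python than in OGR-SQL, geopandas can do the same thing. A rough sketch, assuming the national file really has a state attribute to group on (the raw Census ZCTA file may not, in which case you would need a spatial join against a states layer first; 'STATE_NAME' and the file names are placeholders):

import geopandas as gpd

# Placeholder names: 'USA_zcta.shp' and 'STATE_NAME' must match what your
# source file actually contains.
zcta = gpd.read_file('USA_zcta.shp')
for state, subset in zcta.groupby('STATE_NAME'):
    subset.to_file(f'{state.lower()}_zcta.shp')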
As for your projection conversion, you're on your own there. That geoAlbersUsa looks like it's not an industry standard (i.e. EPSG-coded) projection; it is D3-specific and intended exclusively for the browser, so ogr2ogr isn't going to handle it. But I agree with the strategy of converting the data in advance. Hopefully the conversion pipeline you already researched will work if you just have much smaller (i.e. state-scale) datasets to put through it.
Good luck.
I trained my model in Nvidia DIGITS 5 and I would now like to extract the accuracy and loss plots that were generated during training for a report. Is this data saved somewhere so that it would be possible to extract the data for these plots, plot it in Python, and perhaps ultimately modify the plots to compare different models, etc.?
The best solution I have found is to either look at the HTML file or to scan the text file caffe_output.log that is produced by Caffe. The text file is usually stored in /var/digits/jobs/insert_your_job_id/, but on Linux systems you can also just run:
locate caffe_output.log
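If you go the log-scanning route, pulling the loss curve out with a regular expression is straightforward. A rough sketch; the exact wording of Caffe's log lines varies between versions, so treat the pattern as a starting point:

import re

# Collect (iteration, loss) pairs from lines like "Iteration 100 ... loss = 0.25".
iterations, losses = [], []
pattern = re.compile(r'Iteration (\d+).*?loss = ([\d.eE+-]+)')
with open('caffe_output.log') as f:
    for line in f:
        match = pattern.search(line)
        if match:
            iterations.append(int(match.group(1)))
            losses.append(float(match.group(2)))

print(f'parsed {len(losses)} loss values')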
Go to your DIGITS job folder and locate your job's subfolder. Inside you'll find a file status.pickle, which is a pickled object containing all your job's information.
You can load it in python like so:
import digits
import pickle
data = pickle.load(open('status.pickle','rb'))
This object is somewhat generic and may contain multiple tasks. For a typical classification task it will likely be just one, but you will still need to access it via data.tasks[0]. From there you can grab the plots:
data.tasks[0].combined_graph_data()
which returns a somewhat convoluted dict (unfortunately - since your network can produce many accuracy/loss outputs, as well as even custom ones). It contains everything you need though - I managed to plot accuracy with:
plt.plot( data.tasks[0].combined_graph_data()['columns'][2][1:] )
but it's likely that you'll have to write a bit of custom code. As always, dir() is your friend.
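Putting the pieces above together, a sketch of what the plotting code might look like; the layout of the 'columns' entries (a label followed by the values) is inferred from the snippet above, so verify it against your own job's data:

import pickle
import matplotlib.pyplot as plt

import digits  # needed so pickle can resolve the DIGITS classes

with open('status.pickle', 'rb') as f:
    data = pickle.load(f)

graph = data.tasks[0].combined_graph_data()
for column in graph['columns']:
    label, values = column[0], column[1:]  # assumed layout: [name, v1, v2, ...]
    plt.plot(values, label=str(label))

plt.legend()
plt.show()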
I'm new at this and essentially have very little idea of what I'm doing.
(FYI I'm working off of this tutorial:
http://bost.ocks.org/mike/map/)
I'm trying to get topojson to work.
I've successfully installed homebrew and node.
I've done the
"npm install -g topojson" part as well.
And then, after that, when I type "which ogr2ogr" etc., just nothing happens.
He says that if you're having trouble, you should edit your PATH environment variable. I have only a vague idea of what that means, and I'm not sure whether that's my problem or not.
Let me know what other information you need. I really just want to make a map. The global install does seem to have worked. I just don't know what to do from here.
The tutorial you linked to is a great starting point. I wish I'd seen it before trying to figure everything out on my own. :)
From what I understand, you probably missed the step in which you install gdal. If you're seeing some other errors, please post them in your question.
You can get ogr2ogr working by running:
brew install gdal
Here's some background info for you, so you'll get a better understanding of what's going on there.
topojson and ogr2ogr are two distinct utilities. ogr2ogr is part of the gdal package and in our case is used to generate GeoJSON from a shapefile.
GDAL is a translator library for raster geospatial data formats that
is released under an X/MIT style Open Source license by the Open
Source Geospatial Foundation. As a library, it presents a single
abstract data model to the calling application for all supported
formats. It also comes with a variety of useful commandline utilities
for data translation and processing.
TopoJSON is used to compress the rather large GeoJSON output from the previous GDAL conversion. It reduces redundancy by specifying paths with arcs rather than discrete points. It's pretty neat, actually:
TopoJSON is an extension of GeoJSON that encodes topology. Rather than
representing geometries discretely, geometries in TopoJSON files are
stitched together from shared line segments called arcs. TopoJSON
eliminates redundancy, offering much more compact representations of
geometry than with GeoJSON; typical TopoJSON files are 80% smaller
than their GeoJSON equivalents. In addition, TopoJSON facilitates
applications that use topology, such as topology-preserving shape
simplification, automatic map coloring, and cartograms.
The output of these two steps (shapefile -> GeoJSON -> TopoJSON) will be a JSON string which is easily interpreted by JavaScript. You'll need to use topojson in your drawing code to convert back to GeoJSON for actually drawing the map.
Recall from earlier the two closely-related JSON geographic data
formats: GeoJSON and TopoJSON. While our data can be stored more
efficiently in TopoJSON, we must convert back to GeoJSON for display.
Breaking this step out to make it explicit:
var subunits = topojson.object(uk, uk.objects.subunits);
On Ubuntu, I used the following to get ogr2ogr:
sudo apt-get install gdal-bin
I am trying to get user input from a matplotlib XY plot. The plot contains multiple datasets, and I need the user to select which dataset to use and the range. I need this to fit a model to the right dataset and range.
Therefore I need two indicators that would be "attached" to a specific dataset of the user's choosing, and from them I need both the dataset info and the range info.
This is somewhat in line with what commercial plotting packages (Igor Pro, KaleidaGraph, SigmaPlot...) provide as "cursors" and similarly named widgets for controlling their fitting interface, which is what I am trying to reproduce.
I have checked various examples with rangeselector and other methods I was able to Google on the web, but none that I could find seems to provide what I need.
Would anyone have any pointers to where to look or what to start with, please?
You might want to look at this example: http://matplotlib.sourceforge.net/examples/pylab_examples/ginput_manual_clabel.html
The interesting functions are ginput and waitforbuttonpress.
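A minimal sketch of how ginput can cover both needs: the first click picks the dataset (nearest curve) and the two clicks' x-values define the fitting range. The two example curves are only placeholders for your real data:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: two curves standing in for your real datasets.
x = np.linspace(0, 10, 200)
datasets = {'slow': np.sin(x), 'fast': np.sin(3 * x)}

fig, ax = plt.subplots()
for name, y in datasets.items():
    ax.plot(x, y, label=name)
ax.legend()
ax.set_title('Click twice to mark the fitting range')

(x0, y0), (x1, y1) = plt.ginput(2)   # blocks until the user has clicked twice
lo, hi = sorted((x0, x1))

# Attribute the selection to whichever curve is closest to the first click.
chosen = min(datasets, key=lambda name: abs(np.interp(x0, x, datasets[name]) - y0))

mask = (x >= lo) & (x <= hi)
print(f"fit dataset '{chosen}' on {mask.sum()} points between {lo:.2f} and {hi:.2f}")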