I am trying to unify the coordinate reference system (CRS) of a shapefile and a DEM file. I tried to use rasterstats.zonal_stats to compute statistics over the overlap between the two datasets, but it returns None, so I assume there is some problem with the CRS.
I printed the shapefile's CRS using geopandas. It shows
<Projected CRS: EPSG:26917>
Name: NAD83 / UTM zone 17N
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: North America - between 84°W and 78°W - onshore and offshore. Canada - Nunavut; Ontario; Quebec. United States (USA) - Florida; Georgia; Kentucky; Maryland; Michigan; New York; North Carolina; Ohio; Pennsylvania; South Carolina; Tennessee; Virginia; West Virginia.
- bounds: (-84.0, 23.81, -78.0, 84.0)
Coordinate Operation:
- name: UTM zone 17N
- method: Transverse Mercator
Datum: North American Datum 1983
- Ellipsoid: GRS 1980
- Prime Meridian: Greenwich
The CRS of the DEM file, read with rasterio, is
CRS.from_epsg(26917)
So it looks like both files are using EPSG:26917. But when I plot the two files, the x axis of the shapefile runs from -81.9 to -81.5, while the x axis of the DEM runs from 420000 to 500000, so something must be wrong with the CRS.
How can I unify these two so that I can plot them in the same figure and run rasterstats.zonal_stats?
Reprojecting a raster is more involved than changing the CRS of the geometry in geopandas; therefore, if you know the CRS of the DEM, it may be faster to change the CRS of the shapefile: gdf.to_crs(raster_crs)
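A minimal sketch of that vector-side fix, using a toy GeoDataFrame in place of the real shapefile (the sample point and column name are made up for illustration):

```python
# Sketch: reproject the vector data to the DEM's CRS before zonal stats.
# A toy GeoDataFrame in EPSG:4326 stands in for the real shapefile; with
# real data you would use gpd.read_file(...) and take raster_crs from the
# opened DEM (src.crs).
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame(
    {"name": ["site"]},
    geometry=[Point(-81.7, 28.0)],   # lon/lat, roughly central Florida
    crs="EPSG:4326",
)

raster_crs = "EPSG:26917"            # NAD83 / UTM zone 17N, as rasterio reported
gdf_utm = gdf.to_crs(raster_crs)     # geometries are now in metres, matching the DEM

print(gdf_utm.geometry.iloc[0].x)    # easting, now in the 400000-500000 range
```

With the real data you would then pass the reprojected GeoDataFrame (or a file written from it) straight to rasterstats.zonal_stats together with the DEM.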
If the CRS of the DEM is unknown, the first priority is to find the correct CRS, for example by checking the raster against an online basemap (e.g. an OSM tile layer) in QGIS: after displaying the online map, load the raster and make sure it lands in the correct location.
The next step is to clip the raster using the shapefile. When working with rasters, this operation is called masking.
Please refer to the following link:
https://rasterio.readthedocs.io/en/latest/topics/masking-by-shapefile.html
I am working with some data that specifies an installation path; in another data source I have the locations of events given by their lat/long coordinates.
The installation location contained in the Oracle attribute SDO_ORDINATE_ARRAY does not match any X/Y geographic coordinate system I am familiar with (lat/long or UTM). Is there a way to figure out what coordinate system the data stored in the SDO_ORDINATE_ARRAY uses?
Here is an example of the data for a path with 3 (x,y) points:
MDSYS.SDO_GEOMETRY(2002,1026911,NULL,
MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1),
MDSYS.SDO_ORDINATE_ARRAY(
1352633.64991299994289875030517578125,
12347411.6615570001304149627685546875,
1352638.02988700009882450103759765625,
12347479.02890899963676929473876953125,
1352904.06293900008313357830047607421875,
12347470.76137300021946430206298828125,
))
The above should be roughly in the vicinity of 33.9845° N, 117.5159° W; I went through various conversions but could not find anything that got me anywhere close.
I read through the documentation on SDO_GEOMETRY on the Oracle page below and did not find any help in figuring out what the coordinate system is.
https://docs.oracle.com/database/121/SPATL/sdo_geometry-object-type.htm#SPATL494
Alternatively, if there is a way I can type in the lat/long somewhere to see all of the different coordinate types which are equivalent, I might also be able to figure out which format this is.
It looks like there is a typo inside MDSYS.SDO_GEOMETRY(2002,1026911,NULL,:
1026911 is supposed to be an SRS (Spatial Reference System) code.
If we remove the first 1 we get 102691, and that is a very well-known SRS code:
ESRI:102691, NAD 1983 StatePlane Minnesota North FIPS 2201 (US feet)
The corresponding WKT gives you all the necessary information to perform any coordinate conversion:
PROJCS["NAD_1983_StatePlane_Minnesota_North_FIPS_2201_Feet",
GEOGCS["GCS_North_American_1983",
DATUM["North_American_Datum_1983",
SPHEROID["GRS_1980",6378137,298.257222101]],
PRIMEM["Greenwich",0],
UNIT["Degree",0.017453292519943295]],
PROJECTION["Lambert_Conformal_Conic_2SP"],
PARAMETER["False_Easting",2624666.666666666],
PARAMETER["False_Northing",328083.3333333333],
PARAMETER["Central_Meridian",-93.09999999999999],
PARAMETER["Standard_Parallel_1",47.03333333333333],
PARAMETER["Standard_Parallel_2",48.63333333333333],
PARAMETER["Latitude_Of_Origin",46.5],
UNIT["Foot_US",0.30480060960121924],
AUTHORITY["EPSG","102691"]]
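In practice you would hand a library like pyproj the code ESRI:102691 (or the WKT itself) and let it do the conversion. As a sketch of what those WKT parameters actually encode, here is a from-scratch Lambert Conformal Conic (2SP) forward/inverse using only the standard library (Snyder's formulas; all constants are copied from the WKT above):

```python
# Sketch: Lambert Conformal Conic (2SP) built directly from the WKT
# parameters, stdlib only. For real work, prefer pyproj with "ESRI:102691".
import math

A = 6378137.0                        # GRS 1980 semi-major axis (metres)
F_INV = 298.257222101                # inverse flattening
E = math.sqrt(2 / F_INV - 1 / F_INV ** 2)
FT_US = 0.30480060960121924          # metres per US survey foot

LAT0 = math.radians(46.5)            # Latitude_Of_Origin
LON0 = math.radians(-93.09999999999999)  # Central_Meridian
SP1 = math.radians(47.03333333333333)    # Standard_Parallel_1
SP2 = math.radians(48.63333333333333)    # Standard_Parallel_2
FE = 2624666.666666666 * FT_US       # False_Easting, converted to metres
FN = 328083.3333333333 * FT_US       # False_Northing, converted to metres

def _m(phi):
    return math.cos(phi) / math.sqrt(1 - (E * math.sin(phi)) ** 2)

def _t(phi):
    es = E * math.sin(phi)
    return math.tan(math.pi / 4 - phi / 2) / ((1 - es) / (1 + es)) ** (E / 2)

N = (math.log(_m(SP1)) - math.log(_m(SP2))) / (math.log(_t(SP1)) - math.log(_t(SP2)))
F = _m(SP1) / (N * _t(SP1) ** N)
RHO0 = A * F * _t(LAT0) ** N

def forward(lat_deg, lon_deg):
    """Lat/lon (degrees) -> easting/northing in US survey feet."""
    phi, lam = math.radians(lat_deg), math.radians(lon_deg)
    rho = A * F * _t(phi) ** N
    x = FE + rho * math.sin(N * (lam - LON0))
    y = FN + RHO0 - rho * math.cos(N * (lam - LON0))
    return x / FT_US, y / FT_US

def inverse(x_ft, y_ft):
    """Easting/northing in US survey feet -> lat/lon (degrees)."""
    x, y = x_ft * FT_US - FE, y_ft * FT_US - FN
    rho = math.copysign(math.hypot(x, RHO0 - y), N)
    t = (rho / (A * F)) ** (1 / N)
    lam = math.atan2(x, RHO0 - y) / N + LON0
    phi = math.pi / 2 - 2 * math.atan(t)
    for _ in range(10):              # fixed-point iteration for the latitude
        es = E * math.sin(phi)
        phi = math.pi / 2 - 2 * math.atan(t * ((1 - es) / (1 + es)) ** (E / 2))
    return math.degrees(phi), math.degrees(lam)
```

Running inverse() on the ordinates from the question is then a quick way to check whether they land anywhere plausible for this SRS.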
The exiftool command below 'works' in that the data gets written to the file (see grep below). However, Finder, Photos and Adobe Lightroom don't recognize that the image has a position. Why not?
I took all of these values from an image that was taken with an iPhone and stamped 'correctly', and that file shows position data in all of the programs above. There are seemingly hundreds of other EXIF attributes, so I'm not sure what's missing or invalid.
exiftool \
-GPSLongitude="110 deg 12' 12.40\" E" \
-GPSLatitude="7 deg 36' 28.92\" S" \
-GPSLatitudeRef="South" \
-GPSLongitudeRef="East" \
-GPSDateStamp="2019:06:04" \
-GPSTimeStamp="08:09:53" \
-GPSStatus="Measurement Active" \
-GPSMeasureMode="3-Dimensional Measurement" \
-GPSMapDatum="WGS-84" \
-GPSDifferential="No Correction" \
screenshot1.png
1 image files updated
exiftool screenshot1.png | grep GPS
GPS Version ID : 2.3.0.0
GPS Latitude Ref : South
GPS Longitude Ref : East
GPS Time Stamp : 08:09:53
GPS Status : Measurement Active
GPS Measure Mode : 3-Dimensional Measurement
GPS Map Datum : WGS-84
GPS Date Stamp : 2019:06:04
GPS Differential : No Correction
GPS Date/Time : 2019:06:04 08:09:53Z
GPS Latitude : 7 deg 36' 28.92" S
GPS Longitude : 110 deg 12' 12.40" E
GPS Position : 7 deg 36' 28.92" S, 110 deg 12' 12.40" E
The problem in this case is that you are using a PNG file. Software support for metadata in PNG files is lacking in most cases. If you convert it to another format, for example, TIFF, which would be a lossless conversion, then you would probably have better results.
Additionally, you might also try adding the data to the XMP group, which might give you better results with Lightroom (though probably not with the others), as XMP is an Adobe creation. Change your commands so that they have an XMP: prefix (e.g. XMP:GPSLongitude, XMP:GPSLatitude). Additionally, you would have to use XMP:GPSDateTime instead of the separate GPSDateStamp and GPSTimeStamp tags, and drop the Ref tags (GPSLatitudeRef, GPSLongitudeRef; though keep GPSAltitudeRef if you have it), as XMP tags include the reference direction directly in the value.
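As an illustrative sketch of that remapping (the helper below is made-up glue for clarity, not part of exiftool; the tag names themselves are standard exiftool tags):

```python
# Sketch: derive the equivalent XMP tag assignments from EXIF-style GPS tags,
# following the rules above: prefix with XMP:, merge date+time stamps into
# XMP:GPSDateTime, and drop the Ref tags (direction lives in the value).
def exif_gps_to_xmp(tags):
    """Map {tag: value} EXIF GPS assignments to their XMP equivalents."""
    xmp = {}
    date, time = tags.get("GPSDateStamp"), tags.get("GPSTimeStamp")
    for tag, value in tags.items():
        if tag in ("GPSDateStamp", "GPSTimeStamp"):
            continue                 # merged into XMP:GPSDateTime below
        if tag in ("GPSLatitudeRef", "GPSLongitudeRef"):
            continue                 # XMP keeps direction inside the value
        xmp["XMP:" + tag] = value
    if date and time:
        xmp["XMP:GPSDateTime"] = f"{date} {time}"
    return xmp

args = exif_gps_to_xmp({
    "GPSLatitude": "7 deg 36' 28.92\" S",
    "GPSLongitude": "110 deg 12' 12.40\" E",
    "GPSLatitudeRef": "South",
    "GPSLongitudeRef": "East",
    "GPSDateStamp": "2019:06:04",
    "GPSTimeStamp": "08:09:53",
})
print(args)
```

Each resulting key/value pair corresponds to one -XMP:Tag=value argument on the exiftool command line.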
Using exiftool, suppose I have a file named foobar.jpeg and the shell variables lat="37.7708" and lon="-122.451".
What command do I use to set the EXIF metadata for that JPEG file so that its "geotag" / GPS metadata is set to those coordinates?
exiftool "-GPSLatitude=$lat" "-GPSLatitudeRef=$lat" "-GPSLongitude=$lon" "-GPSLongitudeRef=$lon" foobar.jpeg
Because the GPS coordinate tags are unsigned, you need to make sure to assign the relevant GPSLatitudeRef/GPSLongitudeRef tags as well, especially if the location is in the Western and/or Southern hemisphere. Even though these tags would normally be set with N/S and E/W, exiftool will accept the raw signed values and figure out the proper Ref direction from them.
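A small sketch of roughly what that inference amounts to (hypothetical helper, not exiftool's actual code): the stored coordinate is the absolute value, and the sign determines the Ref letter.

```python
# Sketch: split signed decimal degrees into the unsigned value + Ref letter,
# mirroring how EXIF stores GPSLatitude/GPSLongitude plus their Ref tags.
def split_signed(lat, lon):
    """Signed decimal degrees -> (abs_lat, lat_ref, abs_lon, lon_ref)."""
    lat_ref = "N" if lat >= 0 else "S"
    lon_ref = "E" if lon >= 0 else "W"
    return abs(lat), lat_ref, abs(lon), lon_ref

print(split_signed(37.7708, -122.451))
```

For the example variables above this yields latitude 37.7708 N and longitude 122.451 W, which is why passing the signed value to the Ref tag works.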
I'm currently trying to geocode addresses in Austria. From a given postal code, street etc. I want to automatically find latitude and longitude in WGS84 format. My first approach was to use a web service like OpenStreetMap's Nominatim. Actually this gives decent results.
Still it is mandatory to use a disk with addresses (PAC) and their respective geolocation as a ground truth. Only if I'm not able to match the address within this data I can use the web service approach as a backup solution.
So much for context, now the problem:
On this disk the geolocation is given as X_LAMBERT and Y_LAMBERT, and I can't quite figure out how to convert it into latitude + longitude.
The disk's data is loaded into an Oracle 11g database with Locator and I'm doing the following:
Step 1:
Creation of spatial geometry object: MDSYS.SDO_GEOMETRY(2003,31287,NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,1), MDSYS.SDO_ORDINATE_ARRAY(X_LAMBERT,Y_LAMBERT))
Step 2: Transformation to WGS84 lat/lng:
SDO_CS.TRANSFORM( GEOMETRY, 4326) in my Java client I obtain latitude, longitude from the geometry via oracle.spatial.geometry.JGeometry.
Note:
I suspect 31287 to be the SRID of the Lambert format; I tried 8307 and 3416, but the latter give even worse results. Step 2 works fine with other geometry objects where I know the correct SRID for sure... so I guess the error is somewhere in Step 1.
Do I have to set something in USER_SDO_GEOM_METADATA? Wrong usage of the SDO_GEOMETRY constructor? Am I still mixing up datum and projection? Maybe you can enlighten me there as well...
The result of my action and example for input are:
Input: X_LAMBERT 602823, Y_LAMBERT 464397
Resulting Output: WGS84 (48.0469757028535, 16.0542604612583)
Expected Output: WGS84 (48.546363, 16.085735)
It's almost correct, but still some 100 kilometers off ;-)
Is it possible to get functionality similar to the Stanford Named Entity Recognizer using just NLTK?
Is there any example?
In particular, I am interested in extracting the LOCATION part of a text. For example, from the text
The meeting will be held at 22 West Westin st., South Carolina, 12345
on Nov.-18
ideally I would like to get something like
(S
22/LOCATION
(LOCATION West/LOCATION Westin/LOCATION)
st./LOCATION
,/,
(South/LOCATION Carolina/LOCATION)
,/,
12345/LOCATION
.....
or simply
22 West Westin st., South Carolina, 12345
Instead, I am only able to get
(S
The/DT
meeting/NN
will/MD
be/VB
held/VBN
at/IN
22/CD
(LOCATION West/NNP Westin/NNP)
st./NNP
,/,
(GPE South/NNP Carolina/NNP)
,/,
12345/CD
on/IN
Nov.-18/-NONE-)
Note that if I enter my text into http://nlp.stanford.edu:8080/ner/process I get results that are far from perfect (the street number and zip code are still missing), but at least "st." is part of the LOCATION and South Carolina is a LOCATION and not some "GPE / NNP".
What am I doing wrong? How can I fix this so I can use NLTK to extract the location piece of a text?
Many thanks in advance!
nltk DOES have an interface for Stanford NER; see nltk.tag.stanford.NERTagger.
from nltk.tag.stanford import NERTagger
st = NERTagger('/usr/share/stanford-ner/classifiers/all.3class.distsim.crf.ser.gz',
'/usr/share/stanford-ner/stanford-ner.jar')
st.tag('Rami Eid is studying at Stony Brook University in NY'.split())
output:
[('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'),
('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'),
('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'LOCATION')]
However, every time you call tag, nltk simply writes the target sentence to a file, runs the Stanford NER command-line tool to parse that file, and finally parses the output back into Python. Therefore the overhead of loading the classifiers (around 1 minute for me, every time) is unavoidable.
If that's a problem, use Pyner.
First run Stanford NER as a server
java -mx1000m -cp stanford-ner.jar edu.stanford.nlp.ie.NERServer \
-loadClassifier classifiers/english.all.3class.distsim.crf.ser.gz -port 9191
then, in the pyner folder:
import ner
tagger = ner.SocketNER(host='localhost', port=9191)
tagger.get_entities("University of California is located in California, United States")
# {'LOCATION': ['California', 'United States'],
# 'ORGANIZATION': ['University of California']}
tagger.json_entities("Alice went to the Museum of Natural History.")
#'{"ORGANIZATION": ["Museum of Natural History"], "PERSON": ["Alice"]}'
Hope this helps.