GeoDjango: Calculate and save Polygon area in units upon object creation

I am struggling to save the area of a polygon to the database. I would like to calculate the area, convert it to the right units, and save it in the table, but I have not yet found the correct way to do it.
So far I have the following model, where I calculate and save the area in square degrees.
class Parcel(models.Model):
    srid = settings.SRID
    geometry = models.GeometryField(srid=srid, geography=True)
    area = models.FloatField(blank=False, null=True)

    def save(self, *args, **kwargs):
        self.area = self.geometry.area
        super(Parcel, self).save(*args, **kwargs)
To use the area in acres, I do this elsewhere:
p = Parcel.objects.annotate(area_=Area('geometry')).get(id=parcel_id)
parcel_area = p.area_.standard/1000
But this operation is a bit heavy, since it calculates the area for all parcels before it gets the desired one, and it neither uses nor saves the area in acres in the database.
I have seen that some people transform to an SRID that has the right units, but that would not work for me because my polygons are from all around the earth.
Thanks!

I think the best way to do this is to calculate the area in m² based on a metric coordinate system such as UTM.
Here is a function to calculate it:
import math

from django.contrib.gis.geos import GEOSGeometry


def get_sqm_by_wgs84_polygon(geom: GEOSGeometry) -> float:
    def get_utm_by_wgs_84(cent_lon, cent_lat):
        # UTM zones are 6 degrees wide, numbered 1-60 starting at 180°W.
        utm_zone_num = int(math.floor((cent_lon + 180) / 6) + 1)
        # EPSG codes: 326xx for the northern hemisphere, 327xx for the southern.
        utm_zone_hemi = 6 if cent_lat >= 0 else 7
        utm_epsg = 32000 + utm_zone_hemi * 100 + utm_zone_num
        return utm_epsg

    lon = geom.centroid.x
    lat = geom.centroid.y
    epsg_code = get_utm_by_wgs_84(lon, lat)
    transformed_geom = geom.transform(epsg_code, clone=True)
    return transformed_geom.area
In your Model you can save it on creation:
class Parcel(models.Model):
    srid = settings.SRID
    geometry = models.GeometryField(srid=srid, geography=True)
    area = models.FloatField(blank=False, null=True)

    def save(self, *args, **kwargs):
        self.area = get_sqm_by_wgs84_polygon(self.geometry)
        super(Parcel, self).save(*args, **kwargs)
This method only works with WGS84 (EPSG:4326) input, converting from degrees to meters.
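Since the original question asks for acres, you can convert the stored square-meter value directly; here is a minimal sketch (1 international acre = 4046.8564224 m², an exact definition; the helper name and usage are mine, not part of the original code):
SQM_PER_ACRE = 4046.8564224  # one international acre in square meters (exact)

def sqm_to_acres(area_sqm: float) -> float:
    """Convert an area in square meters to acres."""
    return area_sqm / SQM_PER_ACRE

# Hypothetical usage with the Parcel model above:
# parcel = Parcel.objects.get(id=parcel_id)
# acres = sqm_to_acres(parcel.area)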

Related

Saving adversarial samples to images and loading them back, but the attack fails

I am testing adversarial sample attacks using DeepFool and SparseFool on the MNIST dataset. The attack works on the preprocessed image data. However, when I save the result to an image and then load it back, the attack fails.
I have tested this with both SparseFool and DeepFool, and I think there are precision problems when saving to an image, but I cannot figure out how to implement it correctly.
import numpy as np
import tensorflow as tf
from PIL import Image

if __name__ == "__main__":
    # pic_path = 'testSample/img_13.jpg'
    pic_path = "./hacked.jpg"
    model_file = './trained/'

    image = Image.open(pic_path)
    image_array = np.array(image)
    # print(np.shape(image_array))  # 28*28

    shape = (28, 28, 1)
    projection = (0, 1)
    image_norm = tf.cast(image_array / 255.0 - 0.5, tf.float32)
    image_norm = np.reshape(image_norm, shape)  # 28*28*1
    image_norm = image_norm[tf.newaxis, ...]  # 1*28*28*1

    model = tf.saved_model.load(model_file)
    print(np.argmax(model(image_norm)), "nnn")

    # SparseFool is defined in the project linked below.
    fool_img, r, pred_label, fool_label, loops = SparseFool(
        image_norm, projection, model)
    print("pred_label", pred_label)
    print("fool_label", np.argmax(model(fool_img)))

    pert_image = np.reshape(fool_img, (28, 28))
    # print(pert_image)
    pert_image = np.copy(pert_image)
    # np.savetxt("pert_image.txt", (pert_image + 0.5) * 255)

    # De-normalize back to the 0-255 pixel range.
    pert_image += 0.5
    pert_image *= 255.

    # shape = (28, 28, 1)
    # projection = (0, 1)
    # pert_image = tf.cast(((pert_image - 0.5) / 255.), tf.float32)
    # image_norm = np.reshape(pert_image, shape)  # 28*28*1
    # image_norm = image_norm[tf.newaxis, ...]  # 1*28*28*1
    # print(np.argmax(model(image_norm)), "ffffnnn")

    png = Image.fromarray(pert_image.astype(np.uint8))
    png.save("./hacked.jpg")
The attack should change the prediction from 4 to 9; however, the saved image is still predicted as 4.
The full code project is shared on
https://drive.google.com/open?id=132_SosfQAET3c4FQ2I1RS3wXsT_4W5Mw
Based on my research, and also using this paper as reference: https://arxiv.org/abs/1607.02533
In real life, when you convert adversarial samples to images, most of the attacks generated this way stop working. The paper explains it as follows: "This could be explained by the fact that iterative methods exploit more subtle kind of perturbations, and these subtle perturbations are more likely to be destroyed by photo transformation."
As an example, say your clean image has pixel values 127, 200, 55, .... You divide by 255 (as it is an 8-bit PNG) and send (0.4980, 0.7843, 0.2156, ...) to your model. DeepFool is an advanced attack method: it adds a small perturbation and changes the input to (0.4981, 0.7841, 0.2155, ...). This is now an adversarial sample that can fool your model. But if you try to save it to an 8-bit PNG, multiplying by 255 rounds you back to 127, 200, 55, ..., so the adversarial information is lost.
Simply put, DeepFool adds perturbations so small that they are essentially impossible to represent in a real-world 8-bit PNG.
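To see the round-trip loss concretely, here is a minimal sketch (the pixel values and the perturbation size are made up for illustration):
import numpy as np

# Hypothetical clean pixel values from an 8-bit image.
clean = np.array([127, 200, 55], dtype=np.uint8)

# Normalize to [0, 1] as the model expects.
normalized = clean / 255.0                    # [0.4980..., 0.7843..., 0.2156...]

# Add a DeepFool-style perturbation smaller than one quantization step (1/255).
adversarial = normalized + 1e-4

# Saving to an 8-bit image rounds back to the nearest integer,
# which recovers the original values and destroys the perturbation.
saved = np.round(adversarial * 255.0).astype(np.uint8)
print(np.array_equal(saved, clean))           # True -- adversarial info lost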

Unable to get the location bounds of a SketchUp model using the SketchUp Ruby API

I have a SketchUp 3D model that is geo-located. I can get the geo-location of the model as follows:
latitude = Sketchup.active_model.attribute_dictionaries["GeoReference"]["Latitude"]
longitude = Sketchup.active_model.attribute_dictionaries["GeoReference"]["Longitude"]
Now I want to render this model on a 3D globe, so I need the location bounds of the 3D model.
Basically I need the bounding box of the model on a 2D map.
Right now I am extracting it from the corners of the model's bounding box (8 corners).
# This will return the left-front-bottom corner.
lowerCorner = Sketchup.active_model.bounds.corner(0)
# This will return the right-back-top corner.
upperCorner = Sketchup.active_model.bounds.corner(6)
But it returns plain geometric points in meters or inches, depending on the model.
For example, I uploaded this model to SketchUp. The following are the values of geo-location, lowerCorner and upperCorner, respectively, that I get by using the above code for this model.
geoLocation : 25.141407985864, 55.18563969191 //lat,long
lowerCorner : (-9483.01089", -6412.376053", -162.609524") // In inches
upperCorner : (-9483.01089", 6479.387909", 12882.651999") // In inches
So my first question is: is what I'm doing correct or not?
My second question: if yes, how can I get the values of lowerCorner and upperCorner in lat/long format?
But it returns plain geometric points in meters or inches, depending on the model.
Geom::BoundingBox.corner returns a Geom::Point3d. The x, y and z members of that are Length values, which always hold SketchUp's internal unit, inches.
However, Length.to_s uses the current model's unit settings and formats the value accordingly. Geom::Point3d.to_s uses Length.to_s; Geom::Point3d.inspect, on the other hand, prints the internal units (inches) without formatting.
Instead of tapping into the attributes of the model directly like that, I recommend you use the geo-location API methods: Sketchup::Model.georeferenced?
By the sound of it, you might find Sketchup::Model.point_to_latlong useful.
Example - I geolocated a SketchUp model to the town square of Trondheim, Norway (Geolocation: 63°25′47″N 10°23′36″E):
model = Sketchup.active_model
bounds = model.bounds
# Get the base of the bounding box. No need to get the top, as the
# result doesn't contain altitude information.
(0..3).each { |i|
  pt = bounds.corner(i)
  latlong = model.point_to_latlong(pt)
  latitude = latlong.x.to_f
  longitude = latlong.y.to_f
  puts "#{pt.inspect} => #{longitude}, #{latitude}"
}

How can I convert x/y coordinates from an image to longitude/latitude and back?

I have an image of a city. How can I get longitude/latitude for points that I add to the image, if I know 3 reference points like
Point1XRelative = "-18340651.0304568";
Point1YRelative = "14945227.3984772";
Point2XRelative = "-3960915.94162438";
Point2YRelative = "-7933119.6827411";
Point3XRelative = "4901426.10152285";
Point3YRelative = "13585796.8781726";
Point1XWorld = "53.1186547";
Point1YWorld = "29.2392344";
Point2XWorld = "52.6341388";
Point2YWorld = "29.7438198";
Point3XWorld = "53.0900105";
Point3YWorld = "30.0548051";
I have an algorithm that works only for a plane, and when I convert from long/lat to x/y the results come out with an offset.
Please advise me how I can resolve this problem.
It also depends on the zoom level. I think you will find what you need here.
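If the mapping between the image coordinates and the world coordinates is (approximately) affine, one approach is to fit an affine transform from the three known point pairs. Here is a minimal sketch in Python; the function names are mine, and the affine assumption is exactly that, an assumption (any residual projection distortion will appear as the offset you describe):
import numpy as np

# Three control points: image (x, y) -> world (lon, lat), from the question.
image_pts = np.array([
    [-18340651.0304568, 14945227.3984772],
    [-3960915.94162438, -7933119.6827411],
    [4901426.10152285, 13585796.8781726],
])
world_pts = np.array([
    [53.1186547, 29.2392344],
    [52.6341388, 29.7438198],
    [53.0900105, 30.0548051],
])

# Solve for the 3x2 affine matrix A in: world = [x, y, 1] @ A.
# With exactly three non-collinear points this is an exact solve.
ones = np.ones((3, 1))
A = np.linalg.lstsq(np.hstack([image_pts, ones]), world_pts, rcond=None)[0]

def image_to_world(x, y):
    return np.array([x, y, 1.0]) @ A

# The inverse fit, for going from lon/lat back to image coordinates.
A_inv = np.linalg.lstsq(np.hstack([world_pts, ones]), image_pts, rcond=None)[0]

def world_to_image(lon, lat):
    return np.array([lon, lat, 1.0]) @ A_inv

print(image_to_world(*image_pts[0]))  # ≈ [53.1186547, 29.2392344]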

Create point by offset in RGeo

I'm writing a relatively simple app in which I'm using RGeo to calculate distances between points on the globe. I'm doing this using an RGeo::Geographic.spherical_factory.
Now I want to be able to create a new point by adding an offset to an existing point. For example, I would like to be able to find the longitude and latitude of the point 500 metres north and 200 metres east of an existing point.
How should I go about doing this?
Maybe this helps:
a = move_point(-72.4861, 44.1853, 0, 0) # POINT (-72.4861 44.18529999999999)
b = move_point(-72.4861, 44.1853, 100, 0) # POINT (-72.48520168471588 44.18529999999999)
c = move_point(-72.4861, 44.1853, 0, 100) # POINT (-72.4861 44.18594416889434)
puts a.distance(b)
puts a.distance(c)
Which gives you
99.99999999906868
99.99999999906868
Note: I'm not sure what the difference between RGeo::Geographic.simple_mercator_factory and RGeo::Geographic.spherical_factory would be here.
require 'rgeo'

def move_point(lon, lat, x_offset_meters, y_offset_meters)
  wgs84 = RGeo::Geographic.simple_mercator_factory.point(lon, lat)
  wgs84_factory = wgs84.factory
  webmercator = wgs84_factory.project wgs84
  webmercator_factory = webmercator.factory
  webmercator_moved = webmercator_factory.point(webmercator.x + x_offset_meters, webmercator.y + y_offset_meters)
  wgs84_factory.unproject webmercator_moved
end
From How to move a point in Rgeo

Setting correct limits with imshow if image data shape changes

I have a 3D array, the first two dimensions of which are spatial, say (x, y). The third dimension contains point-specific information.
print H.shape # --> (200, 480, 640) spatial extents (200,480)
Now, by selecting a certain plane in the third dimension, I can display an image with
imdat = H[:,:,100] # shape (200, 480)
img = ax.imshow(imdat, cmap='jet',vmin=imdat.min(),vmax=imdat.max(), animated=True, aspect='equal')
I want to now rotate the cube, so that I switch from (x,y) to (y,x).
H = np.rot90(H) # could also use H.swapaxes(0,1) or H.transpose((1,0,2))
print H.shape # --> (480, 200, 640)
Now, when I call:
imdat = H[:,:,100] # shape (480,200)
img.set_data(imdat)
ax.relim()
ax.autoscale_view(tight=True)
I get weird behavior. The image displays the data down to the 200th row, and is black from there to the end of the y-axis (480). The x-axis extends from 0 to 200 and shows the rotated data. After another 90-degree rotation, the image displays correctly (just rotated 180 degrees, of course).
It seems to me that after rotating the data, the axis limits (or image extents?) are not refreshing correctly. Can somebody help?
PS: to indulge in bad hacking, I also tried to regenerate a new image (by calling ax.imshow) after each rotation, but I still get the same behavior.
Below I include a solution to your problem. The method resetExtent uses the data and the image to explicitly set the extent to the desired values. Hopefully I correctly emulated the intended outcome.
import matplotlib.pyplot as plt
import numpy as np
def resetExtent(data, im):
    """
    Using the data and axes from an AxesImage, im, force the extent and
    axis values to match shape of data.
    """
    ax = im.get_axes()
    dataShape = data.shape

    if im.origin == 'upper':
        im.set_extent((-0.5, dataShape[0] - .5, dataShape[1] - .5, -.5))
        ax.set_xlim((-0.5, dataShape[0] - .5))
        ax.set_ylim((dataShape[1] - .5, -.5))
    else:
        im.set_extent((-0.5, dataShape[0] - .5, -.5, dataShape[1] - .5))
        ax.set_xlim((-0.5, dataShape[0] - .5))
        ax.set_ylim((-.5, dataShape[1] - .5))

def main():
    fig = plt.gcf()
    ax = fig.gca()

    H = np.zeros((200, 480, 10))
    # make distinguishing corner of data
    H[100:, ...] = 1
    H[100:, 240:, :] = 2

    imdat = H[:, :, 5]
    im = ax.imshow(imdat, cmap='jet', vmin=imdat.min(),
                   vmax=imdat.max(), animated=True,
                   aspect='equal',
                   # origin='lower'
                   )
    resetExtent(imdat, im)
    fig.savefig("img1.png")

    H = np.rot90(H)
    imdat = H[:, :, 0]
    im.set_data(imdat)
    resetExtent(imdat, im)
    fig.savefig("img2.png")

if __name__ == '__main__':
    main()
This script produces two images:
First un-rotated:
Then rotated:
I thought just explicitly calling set_extent would do everything resetExtent does, because it should adjust the axes limits if 'autoscale' is True. But for some unknown reason, calling set_extent alone does not do the job.
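A possible alternative, untested against the original setup so treat it as an assumption: instead of reusing the same AxesImage via set_data, clear the axes and draw a fresh image each time, so imshow recomputes the extent from the new array shape (the questioner's PS suggests re-calling imshow alone did not help, but that may have been without clearing the axes first):
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
H = np.zeros((200, 480, 10))
H[100:, ...] = 1  # distinguishing half of the data

# First draw.
im = ax.imshow(H[:, :, 5], cmap='jet', aspect='equal')
fig.savefig("img1.png")

# After rotating, clear the axes and create a fresh image instead of
# calling set_data; imshow then derives the extent from the new shape.
H = np.rot90(H)
ax.cla()
im = ax.imshow(H[:, :, 5], cmap='jet', aspect='equal')
fig.savefig("img2.png")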
