How to make an animation (or animated GIF) from a number of geopandas plots

I have a GeoDataFrame ("mhg") whose index is months (i.e. "2019-01-01", "2019-02-01", ...). The GDF has a column containing the geometry of certain regions (i.e. POLYGON(...)), and another column containing the population of that region in that month.
Sample data (with only two months) could be created by:
import geopandas as gpd

data = [['2019-01-01', 'POLYGON(123...)', 1000],
        ['2019-01-01', 'POLYGON(456...)', 1500],
        ['2019-01-01', 'POLYGON(789...)', 1400],
        ['2019-02-01', 'POLYGON(123...)', 1100],
        ['2019-02-01', 'POLYGON(456...)', 1600],
        ['2019-02-01', 'POLYGON(789...)', 1300]]
mhg = gpd.GeoDataFrame(data, columns=['month', 'geometry', 'population'])
mhg = mhg.set_index('month')  # set_index returns a new frame; reassign so it sticks
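Note that as written the geometry column holds WKT placeholder strings, so the sample won't actually plot; a minimal sketch with made-up shapely squares (coordinates purely illustrative) would be:
from shapely.geometry import Polygon

# Hypothetical unit squares standing in for the elided POLYGON(...) geometries
region_a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
region_b = Polygon([(1, 0), (2, 0), (2, 1), (1, 1)])
data = [['2019-01-01', region_a, 1000], ['2019-01-01', region_b, 1500],
        ['2019-02-01', region_a, 1100], ['2019-02-01', region_b, 1600]]
mhg = gpd.GeoDataFrame(data, columns=['month', 'geometry', 'population'])
mhg = mhg.set_index('month')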
I can make a multicolor plot of the population of each region (all periods at once) with:
mhg.plot(column='population',cmap='jet')
and I can do the same, but filtered by month, using:
mhg.loc['2019-01-01'].plot(column='population',cmap='jet')
I would like to get some kind of "animation" or animated GIF showing the temporal evolution of the population, using this kind of pseudocode:
for all the plots in
mhg.loc['2019-01-01'].plot(column='population',cmap='jet')
mhg.loc['2019-02-01'].plot(column='population',cmap='jet')
mhg.loc['2019-03-01'].plot(column='population',cmap='jet')
...
then merge all plots into 1 animated gif
But I don't know how to do it: the number of months can run into the hundreds, I don't know how to write the for loop, and I don't even know how to start...
Any suggestions?
EDIT: I tried the following (based on https://linuxtut.com/en/c089c549df4d4a6d815c/):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.animation import PillowWriter

mhg = mhg.reset_index()  # month back to a regular column so it can be filtered on
months = np.sort(np.unique(mhg.month.values))
fig, ax = plt.subplots()
ims = []

def update_fig(month):
    # Remove the previous month's polygon collection before drawing the new one
    if len(ims) > 0:
        ims[0].remove()
        del ims[0]
    subset = mhg[mhg.month == month]  # filter both geometries and values by month
    apl = gpd.plotting.plot_polygon_collection(
        ax, subset['geometry'].values, subset['population'].values, cmap='jet')
    ims.append(apl)
    ax.set_title('month = ' + str(month))
    return ims

anim = FuncAnimation(fig, update_fig, interval=1000, repeat_delay=3000, frames=months)
plt.show()
But I got a UserWarning: animation was deleted without rendering anything...
So I am stuck again.

I managed to do it this way:
import matplotlib
import matplotlib.pyplot as plt

mhg = mhg.reset_index()
groups = mhg.groupby('month')
for month, grp in groups:
    # LogNorm keeps the colour scale fixed across all frames
    grp.plot(column='population', cmap='jet', legend=True, figsize=(10, 10),
             norm=matplotlib.colors.LogNorm(vmin=mhg.population.min(),
                                            vmax=mhg.population.max()))
    plt.title(str(month))
    plt.xlim([-20, 5])
    plt.ylim([25, 45])
    plt.savefig("plot{month}.png".format(month=month), facecolor='white')
    plt.close()
And then I joined all the PNGs with convert (an ImageMagick tool):
convert -delay 50 -loop 0 *.png animation.gif
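If ImageMagick isn't available, the same PNGs can be assembled into a GIF from Python; a minimal sketch using Pillow (the glob pattern and frame timing are assumptions to adjust):
import glob
from PIL import Image

frames = [Image.open(f) for f in sorted(glob.glob("plot*.png"))]
frames[0].save("animation.gif", save_all=True, append_images=frames[1:],
               duration=500, loop=0)  # duration is per frame, in milliseconds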

Related

Geoview and geopandas groupby projection error

I'm experiencing projection errors following a groupby on a GeoDataFrame. Below are the libraries I am using:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import holoviews as hv
from holoviews import opts
import panel as pn
from bokeh.resources import INLINE
import geopandas as gpd
import geoviews as gv
from cartopy import crs
hv.extension('bokeh', 'matplotlib')
gv.extension('bokeh')
pd.options.plotting.backend = 'holoviews'
Whilst these are the versions of some key libraries:
bokeh 2.1.1
geopandas 0.6.1
geoviews 1.8.1
holoviews 1.13.3
I have concatenated 3 shapefiles to build a polygon picture of UK healthcare boundaries (links to files provided if needed). Unfortunately, from what I have found, the UK doesn't produce one file that combines all of them, so I have had to merge the shapefiles from the 3 individual countries I'm interested in. The 3 shapefiles are:
shape file 1 = (https://www.opendatani.gov.uk/dataset/department-of-health-trust-boundaries)
shape file 2 = (https://geoportal.statistics.gov.uk/datasets/5252644ec26e4bffadf9d3661eef4826_4)
shape file 3 = (https://data.gov.uk/dataset/31ab16a2-22da-40d5-b5f0-625bafd76389/local-health-boards-december-2016-ultra-generalised-clipped-boundaries-in-wales)
My code to concat them together is below:
England_CCG.drop(['objectid', 'bng_e', 'bng_n', 'long', 'lat', 'st_areasha', 'st_lengths'], inplace = True, axis = 1 )
Wales_HB.drop(['objectid', 'bng_e', 'bng_n', 'long', 'lat', 'st_areasha', 'st_lengths', 'lhb16nmw'], inplace = True, axis = 1 )
Scotland_HB.drop(['Shape_Leng', 'Shape_Area'], inplace = True, axis = 1)
#NI_HB.drop(['Shape_Leng', 'Shape_Area'], inplace = True, axis = 1 )
England_CCG.rename(columns={'ccg20cd': 'CCG_Code', 'ccg20nm': 'CCG_Name'}, inplace = True )
Wales_HB.rename(columns={'lhb16cd': 'CCG_Code', 'lhb16nm': 'CCG_Name'}, inplace = True )
Scotland_HB.rename(columns={'HBCode': 'CCG_Code', 'HBName': 'CCG_Name'}, inplace = True )
#NI_HB.rename(columns={'TrustCode': 'CCG_Code', 'TrustName': 'CCG_Name'}, inplace = True )
UK_shape = [England_CCG, Wales_HB, Scotland_HB]
Merged_Shapes = gpd.GeoDataFrame(pd.concat(UK_shape))
Each of the files has the same ESRI projection once joined, and the shape plots perfectly as one when I run:
Test= gv.Polygons(Merged_Shapes, vdims=[('CCG_Name')], crs=crs.OSGB())
This gives me a polygon plot of the UK, with all the area boundaries for each ccg.
To my GeoDataFrame I then add a new column called 'Country', which attributes each CCG to the country it belongs to: all the Welsh CCGs to Wales, all the English ones to England, and all the Scottish ones to Scotland. Just a simple additional grouping of the data really (a sketch of one way to do this follows).
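A minimal sketch, assuming the standard ONS/NHS area-code prefixes (E/W/S) distinguish the three nations (this mapping is an assumption; check it against your data):
prefix_to_country = {'E': 'England', 'W': 'Wales', 'S': 'Scotland'}  # assumed prefixes
Merged_Shapes['Country'] = Merged_Shapes['CCG_Code'].str[0].map(prefix_to_country)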
What I want to achieve is a dropdown next to the polygon map I am making that shows all the CCGs in a particular country when it is selected from the dropdown widget. I understand that the way to do this is with a groupby. However, when I use the following code to achieve this:
c1 = gv.Polygons(Merged_Shapes, vdims=[('CCG_Name','Country')], crs=crs.OSGB()).groupby(['Country'])
I get a long list of projection errors stating:
“WARNING:param.project_path: While projecting a Polygons element from a PlateCarree coordinate reference system (crs) to a Mercator projection none of the projected paths were contained within the bounds specified by the projection. Ensure you have specified the correct coordinate system for your data.”
This leaves me without a map, though I retain the widget. Does anyone know what is going wrong here and what a possible solution would be? It's been driving me crazy!
For some reason geoviews doesn't like the OSGB projection followed by a groupby, as it tries to default back to the PlateCarree projection.
The way I fixed it was to reproject the entire dataset to EPSG:4326. For anyone who also runs into this problem, the code is below (it is a well documented solution):
Merged_Shapes.to_crs({'init': 'epsg:4326'},inplace=True)
gv.Polygons(Merged_Shapes, vdims=[('CCG_Name'),('Country')]).groupby('Country')
The groupby works fine after this.
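As a side note, the {'init': ...} form of to_crs is deprecated in newer geopandas; a sketch of the equivalent call, assuming geopandas >= 0.7:
Merged_Shapes = Merged_Shapes.to_crs(epsg=4326)  # same reprojection, newer API
gv.Polygons(Merged_Shapes, vdims=[('CCG_Name'), ('Country')]).groupby('Country')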

Which methodology for calculating SNR is appropriate for this image?

I was trying to find the SNR for a set of images I have, but my two methodologies for doing so produce two different answers, and I'm not sure which is right. Is one of them just straight up the wrong way of doing this, or is neither correct?
I am trying to characterize the SNR of a set of images that I'm processing. I have one set of data with images and darkfields. From these I subtracted the darkfield from the image and got "corrected_images".
Since I know SNR is (mean of signal)/(std of noise), in my first methodology I worked with the corrected image and the background noise image: I took the mean of every pixel on the spectrum (from the corrected image) with a value greater than 1 as the signal, and the overall std of the background noise image as the noise. The plot for this methodology is in blue.
In my second methodology I used a single uncorrected image and considered every pixel above 50 as signal and every pixel below 50 as noise. This gives the orange values for SNR.
# -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
from PIL import Image
from matplotlib import pyplot as plt
import numpy as np
import os

name = r"interpolating_streaks/corrected"
name2 = r"interpolating_streaks/averages"
file = os.listdir(name)
file2 = os.listdir(name2)
wv1 = []
snr = []

# Methodology 1: corrected-image signal vs. darkfield std
for fname in file:
    wv = fname[:3]
    wv1.append(wv)
    corrected_image = Image.open(name + "/" + fname)  # open the corrected image
    streak = np.array(corrected_image)
    dark_image = Image.open(name2 + '/d' + wv + '_averaged.tif')
    dark = np.array(dark_image)
    darkavg = dark.mean(axis=0)  # column-wise average of the darkfield
    avg = streak.mean(axis=0)    # column-wise average of the corrected image
    signal = [v for v in avg if v >= 1]  # reset per file; pixels >= 1 count as signal
    noiser = np.std(darkavg)
    signalr = np.mean(signal)
    snr.append(signalr / noiser)
plt.plot(wv1, snr)

# Methodology 2: single uncorrected image, signal/noise split at 50
snr = []
for fname in file2:
    if fname[0] != 'd':
        image = Image.open(name2 + '/' + fname)
        im = np.array(image)
        im_avg = im.mean(axis=0)
        signal = [v for v in im_avg if v > 50]  # reset per file
        noise = [v for v in im_avg if v <= 50]
        snr.append(np.mean(signal) / np.std(noise))
plt.plot(wv1, snr)
I would expect the SNR values to be the same, and I know for my camera the SNR has to be below 45 dB (though I'm pretty sure this methodology doesn't output decibels).
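(For reference, converting the linear ratio above to decibels is a one-liner; whether the factor is 10 or 20 depends on whether the quantities are treated as power-like or amplitude-like, which is an assumption to check for your camera:)
import numpy as np
snr_db = 20 * np.log10(np.array(snr))  # amplitude convention; use 10 * np.log10 for power-like quantities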
Here are my current results: https://imgur.com/a/Vgecyp1

How can I classify different images with various sizes and formats in scikit-learn?

I'm trying to build a simple image classifier using scikit-learn. I'm hoping to avoid having to resize and convert each image before training.
Question
Given two different images that are different formats and sizes (1.jpg and 2.png), how can I avoid a ValueError while fitting the model?
I have one example where I train using only 1.jpg, which fits successfully.
I have another example where I train using both 1.jpg and 2.png and a ValueError is produced.
This example will fit successfully:
import numpy as np
from sklearn import svm
import matplotlib.image as mpimg
target = [1, 2]
images = np.array([
    # target 1
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
    # target 2
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
])
n_samples = len(images)
data = images.reshape((n_samples, -1))
model = svm.SVC()
model.fit(data, target)
This example will raise a ValueError, because images of different shapes and channel counts cannot be stacked into a regular numeric array.
Note the different 2.png image in target 2.
import numpy as np
from sklearn import svm
import matplotlib.image as mpimg
target = [1, 2]
images = np.array([
    # target 1
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
    # target 2
    [mpimg.imread('./2.png'), mpimg.imread('./1.jpg')],
])
n_samples = len(images)
data = images.reshape((n_samples, -1))
model = svm.SVC()
model.fit(data, target)
# ValueError: setting an array element with a sequence.
1.jpg
2.png
For this, I would really recommend using the tools in Keras that are specifically designed to preprocess images in a highly scalable and efficient way.
from keras.preprocessing.image import ImageDataGenerator
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
1 Determine the target size of your new pictures
h,w = 150,150 # desired height and width
batch_size = 32
N_images = 100 #total number of images
Keras works in batches, so batch_size just determines how many pictures at once will be processed (this does not impact your end result, just the speed).
2 Create your Image Generator
train_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'Pictures_dir',
    target_size=(h, w),
    batch_size=batch_size,
    class_mode='binary')
The object that does the image extraction is ImageDataGenerator. It has the method flow_from_directory, which I believe might be useful for you here. It will read the content of the folder Pictures_dir and expects your images to be in per-class subfolders (e.g. Pictures_dir/class0 and Pictures_dir/class1). The generator, when called, will then yield images from these folders and also their labels (in this example, 'class0' and 'class1').
There are plenty of other arguments to this generator, you can check them out in the Keras documentation (especially if you want to do data augmentation).
Note: this will take any image, be it PNG or JPG, as you requested.
If you want to get the mapping from class names to label indices, do:
train_generator.class_indices
# {'class0': 0, 'class1': 1}
You can check what is going on with
plt.imshow(train_generator[0][0][0])
3 Extract all resized images from the Generator
Now you are ready to extract the images from the ImageGenerator:
def extract_images(generator, sample_count):
    images = np.zeros(shape=(sample_count, h, w, 3))
    labels = np.zeros(shape=(sample_count))
    i = 0
    for images_batch, labels_batch in generator:  # we are looping over batches
        images[i*batch_size : (i+1)*batch_size] = images_batch
        labels[i*batch_size : (i+1)*batch_size] = labels_batch
        i += 1
        if i*batch_size >= sample_count:
            # we must break after every image has been seen once, because generators yield indefinitely in a loop
            break
    return images, labels
images, labels = extract_images(train_generator, N_images)
print(labels[0])
plt.imshow(images[0])
Now you have your images all at the same size in images, and their corresponding labels in labels, which you can then feed into any scikit-learn classifier of your choice.
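To make that last step concrete, a minimal sketch of feeding the extracted arrays into the SVC from the question (flattening each image into one row, as scikit-learn expects):
from sklearn import svm

n_samples = len(images)
data = images.reshape((n_samples, -1))  # flatten each (h, w, 3) image into one row
model = svm.SVC()
model.fit(data, labels)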
It's difficult because of the math operations behind the scenes (the details are out of scope); even if you managed to do so, say by building your own algorithm, you still would not get the desired result.
I had this issue once with faces of different sizes. Maybe this piece of code gives you a starting point.
from PIL import Image
import face_recognition

def face_detected(file_address=None, prefix='detect_'):
    if file_address is None:
        raise FileNotFoundError('File address required')
    image = face_recognition.load_image_file(file_address)
    face_location = face_recognition.face_locations(image)
    if face_location:
        face_location = face_location[0]
        UP = int(face_location[0] - (face_location[2] - face_location[0]) / 2)
        DOWN = int(face_location[2] + (face_location[2] - face_location[0]) / 2)
        LEFT = int(face_location[3] - (face_location[3] - face_location[2]) / 2)
        RIGHT = int(face_location[1] + (face_location[3] - face_location[2]) / 2)
        if UP - DOWN != LEFT - RIGHT:  # != rather than 'is not': compare values, not identity
            height = UP - DOWN
            width = LEFT - RIGHT
            delta = width - height
            LEFT -= int(delta / 2)
            RIGHT += int(delta / 2)
        pil_image = Image.fromarray(image[UP:DOWN, LEFT:RIGHT, :])
        pil_image.thumbnail((50, 50), Image.ANTIALIAS)
        pil_image.save(prefix + file_address)
        return True
    pil_image = Image.fromarray(image)
    pil_image.thumbnail((200, 200), Image.ANTIALIAS)
    pil_image.save(prefix + file_address)
    return False
Note: I wrote this a long time ago; it may not be good practice.

Joining edited images in python using numpy image slicer

I am learning image manipulation as a beginner in Python. My goal is to section my image into an n×n grid, where each square is the average colour (greyscale image) of the corresponding region of the original. I succeeded in splitting the image, changing its pixel data, and saving the new images. My problem is now stitching the image back together. I know the join function points back to the original image; I had hoped that by saving over the tiles I could work around this.
This is my first time posting to stackoverflow (and I am super, super new to python), so apologies if I am not clear or if the formatting is wrong.
# Import packages
import numpy as np
import PIL
import image_slicer
import math
import glob
from image_slicer import join
from PIL import Image

### Use PIL to import image
##img = Image.open("einstein.jpg")
# Display original image
# img.show()
##new_img = img.resize((256,256))
##new_img.save('einstein-256x256','png')
### new_img.show()

# Slice image into sixteen pieces
tiles = image_slicer.slice("einstein.jpg", 16)

# Use glob to open every .png file with a for loop
for filename in glob.glob("*.png"):
    img = Image.open(filename)
    pixels = img.load()  # create the pixel map
    # Convert to array and find the mean pixel value
    arr = np.asarray(img)
    pixelMean = arr.mean(0).mean(0)[0]
    IntMean = math.floor(pixelMean)  # convert the mean to an integer
    print(IntMean)
    ##pixel = pixels[0,0] #get the first pixel's value
    ##print(pixel)
    # Loop through every pixel in the image and set it to the mean colour
    for i in range(img.size[0]):      # for every column
        for j in range(img.size[1]):  # for every row
            pixels[i, j] = (IntMean, IntMean, IntMean)
    # Save new monotone image
    img.save(filename)

# Join new images into one
image = join(tiles)
# Save new image
image.save("einsteinJoined.jpg")
image.show()
Your question seems to be missing the error you get with your current code.
However, if I read it correctly, you will get back your original image, as was the problem in Split and Join images in Python. Similar to the answer accepted there, the solution is to change the image in each tile by ending your loop with:
tile.image = Image.open(filename)
Where tile is the tile corresponding to the file; to do so, loop over the tiles returned by the image_slicer.slice function (see the sketch below). This is also given in the answer to the question linked above.
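A minimal sketch of that fix, assuming each Tile records the file it was saved to in tile.filename (check this attribute against your image_slicer version):
for tile in tiles:
    img = Image.open(tile.filename)
    # ... same averaging and pixel rewriting as in the question ...
    img.save(tile.filename)
    tile.image = Image.open(tile.filename)  # point the tile at the edited file

image = join(tiles)
image.save("einsteinJoined.jpg")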

Keras Image Data Generator show labels

I am using an ImageDataGenerator to augment my images. I need to get the y labels from the generator.
Example : I have 10 training images, 7 are label 0 and 3 are label 1. I want to increase training set size to 100.
total_training_images = 100
total_val_images = 50
model.fit_generator(
train_generator,
steps_per_epoch= total_training_images // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps= total_val_images // batch_size)
By my understanding, this trains a model on 100 training images for each epoch, with each image being augmented in some way or the other according to my data generator, and then validates on 50 images.
If I do train_generator.classes, I get an output [0,0,0,0,0,0,0,1,1,1]. This corresponds to my 7 images of label 0 and 3 images of label 1.
For these new 100 images, how do I get the y-labels?
Does this mean when I am augmenting this to 100 images, my new train_generator labels are the same thing, but repeated 10 times? Essentially np.append(train_generator.classes) 10 times?
I am following this tutorial, if that helps:
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
The labels are generated as one-hot encodings along with the images. Hope this helps!
training_set.class_indices

from keras.preprocessing import image
import matplotlib.pyplot as plt

x, y = train_generator.next()
for i in range(0, 3):
    image = x[i]
    label = y[i]
    print(label)
    plt.imshow(image)
    plt.show()
Based on what you're saying about the generator, yes.
It will replicate the same label for each augmented image. (Otherwise the model would not train properly).
One simple way to check what the generator is outputting is to get what it yields:
X,Y = train_generator.next() #or next(train_generator)
Just remember that this advances the generator, so it will yield the second batch next, not the first. (This would make the fit method start from the second element.)
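If you want the labels for all augmented images in one array, a minimal sketch (assuming batch_size and total_training_images as defined in your setup; labels come back one-hot if class_mode='categorical', plain if 'binary'):
import numpy as np

steps = total_training_images // batch_size
y_labels = np.concatenate([next(train_generator)[1] for _ in range(steps)])
print(y_labels.shape)  # one label (row) per augmented image drawn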
