I have about 10 lists (declared as separate inputs), each of size 4x4, containing density values, and I want to generate a density-plot animation that runs through them (first list, then the second, and so on), similar to what we get by feeding one of the lists to the ListDensityPlot function.
I couldn't find any straightforward way to do that. Any suggestions? Thanks!
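The question is about Mathematica's ListDensityPlot (where mapping a density plot over the lists and animating the results would be the usual route). Purely as an illustration of the idea, here is a rough Python/matplotlib sketch with made-up 4x4 data; nothing in it comes from the original question:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Stand-in for the ten 4x4 density lists from the question.
frames = [np.random.rand(4, 4) for _ in range(10)]

fig, ax = plt.subplots()
im = ax.imshow(frames[0], interpolation="bilinear", vmin=0, vmax=1)

def update(i):
    im.set_data(frames[i])  # swap in the i-th density array
    return [im]

anim = FuncAnimation(fig, update, frames=len(frames), interval=500)
plt.show()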
I am looking for the very simplest way to rasterise a shapefile in geopandas - the equivalent of arcpy's PolygonToRaster_conversion(), which does it in one line.
I have found some relatively involved methods, e.g.
https://snorfalorpagus.net/blog/2014/11/09/masking-rasterio-layers-with-vector-features/
Is it really this complicated, or is there a one-line option like arcpy's PolygonToRaster_conversion()?
I'm looking for the simplest starting point to get the idea.
I've been exploring rasterio for this, but perhaps there are other ways.
I'm only just starting to use GeoPandas and would appreciate any pointers.
Are you trying to rasterize a set of polygons with unique values in one step? If so, you want to rasterize using that unique value for each polygon, but beware that the last polygon rasterized to a given pixel will "claim" it (i.e., multiple polygons may touch a pixel, but the last one in your list of features will be the value rasterized there).
Or do you want to rasterize each polygon independently (or all polygons at the same time, as if they were a single polygon), so that you can extract out statistics from the raster? Mask may work for this, in a loop over each feature.
The closest you are likely to get to a one-line operation is using rasterio's rio mask or rio rasterize operation. The reason that the example you link to is more involved is that you need to do a few extra things to extract a subset of your original raster. There are now a few extra methods in rasterio that make that a bit easier (docs).
From geopandas, your geometry is in a GeoSeries. I haven't tested this directly, but you may need to use the series' __geo_interface__ to get back the GeoJSON-like shapes that rasterio expects as input.
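As a rough sketch of how the pieces fit together (not tested; the file name, output shape, and the "value" column are assumptions), rasterio.features.rasterize can burn geometries from a GeoDataFrame into an array in essentially one call:

import geopandas as gpd
from rasterio import features
from rasterio.transform import from_bounds

gdf = gpd.read_file("polygons.shp")    # hypothetical shapefile
out_shape = (500, 500)                 # rows, cols of the output grid
transform = from_bounds(*gdf.total_bounds, out_shape[1], out_shape[0])

# Burn each polygon's value from a (hypothetical) "value" column into the grid.
# Where polygons overlap a pixel, the last shape in the iterable wins, as noted above.
shapes = ((geom, value) for geom, value in zip(gdf.geometry, gdf["value"]))
raster = features.rasterize(shapes, out_shape=out_shape, transform=transform, fill=0)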
I have a 3D matrix A of shape m x n x l, which I want to inpaint using a mask of shape m x n.
So each slice along the l axis is a 2D image (values in the 0-255 range). I care about continuity within each slice as well as along the third dimension.
I use the inpainting in the two following forms:

import numpy as np
from skimage.restoration import inpaint

im1 = inpaint.inpaint_biharmonic(np.uint8(A), np.uint8(mask), multichannel=True)
im2 = np.zeros(A.shape)
for i in range(l):
    im2[:, :, i] = inpaint.inpaint_biharmonic(np.uint8(A[:, :, i]), np.uint8(mask), multichannel=False)
How is the 3rd dimension handled in the algorithm? Will they produce the same results?
You can look at the source code of the function here:
https://github.com/scikit-image/scikit-image/blob/c221d982e493a1e39881feb5510bd26659a89a3f/skimage/restoration/inpaint.py#L76
As you can see in the for-loop in that function, the function is just doing the same thing as your for-loop, so the results should be identical.
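As a quick check (a minimal sketch, assuming an older scikit-image that still has the multichannel keyword used in the question), you can compare the two forms on random data:

import numpy as np
from skimage.restoration import inpaint

m, n, l = 32, 32, 3
A = np.random.randint(0, 256, size=(m, n, l)).astype(np.uint8)
mask = np.zeros((m, n), dtype=np.uint8)
mask[10:15, 10:15] = 1  # region to inpaint

im1 = inpaint.inpaint_biharmonic(A, mask, multichannel=True)
im2 = np.zeros(A.shape)
for i in range(l):
    im2[:, :, i] = inpaint.inpaint_biharmonic(A[:, :, i], mask, multichannel=False)

print(np.allclose(im1, im2))  # expected True: each channel is inpainted independently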
I have a healpy full-sky map, and need to choose some sky patches centered around given pixels. hp.query_disc finds the pixels within a given disc, which is nice. But I would like to be able to add/subtract different sky patches, and the fact that discs around different centers do not contain the same number of pixels causes a problem.
I have also used
hp.gnomview(return_projected_map=True)
to look at my sky patches, and found that each returned 2D array has the same size. However since I would probably do stuff on many submaps, I do not want the maps to be drawn every time. Is it possible to obtain only the 2D array? And if so, is it fine to use plt.imshow after adding/subtracting those arrays?
I feel like there should be more elegant ways to achieve what I want -- Are there any existing routines that I overlooked? Thanks!
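One possibility (a sketch, not tested; the nside, rotation, and resolution values are placeholders) is to bypass gnomview and use healpy's projector module directly, which returns the projected 2D array without drawing anything:

import numpy as np
import healpy as hp

nside = 64
sky_map = np.random.randn(hp.nside2npix(nside))   # stand-in for the full-sky map

# Gnomonic projector centered on (lon, lat) in degrees; xsize/reso are placeholders.
proj = hp.projector.GnomonicProj(rot=(30.0, -20.0), xsize=200, reso=1.5)
patch = proj.projmap(sky_map, lambda x, y, z: hp.vec2pix(nside, x, y, z))

# `patch` is a plain 2D array of fixed size, so patches can be added/subtracted
# and displayed later with plt.imshow, with no figure drawn by healpy itself.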
I have written some code that takes the difference of intensities of neighboring pixels and gets the maximum difference. However, I would like some thoughts on how to implement my "algorithm" faster. Until now I have resorted to switch and if statements.
My code is simple yet messy. These are the thoughts behind it:
go to my point of interest
identify the pixels in its direct neighborhood
calculate the difference of intensities
compare the calculated intensities and deduce the maximum
take the maximum and so on...
That led me to multiple switch and if statements. Do you have any thoughts on that?
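Instead of chains of switch/if statements, the neighborhood differences can be computed for the whole image at once with NumPy. A minimal sketch (assuming a 2D grayscale array) that finds, for every pixel, the maximum absolute intensity difference to its 8 neighbours by shifting the image:

import numpy as np

def max_neighbor_diff(img):
    """Max absolute difference between each pixel and its 8 neighbours."""
    img = img.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    best = np.zeros_like(img)
    # Shift the padded image so each neighbour lines up with the centre pixel.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
            best = np.maximum(best, np.abs(shifted - img))
    return best

The location of the overall maximum is then np.unravel_index(best.argmax(), best.shape).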
You can look at the OpenCV library; there is no need to write this code yourself, since the library already has this and many other functions. Read this:
http://www.seas.upenn.edu/~bensapp/opencvdocs/ref/opencvref_cv.htm
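For example (a sketch using standard OpenCV calls, not a routine taken from that reference; the file name is a placeholder), the largest difference between a pixel and any of its 3x3 neighbours can be obtained from grayscale dilation and erosion:

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
kernel = np.ones((3, 3), np.uint8)

dilated = cv2.dilate(img, kernel)   # max over each 3x3 neighbourhood
eroded = cv2.erode(img, kernel)     # min over each 3x3 neighbourhood

# Largest absolute difference between each pixel and any of its neighbours.
max_diff = np.maximum(dilated.astype(np.int16) - img, img.astype(np.int16) - eroded)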
I've checked methods like Phasher to find similar images. Basically: resize the image to 8x8, convert it to grayscale, compute the average pixel value, and create a binary hash by marking each pixel as above or below that average.
This method is very well explained here:
http://hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
Example working:
- image 1 of a computer on a table
- image 2, the same, but with a coin
This would work since, using the hash of such a heavily reduced grayscale image, both hashes will be almost the same, or even identical. So you can conclude the images are similar when 90% or more of the pixels are the same (in the same place!).
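A minimal sketch of that hashing scheme (using Pillow and NumPy; the file names in the comment are placeholders):

import numpy as np
from PIL import Image

def average_hash(path, size=8):
    """Resize to size x size, convert to grayscale, and hash each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float64)
    return (pixels > pixels.mean()).flatten()

def similarity(hash_a, hash_b):
    """Fraction of bits that agree (same bit in the same position)."""
    return np.mean(hash_a == hash_b)

# e.g. similarity(average_hash("computer.jpg"), average_hash("computer_with_coin.jpg"))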
My problem is with images that are taken from the same point of view but at a different angle, for example these:
In this case, the generated hash "fingerprints" are so shifted relative to each other that we cannot compare the hashes bit by bit; they will be very different.
The pixels are "similar", but they are not in the same place: in the second image there is more sky, and the houses start lower than in the first one.
So the hash comparison results in "they are different images".
Possible solution:
I was thinking about creating a larger hash for the first image, then taking 10 random "sub-hashes" from the second one, and checking whether those 10 sub-hashes appear "somewhere" in the first, bigger hash (i.e., whether one substring is contained in a bigger one).
The problem here, I think, is the CPU time when working with thousands of images, since you have to compare 1 image against 1000, and for each one, compare 10 sub-hashes with a big one.
Other solutions ? ;-)
One option is to detect a set of "interesting" points for each image and store that alongside your hash. It's somewhat similar to the solution you suggested.
We want those points to be unlikely to vary between images like yours that have shifts in perspective. These lecture slides give a good overview of how to find points like that with fairly straightforward linear algebra. I'm using Mathematica because it has built-in functions for a lot of this stuff. ImageKeypoints does what we want here.
After we have our interesting points we need to find which ones match between the images we're comparing. If your images are very similar, like the ones in your examples, you could probably just take an 8x8 greyscale image for each interesting point and compare each from one image with the ones for the nearby interesting points on the other image. I think you could use your existing algorithm.
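The answer itself uses Mathematica; as a rough Python/OpenCV equivalent of that idea (ORB is swapped in for ImageKeypoints here, and the patch size, keypoint count, and file name are arbitrary choices), you could cut a small grayscale patch around each keypoint and hash it the same way as before:

import cv2
import numpy as np

def keypoint_patches(path, patch=8, n_points=100):
    """Detect keypoints and binary-hash a small grayscale patch around each one."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)            # hypothetical file name
    keypoints = cv2.ORB_create(nfeatures=n_points).detect(img, None)
    half = patch // 2
    patches = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        y0, x0 = max(y - half, 0), max(x - half, 0)
        region = img[y0:y0 + patch, x0:x0 + patch]
        if region.shape == (patch, patch):                   # skip keypoints at the border
            patches.append((kp.pt, region > region.mean()))
    return patches

Patches from the two images can then be compared pairwise, restricting the comparison to keypoints at nearby positions, with the same bit-agreement measure as before.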
If you wanted to use a more advanced algorithm like SIFT you'd need to have a look at ImageKeypoints' properties like scale and orientation.
The ImageKeypoints documentation has this example you can use to get a small piece of the image for each interesting point (it uses the scale property instead of a fixed size):
MapThread[ImageTrim[img, {#1}, 2.5 #2] &,
Transpose@
ImageKeypoints[img, {"Position", "Scale"},
"KeypointStrength" -> .001]]
Finding a certain number of matching points might be enough to say that the images are similar, but if not you can use something like RANSAC to figure out the transformation you need to align your hash images (the 8x8 images you're already able to generate) enough that your existing algorithm works.
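In OpenCV terms (a sketch only; the point arrays below are made-up matched keypoint coordinates), the RANSAC step looks like this:

import cv2
import numpy as np

# src_pts / dst_pts: matched keypoint coordinates from the two images (made-up here).
src_pts = np.float32([[10, 12], [200, 40], [35, 180], [150, 160], [90, 90]])
dst_pts = np.float32([[14, 30], [205, 58], [38, 199], [152, 180], [95, 108]])

# Estimate a homography with RANSAC; `inliers` flags which matches are consistent.
H, inliers = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, ransacReprojThreshold=5.0)

# H could then be used (e.g. with cv2.warpPerspective) to align one hash image to the other.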
I should point out that Mathematica has ImageCorrespondingPoints, which does all of this stuff (using ImageKeypoints) much better. But I don't know how you could have it cache the intermediate results so that it scales for what you're trying to do. You might want to look into its ability to constrain matching points to a perspective transform, though.
Here's a plot of the matching points for your example images to give you an idea of what parts end up matching:
So you can precalculate the interesting points for your database of images, and the greyscale hashes for each point. You'll have to compare several hash images for each image in your database, rather than just two, but it will scale to within a constant factor of your current algorithm.
You can try an upper bound: if the hashes don't match, compare how many pixels match in the 8x8 grid. Maybe you can also try to match the colors, as in a photo mosaic: Photo Mosaic Algorithm. How to create a mosaic photo given the basic image and a list of tiles?