Stack drone images in IDL - overlap degree

I have about 6000 aerial images taken by a 3DR drone over vegetation sites.
The images overlap to some extent because the drone flights cover the area first east-west and then north-south, so the images show the same area from two directions. I need to use that overlap between the images for extra accuracy.
I don't know how to write code in IDL to combine the images and create that overlap. Can anyone help, please?
Thanks

What you need is something identifiable that occurs in both images. Preferably you would have several such features across the field of view so that you can recover the correct rotation as well as a simple x-y shift.
The basic steps you will need to follow are:
1. Source identification: identify sources in all images that will later be used to align them. Make sure these sources are well centered so that they align better later.
2. Basic alignment: start with a guess of where the images should align, then try to match the sources.
3. Source matching: there are several libraries that do this for stars in astronomical images and that could be adapted here.
4. Shift and rotate the images: this can be applied to the pixels directly, or written to the header that is read in, with a program manipulating the pixels on the fly.
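
I can't speak to IDL built-ins, but here is a minimal sketch of the shift-estimation part of these steps in Python, using FFT-based phase correlation; the same idea translates directly to IDL's FFT function. The function name is illustrative.

```python
import numpy as np

def estimate_shift(reference, image):
    # Phase correlation: the peak of the inverse FFT of the normalized
    # cross-power spectrum gives the integer (dy, dx) translation between
    # two overlapping images of the same size.
    f_ref = np.fft.fft2(reference)
    f_img = np.fft.fft2(image)
    cross = f_ref * np.conj(f_img)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx
```

Note that phase correlation alone only recovers the x-y shift; recovering rotation still needs the matched sources from step 3.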

Related

Tensorflow object detection training

I would like to detect objects (upper half of the image below) in images (bottom half). Is it smart to train on images at a different scale (or size)? Or should I train with parts of the bottom half of the image below? What is the best way to mark the objects for training?
Kind regards
If I understand your question correctly: if you are exclusively interested in detecting objects at roughly the scale of the bottom picture, your training data should consist of images like that one. To add on: try to include at least a decent range of scales around that target, so that small deviations from one specific scale don't throw the detector off, but generally you should be fine.
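
As an illustration of that range-of-scales advice, here is a minimal sketch of random scale augmentation for a TensorFlow input pipeline; the function name and factor range are assumptions, not something from the question.

```python
import tensorflow as tf

def random_scale(image, boxes, min_factor=0.8, max_factor=1.25):
    # Rescale the whole image by a random factor so the detector sees a
    # range of object sizes around the target scale.
    factor = tf.random.uniform([], min_factor, max_factor)
    new_h = tf.cast(tf.cast(tf.shape(image)[0], tf.float32) * factor, tf.int32)
    new_w = tf.cast(tf.cast(tf.shape(image)[1], tf.float32) * factor, tf.int32)
    image = tf.image.resize(image, [new_h, new_w])
    # Normalized [ymin, xmin, ymax, xmax] boxes are unchanged by a uniform rescale.
    return image, boxes
```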

Detecting hexagonal shapes in greyscale or binary image

For my bachelor thesis I need to analyse images taken in the ocean to count and measure the size of water particles.
My problem:
Besides the wanted water particles, the images show hexagonal patches all over the image, with:
- different sizes
- irregular shapes
- different greyscale values
(Example image below!)
It is clear that these patches will falsify my image analysis concerning the size and number of particles.
For this reason, these patches need to be detected and deleted somehow.
Since this is just a small part of the work in my thesis, I don't want to spend much time on it, and I have already tried classic approaches (in ImageJ) such as:
- playing with the threshold (which results in also deleting wanted water particles)
- analysing the image including the hexagonal patches and later sorting out the biggest areas (the hexagonal patches have by far the biggest areas, but you are still left with a lot of hexagons)
- playing with filters: applying a Gaussian filter to a duplicated image and subtracting the copy from the original deletes many patches (by reducing their greyscale values) but also deletes small wanted water particles, and so again falsifies the result
A more complicated and time-consuming solution would be to use an existing library in, for example, MATLAB or OpenCV to detect points that describe the shapes, but so far I could not find any code that fits my task.
Has anyone created such code that I could use for my task, or does anyone have another idea?
You can see a lot of hexagonal patches at different depths as well.
The little spots with a greater pixel value are the wanted particles!
Image processing is quite an involved area so there are no hard and fast rules.
But if it were me, I would 'mask' the image. This involves defining what you want to keep or remove as a pixel 'mask'. You then scan the mask over the image and compare the mask to the selected image portion, and you select or remove the section (depending on your method) if it meets your criterion.
One example of such a criterion would be the spatial and grey-scale error weighted against a likelihood function (e.g. chi-squared, mean squared error, etc.) or a normal distribution whose uncertainty you define.
Some food for thought
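
A minimal sketch of that masking idea, using OpenCV's normalized template matching as the comparison criterion; the file name, template crop, and threshold are placeholders to adapt.

```python
import cv2
import numpy as np

img = cv2.imread("particles.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
template = img[100:140, 100:140]  # a crop containing one hexagonal patch

# Squared-difference matching plays the role of the error criterion:
# low scores mean the window looks like the template.
scores = cv2.matchTemplate(img, template, cv2.TM_SQDIFF_NORMED)
th, tw = template.shape

mask = np.zeros_like(img)
ys, xs = np.where(scores < 0.05)  # arbitrary threshold
for y, x in zip(ys, xs):
    mask[y:y + th, x:x + tw] = 255

cleaned = img.copy()
cleaned[mask == 255] = 0  # delete matched patches before particle analysis
```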
Maybe you can try with the Hough transform:
https://en.wikipedia.org/wiki/Hough_transform
MATLAB has a built-in function, hough, which implements this, but it only works for lines. Maybe you can start from that and change it to recognize hexagons.
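
A related but simpler route than a full generalized Hough transform is to approximate each contour with a polygon and keep the six-sided ones. A minimal sketch with OpenCV (the file name and thresholds are placeholders; the findContours return signature is for OpenCV 4.x):

```python
import cv2

img = cv2.imread("particles.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hexagons = []
for c in contours:
    # Approximate the contour; a hexagonal patch yields ~6 vertices.
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 6 and cv2.contourArea(approx) > 50:
        hexagons.append(approx)

print(f"found {len(hexagons)} hexagon candidates")
```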

Align images (maps) coming from a scanned source

I have several images like the one below which are coming from a scanned book.
The images have been cut to the same image size, but they are slightly distorted and do not overlap perfectly. You can see an animation here: https://dl.dropboxusercontent.com/u/29337496/animation.gif
Before applying a georeferencing process (with GDAL), I need to align them so that the country borders overlap perfectly.
I have already tested align_image_stack (from the Hugin software) with different flags, but I did not get positive results.
Any idea?
I'm using Ubuntu.
Thanks
Best Giuseppe

image alignment in matlab

I'm trying to align several images in MATLAB and I'm having trouble getting MATLAB to align them properly. I want to align them so that I can overlay them and make a larger image by stitching/averaging them together. I posted several of the images here. While it is not difficult to align 5 images manually, I want to make a script to do it so that I do not have to align hundreds of similar images by hand.
Over the past few days I've tried several ways of getting this to work. My thought was that if I could filter the images enough, I could make a mask for the letters, which would then make them easy to align, but I haven't been able to make it work.
I've tried local adaptive thresholding to compensate for a nonuniform brightness level across the picture, but it hasn't allowed me to align the images properly. For the actual alignment I've been using imregister() and normxcorr2(), but neither aligns the images properly.
I don't think that this should be that difficult to do but I haven't been able to do it. Any thoughts or insight would be greatly appreciated.
Edit: uploaded images after performing various operations here
I would do some feature detection, followed by the RANSAC algorithm, to find the transformation between any pair of images.
See this demo on the MathWorks website: http://www.mathworks.com/videos/feature-detection-extraction-and-matching-with-ransac-73589.html
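
The linked demo is in MATLAB; the same feature-detection + RANSAC pipeline can be sketched in Python with OpenCV (file names are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("letters_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("letters_2.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-check for symmetry.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects bad matches while estimating the transformation.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```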

Image Comparison using OpenCV in order to determine traffic density

I am working on a project which plots real-time traffic status on Google Maps and makes it available to users on an Android phone and in a web browser.
http://www.youtube.com/watch?v=tcAyMngkzjk
I need to compare 2 images in OpenCV in order to determine traffic density. Can you please guide me on how to compare the images? Should I go for histogram comparison or simple image subtraction?
One common solution is to use background subtraction to track the moving objects (cars) and then export an image with the moving objects marked, so you can easily extract the objects from the image. If that is not possible, you will have to detect the vehicles, and that's a more challenging task because, as carlosdc says, there are many approaches depending on the angle of the camera, the size of the vehicles, light conditions, cluttered backgrounds, etc.
If you can specify the problem a little more ...
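
A minimal sketch of the background-subtraction route with OpenCV's MOG2 subtractor, reducing each frame to a foreground fraction as a crude density measure; the video file name is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # placeholder input
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # MOG2 marks shadows as 127; keep only confident foreground (255).
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    density = cv2.countNonZero(mask) / mask.size
    print(f"foreground fraction: {density:.3f}")

cap.release()
```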
It really depends, and it would be impossible to determine without looking at your images.
Also, let me point out that it may be quite difficult to make this work adequately in all conditions: day/night, rain/shine, etc. Perhaps you should start by looking at what others have done and how well or badly it works. One such example would be this.
Try reading these two tutorials on OpenCV object detection/recognition and contour finding:
http://python-catalin.blogspot.ro/search/label/OpenCV
Or try to find the color change in your image ... (for example, find colors versus the background street).