MATLAB image processing - edge detection algorithm

Hi, the first picture shows the damage and the delamination that follows.
What I want to do is remove the intact area and visualize only the damage area that is marked by black curves (I want everything to be white, or blank, except the damage area).
I tried a thresholding method, but it doesn't seem to be effective.
I then found out that histogram equalization and the Laplacian of Gaussian filter are useful for edge detection in image processing.
Are there any other image processing tools to get what I want?
Will histogram equalization or a Laplacian of Gaussian filter be good enough?
Any tips are welcome!
Thanks in advance, guys.
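Since the damage is marked by black (dark) curves, one simple route worth trying before histogram equalization or LoG is an inverted threshold: keep only pixels darker than a cutoff and paint everything else white. A minimal sketch, in Python for illustration (the question is about MATLAB, but the logic translates directly); the cutoff of 60 is an assumption you would tune for your images:

```python
def isolate_dark_curves(img, cutoff=60):
    """Keep only pixels darker than `cutoff`; blank the rest to white (255).

    img: 2-D list of grayscale values in 0..255.
    """
    return [[p if p < cutoff else 255 for p in row] for row in img]

# Tiny example: a 3x3 patch with one dark "curve" pixel at the center.
patch = [[200, 210, 220],
         [205,  30, 215],
         [210, 220, 230]]
result = isolate_dark_curves(patch)
```

If plain global thresholding fails because the lighting varies across the sample, the same idea with a locally adaptive cutoff (computed per neighborhood) is usually the next step.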

Related

Automatically detect an image collage

I'm trying to automatically detect whether an image is a collage vs. a single photograph. I'm not too concerned with edge cases. What I'm trying to solve is rectangular collages like the one below. I've tried edge detection (Canny) + vertical and horizontal Sobel filtering + line detection (Hough transform) to try to identify perpendicular lines, but am getting too many false positives. I'm not very good at image processing, so any input would be welcome. Thx!
This is a challenge, because images "have the right" to contain verticals and horizontals, and there can be sudden changes in the background due to occlusion for instance.
But a clue can help you: the borders between two pictures are perfectly straight, "unnaturally" straight, and they are long.
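Building on that clue, here is a hedged sketch of what "unnaturally straight" can mean in code: scan for image rows (and, symmetrically, columns) where the intensity jump to the next row is large across nearly the entire width. Natural edges rarely span the full frame; collage seams do. The jump threshold of 50 and the 90% coverage ratio are assumptions to tune:

```python
def full_width_seams(img, jump=50, coverage=0.9):
    """Return row indices where a strong horizontal seam spans the image.

    img: 2-D list of grayscale values; a seam at row r means most pixels
    change sharply between row r and row r + 1.
    """
    h, w = len(img), len(img[0])
    seams = []
    for r in range(h - 1):
        strong = sum(1 for c in range(w)
                     if abs(img[r][c] - img[r + 1][c]) >= jump)
        if strong >= coverage * w:
            seams.append(r)
    return seams

# Synthetic "collage": dark top half, bright bottom half -> seam after row 1.
img = [[10] * 6, [10] * 6, [200] * 6, [200] * 6]
```

Requiring the seam to cover (almost) the whole width is what filters out the in-picture verticals and horizontals that were producing the Hough false positives.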

Are there methods for histogram blurring in image processing?

I'm looking for methods for histogram blurring in image processing. I found this old thread, but the answers there do not solve my case.
One answer there suggests that
There is actually nothing called Histogram blurring.
So is there any way to do histogram blurring in image processing?
[edit1] some more info:
The image size is 3880×2592.
I want to blur with a Gaussian blur with a radius of about 15-20 pixels.
I am using 256×16-bit single-port memories (8 of them).
I want to implement this on an FPGA.
If by blur you mean smooth (removing high frequencies), then you can use any smoothing filter or algorithm (most of them are based on FIR low-pass filters).
If your question is what to smooth, then the answer is the same as in the question you linked: it depends on what you need.
If you need a smoothed histogram for some computation, then smooth the histogram directly and leave the image as is.
If you need the image colors to be smoothed, then smooth the image and recompute the histogram.
Sometimes it is hard to smooth the image in a way that yields a smoothed histogram
(due to slow bleeding of colors, or too big a data loss).
In that case you can smooth the histogram (remembering the original), then compute the area change for each color and statistically recolor the whole image (it is not an easy task):
Pick a (random) pixel of color a whose area needs to be decreased and recolor it to the closest color b whose area needs to be increased.
Update the areas of both colors.
Loop until the areas match ...
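For the "smooth the histogram directly" branch, here is a minimal sketch in Python (not FPGA code; the kernel radius and sigma are assumptions): convolve the 1-D histogram with a normalized Gaussian kernel, which blurs the bin counts while keeping the total pixel count essentially intact.

```python
import math

def gaussian_kernel(radius, sigma):
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalized so total counts are preserved

def smooth_histogram(hist, radius=2, sigma=1.0):
    """Blur a 1-D histogram; indices outside the range clamp to the edges."""
    kern = gaussian_kernel(radius, sigma)
    n = len(hist)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += w * hist[idx]
        out.append(acc)
    return out
```

On hardware the same structure maps to a small FIR pipeline over the histogram bins, with the kernel weights quantized to fixed point.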

How to detect a spiral in an image using MATLAB

I'm new to MATLAB, and I'm not clear on how to detect the spirality and the spiral center in an image using MATLAB.
For example, I need to detect the spiral center of the galaxy.
Question: how do I model the concept of spirality in these kinds of spiral images, for example?
Thank you.
original images taken from here:
storm
galaxy
Optical flow
is the moving intensity/color of a scene,
not an image of an object!
This concept is taken from the vision of flying insects.
They use it to:
determine flight direction (compensating for wind drift)
navigation
collision avoidance
landing
Spiral image
In your case you should look at geometry + density analysis (nothing to do with optical flow).
Here are a few things that pop into my head for your case:
1. make a density map
  find the biggest density
  or the density center
2. vectorize the whole thing
  find the center mathematically
  or look for the joint of the arms
  or look for the eye of the storm
  you can also vectorize the gaps
  if they are curved and rotated relative to each other, then you have a spiral
3. make a gap-occurrence map
  number of gaps per square area
  the bigger the count, the closer you are to the center
  beware: inside the center area there can be 0 gaps
  find the max gap-count positions
  compute the average middle between all of them
  to improve accuracy you could segment the gaps beforehand
  and count only distinct gaps per area
[Notes]
I would go for option 1; it is the simplest of them all, just a few for loops.
You can also combine several approaches to improve accuracy.
Use proper filtering and color reduction/thresholding before detection, like sharpening, artifact reduction, smoothing, erosion/dilation, ...
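The density-map approach really is just a few for loops. A hedged sketch, in Python for illustration: assuming the image has already been binarized so that 1 marks "arm" pixels, count hits per grid cell and take the densest cell's center as the candidate spiral center (the cell size is an assumption to tune against the image resolution):

```python
def densest_cell_center(binary, cell=2):
    """Return (row, col) of the center of the grid cell with the most 1-pixels.

    binary: 2-D list of 0/1 values; cell: side length of each square grid cell.
    """
    h, w = len(binary), len(binary[0])
    best, best_rc = -1, (0, 0)
    for r0 in range(0, h, cell):
        for c0 in range(0, w, cell):
            count = sum(binary[r][c]
                        for r in range(r0, min(r0 + cell, h))
                        for c in range(c0, min(c0 + cell, w)))
            if count > best:
                best, best_rc = count, (r0 + cell // 2, c0 + cell // 2)
    return best_rc

# Toy example: the densest 2x2 block sits at the top-left.
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 1, 0]]
```

A centroid of the density map (weighted average of cell positions) is a natural refinement when the maximum cell alone is too noisy.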

Can anyone tell me in what other way we can diffuse an image without losing specific parts of an image

The Perona-Malik diffusion equation removes noise from the image on the basis of two principles:
preferring high-contrast edges over low-contrast ones;
preferring wide regions over smaller ones.
Can anyone tell me in what other way we can diffuse an image without losing specific parts of it, like edges, image content, lines, and other details?
There is an anisotropic method that claims to preserve edges while reducing noise. See
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=541427
Diffusion operators by nature remove features; that is how they remove noise, so there is always some loss involved. But by choosing to diffuse differently near edges (either diffusing less, or diffusing tangent to the edge rather than normal to it), edges can be preserved.
If there are features that cannot be lost, the diffusion operator cannot be applied across them or you run the risk of losing the feature.
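The "diffuse less near edges" idea is exactly what the Perona-Malik conductance does: the diffusion flux is scaled by g = exp(-(∇I/K)²), which approaches 0 at strong gradients. A minimal 1-D pure-Python sketch of one explicit iteration (the edge threshold K and step size dt are assumptions):

```python
import math

def pm_step(signal, K=10.0, dt=0.2):
    """One explicit Perona-Malik iteration on a 1-D signal.

    Flux to each neighbor is scaled by g(grad) = exp(-(grad/K)^2),
    so flat regions smooth out while strong edges barely diffuse.
    """
    n = len(signal)
    out = list(signal)
    for i in range(n):
        for j in (i - 1, i + 1):          # both neighbors, clamped to range
            if 0 <= j < n:
                grad = signal[j] - signal[i]
                g = math.exp(-(grad / K) ** 2)
                out[i] += dt * g * grad
    return out

# Small ripples (step of 4) smooth; the strong edge (step of 100) survives.
sig = [0, 4, 0, 100, 104, 100]
```

Iterating this step repeatedly flattens the small oscillations toward their mean while the 0-to-100 jump stays essentially intact, which is the selective behavior the question is after.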
More advanced methods use the structure tensor of the image to smooth around edges without blurring them.
Basically, the structure tensor gives you information about the local gradient of the image.
Using this info you can smooth "flat" areas and sharpen areas which are edges.
Perona-Malik is quite old, and better approaches are available now.
Have a look here:
https://github.com/RoyiAvital/Fast-Anisotropic-Curvature-Preserving-Smoothing
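A hedged sketch of what the structure tensor actually measures: accumulate products of the x/y gradients over a neighborhood into the 2×2 matrix J = Σ [[Ix², Ix·Iy], [Ix·Iy, Iy²]]; its eigenvalues tell flat areas (both small) apart from edges (one large, one small). Window size and the central-difference gradients are standard choices, not anything specific to the linked code:

```python
import math

def structure_tensor(img, r, c, win=1):
    """2x2 structure tensor summed over a (2*win+1)^2 window at (r, c).

    img must be large enough that all gradient lookups stay in bounds.
    """
    jxx = jxy = jyy = 0.0
    for i in range(r - win, r + win + 1):
        for j in range(c - win, c + win + 1):
            ix = (img[i][j + 1] - img[i][j - 1]) / 2.0  # central differences
            iy = (img[i + 1][j] - img[i - 1][j]) / 2.0
            jxx += ix * ix
            jxy += ix * iy
            jyy += iy * iy
    return jxx, jxy, jyy

def eigenvalues(jxx, jxy, jyy):
    """Eigenvalues of the symmetric 2x2 tensor, largest first."""
    tr, det = jxx + jyy, jxx * jyy - jxy * jxy
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + d, tr / 2.0 - d

# A vertical edge: large lambda1, near-zero lambda2 -> "edge", not "flat".
img = [[0, 0, 0, 100, 100, 100] for _ in range(5)]
```

The eigenvector of the large eigenvalue points across the edge, so an anisotropic smoother diffuses along the other eigenvector, i.e. along the edge rather than across it.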

Organizing Images By Color

Maybe you've noticed but Google Image search now has a feature where you can narrow results by color. Does anyone know how they do this? Obviously, they've indexed information about each image.
I am curious what the best methods are for analyzing an image's color data to allow simple color searching.
Thanks for any and all ideas!
Averaging the colours is a great start. Just downscale your image to 10% of the original size using a bicubic or bilinear filter (or something more advanced anyway). This will vastly reduce the colour noise and give you a result that is closer to how humans perceive the image; i.e., a pixel raster consisting purely of yellow and blue pixels would become clean green.
If you don't blur or downsize the image, you might still end up with an average of green, but the deviation would be huge.
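The averaging step itself is tiny. A sketch that skips the resize and just averages the channels directly, which is equivalent to box-downscaling all the way to a single pixel:

```python
def average_color(pixels):
    """Mean (R, G, B) over a flat list of RGB tuples."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (r, g, b)

# Half pure red, half pure blue averages to a muted purple.
avg = average_color([(255, 0, 0), (0, 0, 255)])
```

Note that plain RGB averaging can land on a color no pixel actually has, which is why the answers below move to a perceptual space and a dominant-color histogram instead of a single mean.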
The Google feature offers 12 colors with which to match images. So I would calculate the Lab coordinate of each of these swatches and plot the (a*, b*) coordinate of each of these colors on a two dimensional space. I'd drop the L* component because luminance (brightness) of the pixel should be ignored. Using the 12 points in the (a*, b*) space, I'd calculate a partitioning using a Voronoi Diagram. Then for a given image, I'd take each pixel, calculate its (a*, b*) coordinate. Do this for every pixel in the image and so build up the histogram of counts in each Voronoi partition. The partition that contains the highest pixel count would then be considered the image's 'color'.
This would form the basis of the algorithm, although there would be refinements related to ignoring black and white background regions which are perceptually not considered to be part of the subject of the image.
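The Voronoi partition of (a*, b*) space described above is equivalent to nearest-swatch classification: a pixel falls in a swatch's cell exactly when that swatch is its closest one. A hedged sketch that skips the RGB-to-Lab conversion (it assumes (a*, b*) coordinates have already been computed; the swatch values are made up for illustration):

```python
def dominant_swatch(pixel_ab, swatches_ab):
    """Index of the swatch whose Voronoi cell holds the most pixels.

    pixel_ab: list of (a*, b*) pairs, one per pixel.
    swatches_ab: list of (a*, b*) pairs, one per reference color.
    """
    counts = [0] * len(swatches_ab)
    for a, b in pixel_ab:
        # Nearest swatch by squared distance == Voronoi cell membership.
        nearest = min(range(len(swatches_ab)),
                      key=lambda s: (a - swatches_ab[s][0]) ** 2
                                  + (b - swatches_ab[s][1]) ** 2)
        counts[nearest] += 1
    return max(range(len(counts)), key=counts.__getitem__)

# Two hypothetical swatches; three of the four pixels sit nearer swatch 1.
swatches = [(-40.0, 40.0), (50.0, -20.0)]
pixels = [(-35.0, 38.0), (48.0, -18.0), (52.0, -25.0), (45.0, -15.0)]
```

With only 12 swatches the brute-force nearest search is plenty fast, so no explicit Voronoi diagram structure is needed in practice.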
Average color of all pixels? Make a histogram and find the average of the 'n' peaks?
