Save image in original resolution with imfindcircles plot in Matlab

I have a really long picture that I run imfindcircles on, and I need to check whether the right circles are found. The image is a 158708x2560 logical matrix.
So I have:
[centers, radii] = imfindcircles(I,[15 35],'ObjectPolarity','bright','Sensitivity',0.91);
figure(1)
imshow(I)
viscircles(centers,radii);
and I want to save the output you see in the figure window (the binary image with the circles on it) to an image file. The file format doesn't matter as long as it keeps the original resolution of 158708x2560 pixels.
Every suggestion I find online either alters the resolution or adds to the image, e.g. saving the figure directly gives a huge grey border and the resolution goes down.
What would also work is a way to zoom into the figure, but the zoom option in the figure menu does not magnify usefully: it does magnify, but the image stays really thin so you can't see a thing.
Matrix: https://www.dropbox.com/s/rh9wakimc7atfhg/I.mat?dl=0
There are two round spots repeating. I want to find those, not the others, and export the image with the circles plotted over it.
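One way to keep the full 158708x2560 resolution is to skip the figure entirely and draw the circles into the image matrix itself. Here is a minimal sketch (not from the original thread); it assumes the Computer Vision Toolbox is available for insertShape, uses the centers and radii already computed above, and the output filename is just a placeholder:
Irgb = repmat(im2uint8(I), [1 1 3]);      % logical image -> uint8 RGB so the circles can be drawn in color
Iout = insertShape(Irgb, 'circle', [centers radii], 'LineWidth', 3, 'Color', 'red');
imwrite(Iout, 'circles_fullres.png');     % the saved file has exactly the same pixel size as I
Because insertShape rasterizes the circles directly into the pixel data, there is no figure border and no downscaling involved.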

Related

Color correcting one image to fit another

I have a bunch of optical microscopy images of graphene, where we've taken an image before and after the samples have been subjected to different treatments. Unfortunately something has gone wrong in the calibration of most of the latter images, meaning that they look darker and have a different color compared to the first. We have 40+ image pairs, and manually color correcting them is both extremely time consuming and too imprecise. What I would like to do is find a good basis for color correction from one photo to another, which I can then apply to all the images. An example pair is shown below: before (upper) and after (lower).
Does anyone know a way to do this?
For examples of the full-quality images:
https://imgur.com/a/93KRhLM
I've tried to manually color correct in GIMP, but I gave up after 2 hours of adjustments.
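This one never got an answer here, but one common starting point (my suggestion, not from the original thread) is histogram matching, e.g. with imhistmatch from the Image Processing Toolbox, assuming before and after hold the RGB image pair:
corrected = imhistmatch(after, before);   % remap the color distribution of `after` toward `before`
montage({before, after, corrected})       % compare the original pair and the corrected result
If all the pairs share the same lighting problem, the same kind of mapping could then be applied to the whole batch in a loop.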

How to find images with a variable sized grey rectangle (JPEG corruption) in them?

I had to recover a hard drive and a lot of the photos on it came out corrupted. I'm talking about 200,000 photos. I already wrote a script that finds corrupted JPEGs, but some of these images are not corrupted at the file format level, yet they appear like the example I am showing. The grey part, I suspect, is data missing from the file. The size of the grey part is variable and sometimes it has an incomplete line in it.
So I'm thinking I could write or find a script that finds grey rectangles in these images.
How do I do this? Something that opens the image data and looks for this giant grey rectangle? I have no idea where to start. I can code in a bunch of languages.
Any help or examples would be much appreciated.
I was thinking that the grey rectangle is always the same colour, so I created a function to check whether that grey is one of the top 10 most frequent colours.
If the colour had varied, I would have adjusted the code to instead check whether the top colour is at least 10x more frequent than the second most frequent colour.
Didn't have to learn feature detection this time. Shame. :(
from collections import Counter
from PIL import Image

def has_grey_rectangle(path):
    # Open the image file and convert it to RGB format (if it's not already)
    image = Image.open(path).convert('RGB')
    # Count the number of pixels with each RGB value
    counts = Counter(image.getdata())
    most_common_colors = counts.most_common(10)
    # The corruption grey seems to always be (128, 128, 128); flag the file
    # if that colour is among the 10 most frequent
    return (128, 128, 128) in [color for color, _ in most_common_colors]

How to whiten the white parts and blacken the black parts of a scanned image in MATLAB or Photoshop

I have a scanned image, scanned from a printed Word (docx) file. I want the scanned image to look like the original Word file, i.e. to remove the noise and enhance it: fully whiten the white parts and fully blacken the black parts, without changing the colorful parts of the file.
There are a number of ways you could approach this. The simplest would be to apply a levels filter with the black point raised a bit and the white point lowered a bit. This can be done to all 3 color channels or selectively to a subset. Since you're going for creating pure black and white and there's no color cast on the image, I would apply the same settings to all 3 color channels. It works like this:
destVal = (srcVal - blackPt) / (whitePt - blackPt);
This will slightly change the colored parts of the image, probably resulting in making them slightly more or less saturated.
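For reference, a minimal MATLAB sketch of that remap (the black/white points and filenames are placeholders, not values from the original answer):
scan = im2double(imread('scan.png'));                % hypothetical filename
blackPt = 0.10;  whitePt = 0.66;                     % chosen by eye per image
leveled = (scan - blackPt) ./ (whitePt - blackPt);   % same mapping applied to all 3 channels
leveled = min(max(leveled, 0), 1);                   % clamp back to the valid [0,1] range
imwrite(leveled, 'scan_leveled.png');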
I tried this in a photo editing app and was disappointed with the results. I was able to remove most of the noise by bringing the white point down to about 66%. However, the logo in the upper left is so wispy that it ended up washed out to nearly white as well. The black point didn't really need to be moved.
I think you're going to have a tough time with that logo. You could isolate it from your other changes, though, and that might help. A simple circular area around it where you just ignore any processing would probably do the trick.
But I got to thinking - this was made with Word. Do you have a copy of Word? It probably wouldn't be too difficult to put together a layout that's nearly identical. It still wouldn't help with the logo. But what you could do is lay out the text the same and export it to a PDF or other image format. (Or if you can find the original template, just use it directly.) Then you could write some code to process your scanned copy and wherever a pixel is grayscale (red = green = blue), use the corresponding pixel from the version you made, otherwise use the pixels from the scan. That would get you all the stamps and signatures, while having the text nice and sharp. Perhaps you could even find the organization's logo online. In fact, Wikipedia even has a copy of their logo.
You'd probably need to have some sort of threshold for the grayscale because some pixels might be close but have a slight color cast. One option might be something like this:
if ((fabs(red - green) < threshold) && (fabs(red - blue) < threshold))
{
    destVal = recreationVal; // The output is the same as the copy you made manually
}
else
{
    destVal = scannedVal; // The output is the same as the scan
}
You may find this eats away at some of the colored marks, so you could do a second pass over the output where any pixel that's adjacent to a colored pixel brings in the corresponding pixel from the original scan.
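A vectorized MATLAB sketch of that per-pixel merge (the names are illustrative): scan is the scanned page and recreation is the re-typeset version, both double RGB in [0,1] and the same size.
thresh = 0.08;                                     % grayscale tolerance, tune per scan
isGray = abs(scan(:,:,1) - scan(:,:,2)) < thresh & ...
         abs(scan(:,:,1) - scan(:,:,3)) < thresh;  % pixels with no significant color cast
mask3  = repmat(isGray, [1 1 3]);                  % expand the mask to all 3 channels
merged = scan;
merged(mask3) = recreation(mask3);                 % gray pixels come from the recreation
The second pass described above could be done by dilating the colored-pixel mask (~isGray) with imdilate and restoring the scan's pixels there.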

Applying an image as a mask in matlab

I am new to image processing in Matlab. My first aim is to implement the article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: In Figure 7, masks are defined, but I couldn't get hold of the mask data set, so I used a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to the Lena image, but I don't want to use Paint =) On the other hand, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
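Putting the steps together, with an imresize call as one easy fix for the size mismatch (my addition, not part of the original answer):
Mask = im2bw(Figure7, 0.5);                  % background -> 1, black lines/text -> 0
LenaGray = rgb2gray(Lena);                   % make sure Lena is grayscale
Mask = imresize(Mask, size(LenaGray));       % match the mask size to Lena
LenaAndMask = im2double(LenaGray) .* Mask;   % masked-out pixels become 0
imshow(LenaAndMask)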
First of all, you should know that this paper was published on arXiv. When a paper is only published on arXiv, it is always a good idea to find out more about the author and/or the university that published it.
TRUST me on that: you do not need to waste your time on this paper.
I understand what you want, but it is not a good idea to get the mask from a print screen. The pixel values you get from a print screen may not be the same as the original values, and zooming may change the size, so you need to make sure the sizes are the same.
If you do use a print screen, paste the image, crop the mask, convert the RGB to grayscale, and threshold the grayscale image to get the binary mask. If you save the image as JPEG, distortions around the high-frequency edges will change the edge shape.

Pulling non-transparent areas to the center of the transparent areas in an image

I am working on an image processing project that has a few steps, and I am stuck on one of them. Here is the situation: I have segmented an image and subtracted the foreground from the background. Now I need to fill the background.
So far, I have tried inpainting algorithms. They don't work in my case because at least 40% of each background image is missing; they fail when trying to complete that much of an image. (By the way, these images gave bad results even in Photoshop with the content-aware tool.)
Anyway, I've given up on inpainting and decided on something else. In my project, I don't need to complete 100% of the background. Let me illustrate my idea:
As you see in the image above, I want to pull the image into the black area (which is transparent) with minimum corruption. Any MATLAB code samples, techniques, keywords, or approaches would be great. If you need further explanation, feel free to ask.
I can think of two crude ways to fill the hole:
use roifill: this fills gaps in a 2D image while preserving image smoothness.
Alternatively, you can use bwdist to compute the nearest neighbor of each black pixel and assign it its nearest neighbor's color:
[~, nnIdx] = bwdist( bw );              % bw is true for the known (non-transparent) pixels
fillImg = img;
fillImg( ~bw ) = img( nnIdx(~bw) );     % each missing pixel copies its nearest known pixel
Although this code snippet works only for grayscale images, it is quite trivial to extend it to RGB color images.
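A sketch of that RGB extension, assuming img is the color image and bw is, as above, true for the known (non-transparent) pixels:
[~, nnIdx] = bwdist(bw);          % linear index of the nearest known pixel for every location
fillImg = img;
for c = 1:3                       % apply the same fill channel by channel
    chan = img(:,:,c);
    chan(~bw) = chan(nnIdx(~bw));
    fillImg(:,:,c) = chan;
end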
