Image intensity appears higher in MATLAB

I have a 2D DICOM image that appears (as it should) to have a high-intensity center, with the outside area having an intensity that approaches zero. When I load and view it in MATLAB R2021b:
img = dicomread('image.dcm');
imshow(img);
The intensity appears much higher. I worry that this has affected the actual pixel intensity values and that it isn't just imshow trying to increase visibility. Does anyone have ideas on how to get rid of this correction?
[Screenshots: the regular image vs. the image as displayed in MATLAB]
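In case it helps: dicomread returns the stored pixel values unchanged, and imshow called without a display range maps the full nominal range of the data type to black..white, which can make the rendering look far brighter or washed out even though the data are untouched. A minimal check, using the question's file name:

img = dicomread('image.dcm');
disp([min(img(:)) max(img(:))])   % the stored values themselves, unaffected by display
imshow(img, []);                  % scale the display to the data's actual min/max

imshow(img, []) only changes how the image is shown; the matrix img is never modified.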

Related

Grayscale image segmentation using Epanechnikov kernel

For grayscale images, what should we do if we get a decimal value like 65.5 as the convergence point in the mean-shift method?
How do we choose the window radius?
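Not an authoritative answer, but a fractional convergence point is expected: mean shift averages real-valued samples, so a mode rarely lands exactly on an integer gray level, and rounding it for labeling is harmless. A minimal 1D sketch of mean shift over intensities with an Epanechnikov kernel (whose shadow kernel is uniform, so each iteration is just the mean of the samples inside the window; imgGray and the radius h are illustrative):

x = double(imgGray(:));                % imgGray: a hypothetical grayscale image
h = 8;                                 % window radius (bandwidth), an assumed value
m = x(1);                              % start from a sample so the window is never empty
for it = 1:100
    mNew = mean(x(abs(x - m) <= h));   % Epanechnikov profile -> uniform-weighted mean
    if abs(mNew - m) < 1e-3, break; end
    m = mNew;
end
% m can easily be fractional (e.g., 65.5); round(m) is fine for labeling.

The radius h is usually chosen empirically or from the data's spread (e.g., a fraction of the intensity standard deviation); a larger h merges modes, a smaller h finds more of them.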

Image pixelation library, non-square "pixel" shape

I've seen a few libraries that pixelate images; some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
However, I'm looking to make a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangle shapes as the pixel base.
Are there any libraries out there that do this? If not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles of the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply average each channel of RGB independently). Then set all of the pixels contained inside to that average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
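A minimal MATLAB sketch of that block-averaging approach (the file name is a placeholder, and 19 wide by 27 tall is assumed from "19x27 px"):

img = im2double(imread('input.png'));    % 'input.png' is a placeholder
bw = 19; bh = 27;                        % block width and height in pixels
[H, W, C] = size(img);
out = img;
for r = 1:bh:H
    for c = 1:bw:W
        rows = r:min(r+bh-1, H);         % clip blocks at the image border
        cols = c:min(c+bw-1, W);
        block = reshape(img(rows, cols, :), [], C);
        avg = mean(block, 1);            % per-channel average color
        out(rows, cols, :) = repmat(reshape(avg, 1, 1, C), numel(rows), numel(cols));
    end
end
imshow(out);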
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box-blur image processing algorithm, except that instead of sampling the nearby pixels, you sample within whichever 19x27 division of the original image maps to the pixel being rendered in the resulting image.

Matlab GUIDE blurs image?

My plan is to display an image in a MATLAB GUIDE figure. Somehow the output is always blurred (see the screenshot). On the left you can see the image in Photoshop, on the right in MATLAB: notice how the font and other parts become blurred.
I experimented with JPEG and PNG file formats (no compression), and I also tried various pixel sizes (resolutions smaller than, the same as, and bigger than the actual size of the image) and DPI settings (values between 30 and 300), because I expected some scaling issue. Somehow I am stuck. Looking forward to your input!
Thank you,
Florian
Screenshot of the issue: http://s1.bild.me/bilder/260513/6875414Screen_Shot_2014-06-29_at_23.19.34.png
Most probably the reason for the blur is interpolation.
If the axes size you allocated for the image is different from the size of the image, MATLAB will resize the image to occupy the whole area.
To prevent any interpolation, you must set the axes dimensions to match the image dimensions.
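A minimal sketch of that fix (the handle name handles.axes1 and the file name are illustrative, assuming a GUIDE callback where handles is available):

img = imread('background.png');              % 'background.png' is a placeholder
[h, w, ~] = size(img);
set(handles.axes1, 'Units', 'pixels');       % work in pixel units
pos = get(handles.axes1, 'Position');
set(handles.axes1, 'Position', [pos(1) pos(2) w h]);  % axes sized 1:1 to the image
image(img, 'Parent', handles.axes1);
axis(handles.axes1, 'image');                % keep the 1:1 pixel mapping
axis(handles.axes1, 'off');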

Why did I get a very low negative SNR value for the white Gaussian noise I added to an image?

I'm using the following code to add white Gaussian noise to a 3D synthetic image I created (100*100*100):
sigma = sqrt(10.0^(-snr/10.0));   % noise standard deviation for the target SNR, assuming unit signal power
r = x + sigma*randn(size(x));     % add white Gaussian noise
I found that different ways of adding the noise need different ranges of SNR values.
E.g., if I add the noise stack by stack (x = image stack), the original image doesn't become invisible until the SNR goes down to -50,
but when I add the noise to the whole 3D volume at once (x = 3D image), the original image becomes invisible once the SNR goes down to -5.
I've looked everywhere but couldn't find what's causing this... Could anyone please tell me if it's normal to have a 3D noisy image with SNR = -50 dB or even -70 dB? Or is there any way to know the true SNR of my noisy images?
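One thing worth checking: the sigma formula above only yields the target SNR when the clean signal has unit power, so if the individual stacks and the full volume have different signal powers, the same snr value produces very different effective noise levels. The SNR actually present can be measured directly from the question's x and r (a minimal sketch):

noise = r - x;                                             % the noise that was actually added
snr_measured = 10*log10(sum(x(:).^2) / sum(noise(:).^2))   % signal power over noise power, in dB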

Organizing Images By Color

Maybe you've noticed, but Google Image search now has a feature where you can narrow results by color. Does anyone know how they do this? Obviously, they've indexed information about each image.
I am curious what the best methods are for analyzing an image's color data to allow simple color searching.
Thanks for any and all ideas!
Averaging the colours is a great start. Just downscale your image to 10% of the original size using a bicubic or bilinear filter (or something more advanced). This will vastly reduce the colour noise and give you a result that is closer to how humans perceive the image, i.e., a pixel raster consisting purely of yellow and blue pixels would become a clean green.
If you don't blur or downsize the image, you might still end up with an average of green, but the deviation would be huge.
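A minimal sketch of that downscale-then-average step (the file name is a placeholder):

img = im2double(imread('photo.jpg'));        % 'photo.jpg' is a placeholder
small = imresize(img, 0.1, 'bicubic');       % 10% of the original size, bicubic filter
avgColor = squeeze(mean(mean(small, 1), 2))  % 3x1 mean RGB of the reduced image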
The Google feature offers 12 colors with which to match images. So I would calculate the Lab coordinates of each of these swatches and plot the (a*, b*) coordinate of each color in a two-dimensional space. I'd drop the L* component because the luminance (brightness) of the pixel should be ignored. Using the 12 points in the (a*, b*) space, I'd calculate a partitioning using a Voronoi diagram. Then for a given image, I'd take each pixel and calculate its (a*, b*) coordinate. Do this for every pixel in the image to build up a histogram of counts in each Voronoi partition. The partition that contains the highest pixel count would then be considered the image's 'color'.
This would form the basis of the algorithm, although there would be refinements related to ignoring black and white background regions which are perceptually not considered to be part of the subject of the image.
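A sketch of that pipeline in MATLAB (the swatch palette and file name here are placeholders, not Google's actual palette; note that assigning each pixel to its nearest swatch in (a*, b*) is exactly the Voronoi partition described above):

swatchesRGB = [1 0 0; 0 1 0; 0 0 1; 1 1 0; 0 0 0; 1 1 1];  % placeholder palette
swatchesLab = rgb2lab(swatchesRGB);           % P-by-3 input is treated as a colormap
swatchesAB = swatchesLab(:, 2:3);             % keep (a*, b*), drop L*
img = im2double(imread('photo.jpg'));         % 'photo.jpg' is a placeholder
lab = rgb2lab(img);
ab = reshape(lab(:, :, 2:3), [], 2);          % one (a*, b*) row per pixel
% Squared distance from every pixel to every swatch (implicit expansion)
d = (ab(:,1) - swatchesAB(:,1).').^2 + (ab(:,2) - swatchesAB(:,2).').^2;
[~, idx] = min(d, [], 2);                     % nearest swatch = Voronoi cell membership
counts = accumarray(idx, 1, [size(swatchesRGB, 1), 1]);
[~, dominant] = max(counts)                   % index of the image's dominant swatch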
Average color of all pixels? Make a histogram and find the average of the 'n' peaks?
