Extend a greyscale image to fit an RGB image

I need to extend an image array that currently only holds grey-scale values, from shape (640,480) to (640,480,3). Ultimately I need to concatenate both - an RGB numpy array with the greyscale numpy array.
greyscalearray[..., np.newaxis] results in (640,480,1) - is there a way I can add np.zeros(3,1) for the last axis?

I think you want:
RGB = np.dstack((grey, np.zeros_like(grey), np.zeros_like(grey)))
The benefit of using np.zeros_like() is that you get an array matching the dimensions and the dtype of your single-channel, grey image without having to specify either!
So here is the full code:
import numpy as np
grey = np.ones((64,48),dtype=np.uint8)
print(grey.shape) # prints (64, 48)
# Make 3-channel from single-channel
RGB = np.dstack((grey, np.zeros_like(grey), np.zeros_like(grey)))
print(RGB.shape) # prints (64, 48, 3)
print(RGB[0,0]) # prints [1,0,0]
np.dstack() stacks in the depth axis. Its friends are np.vstack() which stacks vertically to concatenate one image vertically below another, and np.hstack() which stacks horizontally to concatenate one image horizontally beside another.
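For completeness, here is a minimal sketch of the axis-insertion route hinted at in the question, followed by the side-by-side concatenation the questioner ultimately wants. The (640,480) shapes come from the question; the all-zero rgb array is just a stand-in for a real RGB image:
import numpy as np

grey = np.ones((640, 480), dtype=np.uint8)
rgb = np.zeros((640, 480, 3), dtype=np.uint8)  # stand-in for a real RGB image

# Insert a channel axis, then pad with two zero channels along it
grey3 = np.concatenate(
    [grey[..., np.newaxis], np.zeros((*grey.shape, 2), dtype=grey.dtype)],
    axis=2)
print(grey3.shape)     # prints (640, 480, 3)

# Both arrays now have matching shapes and can be joined side by side
combined = np.hstack((rgb, grey3))
print(combined.shape)  # prints (640, 960, 3)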

Counting homogeneous objects along the contour

I'm trying to get the number of objects in the frame by finding their contours with OpenCV.
Here is a frame after applying the Canny filter:
Then I call the findContours() method and keep the contours that are suitable in size.
When I overlay them on the frame I get the following picture.
It can be seen that we only get objects whose contours are complete.
So the question is:
How can we artificially make the boundaries of the objects complete?
I tried to use dilate and erode, but after that the borders of the objects are glued together and we can't find their contours any more.
Since the contours are connected together, findContours will detect the connected contours as a single contour instead of individual separated circles. When you have connected contours, a potential approach is to use the Watershed algorithm to label and detect each contour. Here are the results:
Input image
Output
Code
import cv2
import numpy as np
from skimage.feature import peak_local_max
from skimage.morphology import watershed  # newer scikit-image moved this to skimage.segmentation
from scipy import ndimage

# Load in image, convert to gray scale, and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Compute Euclidean distance from every binary pixel
# to the nearest zero pixel then find peaks
distance_map = ndimage.distance_transform_edt(thresh)
# Note: indices=False returns a boolean mask only in older scikit-image versions
local_max = peak_local_max(distance_map, indices=False, min_distance=20, labels=thresh)

# Perform connected component analysis then apply Watershed
markers = ndimage.label(local_max, structure=np.ones((3, 3)))[0]
labels = watershed(-distance_map, markers, mask=thresh)

# Iterate through unique labels
for label in np.unique(labels):
    if label == 0:
        continue

    # Create a mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255

    # Find contours and determine contour area
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    c = max(cnts, key=cv2.contourArea)

    # Draw each detected object in a random colour
    color = list(np.random.random(size=3) * 256)
    cv2.drawContours(image, [c], -1, color, 4)

cv2.imshow('image', image)
cv2.waitKey()
Here are some other references:
Image segmentation with Watershed Algorithm
Watershed Algorithm: Marker-based Segmentation
How to define the markers for Watershed
Find contours after watershed
It seems like you have a pattern for your objects, and those objects sometimes overlap.
I'd suggest you convolve your image with an object pattern and then process the resulting scores image.
In more detail:
Suppose for simplicity that your initial image has only one channel, and that the object you're looking for looks like this (this is our pattern); say its size is [W_p,H_p].
First step: construct a new image - scores - where each pixel S in scores is the probability that this pixel is the pattern center.
One way to do that is: for each pixel P in the original image, "cut" the [W_p,H_p] patch around P (e.g. img(Rect(P-W_p/2, P-H_p/2, W_p, H_p))), subtract the patch from the pattern to find the "distance" between them (e.g. with cv::sum(cv::absdiff(patch, pattern)) in OpenCV), and save this sum to S.
Another way to do this is: S = P.clone(); pattern = pattern / cv::sum(pattern);
and then use cv::filter2D on S with pattern...
Now that you have a scores image, you should filter out false positives:
1. Take the top 2% of the scores (one way is with cv::calcHist).
2. For each pixel that has a neighbor within [W_p,H_p] with a higher score, turn this pixel to zero!
Now you should be left with an image of zeros where only the pattern centers have some value. Hurray!
If you don't know in advance what an object will look like, you can find one object using contours, then 'cut it out' using the convex hull of its contour (+ bounding box), and use it as the convolution kernel for finding the rest.
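For illustration, here is a rough Python/OpenCV sketch of this pipeline. It uses cv2.matchTemplate with normalized correlation (a robust variant of the filter2D scoring described above) and a dilation-based version of the neighbor-suppression step; the file names and the 0.8 threshold are assumptions:
import cv2
import numpy as np

# Assumed inputs: the frame and a cropped example of one object
img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
pattern = cv2.imread('pattern.png', cv2.IMREAD_GRAYSCALE)
h, w = pattern.shape

# Scores image: high where the patch around a pixel matches the pattern
scores = cv2.matchTemplate(img, pattern, cv2.TM_CCOEFF_NORMED)

# Keep only strong responses (placeholder threshold)
candidates = scores >= 0.8

# Zero every pixel that has a higher-scoring neighbor within a
# pattern-sized window (dilation computes that neighborhood maximum)
local_max = cv2.dilate(scores, np.ones((h, w), np.uint8))
peaks = np.argwhere(candidates & (scores >= local_max))

# matchTemplate indexes scores by the patch's top-left corner,
# so shift by half the pattern size to get the centers
centers = peaks + [h // 2, w // 2]
print(len(centers), 'pattern centers found')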

Remove icon from jpeg image

Is there any way to remove an icon from an image that originally didn't have the icon?
Maybe with the help of a hexdump or something?
Here is an example image.
Is there a way to remove the heart icon from it?
*I don't really need this image, it is just an example.
One method is to use color thresholding to obtain a binary mask which can be used to isolate the desired regions to keep. Once we have this mask, we bitwise-and to effectively remove the heart.
After color thresholding with an HSV lower and upper range, we obtain this mask:
To remove the heart, we invert the mask, which represents all regions in the image that we want to keep, then bitwise-and with the input image. Since you didn't specify what you want to replace it with, I've just colored the removed region with white. Here's an implementation using Python and OpenCV:
import numpy as np
import cv2

# Load image and keep a copy of the original
image = cv2.imread('1.jpg')
original = image.copy()

# Color threshold in HSV to isolate the heart
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 138, 155])
upper = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Invert the mask to keep everything except the heart,
# then color the removed region white
invert = 255 - mask
result = cv2.bitwise_and(original, original, mask=invert)
result[invert == 0] = (255,255,255)

cv2.imshow('mask', mask)
cv2.imshow('result', result)
cv2.waitKey()
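As an aside, the lower and upper HSV bounds above look hand-tuned for this particular image. A hedged sketch of a small trackbar tool for finding such bounds on another image (assuming the same '1.jpg'):
import cv2
import numpy as np

image = cv2.imread('1.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# One slider per HSV bound; min sliders start at 0, max sliders at full range
cv2.namedWindow('mask')
for name, maxval in [('Hmin', 179), ('Smin', 255), ('Vmin', 255),
                     ('Hmax', 179), ('Smax', 255), ('Vmax', 255)]:
    cv2.createTrackbar(name, 'mask', 0 if 'min' in name else maxval, maxval, lambda x: None)

# Adjust the sliders until only the region to remove stays white; Esc quits
while True:
    lower = np.array([cv2.getTrackbarPos(n, 'mask') for n in ('Hmin', 'Smin', 'Vmin')])
    upper = np.array([cv2.getTrackbarPos(n, 'mask') for n in ('Hmax', 'Smax', 'Vmax')])
    cv2.imshow('mask', cv2.inRange(hsv, lower, upper))
    if cv2.waitKey(10) & 0xFF == 27:
        break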

Getting the dimensions of a numpy array right to plot converted greyscale image

As part of Unity's ML Agents, images fed to a reinforcement learning agent can be converted to greyscale like so:
import io
import numpy as np
from PIL import Image

def _process_pixels(image_bytes=None, bw=False):
    s = bytearray(image_bytes)
    image = Image.open(io.BytesIO(s))
    s = np.array(image) / 255.0
    if bw:
        s = np.mean(s, axis=2)
        s = np.reshape(s, [s.shape[0], s.shape[1], 1])
    return s
As I'm not familiar enough with Python and especially numpy: how can I get the dimensions right for plotting the reshaped numpy array? To my understanding, the shape is based on the image's width, height and number of channels, so after reshaping there is only one channel to hold the greyscale value. I just haven't found a way to plot it yet.
Here is a link to the mentioned code of the Unity ML Agents repository.
That's how I wanted to plot it:
plt.imshow(s)
plt.show()
Won't just doing this work?
plt.imshow(s[..., 0])
plt.show()
Explanation
plt.imshow expects either a 2-D array with shape (x, y), which it treats as grayscale, or a 3-D array with shape (x, y, 3) (treated as RGB) or (x, y, 4) (treated as RGBA). The array you had was (x, y, 1). We can use NumPy indexing to remove the last dimension: s[..., 0] says, "take all other dimensions as-is, but along the last dimension, get the slice at index 0".
It looks like the grayscale version has an extra single dimension at the end. To plot, you just need to collapse it, e.g. with np.squeeze:
plt.imshow(np.squeeze(s))
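A quick self-contained check of both suggestions, with a random array standing in for s (cmap='gray' forces an actual grey rendering, since matplotlib maps 2-D arrays through its default colormap):
import numpy as np
import matplotlib.pyplot as plt

s = np.random.rand(84, 84, 1)  # stand-in for the (height, width, 1) output

fig, axes = plt.subplots(1, 2)
axes[0].imshow(s[..., 0], cmap='gray')      # drop the channel axis by indexing
axes[1].imshow(np.squeeze(s), cmap='gray')  # or collapse it with np.squeeze
plt.show()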

Histogram equalization help in MATLAB

My code is shown below:
G= histeq(imread('F:\Thesis\images\image1.tif'));
figure,imshow(G);
The error message I got was the following and I'm not sure why it is appearing:
Error using histeq
Expected input number 1, I, to be two-dimensional.
Error in histeq (line 68)
validateattributes(a,{'uint8','uint16','double','int16','single'}, ...
Error in testFile1 (line 8)
G= histeq(imread('F:\Thesis\images\image1.tif'));
Your image is most likely colour, and histeq only works on grayscale images. There are three options available to you depending on what you want to do. You can convert the image to grayscale, you can histogram equalize each channel individually, or (what is perceptually better) you can convert the image into the HSV colour space, histogram equalize the V or Value component, then convert back into RGB. I tend to prefer the last option for colour images. The first method will give an enhanced grayscale image, and the other two will give an enhanced colour image.
Option #1 - Convert to grayscale then equalize
G = imread('F:\Thesis\images\image1.tif');
G = histeq(rgb2gray(G));
figure; imshow(G);
Use rgb2gray to convert the image to grayscale, then equalize the image.
Option #2 - Equalize each channel individually
G = imread('F:\Thesis\images\image1.tif');
for i = 1 : size(G, 3)
    G(:,:,i) = histeq(G(:,:,i));
end
figure; imshow(G);
Loop through each channel and equalize.
Option #3 - Convert to HSV, histogram equalize the V channel then convert back
G = imread('F:\Thesis\images\image1.tif');
Gh = rgb2hsv(G);
Gh(:,:,3) = histeq(Gh(:,:,3));
G = im2uint8(hsv2rgb(Gh));
figure; imshow(G);
Use the rgb2hsv function to convert a colour image into HSV. We then use histogram equalization on the V or Value channel, then convert back from HSV to RGB with hsv2rgb. Note that the output of hsv2rgb will be a double type image and so assuming that the original input image was uint8, use the im2uint8 function to convert from double back to uint8.

Make a representative HSV image in matlab

I want to create an HSV image (or maybe a coordinate map) which shows the coordinates accurately.
I am using the following code, but the result is not what I want:
img = rand(200,200);
[ind_x, ind_y] = ind2sub(size(img),find(isfinite(img)));
ind_x = reshape(ind_x,size(img));
ind_y = reshape(ind_y,size(img));
ind = ind_x.*ind_y;
figure, imagesc(ind); axis equal tight xy
Let's say you quantize the HSV space (0-1) into 256 bins; there will then be 256*256*256 possible colors. We can fix one dimension (say saturation) and generate the matrix over the other two, giving 256*256 colors.
[x1,x2]=meshgrid(linspace(0,1,256),linspace(0,1,256));
img(:,:,1)=x1;
img(:,:,2)=1; %fully saturated colors
img(:,:,3)=x2;
imgRGB=hsv2rgb(img); %for display purposes
imshow(imgRGB,[])
It will look different in RGB (that's where you would visualize it). It looks similar to your image if you visualize the HSV matrix directly (i.e. without converting it to RGB - but MATLAB doesn't know that it's HSV):
imshow(img,[]);
The second image you have posted can be obtained with:
[x1,x2]=meshgrid(linspace(0,1,256),linspace(0,1,256));
img(:,:,1)=x1;
img(:,:,2)=0;
img(:,:,3)=x2;
imshow(img,[]) %visualizing HSV
