CvBox2D processing - image

I've already got my ROI (a CvBox2D) from a series of contour-processing steps. Now I want to work only on the image region inside that ROI, e.g. feed that part into another processing function. How can I do that? I know there is cvSetImageROI, but it takes a CvRect, so should I convert the CvBox2D to a CvRect first? Or is there some way to apply a mask so that the area outside the box is set to 0?
Thanks in advance!

Only axis-aligned ROIs are directly supported in OpenCV (CvRect or IplROI), because they allow direct access to the image memory buffer.
There are two ways to work on a non-axis-aligned ROI in OpenCV. Neither is as efficient as using an axis-aligned ROI.
1. Rotate your image (or the bounding box) so that your ROI becomes axis-aligned in the resulting rotated image.
Note: the interpolation involved in the rotation will slightly blur your image.
2. Use a mask: draw your ROI as a white rectangle on a black background of the same size as the image, and pass this mask to your processing functions as the additional parameter.
Note: not all functions support masks.
I would recommend option 1 (sketched below) if you really must stay within the exact bounds of your ROI. Otherwise, just use the bounding rect.
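A minimal C++ sketch of option 1, assuming the ROI is a cv::RotatedRect called box and the source image a cv::Mat called src (both names, and the helper function itself, are placeholders):

#include <opencv2/imgproc.hpp>

// Rotate the whole image so that "box" becomes axis-aligned, then crop it out.
// src and box are assumed inputs; the helper name is made up for this sketch.
cv::Mat cropRotatedRoi(const cv::Mat& src, const cv::RotatedRect& box)
{
    // 2x3 affine transform that rotates the image around the box centre by its angle.
    cv::Mat rot = cv::getRotationMatrix2D(box.center, box.angle, 1.0);

    // Warp the full image; the interpolation here is the source of the slight blur.
    cv::Mat rotated;
    cv::warpAffine(src, rotated, rot, src.size(), cv::INTER_CUBIC);

    // After the warp the ROI is axis-aligned: extract box.size pixels around box.center.
    cv::Mat roi;
    cv::getRectSubPix(rotated, box.size, box.center, roi);
    return roi;
}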

Use the C++ API of OpenCV. Seriously, do it.
cv::Rect roi = cv::RotatedRect(box).boundingRect();  // box is your CvBox2D
cv::Mat working_area(original_mat, roi);             // a view into original_mat, no data is copied
// now operate on working_area
Note: this operates on the bounding rect, not on the rotated box itself. I didn't find a ready-made way to create a mask out of a RotatedRect; you probably have to do it by hand in a scanline fashion (one possible shortcut is sketched below).
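One hedged alternative to hand-written scanlines: cv::RotatedRect::points gives the four corners, and cv::fillConvexPoly can rasterize them into a mask. A minimal sketch, assuming the box is already a cv::RotatedRect and the helper name is made up:

#include <opencv2/imgproc.hpp>

// Build a binary mask that is white (255) inside the rotated box and black elsewhere.
// "box" and the image size are assumed inputs.
cv::Mat maskFromRotatedRect(const cv::RotatedRect& box, cv::Size imageSize)
{
    cv::Mat mask = cv::Mat::zeros(imageSize, CV_8UC1);

    cv::Point2f cornersF[4];
    box.points(cornersF);                  // the four corners of the rotated box

    cv::Point corners[4];
    for (int i = 0; i < 4; ++i)
        corners[i] = cornersF[i];          // round to integer pixel coordinates

    cv::fillConvexPoly(mask, corners, 4, cv::Scalar(255));
    return mask;
}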

Related

Python, plotting curve into image from pixels

I need help: I have a matrix of pixel values from 0 to 255, and I need to render it as an image with a plane curve drawn on top of it.
For example, I have the curve x = 10 + cos(t), y = 10 + sin(t) and a pixel matrix m[i][j], and I need to show both in one image.
I know I might need OpenCV; that's not a problem, but I can't manage to write the code myself.
Your example curve is a circle.
If you need to draw a circle, use the cv::circle drawing call.
API docs: https://docs.opencv.org/4.x/d6/d6e/group__imgproc__draw.html
If you need an ellipse, use the cv::ellipse drawing call.
If you need to draw an arbitrary curve, calculate the list of points yourself and then use cv::polylines or cv::drawContours, as sketched after this answer.
OpenCV also has a Plot2D class in the opencv_contrib modules, but that does not do parametric plots.
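As a rough illustration of the polyline approach (shown in C++ to match the rest of this page; the same calls exist in the Python bindings), this samples the example curve and draws it over an existing grayscale image. The image name and the sampling step are assumptions:

#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Draw the parametric curve x = 10 + cos(t), y = 10 + sin(t) on top of img.
// "img" is assumed to be the matrix of pixel values, already loaded as a cv::Mat.
void drawParametricCurve(cv::Mat& img)
{
    std::vector<cv::Point> pts;
    for (int i = 0; i <= 360; ++i)                        // sample t in 1-degree steps
    {
        double t = i * CV_PI / 180.0;
        pts.emplace_back(cvRound(10.0 + std::cos(t)), cvRound(10.0 + std::sin(t)));
    }
    std::vector<std::vector<cv::Point>> curves{pts};      // polylines takes a list of curves
    cv::polylines(img, curves, false, cv::Scalar(255), 1, cv::LINE_AA);
}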

Using regionprops in MATLAB to detect shapes only in part of the picture

I have video frames with elliptical objects in them. I'm trying to detect the main ellipse using regionprops and it works just fine.
However, since I want to speed up the process, I want regionprops to only look for those ellipses in a certain area of the image. I could crop each frame so that only the relevant area is left, but I would rather have regionprops look only at specified areas.
Is such an option possible?
regionprops uses a label matrix provided by bwlabel(bw_image), or just logical(grayscale_image) if you assume there is only one object in the image. To make regionprops process only part of the image, set the irrelevant part of the label matrix to zero.

Image pixelation library, non-square "pixel" shape

I've seen a few libraries that pixelate images, some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
However, I'm looking to use a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangles as the pixel base.
Are there any libraries out there that do this, if not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply take the average of each channel in RGB independently). Then set all of the pixels contained inside to the average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
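A rough C++/OpenCV sketch of that averaging idea; the 19x27 cell size is taken from the question and the function name is made up:

#include <opencv2/core.hpp>
#include <algorithm>

// Replace each cellW x cellH block of img with its average colour; cells at the
// right/bottom border may be smaller than the full cell size.
void pixelate(cv::Mat& img, int cellW = 19, int cellH = 27)
{
    for (int y = 0; y < img.rows; y += cellH)
        for (int x = 0; x < img.cols; x += cellW)
        {
            int w = std::min(cellW, img.cols - x);
            int h = std::min(cellH, img.rows - y);
            cv::Mat cell = img(cv::Rect(x, y, w, h));   // a view into img, not a copy
            cell.setTo(cv::mean(cell));                 // per-channel average of the cell
        }
}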
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box blur image-processing algorithm, except that instead of averaging the pixels immediately around the current pixel, you average over the cell of the original image that corresponds to the current 19x27 division of the resulting image.

Any ideas on how to remove small abandoned pixel in a png using OpenCV or other algorithm?

I have a PNG image where the blue color represents transparency, and each circle is a group of pixels. I want to find the biggest group and remove all the small pixel groups that are not connected to it. In this example the biggest group is the red circle, so I will keep it, but the green and yellow ones are too small, so I will remove them. Afterwards I would be left with only the red circle.
Any ideas? Thanks.
If you consider only the size of the objects, use the following algorithm: label the connected components of the object mask (all object pixels white, transparent ones black). Then compute the area of each connected component and filter on it. At this step you have a label map and a list of authorized labels; read the label map and overwrite the mask image, setting a pixel to white only if it carries an authorized label.
OpenCV's C API does not have a dedicated labelling function, but cvFloodFill can do the same thing with several calls: for each still-unlabelled white pixel, call cvFloodFill with that pixel as the seed, and store the resulting label in an array the size of the image. Repeat this as long as you have unlabelled pixels.
Alternatively, you can implement the connected-component labelling for binary images yourself; the algorithm is well known and easy to implement (Matlab's bwlabel is a good reference).
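Newer OpenCV releases (3.0 and later) do ship a labelling function, cv::connectedComponentsWithStats. A minimal sketch of the size-filtering idea with it, assuming a binary CV_8UC1 mask where object pixels are white:

#include <opencv2/imgproc.hpp>

// Keep only the largest connected white blob of a binary mask (CV_8UC1).
cv::Mat keepLargestBlob(const cv::Mat& mask)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);

    // Label 0 is the background; pick the foreground label with the largest area.
    int best = 0, bestArea = 0;
    for (int i = 1; i < n; ++i)
    {
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        if (area > bestArea) { bestArea = area; best = i; }
    }
    return labels == best;   // 255 where the pixel belongs to the largest blob, 0 elsewhere
}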
The handiest way to filter objects, if you have a priori knowledge of their size, is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), apply an "opening", that is an erosion followed by a dilation.
Small objects (smaller than the structuring element you choose) disappear with the erosion, while bigger ones remain and are restored by the dilation.
(Reference: cv::morphologyEx.)
The shape of the big object might be altered. If you're only doing detection, this is harmless, but if you want your object to avoid that transformation you'll need to apply a "top hat" transform.
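A minimal sketch of that opening with the C++ API; the 15x15 elliptical kernel is only an assumed size and must be larger than the blobs you want to remove:

#include <opencv2/imgproc.hpp>

// Remove blobs smaller than the structuring element with an opening
// (erosion followed by dilation). "mask" is a binary CV_8UC1 image.
void removeSmallBlobs(cv::Mat& mask)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);
}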

How to limit an image to its real shape

How can I make collision detection in XNA happen only for the area of the actual shape in an image, not for the rectangular area around it?
For example, with the picture below, I want collision detection to trigger only when the arrow shape itself is touched.
At the moment the collision detection happens over the whole rectangular area shown in the picture.
How can I restrict it to the area of the shape only?
One thing you can do is create two rectangles instead of one. That makes the overlapping area (the area that the rectangles cover but the shape doesn't) a bit smaller. But if you need this to be pixel-exact, you have to use the resource-expensive per-pixel collision.
You shouldn't try to restrict the image's shape, because regardless of your efforts you will still have a rectangle. What you need to do is detect pixel collisions. It is a fairly extensive topic; you can read more about a Windows Phone-specific XNA implementation here.
