I need to blend a lot of images taken with the camera plugin (ui.Image format) using a median filter, like in Photoshop.
I currently use a Canvas with drawImage and Paint.blendMode (I'm using lighten, which gives a somewhat similar effect), but there is no "median" blend mode.
A median blend takes, for each pixel, the median value of each RGB channel across all the images.
So... is there anything for a median filter in Dart/Flutter, or do I need to go through all the image pixels manually? In the latter case, how can I convert a ui.Image into a class that lets me read a single pixel and write it into another image?
Example:
color = image.getPixel(x, y);
newImage.setPixel(x, y, color);
Thank you in advance!
You can convert a ui.Image to raw RGB(A) bytes in Flutter, but such per-pixel manipulation in pure Dart will be prohibitively slow. You need an additional package, e.g. https://pub.dev/packages/photofilters or some other image-processing library.
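For reference, the per-pixel, per-channel median itself is straightforward once you have raw pixel data. Here is a minimal sketch in Python/NumPy, purely to illustrate the math rather than the Flutter API; it assumes the frames have already been decoded into same-sized RGB arrays (in Flutter you could obtain raw bytes via ui.Image.toByteData or the image package):

import numpy as np

def median_blend(frames):
    # frames: list of H x W x 3 uint8 arrays of identical size
    stack = np.stack(frames, axis=0)               # shape (N, H, W, 3)
    # Per-pixel median of each RGB channel, taken independently across all N frames
    return np.median(stack, axis=0).astype(np.uint8)

# usage with hypothetical decoded frames:
# blended = median_blend([frame0, frame1, frame2])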
This is my binary image:
[input binary image]
I'm trying to isolate the digits from everything else in the image. I got this output after applying a stroke width transform to an image of a runner with a bib.
I already tried using morphological transformations to close the holes in the digits, then checking the area of each contour and disregarding those with an area smaller than the average. However, in this case, using the contour area to identify the noise is not useful, because the noise is bigger than the characters.
Do you have any suggestions on how I can do it? Big thanks.
You can try to use a blur filter here.
You can try applying Dilate after Erode, as in the sketch below.
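As a rough illustration of the Erode/Dilate suggestion, here is a sketch assuming OpenCV is used (the question does not name a library, so the file name and 3x3 kernel below are just placeholders to experiment with):

import cv2
import numpy as np

binary = cv2.imread("bib_swt.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input, white strokes on black

kernel = np.ones((3, 3), np.uint8)

# Erode first to break up thin noise, then dilate to grow the surviving
# strokes back to roughly their original thickness (morphological opening).
eroded = cv2.erode(binary, kernel, iterations=1)
result = cv2.dilate(eroded, kernel, iterations=1)

# The same pair of operations in a single call:
# result = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

cv2.imwrite("cleaned.png", result)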
I have video frames with elliptical objects in them. I'm trying to detect the main ellipse using regionprops and it works just fine.
However, since I want to speed up the process, I want regionprops to only look for those ellipses in a certain area of the image. I could crop the image each frame so that only the relevant area is left, but I would rather have regionprops look only in specified areas.
Is such an option possible?
regionprops uses the label matrix provided by bwlabel(bw_image), or just logical(grayscale_image) if you assume there is only one object in the image. To make regionprops process only a part of the image, set the irrelevant part of the label matrix to zero.
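The answer above refers to MATLAB; as a rough analogue in Python with scikit-image, just to illustrate the idea of zeroing the labels outside the area of interest before measuring (the file name and ROI bounds are made up):

import numpy as np
from skimage.measure import label, regionprops

bw = np.load("frame_mask.npy")        # hypothetical binary frame, shape (H, W)

lbl = label(bw)                       # label matrix, like bwlabel(bw_image)

# Zero out the labels outside the region of interest; regionprops ignores label 0.
r0, r1, c0, c1 = 100, 300, 200, 500   # example ROI rows/columns
roi = np.zeros_like(lbl, dtype=bool)
roi[r0:r1, c0:c1] = True
lbl[~roi] = 0

props = regionprops(lbl)              # only measures objects inside the ROI

Note that objects straddling the ROI border will only be measured on the part that lies inside the ROI.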
I've seen a few libraries that pixelate images; some of them even feature non-square shapes, such as The Pixelator's circle and diamond shapes.
However, I'm looking to make a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangular shapes as the pixel base.
Are there any libraries out there that do this, if not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply take the average of each channel in RGB independently). Then set all of the pixels contained inside to the average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
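A minimal sketch of that block-averaging approach with NumPy and Pillow, using the 19x27 block size from the question (the file names are placeholders; partial blocks at the right and bottom edges are simply averaged over whatever pixels remain):

import numpy as np
from PIL import Image

BLOCK_W, BLOCK_H = 19, 27

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64)
h, w = img.shape[:2]
out = np.empty_like(img)

for y in range(0, h, BLOCK_H):
    for x in range(0, w, BLOCK_W):
        block = img[y:y + BLOCK_H, x:x + BLOCK_W]
        # Average each RGB channel independently, then fill the whole block with it.
        out[y:y + BLOCK_H, x:x + BLOCK_W] = block.mean(axis=(0, 1))

Image.fromarray(out.astype(np.uint8)).save("pixelated.png")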
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box blur image-processing algorithm, except that instead of sampling the immediately neighbouring pixels, you would sample according to the divisions of the original image relative to the 19x27 divisions of the resulting image.
Does OpenGL provide any facilities to help with image smoothing?
My project converts scientific data to textures, each of which is a single line of colored pixels which is then mapped onto the appropriate area of the image. Lines are mapped next to each other.
I'd like to do simple image smoothing of this, but am wondering if OpenGL can do any of it for me.
By smoothing, I mean applying a two-dimensional averaging filter to the image - effectively increasing the number of pixels but filling them with averages of nearby actual colors - basically normal image smoothing.
You can do it through a custom shader if you want. Essentially you just bind your input texture, draw it as a fullscreen quad, and in the shader just take multiple samples around each fragment, average them together, and write it out to a new texture. The new texture can be an arbitrary higher resolution than the input texture if you desire that as well.
I've already got my ROI (CvBox2D type) from a series of contour-processing steps; now I just want to focus on the image part within the ROI, e.g. feed this part into another processing function. How can I do that? I know there is cvSetImageROI, but it takes a CvRect, so should I convert the CvBox2D to a CvRect first? Or is there some way to apply a mask, with the area outside the box set to 0?
Thanks in advance!
Only axis-aligned ROIs are directly supported in OpenCV (CvRect or IplROI). This is because they allow direct access to the image memory buffer.
There are two ways to go about working on a non-axis-aligned ROI in OpenCV, neither of them as efficient as using axis-aligned ROIs:
1. Rotate your image, or the bounding box, so that your ROI is axis-aligned in the resulting rotated image.
Note: the rotation will slightly blur your image.
2. Use a mask: draw your ROI as a white rectangle on a black background the same size as the image, and give your processing functions this mask as the additional parameter.
Note: not all functions support masks.
I would recommend option 1 if you really must stay within the exact bounds of your ROI. Otherwise, just use the bounding rect.
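For option 2, here is a sketch of building such a mask from a rotated box using the Python bindings (cv2.boxPoints requires OpenCV 3+; with the old C API and CvBox2D the idea is the same: take the four corner points and fill them as a convex polygon). The image path and box values are invented:

import cv2
import numpy as np

img = cv2.imread("frame.png")

# Rotated box: ((center_x, center_y), (width, height), angle),
# e.g. as returned by cv2.minAreaRect(contour).
box = ((200.0, 150.0), (120.0, 60.0), 30.0)

corners = cv2.boxPoints(box).astype(np.int32)   # 4 corner points of the rotated rectangle

# White rotated rectangle on a black background, same size as the image.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, corners, 255)

# Pass the mask to functions that accept one, e.g.:
mean_inside_box = cv2.mean(img, mask=mask)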
Use the C++ API of OpenCV. Seriously, do it.
cv::Rect roi = cv::RotatedRect(box).boundingRect();  // axis-aligned bounding rect of the box
cv::Mat working_area(original_mat, roi);             // header over the ROI, no data copied
// now operate on working_area
Note: this will operate on the bounding rect, not on the rotated box itself. I didn't find information on how to create a mask out of a RotatedRect; probably you have to do it by hand in a scanline fashion.