I am showing an SVG map with the coastline drawn with a blurry effect, as shown in this image:
I am using a simple feGaussianBlur filter to draw the coastline below the land polygons:
<filter id="blur">
  <feGaussianBlur in="SourceGraphic" stdDeviation="4"/>
</filter>
The result is satisfying on the north coast. However, some rectangular patterns appear in the red circle. This is due to the segmentation of the coast into several linear elements, whose blurry margins intersect.
Is there a way to fix this and have a 'nice' blurry effect everywhere?
I already tried color-interpolation-filters=sRGB and image-rendering=optimizeQuality without any success.
FYI, the demo map is here with the source code.
I think this is because of the filter dimensions: the extended (blurred) parts are being clipped. Try extending these boundaries:
<filter id="degenCodeNeon" x="-50%" y="-50%" width="200%" height="200%">
The percentages are relative to the objectBoundingBox; you could also have specified userSpaceOnUse. But try this one first.
Tried with x/y at -250% and width/height at 600%; that seems to work. I also suggest adding a color matrix or component transfer filter to reduce the alpha completely to 0 below a certain threshold, as sketched below.
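For reference, a minimal sketch of that suggestion, assuming the component transfer step is appended after the blur inside the same filter. The table values are arbitrary, but with type="table" and values "0 0 1", alpha below roughly 0.5 is forced to 0 while the upper half keeps a smooth ramp:
<filter id="blur" x="-250%" y="-250%" width="600%" height="600%">
  <feGaussianBlur in="SourceGraphic" stdDeviation="4"/>
  <!-- zero out faint alpha below ~0.5, keep a smooth ramp above it -->
  <feComponentTransfer>
    <feFuncA type="table" tableValues="0 0 1"/>
  </feComponentTransfer>
</filter>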
I have a figure that looks like this:
I want to find the coordinates of all intersections of three hexagons.
How can I do this? Should I use OpenCV?
I am still trying to think of a faster/better method, but I think the following should work:
threshold your image to pure blacks and whites
generate and save a list of all black pixels for later
label your image so that each white hexagon is effectively flood-filled with a unique color (or shade of grey) - some folks call this "labelling", some call it "Blob Analysis", some call it "Connected Component Analysis". Whatever it is called, you will get something like this:
Now look at each black pixel from the list you saved in the second step and count how many different colours other than black are in the surrounding 9x9, or 15x15 area. If it's three it is probably an intersection like you are looking for.
Of course there are variations on this - you could implement a "minimum distance from other intersection" on top, for example. Or a "black line thinning first". Or a dilation of each blob to erode the black lines and make the three colours closer together. You could scale your image down (being careful to use NEAREST_NEIGHBOUR rather than interpolation) after labelling to reduce processing time - if important.
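A rough Python/OpenCV sketch of this recipe (the file name, window size, and three-region test are my own choices, not a verified implementation):
import cv2
import numpy as np

img = cv2.imread('hexagons.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input file

# 1) threshold to pure black and white
_, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# 2) save the positions of all black (line) pixels
black_ys, black_xs = np.where(bw == 0)

# 3) label the white regions (connected component analysis); each hexagon gets its own integer label
num_labels, labels = cv2.connectedComponents(bw)

# 4) for each black pixel, count distinct labels in the surrounding 15x15 window
half = 7
intersections = []
for y, x in zip(black_ys, black_xs):
    window = labels[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
    regions = set(np.unique(window)) - {0}   # 0 covers the black line pixels themselves
    if len(regions) >= 3:                    # three different regions meet here
        intersections.append((x, y))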
You can try to find these features using a Harris corner detector.
Also check whether findContours, with an analysis of where the resulting contours intersect, could give you useful information.
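If it helps, a minimal cornerHarris sketch (the input file name and parameter values are my own guesses and will need tuning):
import cv2
import numpy as np

gray = np.float32(cv2.imread('hexagons.png', cv2.IMREAD_GRAYSCALE))   # hypothetical input
response = cv2.cornerHarris(gray, blockSize=5, ksize=3, k=0.04)        # corner response map
corners = np.argwhere(response > 0.01 * response.max())                # (row, col) candidates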
I am trying to find a way to determine whether an image needs to be rotated in order for the text to be horizontally aligned, and if it does need to be rotated, by how many degrees.
I am sending the images to tesseract and for tesseract to be effective, the text in the images needs to be horizontally aligned.
I'm looking for a way do this without depending on the "Orientation" metadata in the image.
I've thought of the following ways to do this:
Rotate the image 90 degrees clockwise four times and send all four images to tesseract. This isn't ideal because of the need to process one image 4 times.
Use a Hough line transform to see if the lines are vertical or horizontal. If they are vertical, rotate the image. Even then, the image might still need to be rotated 180 degrees, so I'm unsure how effective this would be.
I'm wondering if there are other ways to accomplish this using OpenCV, ImageMagick or any other image processing techniques.
If you have, say, 1000 images labelled horizontal or vertical, you can resize them to 224x224 and then fine-tune a convolutional neural network, like AlexNet or VGG, for this task. If you want to know how many rotations to apply to the image, you can set the labels to the number of clockwise 90-degree rotations, i.e. 0, 1, 2, 3.
http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html
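The linked example uses Caffe; as a rough illustration of the same idea, here is a minimal PyTorch sketch (my own translation, replacing the final layer with a 4-class head for the 0-3 rotation labels):
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)        # start from ImageNet weights
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the convolutional layers
model.classifier[6] = nn.Linear(4096, 4)     # new head: 4 rotation classes (0, 1, 2, 3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
# ...then train as usual on 224x224 images labelled with their clockwise rotation count.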
Attempting OCR on all 4 orientations seems like a reasonable choice, and I doubt you will find a more reliable heuristic.
If speed is an issue, you could OCR a small part of the image first. Select a rectangular region that has the proper amount of edge pixels and white/black ratio for text, then send that to tesseract in different orientations, as sketched below. With a small region, you could even try smaller steps than 90°, or combine it with another heuristic like Hough.
If you remember the most likely orientation based on previous images, and stop once an orientation is successfully processed by tesseract, you probably do not even have to try most orientations in most cases.
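A rough sketch of the small-region idea, assuming pytesseract is installed (the crop choice and character-count scoring are my own simplifications):
import cv2
import numpy as np
import pytesseract

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
h, w = img.shape
crop = img[h // 3: 2 * h // 3, w // 3: 2 * w // 3]   # some central region, ideally text-rich

# score each 90-degree orientation by how much text tesseract reads from the crop
scores = [len(pytesseract.image_to_string(np.rot90(crop, k)).strip()) for k in range(4)]
best = int(np.argmax(scores))   # number of 90-degree counter-clockwise rotations to apply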
You can figure this out in a terminal with tesseract's --psm option.
tesseract --psm 0 "infile" "outfile" will create outfile.osd which contains the info:
Page number: 0
Orientation in degrees: 90
Rotate: 270
Orientation confidence: 27.93
Script: Latin
Script confidence: 6.55
man tesseract
...
--psm N
Set Tesseract to only run a subset of layout analysis and assume a certain form of image. The options for N are:
0 = Orientation and script detection (OSD) only.
1 = Automatic page segmentation with OSD.
2 = Automatic page segmentation, but no OSD, or OCR. (not implemented)
...
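If you prefer to get the same OSD output from Python, pytesseract exposes it as a string (the file name here is hypothetical):
import pytesseract
from PIL import Image

osd = pytesseract.image_to_osd(Image.open('infile.png'))
print(osd)   # contains lines like "Orientation in degrees: 90" and "Rotate: 270"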
I have some images and want to detect red-colored objects in them. But around the blue object there is a red shade that gets detected, which is not a true detection. How can I remove these red shades by filtering or processing the image? Any MATLAB commands or technical hints will be appreciated.
Thanks.
This is a sample object with the unwanted red shade:
http://tinypic.com/view.php?pic=o7rmsg&s=8
I put a border around the unwanted red shade here:
http://tinypic.com/view.php?pic=28jefec&s=8
I = imread('http://oi62.tinypic.com/o7rmsg.jpg');   % load the sample image
I = imcrop(I, [200 100 400 250]);                   % crop to the region of interest
Ir = I(:,:,1);                                      % red channel
Ig = I(:,:,2);                                      % green channel
Ib = I(:,:,3);                                      % blue channel
I1 = Ib - Ir;                                       % high only where blue exceeds red
bw = im2bw(I1, graythresh(I1));                     % Otsu threshold -> binary mask
I2(:,:,1) = Ir .* uint8(bw);                        % apply the mask to each channel
I2(:,:,2) = Ig .* uint8(bw);
I2(:,:,3) = Ib .* uint8(bw);
imshow(I2)
I'm presuming that you are doing some sort of color segmentation and can get out a binary image (BW) showing all "red objects" detected in the image, some of which are your real objects, others which are the shades.
In this case it's fairly easy to do some checks on the nature of the detected objects, to filter out the incorrect matches, using regionprops.
stats = regionprops(BW,'basic'); % 'basic', 'all', or specific list of properties to measure
For example, if the "red shade" areas detected are always much smaller overall than the real objects you're looking for, you can check the 'Area' property and remove any detected regions that don't fit. Or you can calculate some other measure of shape ('Eccentricity' or 'Solidity', for example) - e.g. if your real objects are roughly circular and solid, then it should be pretty easy to tell the difference between them and the sort of area you show in your example image.
Take your image and convert it into its grayscale equivalent.
Now apply a general threshold to this image, or a threshold with a particular value/percentage. By doing so, the small unwanted red pixels get eliminated; then convert your new image back into RGB format. You can also try using some filters.
I'm interested in some kind of charcoal filter, like Photoshop's Photocopy or Note Paper filters.
Does anyone have a paper or some instructions on how this kind of filter works?
Ideally, I want to create the following:
Input:
Output:
Greetings
I think it's a process akin to pan-sharpening. I could get quite a similar image in GIMP by:
Converting to gray
Duplicating into two layers
Lightly blurring one layer
Edge-detecting in the other layer with a DoG (difference of Gaussians) filter with a large radius
Compositing the two layers, playing a bit with the transparency.
What this is doing is converting the color picture into a 0-1 bitmap picture.
They typically use a threshold function which returns 1 (white) for some values and 0 (black) for the others.
One simple function would be to transform the image from color to grayscale, and then select a shade of gray above which everything is white and below which everything is black. The actual threshold you use could be made adaptive depending on the brightness of the picture (you want a certain percentage of pixels to be white).
It can also be adaptive based on the context within the picture (i.e. a dark area may still have some white pixels to show local contrast). The trees behind the house are not all black because the filtering is sensitive to the average darkness of the region.
Also note that the area close to the light gap in the tree has a cluster of dark pixels, because of its relative darkness. The edges of the home, the bench are also highlighted. There is an edge detection element at play.
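To make that concrete, here is a small Python/OpenCV sketch contrasting a single global (Otsu) threshold with an adaptive one whose cut-off follows the local mean; the block size and offset are arbitrary choices:
import cv2

gray = cv2.imread('house.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical input

# global threshold: one cut-off for the whole picture, chosen automatically by Otsu's method
_, global_bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# adaptive threshold: the cut-off follows the local mean, so dark regions keep some white pixels for local contrast
adaptive_bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, 5)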
I do not know exactly what effect you gave an example of, but there are a variety that are similar to it. As VSOverFlow pointed out, thresholding an image would result in something very similar to that, though I do not think it is what is being used. OpenCV has a function for this; its documentation can be found here. You may also want to look into Otsu's method for thresholding.
Again, as VSOverFlow pointed out, there is an edge detection element at play as well. You may want to investigate the Sobel and Prewitt filters. Those are three simple options that will give you something similar to the image you provided. Perhaps you could threshold the result from the Prewitt filter? I have no knowledge of how Photoshop implements its filters. If none of these options are close enough to what you are looking for, I would recommend looking for information on the specific implementations of those filters in Photoshop.
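As a hedged sketch of the "edge detection plus threshold" combination (using Sobel rather than Prewitt, since OpenCV ships it directly; the threshold value is a guess):
import cv2
import numpy as np

gray = cv2.imread('house.jpg', cv2.IMREAD_GRAYSCALE)               # hypothetical input
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)                    # horizontal gradient
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)                    # vertical gradient
edges = np.uint8(np.clip(np.hypot(gx, gy), 0, 255))                # gradient magnitude
_, sketch = cv2.threshold(edges, 60, 255, cv2.THRESH_BINARY_INV)   # dark lines on white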
What is the best algorithm (best in terms of result, not performance) to fetch the dominant colors from an image? The algorithm should discard the background of the image.
I know I can build an array of colors and count how often each appears in the image, but I need a way to determine what is the background and what is the foreground, and keep only the latter (the foreground) in mind while reading the dominant colors.
The problem is very hard, especially for gradient backgrounds or backgrounds with patterns (not plain).
Isolating the foreground from the background is beyond the scope of this particular answer, but...
I've found that applying a pixelation filter to an image will draw out a really good set of 'average' colours.
Before
After
I sometimes use this approach to derive a palette of colours with a particular mood. I first find a photograph with the general tones I'm after, pixelate it, and then sample from the resulting image.
(Thanks to Pietro De Grandi for the image, found on unsplash.com)
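A minimal sketch of the pixelate-then-sample idea (the block size is arbitrary; INTER_AREA averaging over each block is what produces the "averaged" colours):
import cv2

img = cv2.imread('photo.jpg')                # hypothetical input
h, w = img.shape[:2]
block = 32                                   # pixelation block size, tune to taste

small = cv2.resize(img, (w // block, h // block), interpolation=cv2.INTER_AREA)
palette = small.reshape(-1, 3)               # each row is one averaged BGR colour
pixelated = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)   # the "After" look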
The colour summarizer is a pretty sweet resource for info on this subject, not to mention their seemingly free XML web API that will produce descriptive colour statistics for an image of your choosing, reporting back the following, formatted with swatches in HTML or as XML...
what is the average color hue, saturation and value in my image?
what is the RGB colour that is most representative of the image?
what do the RGB and HSV histograms look like?
what is the image's human readable colour description (e.g. dark pure blue)?
The purpose of this utility is to generate metadata that summarizes an image's colour characteristics for inclusion in an image database, such as Flickr. In particular this tool is being used to generate metadata for Flickr's Color Fields group.
In my experience, though, this tool still misses the "human-readable" / obvious "main" colour a lot of the time. Silly machines!
I would say this problem is closer to "impossible" than "very hard". The only approach to it that I can think of would be to make the assumption that the background of an image is likely to consist of solid blocks of similar colors, while the foreground is likely to consist of smaller blocks of dissimilar colors.
If this assumption is generally true, then you could scan through the whole image and weight pixels according to how similar or dissimilar they are to neighboring pixels. In other words, if a pixel's neighbors (within some arbitrary radius, perhaps) were all similar colors, you would not incorporate that pixel into the overall estimate. If the neighbors tend to be very different colors, you would weight the pixel heavily, perhaps in proportion to the degree of difference.
This may not work perfectly, but it would definitely at least tend to exclude large swaths of similar colors.
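A rough sketch of that weighting idea (the window size, colour quantisation, and distance-to-local-mean weight are all my own choices):
import cv2
import numpy as np

img = cv2.imread('photo.jpg').astype(np.float32)   # hypothetical input

# weight each pixel by how different it is from the mean of its neighbourhood
local_mean = cv2.blur(img, (15, 15))
weight = np.linalg.norm(img - local_mean, axis=2).reshape(-1)

# weighted histogram over a coarsened colour space (4 bits per channel)
q = (img.astype(np.uint8) >> 4).reshape(-1, 3).astype(np.int64)
codes = (q[:, 0] << 8) | (q[:, 1] << 4) | q[:, 2]
hist = np.bincount(codes, weights=weight, minlength=16 ** 3)
dominant_code = int(np.argmax(hist))   # most heavily weighted coarse colour bin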
As far as my knowledge of image processing algorithms extends, there is no certain way to get the "foreground"; it is only possible to get the borders between objects. You'll probably have to make do with an average, or with your proposed array-count method. In that case, you'll want to give colours with higher saturation a higher "score", as they're much more prominent.