How to fix PNG image edges in GIMP?

I am designing a logo for my business and I need to fix the edges of my image (as you can see, they are not straight lines).
I have installed GIMP because I am trying to do this for free, and I can't seem to fix this problem. I am very new to editing images.
Is there any way I can fix this image? Or maybe someone could do it easily for me for free?
https://i.stack.imgur.com/8emYc.png

If you do logos, the best tool for that is a vector graphics editor (free: Inkscape; from Adobe: Illustrator). Vector graphics can be scaled up cleanly (and it seems that your logo has been scaled up...).
If you want to stick with bitmap graphics, the best way is to erase the lines and redo them properly (a scripted version of the drawing step is sketched after these steps):
Use guides (the dashed vertical/horizontal lines) to "prepare" the positions and make sure that everything is square and symmetrical
Create paths (the blue-white lines) using these guides to indicate where the lines are going to be
Render the lines by "stroking" the paths in Line mode.
Create a last path for the red bits (which are the corners of a square)
Bucket-fill the red square
Make a rectangle selection (easy using the existing guides) and delete the center (no need to be too accurate, since the "inner grid" will hide the excess)
Transfer the VH by copy/paste
Note that each element is on its own layer, which makes everything a lot easier.
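If you end up redrawing lines like this often, the drawing step can be scripted. Here is a minimal Python-Fu sketch (assuming GIMP 2.x with Python-Fu; the coordinates, color, and brush size are placeholder values, not taken from the logo above):

```python
# Minimal GIMP Python-Fu sketch: redraw clean straight lines at
# guide-aligned positions. Run from Filters > Python-Fu > Console.
from gimpfu import *

def redraw_logo_lines(image, drawable):
    pdb.gimp_image_undo_group_start(image)
    pdb.gimp_context_set_foreground((0, 0, 0))  # line color (placeholder)
    pdb.gimp_context_set_brush_size(4)          # line thickness in pixels
    # Each call draws one straight stroke; points are a flat [x1, y1, x2, y2] list.
    pdb.gimp_pencil(drawable, 4, [50, 100, 450, 100])  # one horizontal line
    pdb.gimp_pencil(drawable, 4, [100, 50, 100, 450])  # one vertical line
    pdb.gimp_image_undo_group_end(image)
```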
Final result:

Related

How to remove regular strips

I want to remove the regular strips from the image shown below. I have tried many methods, such as a median filter and an FFT filter, and they do not work.
Could you tell me how to remove the strips?
All that black is removing a ton of information from the image. You have two options available - either re-capture that missing information in a new shot, or attempt to invent / synthesize / extrapolate the missing information with software.
If you can re-shoot, get your camera as close to the mesh fence as you can, use the largest aperture your lens supports to have the shallowest possible depth of field, and set your focus point as deep as possible - this will minimize the appearance of the mesh.
If that is the only still you have to work with, you've got a few dozen hours of playing with the clone and blur tools in front of you in just about any image editing software package you like.
Photoshop would be my go-to tool of choice for this. Photoshop CS5 introduced a feature called Content-Aware Fill. I'm not sure it will help in this specific case, because there is so much black that Adobe's algorithm may treat other parts of the mesh as valid sources for filling in the mesh you're trying to clear out.
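If you'd rather automate the retouching, OpenCV's inpainting is a free way to attempt the same kind of fill: you mask out the mesh and let the algorithm synthesize the missing pixels from the surroundings. A minimal sketch, assuming the mesh is dark enough to isolate with a simple threshold (the file names and threshold are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("fence.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Mask the near-black mesh; the upper bound will need tuning per image.
mask = cv2.inRange(gray, 0, 40)
# Dilate slightly so the dark halo around the mesh is covered too.
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)

# Synthesize the masked pixels from their surroundings (Telea's method).
result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("fence_inpainted.jpg", result)
```

With this much occlusion the result will be soft, for the same reason given above: the algorithm has very little real information to work from.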

Rectangle detection in image

I'd like to program the detection of a rectangular sheet of paper, which doesn't absolutely need to be perfectly straight on each side, as I may take a picture of it "in the air", which means the individual sides of the paper might be a bit distorted.
The app CamScanner (iOS and Android) does this very well, and I'm wondering how it might be implemented. First of all I thought of:
smoothing / noise reduction
Edge detection (Canny etc.) or thresholding (global/adaptive)
Hough transform
Detecting lines (only vertical/horizontal allowed)
Calculating the intersection points of the 4 found lines
But this gives me a lot of problems with different types of images.
I'm wondering if there's maybe a better approach to directly detecting a rectangular-like shape in an image and, if so, whether CamScanner implements it like this as well.
Here are some images taken in CamScanner.
These ones are detected quite nicely, even though in (a) the side is distorted (the corner still gets shown in the overlay but doesn't really fit the corner of the white paper), and in (b) the background is pretty close to the actual paper, but it still gets recognized correctly:
It even gets the rotated pictures right:
And when I insert some testing errors, it fails, but at least detects some of the contour and always tries to detect it as a rectangle:
And here it fails completely:
I suppose in the last three examples, if it did a Hough transform, it could have detected at least two of the four sides of the rectangle.
Any ideas and tips?
Thanks a lot in advance.
The OpenCV framework may help with your problem. Also, you can look at this document for the Android platform.
The full source code is available on Github.
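A common contour-based alternative to the line-intersection pipeline in the question is to approximate each contour and keep the largest convex 4-point polygon. A minimal sketch, assuming OpenCV 4.x in Python (the file name and the blur, Canny, and approximation parameters are placeholders to tune):

```python
import cv2

img = cv2.imread("page.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)   # smoothing / noise reduction
edges = cv2.Canny(gray, 50, 150)           # edge detection

# Keep the largest contour that simplifies to a convex quadrilateral.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
sheet = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)  # tolerance ~2% of perimeter
    if len(approx) == 4 and cv2.isContourConvex(approx):
        sheet = approx.reshape(4, 2)
        break

print(sheet)  # four corner points of the detected sheet, or None
```

Because the polygon approximation tolerates slightly curved sides, this tends to handle the "in the air" distortion better than requiring perfectly straight Hough lines.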

remove object from image

I have a few thousand images from our vendors. They are models wearing fashion clothing. I need to keep only the clothes part of the images, discard the rest, and make the background transparent. All the images have a one-color background, but the color differs from image to image. Currently we perform the following steps manually, and I need suggestions and help: is there a way to do this automatically, or a way to make the manual process faster? We used GIMP and Script-Fu to automate part of this process (see the steps below), but the remaining manual part is very time consuming. Is there any tool, programming language, or script that can make this process faster?
This is the way we are doing it now:
We use a GIMP Script-Fu script, run in batch, to make all image backgrounds transparent.
Load each image into GIMP manually, one by one
With the Free Select tool, mark around the clothes
Remove everything outside the marked clothes area
Export and save the image in PNG format.
Run Script-Fu in batch to auto-crop all the images
I haven't figured out a way (code or script) to do step 3 automatically. Does anyone know if that is even possible? If it is not, is there a tool that could combine steps 4-6 into one control key, to reduce the keystrokes, or any faster way to finish these images?
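Fully automating step 3 is hard, but OpenCV's GrabCut can often replace the manual free-select: you give it a rough rectangle around the garment and it estimates the foreground. A minimal sketch, assuming OpenCV/numpy (the file names and rectangle coordinates are placeholders; in practice you could derive the rectangle from your auto-crop):

```python
import cv2
import numpy as np

img = cv2.imread("model_001.jpg")        # placeholder file name
mask = np.zeros(img.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)

# Rough box around the garment (placeholder coordinates).
rect = (100, 150, 400, 600)  # x, y, width, height
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probable) foreground become opaque, the rest transparent.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
rgba[:, :, 3] = fg.astype(np.uint8)
cv2.imwrite("model_001_cut.png", rgba)  # PNG preserves the alpha channel
```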
Thank you for your suggestions. This is what I am thinking of doing to automate steps 3 and 4. Do you think this approach would work? Is there a better way to handle it?
All the images will have a transparent background via our batch job. So the idea is now to remove the body parts.
Auto-crop all the images, so the head and feet are at the topmost and bottommost edges of the image.
Code a PHP program to detect skin colors against a database list of skin colors.
Then go through each pixel of the image and detect where the skin color starts.
Starting from the top of the image, the first pixel with a skin color must be the head or neck. I remove everything above that first pixel, so I can get rid of the hair if the model is shown with a full head. Anything below could be the face and neck; I will just replace the color with the transparent background. I still don't know how to get rid of the hair on the right and left sides of the face.
Search from the bottom of the image, pixel by pixel, until a pixel matches a skin color. I remove everything from that pixel to the bottom of the page. This way I can get rid of the shoes as well.
Replace the remaining skin parts with the transparent background.
The problem is that the hands sometimes cover the clothes, and I am not sure how to handle that. Perhaps if the adjacent pixel is not the transparent background, then leave that part alone.
I also don't know how to handle clothes (a dress or blouse) that may have the same color as the skin.
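The skin-detection part of this plan can be prototyped quickly. Rather than a database of RGB skin colors, a common shortcut is a fixed range test in the YCrCb color space, which is more tolerant of lighting. A minimal sketch in Python (the idea carries over to PHP); the range is a widely used rule of thumb, not a guarantee, and it will also flag skin-colored clothes, which is exactly the ambiguity mentioned above:

```python
import cv2
import numpy as np

img = cv2.imread("model_001.png")  # placeholder file name

# Frequently cited approximate skin range in YCrCb space.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Remove speckles so stray matches inside the garment are not cleared.
skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Make the detected skin transparent.
rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
rgba[skin > 0, 3] = 0
cv2.imwrite("model_001_noskin.png", rgba)
```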
Step 3 can be semi-automated, that is, done in a way that requires far less human interaction than you are currently using. I get the impression from your question that you are not a programmer, so I'll point you to a specific off-the-shelf tool called Power Stroke. It plugs into non-free tools; there may be a GIMP equivalent, but I don't know.

Computer vision to calculate the digit (finger) ratio

If someone scans their right hand pressed against the glass of a scanner, the result would look like this:
(without the orange and white annotations). How could we determine someone's 2D:4D ratio from an image of their hand?
You've already tagged this opencv, which is great. I'd highly recommend taking a look at openFrameworks and its openCV addon, as the basic examples there will give you some great starting blocks for this.
The general approach I would take is to first distill the image into light and dark areas, detect the edges of the hand and fingers, and then simplify your data until you have lines representing the edges and tips of the fingers. Finally, take the lower inseam between the 2nd and 3rd fingers, stopping at the tip of the 2nd, and the inseam between the 3rd and 4th, stopping at the tip of the 4th; that should give you your 2D:4D ratio.
First, you'll need to process your images into black-and-white images openCV can easily handle. You may have to play with various thresholds to get both the outline of the hand and the inseams of the fingers detected. (You may even need two passes to detect both the outline and the inseams.)
While there are many approaches to feature detection, OpenCV will generally return arrays of "blobs" detected. With the right thresholds, I believe you would be able to reliably and simply find contiguous horizontal blobs (or nearly contiguous, allowing for some distance between nearby blobs) for the inseams of each finger.
A simple algorithm for detecting the inseams would be to walk through the detected blobs starting from the top left and proceeding left-to-right through the image, as if reading a page. Assemble an array of detected horizontal lines from the blobs in your image, and play with various image processing thresholds, minimum accepted line length, and distance allowances between detected blobs which you still consider part of the same line until you're satisfied you're detecting the finger edges well.
Once you have detected the horizontal lines, you can process the blobs again, looking for the vertical lines that represent the tips of the fingers (stopping when you hit the previously detected horizontal lines).
Finally, find the lines which represent the correct inseams, measure them until they intersect with the appropriate fingertips, and you should have your ratio!
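Once the detection stage has produced fingertip and inseam coordinates, the ratio itself is just two distances. A minimal sketch with hypothetical landmark points standing in for whatever your detector actually outputs:

```python
import math

def length(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

# Hypothetical landmarks in pixel coordinates (replace with detected points):
index_tip, index_base = (210, 80), (205, 310)  # 2nd finger tip / base crease
ring_tip, ring_base = (330, 95), (325, 320)    # 4th finger tip / base crease

ratio = length(index_tip, index_base) / length(ring_tip, ring_base)
print("2D:4D =", round(ratio, 3))
```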
Interesting question. I'd go about it this way:
First, binarize the image by Otsu's thresholding. Then find the skeleton of the image using a Medial-Axis Transform (MAT). This would mean doing a distance transform on the image, then using adaptive thresholding to get the local maxima in the distance transform. This gives a rough and ready skeleton of your image. Sample code from here.
The obtained hand skeleton may be slightly disconnected, in which case the OpenCV morphology "close" (not "open") operation can connect it into a single skeleton. Checking the convexity defects of the resulting hand should then give an estimate.
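A minimal OpenCV (Python) sketch of that pipeline, assuming a reasonably clean scan on a dark background; the file name and the adaptive-threshold and kernel parameters are placeholders to tune:

```python
import cv2
import numpy as np

gray = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# 1. Binarize with Otsu's threshold (hand white, background black).
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2. Distance transform: each pixel's distance to the nearest background pixel.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# 3. Adaptive thresholding keeps the local maxima: a rough medial-axis skeleton.
dist8 = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
skeleton = cv2.adaptiveThreshold(dist8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 11, -2)

# 4. "Close" small gaps so the skeleton becomes a single connected component.
skeleton = cv2.morphologyEx(skeleton, cv2.MORPH_CLOSE,
                            np.ones((5, 5), np.uint8))
cv2.imwrite("hand_skeleton.png", skeleton)
```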

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white (white space, not visible) a 1 bit.
Does anybody have any idea how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks, all, for the support.
I'm not an expert in image processing. I tried to apply edge detection using a Canny edge detector implementation; a free Java implementation can be found here. I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) to get fast processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit and no rectangle as a 1?
The image has 10 rows x 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is recognising good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create a horizontal and a vertical profile of the image, i.e. sum up the values in all columns and in all rows and store them in arrays. For an image of M x N (width x height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells will define the locations of interest (where you expect the marks).
Based on this, you can define lozenges in the form, i.e. you get the coordinates of the rectangles where you expect marks; then you just add up the pixel values in each lozenge and, based on that number, you can decide whether it holds a mark or not. (A sketch of this simple case appears after the three cases.)
Case 2: Skew (slant in the image)
Use a Fourier transform (FFT) to find the slant angle and then transform the image to remove it.
Case 3: Stretch or shrink
Pretty much the same as case 1, but the noise is higher and the reliability lower.
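A minimal numpy sketch of the simple case: build both profiles, threshold them into row/column bands, then measure the ink inside each cell. The file name and thresholds are placeholders, and it inherits the assumption above that every row and column contains at least some marks:

```python
import cv2
import numpy as np

gray = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
dark = gray < 128  # "ink" mask; the threshold is a placeholder

# Profiles: ink counts per column (M entries) and per row (N entries).
col_profile = dark.sum(axis=0)
row_profile = dark.sum(axis=1)

def bands(profile, min_count=5):
    """(start, end) index spans where the profile exceeds min_count."""
    above = np.r_[False, profile > min_count, False]
    edges = np.flatnonzero(np.diff(above.astype(int)))
    return list(zip(edges[::2], edges[1::2]))

# Each pair of a row band and a column band is one lozenge (cell of interest).
marks = [[dark[r0:r1, c0:c1].mean() > 0.5 for (c0, c1) in bands(col_profile)]
         for (r0, r1) in bands(row_profile)]
print(marks)  # True where a cell is filled
```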
Aliostad has made some good comments.
This is OMR, and you will find it much easier to get good, consistent results with a good image processing library. www.leptonica.com is a free, open-source C library that would be a very good place to start. It can handle the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference/registration marks to allow for skew and stretch, especially if you are scanning documents or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image. Then read the height and the width, and segment the image into a grid accordingly, with cells of height/rows by width/cols. Then just look for black pixels in those regions, or whatever color PIL registers that black to be. This obviously relies on the grid-like nature of the data.
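A minimal sketch of that grid approach with PIL/Pillow, assuming the image is exactly the 10 x 16 grid with no surrounding border (the file name and darkness threshold are placeholders):

```python
from PIL import Image

ROWS, COLS = 10, 16  # grid size given in the question

img = Image.open("code.png").convert("L")  # open and convert to grayscale
w, h = img.size
cw, ch = w / COLS, h / ROWS  # cell width and height

bits = []
for r in range(ROWS):
    row = []
    for c in range(COLS):
        # Crop one grid cell and measure its average brightness.
        cell = img.crop((int(c * cw), int(r * ch),
                         int((c + 1) * cw), int((r + 1) * ch)))
        mean = sum(cell.getdata()) / (cell.width * cell.height)
        row.append(0 if mean < 128 else 1)  # dark cell = 0 bit, light = 1 bit
    bits.append(row)

for row in bits:
    print("".join(map(str, row)))
```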
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like something from Wikipedia. I used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 to black (if you want the boxes darker, just increase this value) and otherwise make it completely white. Then create bounding boxes: lines where the pixels are all white. If the data isn't terribly skewed, this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
Edit2:
Denis, how large is your dataset, and how large are the images? If you have thousands of these images, it is not feasible to remove the borders (the red background and yellow bars) manually. I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise:
The previous segmentation method may be applied if you preprocess to binarize the image in this manner, in which case you need only count the number of black or white pixels and threshold after some training samples.
