I get an error: IndexError: invalid index to scalar variable - for-loop

Hello, I need help with this code. Why is it raising this error, and how can I fix the problem?
This is my code.
I am trying to make a program for image classification. The last line you can see handles the bounding boxes so that not too many boxes appear: for each detected object, only the box with the highest confidence score should qualify to stay.
I just want this photo to end up with a single box.


Comparing two similar pictures to get similarity value

I am trying to create my own app, and I need to compare two pictures.
A bit of clarification:
Each picture will contain a symbol written on a piece of paper.
I will have the "Original" picture of the piece of paper with the symbol on it.
I need to compare a newly captured picture of the symbol to the Original picture and determine whether they both show the same symbol.
The newly captured picture could be taken from a different angle.
I looked at OpenCV and Google Vision, but I am a bit lost on how to do it.
My Question
I have the "Original" picture, like this
I have a newly captured picture with the same symbol on it, but taken from a different angle, like this
I need to determine whether they are "same" (similar) or different.
Thanks in advance.

Divide rectangle to equal square pieces each containing only one specific element

Problem description
On one of the sites you periodically encounter a captcha that takes a long time to solve. You want to write a robot to automate the task.
The captcha is a photo that should be divided into rectangular fragments. All fragments must have the same area, but their side lengths may differ. Each fragment must contain exactly one road sign.
Your colleagues have already preprocessed the photo. They divided it into small square parts and defined the main object for each of them.
The photo gets to you in the form of a line:
"TRABWARH
THSCAHAW
WWBSCWAA
CACACHCR"
Object designations:
S - road sign (sign),
T - tree (tree),
R - road (road),
B - building (building),
C - car (car),
A - pet (animal),
W - pond (water),
H - man (human)
Notes
The number of road signs in the photo is always more than 1 and less than 10.
Each fragment must be a rectangle.
The area of each of the resulting fragments should be the same, but the size may vary.
In the output array, the fragments should go from top to bottom and from left to right (relative to the upper left corner).
If there are multiple solutions, priority is given to the one with the larger width of the first fragment that differs.
If there is no solution, return an empty array.
Simple test cases:
"TRABWARH\nTHSCAHAW\nWWBSCWAA\nCACACHCR"
["TRABWARH\nTHSCAHAW","WWBSCWAA\nCACACHCR"]
"CSRARHAR\nCWAHCBSW\nABWBSWBA\nRBSBTABH"
["CSRARHAR","CWAHCBSW","ABWBSWBA","RBSBTABH"]
"HSRSTBHC\nCAWTRTBT\nWBATSTRA\nTWRBRTRR\nRWTABSHB\nTWCBWBCA"
["HS\nCA\nWB\nTW\nRW\nTW","RSTBHC\nWTRTBT","ATSTRA\nRBRTRR","TABSHB\nCBWBCA"]
"TSRSBWAC\nASCSWBTC\nTTAHTABC\nAHWTRWWA"
[]
My solution is just to compare all valid rectangles, but it's time-consuming. Any suggestions? Thank you.
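Since the tie-break rule prefers the widest first differing fragment, a natural improvement over comparing all valid rectangles is a backtracking search that always carves a fragment off the topmost-leftmost uncovered cell, trying widths in decreasing order and returning the first complete tiling it finds. A rough Python sketch (the function name `solve_captcha` is mine):

```python
def solve_captcha(photo: str) -> list:
    """Split the photo into equal-area rectangles, each with exactly one sign 'S'.
    Backtracking: fill the topmost-leftmost free cell, widest rectangle first."""
    grid = photo.split("\n")
    rows, cols = len(grid), len(grid[0])
    signs = sum(row.count("S") for row in grid)
    if signs == 0 or (rows * cols) % signs:
        return []
    area = rows * cols // signs
    # candidate widths, widest first, to satisfy the tie-break rule
    widths = [w for w in range(min(cols, area), 0, -1) if area % w == 0]
    covered = [[False] * cols for _ in range(rows)]
    result = []

    def dfs() -> bool:
        # find the topmost-leftmost uncovered cell; it must be a fragment's corner
        free = next(((r, c) for r in range(rows) for c in range(cols)
                     if not covered[r][c]), None)
        if free is None:
            return True          # the whole photo is tiled
        r, c = free
        for w in widths:
            h = area // w
            if c + w > cols or r + h > rows:
                continue
            cells = [(r + i, c + j) for i in range(h) for j in range(w)]
            if any(covered[y][x] for y, x in cells):
                continue
            if sum(grid[y][x] == "S" for y, x in cells) != 1:
                continue         # each fragment needs exactly one road sign
            for y, x in cells:
                covered[y][x] = True
            result.append("\n".join(grid[r + i][c:c + w] for i in range(h)))
            if dfs():
                return True
            result.pop()         # backtrack
            for y, x in cells:
                covered[y][x] = False
        return False

    return result if dfs() else []
```

Fragments land in `result` already in the required output order (top to bottom, left to right by upper-left corner), because the search always starts a new fragment at the topmost-leftmost free cell.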

Weird behaviour of bwareafilt in MATLAB and what algorithm does it use?

I have the following questions:
What is the algorithm that bwareafilt uses?
Weird behaviour: when the input matrix is totally black, I get the following error:
Error using bwpropfilt (line 73)
Internal error: p must be positive.
Error in bwareafilt (line 33)
bw2 = bwpropfilt(bw, 'area', p, direction, conn);
Error in colour_reception (line 95)
Iz=bwareafilt(b,1);
Actually, I am using this function on snapshots taken from a webcam, but when I block my webcam completely, I get the above error.
So I believe the error is due to some internal implementation mistake. Is this the case? How do I overcome it?
Let's answer your questions one at a time:
What algorithm does bwareafilt use?
bwareafilt is a function from the Image Processing Toolbox that accepts a binary image and determines the unique objects in it. To find unique objects, a connected-components analysis is performed in which each object is assigned a unique ID. You can think of this as performing a flood fill on each object individually. A flood fill can be performed using a variety of algorithms - among them depth-first search, where you consider the image as a graph in which each pixel is a node with edges to its neighbouring pixels. The flood fill visits all of the pixels that are connected to each other within one object until there are no more pixels to visit, then proceeds to the next object and repeats the same algorithm until it runs out of objects.
After, it determines the "area" for each object by counting how many pixels belong to that object. Once we determine the area for each object, we can either output an image that retains the top n objects or filter the image so that only those objects that are within a certain range of areas get retained.
Given your code above, you are trying to output an image containing only the largest object in the binary image. Therefore, you are using the former, with n = 1.
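The idea just described (label connected components via flood fill, count each object's pixels, keep the top n) can be sketched in plain Python. This is only an illustration of the concept, not MATLAB's actual implementation; the function name is mine:

```python
from collections import deque

def keep_largest_object(bw):
    """Keep only the largest 4-connected object in a binary image
    (given as a list of lists of 0/1) - a rough stand-in for bwareafilt(bw, 1)."""
    rows, cols = len(bw), len(bw[0])
    labels = [[0] * cols for _ in range(rows)]
    areas = {}           # label -> pixel count
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if bw[r][c] and not labels[r][c]:
                next_label += 1
                # flood fill (BFS) to label every pixel of this object
                labels[r][c] = next_label
                queue = deque([(r, c)])
                count = 0
                while queue:
                    y, x = queue.popleft()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bw[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                areas[next_label] = count
    if not areas:        # all-black image: the very case that trips up bwareafilt
        return [row[:] for row in bw]
    biggest = max(areas, key=areas.get)
    return [[1 if labels[r][c] == biggest else 0 for c in range(cols)]
            for r in range(rows)]
```

Note that the all-black case is handled explicitly here by returning the input unchanged, which is exactly the guard discussed below for bwareafilt.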
Weird behaviour with bwareafilt
Given the above description of bwareafilt and your intended application:
Actually, I am using this function to take snapshots from a webcam, but when I block my webcam totally, then I get above following error.
... the error is self-explanatory. When you cover the webcam, the entire frame is black and no objects are found in the image. Returning the object with the largest area then makes no sense, because there are no objects to return in the first place. That's why you get the error: you are asking bwareafilt to return an image with the largest object, but your image contains no objects at all.
As such, if you want to use bwareafilt, what I suggest is you check to see if the entire image is black first. If it isn't black, then go ahead and use bwareafilt. If it is, then skip it.
Do something like this, assuming that b is the image you're trying to process:
if any(b(:))
Iz = bwareafilt(b, 1);
else
Iz = b;
end
The above code uses any to check to see if there are any white pixels in your image b that are non-zero. If there are, then bwareafilt should be appropriately called. If there aren't any white pixels in the image, then simply set the output to be what b originally was (which is a dark image anyway).
You can add conditions to make your function robust to any input - for example, by adding a simple check that first tests whether the input image is all black, and only calling your object-filtering function when it is not.

Find the number of 'T'-s in a picture

Given the following picture :
How can I find the number of T's in the picture?
I'm not after MATLAB code; however, I would appreciate an algorithm or
some kind of explanation of how to approach the problem.
Regards
Simple template matching would probably do it. You simply cut out one of the Ts and then find the RMS error signal for each shift of the template (the T).
Pseudo code
for each x-position of T in image
    for each y-position of T in image
        err(x,y) = sqrt(sum(sum((T - image(x:x+Tsizex, y:y+Tsizey)).^2)))
    end
end
errBinary = err < detectionThreshold
Now, each 1 in errBinary is a detection. Depending on the resolution of the image, you might get a cluster of 1s for each T in the image. One way to fix that could be to iteratively pick a 1 and then clear all other 1s in its neighbourhood. In this way you are actually defining a limit on how close two Ts can be and still be detected as two individual Ts.
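That clean-up step (pick a detection, clear its neighbourhood, repeat) can be sketched as follows. This is my own minimal Python illustration, with `min_dist` as the assumed minimum spacing between two distinct Ts:

```python
def suppress_neighbours(detections, min_dist):
    """Greedy suppression: keep a detection, drop all others within min_dist
    (Chebyshev distance), then repeat with the remaining detections.
    `detections` is a list of (x, y) positions where errBinary was 1."""
    kept = []
    remaining = sorted(detections)          # deterministic scan order
    while remaining:
        x0, y0 = remaining.pop(0)
        kept.append((x0, y0))
        # clear every other detection in the neighbourhood of the kept one
        remaining = [(x, y) for x, y in remaining
                     if max(abs(x - x0), abs(y - y0)) >= min_dist]
    return kept
```

For example, two clusters of detections collapse to one kept point each, so the final count of kept points is the count of Ts.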
EDIT:
Explanation of template comparison:
Basically, what this method does is compare the reference template (a small image of a T in this case) to every possible location in the original image. For every location, the error is calculated as a scalar RMS value of the difference of the two. So the two for-loops simply pick all possible sub-images with the size of the template from the original image and use them to build an error surface. A small value in this surface means a good match between the template and the sub-image at that particular location. The location of the match in the original image corresponds to the location of the minimum in the error surface.
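The same comparison can be sketched in plain Python (a slow, loop-based illustration; real code would use MATLAB, NumPy, or OpenCV's matchTemplate). Here the sum of squares is divided by the template size so the value is a true RMS; the function name is mine:

```python
import math

def template_rms_map(image, template):
    """Slide `template` over `image` (both lists of lists of numbers) and
    return the RMS error at every valid offset - the 'error surface'."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    err = [[0.0] * (iw - tw + 1) for _ in range(ih - th + 1)]
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            # squared difference between template and the sub-image at (x, y)
            sq = sum((template[j][i] - image[y + j][x + i]) ** 2
                     for j in range(th) for i in range(tw))
            err[y][x] = math.sqrt(sq / (th * tw))
    return err
```

Thresholding this surface (err < detectionThreshold) then yields the binary detection map from the pseudocode above.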
Regards

Testing whether an object is inside another object

I'm writing an image-processing application which recognizes objects based on their shapes. The issue I'm facing is that one object can be composed of one or more sub-objects, e.g. a human face is an object composed of eyes, a nose and a mouth.
Applying image segmentation creates separate objects, but does not tell whether one object is inside another object.
How can I check efficiently whether an object is contained inside another object?
For now my algorithm is what I would call an 8-point test, in which you choose 8 points at the 8 corners and check whether all of them are inside the object. If they are, you can be fairly certain that the entire object is inside the other object. But it has certain limitations, or certain areas of failure.
Also, just because an inner object is inside another object, does that mean I should treat it as part of the outer object?
One way to test whether one object is fully inside another is to convert both into binary masks using poly2mask (in case they aren't binary masks already), and to test that all pixels of one object are part of the other object.
%# convert object 1 defined by points [x1,y1] into mask
msk1 = poly2mask(x1,y1,imageSizeX,imageSizeY);
%# do the same for object 2
msk2 = poly2mask(x2,y2,imageSizeX,imageSizeY);
%# check whether object 1 is fully inside object 2
oneInsideTwo = all(msk2(msk1));
However, is this really necessary? The eyes should always be close to the center of the face, and thus, the 8-point-method should be fairly robust at identifying whether you found an eye that is part of the face or whether it is a segmentation artifact.
Also, if an eye is on a face, then yes, you would consider it as part of that face - unless you're analyzing pictures of people that are eating eyes, in which case you'd have to test whether the eye is in roughly the right position on the face.
In sum, the answer to your questions is a big "depends on the details of your application".
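For readers outside MATLAB: the all(msk2(msk1)) containment test above is simply a subset check over pixels. A minimal Python analogue, assuming the masks are equal-sized grids of booleans (the function name is mine):

```python
def fully_inside(mask1, mask2):
    """True if every True pixel of mask1 is also True in mask2,
    i.e. object 1 is fully inside object 2. Masks are equal-sized
    lists of lists of booleans."""
    return all(m2
               for row1, row2 in zip(mask1, mask2)
               for m1, m2 in zip(row1, row2)
               if m1)   # only look at pixels belonging to object 1
```

A partial-overlap version could instead count how many of object 1's pixels fall inside object 2 and compare that fraction to a threshold.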
