Solid spheres penetrating the solid walls of the letter L - animation

I have been trying to tweak properties and settings but cannot find a solution.
The problem is that the solid RED balls are visible through the GREEN walls, which is very much unwanted.
Please have a look at the image for a better understanding.
I want the red balls to come out through the opening that has been made, not through the walls of the letter L. Please suggest appropriate measures and also explain why this is happening.

It might help to play around with the dynamics settings. You can get there by pressing CTRL-D for the project settings and selecting the "Dynamics" -> "Expert" tab. Try increasing the "Steps per Frame" value. This sets how often the dynamics are calculated per frame and gives you a more precise calculation - of course it does take more time for the dynamics to calculate.
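For intuition on why this works: the solver only samples positions at discrete sub-steps, so a fast ball can jump clean over a thin wall between two samples ("tunneling"). Here is a toy sketch of the effect in plain Python (not Cinema 4D's actual solver; the speed and wall dimensions are made up):

```python
# Hypothetical illustration: why more dynamics steps per frame prevent
# fast objects from tunneling through thin walls.
def crosses_wall(x0, velocity, steps_per_frame, wall_x=100.0, thickness=10.0):
    """Return True if any sub-step sample lands inside the wall."""
    dt = 1.0 / steps_per_frame
    x = x0
    for _ in range(steps_per_frame):
        x += velocity * dt
        if wall_x <= x <= wall_x + thickness:
            return True  # collision detected at this sub-step
    return False

# With 1 step/frame the ball jumps from x=0 to x=300, skipping the wall;
# with 30 steps/frame a sample lands inside the wall and is caught.
print(crosses_wall(0.0, 300.0, steps_per_frame=1))   # False -> tunnels
print(crosses_wall(0.0, 300.0, steps_per_frame=30))  # True  -> detected
```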

Related

recovering order from samples

I have a pool of balls with 30 different color patterns (solid green, green and red striped, etc.), and I also have 6 boxes ordered from 1 to 6. I randomly select 6 balls from the pool and put one ball in each box, so that each box contains exactly one ball. The color pattern of each ball may be different from or the same as the patterns of the balls in the other boxes. Now I want you to guess the color pattern of the ball in each box by doing the following:
Each time you make a request, I will randomly select 3 of the balls and display them in front of you in the same order as their boxes. You can make unlimited requests.
The problem is how to tell the color pattern of the ball in each box with the fewest requests. I feel like there should be a well-known algorithm for this problem, but I cannot find one. Has anybody seen this before?
I think there is a lot of statistics in this. First of all, I would make the simplifying assumption that (if you don't know which colour balls are present) the only colours and patterns available are those which you have seen.
Now write down, or work out how to calculate, a formula that gives you the probability of the observed data given the listing of which balls are present in which boxes.
Now all you have to do is find the combination of balls in boxes that gives the highest probability to the observed data, and hope that as you get more and more data the right answer wins out.
You could think of this as a generic optimisation problem, and try hill-climbing from multiple random starts, or genetic programming, or whatever your favourite heuristic is.
Or you could do a bit more web-searching about statistics and recognise that this is a missing data problem, where the hidden data is the knowledge of which box each sampled ball came from. Statisticians often solve hidden data problems with the EM algorithm. There is an introduction for mathematicians at http://www.inf.ed.ac.uk/teaching/courses/pmr/docs/EM.pdf. Your problem can be thought of as a simple case of a hidden Markov model, with the hidden state being the box which produced a particular ball.
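For concreteness, here is a minimal sketch of the likelihood/hill-climbing idea above, under the setup in the question (6 boxes; each request reveals the balls of 3 uniformly chosen boxes, in box order, with the box identities hidden). The mutation move and iteration count are arbitrary choices of mine, not part of any standard algorithm:

```python
import random
from itertools import combinations
from math import log

TRIPLES = list(combinations(range(6), 3))  # the 20 possible hidden box subsets

def log_likelihood(assignment, observations):
    """Sum of log P(observation | assignment); each observation is a
    3-tuple of patterns shown in box order."""
    total = 0.0
    for obs in observations:
        # count hidden box-triples that would display exactly this triple
        matches = sum(1 for t in TRIPLES
                      if tuple(assignment[b] for b in t) == obs)
        if matches == 0:
            return float("-inf")  # this assignment cannot explain the data
        total += log(matches / len(TRIPLES))
    return total

def hill_climb(observations, iters=20000):
    # simplifying assumption from above: only patterns we have seen exist
    seen = sorted({p for obs in observations for p in obs})
    best = [random.choice(seen) for _ in range(6)]
    best_ll = log_likelihood(best, observations)
    for _ in range(iters):
        cand = best[:]
        cand[random.randrange(6)] = random.choice(seen)  # mutate one box
        ll = log_likelihood(cand, observations)
        if ll >= best_ll:
            best, best_ll = cand, ll
    return best
```

Multiple random restarts, as suggested, guard against the climb getting stuck in a local optimum.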

How to count the number of spots in this image?

I am trying to count the number of hairs transplanted in the following image. So practically, I have to count the number of spots I can find in the center of the image.
(I've uploaded the inverted image of a bald scalp on which new hairs have been transplanted because the original image is bloody and absolutely disgusting! To see the original non-inverted image click here. To see the larger version of the inverted image just click on it). Is there any known image processing algorithm to detect these spots? I've found out that the Circle Hough Transform algorithm can be used to find circles in an image, I'm not sure if it's the best algorithm that can be applied to find the small spots in the following image though.
P.S. According to one of the answers, I tried to extract the spots using ImageJ, but the outcome was not satisfactory enough:
I opened the original non-inverted image (Warning! it's bloody and disgusting to see!).
Split the channels (Image > Color > Split Channels) and selected the blue channel to continue with.
Applied Closing filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: Closing, Element: Square, Radius: 2px
Applied White Top Hat filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: White Top Hat, Element: Square, Radius: 17px
However I don't know what to do exactly after this step to count the transplanted spots as accurately as possible. I tried to use (Process > Find Maxima), but the result does not seem accurate enough to me (with these settings: Noise tolerance: 10, Output: Single Points, Excluding Edge Maxima, Light Background):
As you can see, some white spots have been ignored and some white areas that are not actually hair transplant spots have been marked.
What set of filters do you advise to accurately find the spots? Using ImageJ seems a good option since it provides most of the filters we need. Feel free however, to advise what to do using other tools, libraries (like OpenCV), etc. Any help would be highly appreciated!
I do think you are trying to solve the problem in a somewhat wrong way. That might sound groundless, so I'd better show my results first.
Below I have a crop of your image on the left and the discovered transplants on the right. Green is used to highlight areas with more than one transplant.
The overall approach is very basic (I will describe it later), but it still provides close-to-accurate results. Please note, it was a first try, so there is a lot of room for improvement.
Anyway, let's get back to the initial statement that your approach is wrong. There are several major issues:
the quality of your image is awful
you say you want to find spots, but actually you are looking for hair transplant objects
you completely ignore the fact that the average head is far from flat
it does look like you think filters will add some important details to your initial image
you expect algorithms to do magic for you
Let's review all these items one by one.
1. Image quality
It might be a very obvious statement, but before the actual processing you need to make sure you have the best possible initial data. You might spend weeks trying to find a way to process the photos you have without any significant achievements. Here are some problematic areas:
I bet it is hard for you to "read" these crops, despite the fact that you have the most advanced object recognition algorithms in your brain.
Also, your time is expensive, and you still need the best possible accuracy and stability. So, for any reasonable price, try to get proper contrast, sharp edges, and better colors and color separation.
2. Better understanding of the objects to be identified
Generally speaking, you have 3D objects to be identified, so you can analyze shadows in order to improve accuracy. BTW, it is almost like a Mars surface analysis :)
3. The form of the head should not be ignored
Because of the shape of the head you have distortions. Again, in order to get proper accuracy those distortions should be corrected before the actual analysis. Basically, you need to flatten the analyzed area.
(3D model source)
4. Filters might not help
Filters do not add information, but they can easily remove some important details. You've mentioned the Hough transform, so here is an interesting question: Find lines in shape.
I will use this question as an example. Basically, you need to extract geometry from a given picture. Lines in a shape look a bit complex, so you might decide to use skeletonization.
All of a sudden, you have more complex geometry to deal with and virtually no chance of understanding what was actually in the original picture.
5. Sorry, no magic here
Please be aware of the following:
You must try to get better data in order to achieve better accuracy and stability. The model itself is also very important.
Results explained
As I said, my approach is very simple: the image was posterized and then I used a very basic algorithm to identify areas of a specific color.
Posterization can be done in a cleverer way, area detection can be improved, etc. For this PoC I just have a simple rule to highlight areas with more than one implant. With the areas identified, more advanced analysis can be performed.
Anyway, better image quality will let you use even a simple method and get proper results.
Finally
How did the clinic manage to get Yondu as a client? :)
Update (tools and techniques)
Posterization - GIMP (default settings, min colors)
Transplant identification and visualization - Java program, no libraries or other dependencies
With the areas identified, it is easy to find the average size, compare each area to it, and mark significantly bigger areas as multiple transplants.
Basically, everything is done "by hand": horizontal and vertical scans, with intersections giving areas. Vertical lines are sorted and used to restore the actual shape. The solution is homegrown and the code is a bit ugly, so I don't want to share it, sorry.
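Since the original code is private, here is my own minimal Python sketch of the same effect: posterize to a binary mask, collect connected areas, and flag areas well above the average size as multiple transplants. A plain flood fill stands in for the scan-line/intersection bookkeeping, and the 1.8 "multiple transplant" factor is a made-up placeholder:

```python
from collections import deque

def find_areas(mask):
    """mask: 2D list of 0/1 (posterized image). Returns pixel counts of
    the 4-connected areas, collected with a breadth-first flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                size, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    size += 1
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(size)
    return areas

def count_transplants(mask, multi_factor=1.8):
    """Estimate transplants; areas much bigger than average count double."""
    areas = find_areas(mask)
    if not areas:
        return 0
    avg = sum(areas) / len(areas)
    return sum(2 if a > multi_factor * avg else 1 for a in areas)
```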
The idea is pretty obvious and well explained (at least I think so). Here is an additional example with a different scan step:
Yet another update
A small piece of code, developed to verify a very basic idea, evolved a bit, so now it can handle 4K video segmentation in real time. The idea is the same: horizontal and vertical scans, areas defined by intersecting lines, etc. Still no external libraries, just a lot of fun and a bit more optimized code.
Additional examples can be found on YouTube: RobotsCanSee
or follow the progress in Telegram: RobotsCanSee
I've just tested this solution using ImageJ, and it gave a good preliminary result:
On the original image, for each channel (a rough OpenCV transcription follows this list):
Small (radius 1 or 2) closing in order to get rid of the hairs (black part in the middle of the white one)
White top-hat of radius 5 in order to detect the white part around each black hair.
Small closing/opening in order to clean a little bit the image (you can also use a median filter)
Ultimate erode in order to count the number of white blobs remaining. You can also certainly use a LoG (Laplacian of Gaussian) or a distance map.
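Here is that rough OpenCV transcription in Python. The kernel sizes only approximate the ImageJ radii, and counting distance-map peaks stands in for the ultimate-erode plugin, so treat it as a sketch to tune rather than an exact port:

```python
import cv2
import numpy as np

# one channel of the original image (filename is a placeholder)
channel = cv2.imread("scalp.png", cv2.IMREAD_GRAYSCALE)

# 1. small closing to remove the dark hair inside each white spot
k3 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
closed = cv2.morphologyEx(channel, cv2.MORPH_CLOSE, k3)

# 2. white top-hat (radius ~5) keeps only small bright structures
k11 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
tophat = cv2.morphologyEx(closed, cv2.MORPH_TOPHAT, k11)

# 3. small cleanup (a median filter, as suggested above)
cleaned = cv2.medianBlur(tophat, 3)

# 4. in place of ultimate erode: threshold, distance map, count peaks
_, binary = cv2.threshold(cleaned, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 3)
_, peaks = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
n_labels, _ = cv2.connectedComponents(peaks.astype(np.uint8))
print("estimated spots:", n_labels - 1)  # label 0 is the background
```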
[EDIT]
You don't detect all the white spots using the maxima function because, after the closing, some zones are flat, so the maximum is not a point but a zone. At this point, I think that an ultimate opening or an ultimate erosion would give you the center of each white spot. But I am not sure there is a function/plugin doing it in ImageJ. You can take a look at Mamba or SMIL.
An H-maxima transform (after the white top-hat) may also clean up your results a little more and improve the contrast between the white spots.
As Renat mentioned, you should not expect algorithms to do magic for you; however, I'm hopeful you can come up with a reasonable estimate of the number of spots. Here, I'm going to give you some hints and resources; check them out and get back to me if you need more information.
First, I'm fairly hopeful about morphological operations, but I think a perfect pre-processing step may push the accuracy they yield dramatically. I want to put my finger on the pre-processing step. Thus I'm going to work with this image:
That's the idea:
Collect and concentrate the mass around the spot locations. What do I mean by concentrating the masses? Let's open the book from the other side: as you see, the provided image contains some salient spots surrounded by some noisy gray-level dots.
By dots, I mean the pixels that are not part of a spot but whose gray values are larger than zero (pure black) - the ones scattered around the spots. It is clear that if you clear these noisy dots, you will surely come up with a good estimate of the spots using other processing tools such as morphological operations.
Now, how to make the image sharper? What if we could make the dots move toward their nearest spots? This is what I mean by concentrating the masses over the spots. Doing so, only the prominent spots will remain in the image, and hence we have made a significant step toward counting them.
How to do the concentrating thing? Well, the idea I just explained is presented in this paper, whose code is luckily available. See section 2.2. The main idea is to use a random walker that walks on the image forever. The formulation is stated such that the walker visits the prominent spots far more often, which leads to identifying them. The algorithm is modeled as a Markov chain, and the equilibrium hitting times of the ergodic Markov chain hold the key to identifying the most salient spots.
What I described above is just a hint and you should read that short paper to get the detailed version of the idea. Let me know if you need more info or resources.
It is a pleasure to think about such interesting problems. Hope it helps.
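To make the intuition concrete (this is not the paper's algorithm, just a toy illustration of my own): if a walker on the pixel grid prefers bright pixels, its long-run mass concentrates where bright pixels cluster. For the reversible walk below, with edge weights w(u,v) = I(u)*I(v), the stationary mass of a pixel is proportional to its weighted degree, so isolated gray dots get almost no mass while the spots keep theirs:

```python
import numpy as np

def stationary_mass(image):
    """image: 2D float array in [0,1]. Stationary mass of a random walk
    whose edge weights are products of the two endpoint intensities."""
    I = image.astype(float)
    padded = np.pad(I, 1)  # zero border so edge pixels have fewer neighbors
    # sum of the 4 neighbors' intensities for every pixel
    neighbor_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:])
    mass = I * neighbor_sum            # weighted degree of each pixel
    return mass / mass.sum()           # normalize to a distribution

# toy image: one bright 2x2 "spot" plus two faint, isolated noise dots
img = np.zeros((8, 8))
img[2:4, 2:4] = 1.0
img[6, 1] = img[1, 6] = 0.2
print(stationary_mass(img).round(3))   # mass piles up on the 2x2 spot
```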
You could do the following:
1. Threshold the image using cv::threshold.
2. Find connected components using cv::findContours.
3. Reject connected components larger than a certain size, as you seem to be concerned with small circular regions only.
4. Count all the remaining valid connected components.
Hopefully, you have a decent approximation of the actual number of spots.
To be statistically more accurate, you could repeat 1-4 for a range of thresholds and take the average.
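A minimal sketch of those steps with OpenCV's Python bindings; the threshold sweep and the 500-pixel size cutoff are guesses of mine to tune, not values from the answer:

```python
import cv2
import numpy as np

img = cv2.imread("spots.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

counts = []
for t in range(120, 200, 10):                          # step 5: sweep thresholds
    _, bw = cv2.threshold(img, t, 255, cv2.THRESH_BINARY)        # step 1
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,        # step 2
                                   cv2.CHAIN_APPROX_SIMPLE)
    small = [c for c in contours if cv2.contourArea(c) < 500]    # step 3
    counts.append(len(small))                                    # step 4
print("estimated spots:", int(np.mean(counts)))        # average over thresholds
```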
This is what you get after applying Unsharp Mask (radius 22, amount 5, threshold 2) to your image.
This increases the contrast between the dots and the surrounding areas. I used the ballpark assumption that the dots are somewhere between 18 and 25 pixels in diameter.
Now you can take a local maximum of white as a "dot" and fill it in with a black circle until a circular neighborhood of the dot (a circle of radius 10-12) erases the dot. This should let you "pick off" the dots joined to each other in clusters of more than 2. Then look for local maxima again. Rinse and repeat.
The actual "dot" areas are in stark contrast to the surrounding areas, so this should let you pick them off as well as you would by eyeballing it.

Arrange blocks by 2D property without overlap

My app needs to show several buttons, without overlap, and preferably without scrolling or zooming. They must be big enough to poke with a finger and read the text. Button width depends on its text length, and the height is constant. The screen size is known.
Each button represents a food about which I know some nutritional information. I'll calculate a protein:carb ratio and a fat content, both ranging from 0% to 100%.
I want to put the buttons close to a position that reflects their nutritional content: e.g. protein-rich at the top, carby at the bottom, fatty on the right and lean on the left. So cake would be bottom right and meats would be somewhere on the top edge.
Often, there'll be overlap and I'll have to nudge them away from each other.
The puzzle is to invent an algorithm for that nudging. The desiderata in order of priority are:
1) Readable and pokeable size, no overlap.
2) No scrolling or zooming required, although it'll happen when there are so many buttons that they could never fit on the screen even if we didn't care where they were.
3) Buttons should be close to where the user would look based on knowing the nutritional content of the food.
Incidentally, I'm using JS on a smartphone, not Prolog or the like.
(There are some seeming dupes, but no solutions. One is about diagonal stalks, another just advocates throwing it at a game engine, but most are devoid of answers.)
The MArVL group at Monash University works on constraint-based layout. Some of their software might be applicable to your problem.
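Failing that, a common do-it-yourself fallback is iterative pairwise separation: place each button at its ideal nutritional coordinates, then repeatedly push overlapping pairs apart along the cheaper axis and clamp to the screen. A minimal sketch (in Python for brevity; it ports directly to JS, and the iteration cap and 1-pixel padding are arbitrary):

```python
from dataclasses import dataclass

@dataclass
class Button:
    x: float  # fat content mapped to screen x
    y: float  # protein:carb ratio mapped to screen y
    w: float
    h: float

def overlap(a, b):
    """Overlap depths along x and y; both positive means a real overlap."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return dx, dy

def nudge(buttons, screen_w, screen_h, iters=200):
    for _ in range(iters):
        moved = False
        for i, a in enumerate(buttons):
            for b in buttons[i + 1:]:
                dx, dy = overlap(a, b)
                if dx > 0 and dy > 0:
                    moved = True
                    if dx < dy:                      # separate horizontally
                        shift = dx / 2 + 1
                        sign = -1 if a.x < b.x else 1
                        a.x += sign * shift
                        b.x -= sign * shift
                    else:                            # separate vertically
                        shift = dy / 2 + 1
                        sign = -1 if a.y < b.y else 1
                        a.y += sign * shift
                        b.y -= sign * shift
        for btn in buttons:                          # keep everything on-screen
            btn.x = min(max(btn.x, 0), screen_w - btn.w)
            btn.y = min(max(btn.y, 0), screen_h - btn.h)
        if not moved:
            break
    return buttons
```

Because overlaps are resolved by moving buttons away from their ideal spots, this naturally trades priority 3 away to satisfy priority 1.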

Alien tiles heuristic function

I am trying to find a good A* heuristic function for the problem "alien tiles", found at www.alientiles.com for a uni project.
In alien tiles you have a board with NxN tiles, all colored red. By clicking on a tile, all tiles in the same row and column advance by a color, the color order being red->green->blue->purple, resetting to red after purple. The goal is to change all tiles to the specified colors. The simplest goal state is all the tiles going from red to green, blue or purple. The board doesn't have to be 7x7 as the site suggests.
I've thought of summing the difference between each tile and its target and dividing by 2N-1, the number of tiles a single click changes on an NxN board (sketched below), or finding possible patterns of clicks as the minimum number of clicks, but neither has been working well. I can't think of a way to apply relaxation to the problem or divide it into sub-problems either, since a single click affects an entire row and column.
Of course I'm not asking anyone to find a solution for me, but some tips or some relevant, simpler problems that I can look at would help (Rubik's Cube is one example I'm looking at).
Thanks in advance.
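For reference, the summed-difference idea can be written so that it is admissible: each click advances exactly 2N-1 tiles by one color step, and a tile needing d steps must receive at least d of them, so at least ceil(total needed steps / (2N-1)) clicks are required. A minimal sketch:

```python
from math import ceil

def heuristic(board, target, n):
    """board, target: flat n*n lists of color indices 0..3 (red..purple).
    Admissible lower bound on the number of clicks remaining."""
    needed = sum((t - b) % 4 for b, t in zip(board, target))
    return ceil(needed / (2 * n - 1))

# example: 3x3 board, all red (0), target all green (1):
# 9 single-step advances, each click covers 5 tiles -> h = ceil(9/5) = 2
print(heuristic([0] * 9, [1] * 9, 3))  # 2
```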
The problem you are trying to solve is similar to the games Nim and Focus. Please have a look at them. Solutions can be found in Stuart J. Russell's book under the heuristics section. Hope this helps.
Although it is a relatively 'dumb' way of thinking about the problem, one heuristic mechanism I have found that drastically cuts down on the number of states A* expands tries to figure out a relationship between the cell that was clicked most recently and the number of states that clicking it again would expand. It's like telling A*: "If you clicked a cell on your last move, try clicking a different one this time." Obviously, in special scenarios
(e.g. having the whole board in your target colour, say green, except a purple cross, where clicking the center of the cross twice changes the cross to green and you are done)
this way of thinking is actually detrimental. But it is a place to start.
Please let me know if you figure anything out, as it is something I am working on as well.

Can I use red/green text colors to provide additional information?

I have a text outline tree where I display a lot of information. There are already up to three icons for each item (one in front, one after the text, and one at the left side). But I still need to show more.
So I used different text colors. Unfortunately, only red = stop/forbidden/protected and green = okay/valid/allowed are universally known. I mix them with black/blue, which stands for items with no available information.
My question is: do red/green colorblind people recognize the difference between these colored texts?
I am less concerned about fully colorblind people, as they are in the 0.001% range, while red/green color blindness is seriously common, in the single-digit percent range.
EDIT:
I heard from someone who is red/green colorblind that he can tell whether a text is green or red, and that only when it is a real mixed color salad is he unable to differentiate. So the question is: are text items in a GUI list/tree far enough apart that red/green colorblind people can see them as different shades in whatever color they see?
NO, we do NOT see the difference!
I worked at a company that used red/green for information regularly, and it was very confusing.
But, do as they do in traffic lights, and add a little blue to the green color.
FF0000 and FF00CC are easy to tell apart (for me, at least).
Also, it's easier to tell larger text/things apart. Small pixels or pixel-thin lines are harder to see the color on.
Red/green alone is bad for this reason. My boss actually has red/green colorblindness, so he points out anytime we mess up and he can't distinguish something.
Consider:
an outline box (e.g. dashed)
background color
bold/non-bold text
yet another icon
a variation or overlay on an existing icon
a mouseover hint (title)
A mouseover is actually a generally useful thing - while an icon may be intuitive to you, it might not be to someone using your app for the first time.
I agree with the suggestion to make it black-and-white to try it out, though there are also a couple of Firefox extensions that do this: https://addons.mozilla.org/en-US/firefox/search?q=colorblind&cat=all -- unfortunately, none seem to be updated for 3.5.
Colors may give cues, but they should not be the only way to find that information.
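As an aside, the black-and-white test suggested above takes only a few lines with Pillow in Python (the filenames here are placeholders):

```python
from PIL import Image

shot = Image.open("ui_screenshot.png")
gray = shot.convert("L")             # luminance-only version of the UI
gray.save("ui_screenshot_gray.png")
# If two states are indistinguishable here, color is carrying
# information that some users will not receive.
```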
Personal
I have a red/green deficiency; here are some of my experiences. Keep in mind that color vision is very individual, even among normal-sighted people. Some of this may be true in general.
Telling colors apart is easier than identifying colors
Close-by large areas are easier to tell apart.
Trying to tell colors is strenuous; it's less of a cue, more of a task (i.e. a color-coded UI exhausts rather than helps me).
I can tell RGB(255,255,0) from RGB(0,255,0) and identify them when they are side by side, and I can tell you "which is greener". However, I can't play most puzzle games that use these for color-coding pieces.
Mixed colors give the most problems.
An intern was once confused by my problem, because the desktop background - some paintshop landscape - was all pink. I didn't know.
Imagine a list control (like a Windows Explorer file list) where file names are black, red or green:
Telling red apart from black is hard. I usually don't notice until someone tells me. Even on decent TFTs, from a slight angle they wash out. I can identify them correctly if someone tells me "some of these are red" and I look hard.
I once deleted 3 days of work because I failed to notice that the list of files on the left was red, not black (and this was a font with a 2-pixel line width!).
With a single red or green item among black ones, it's hard to tell whether it's red or green.
People deal differently with deficiencies, and though I've never found a color vision deficient person to be ashamed of it or to hide it, we still don't like to have it rubbed in. Some people, incidentally, get very argumentative because they can't imagine I can tell apart X and Y easily, but not X and Z. My best way to quiet them is to ask them to explain the color "blue" without using any color names.
General thoughts
Well-chosen colors make valuable cues, improving UI transparency immensely. Don't stop using them altogether - just allow people who don't see them easily to still use your software.
Allow customization of colors, and provide color schemes to pick quickly. I really don't want to select 14 individual colors.
Using slightly-off colors often looks much more professional than the "primaries" (one or two channels at 255, the last at 0).
Color perception differs a lot. What looks good to you on your monitor might suck everywhere else.
First, there's color reproduction of the monitor - they have huge differences in linearity, balance and isotropy. Unless you use the same brand from the same batch, or use properly calibrated professional monitors, they make for the biggest difference.
Second, your sensors: there is roughly a 40/60 split among people whose red receptors work at different wavelengths. There are many more subtle personal differences.
Third, it's your brain. A lot of color perception is learned and affected by cultural background; see e.g. here.
7% of American men have some form of Red-Green color blindness. The rate among women is much lower. Incidence rates in other countries vary, but part of that is due to the low rate of diagnosis.
So, for the sake of ADA-compliance and conformance with UI best practices, don't use red and green colors alone as an indicator.
Also, no one wants to use an application that looks like angry fruit salad.
Could you maybe use bold/not bold/<span style="color:silver">grayed out<span> instead? (Sorry couldn't make the text silver colored.)
I agree with a great deal of what you say.
Red and Green are a very recognizable combination in terms of meaning.
In terms of colour blindness, the shades of green and red differ greatly in how "bad" they are. If you want to make sure you have a usable UI, look at it in black and white. (I take a screenshot and give it 20 seconds in Photoshop.)
If you can use what you see, move on. Otherwise, address the problems.
Kindness,
Dan
