What's the name of the algorithm concerning daily planner rendering?

Is there a well-known algorithm that is able to take as input a collection of time-bound items (defined by a start time and an end time) and produce a "graphical" layout? By graphical I mean a bi-dimensional projection of those events (2d matrix, 2d space boundaries, whatever).
The output has to be bi-dimensional because the input may contain overlapping events (events beginning at the same time etc.). One dimension would be the time, of course, and the other one is an artificial one.
If we associate a vertical axis y with the time dimension and a horizontal one, x, with the artificial dimension, then I am thinking about an algorithm playing with X and Y tokens, about token requirements and tokens availability.
E.g. the algorithm used by Outlook to render the daily view of the calendar etc.
Thank you!
PS: I believe the term "projection" is not correct, because we are adding an artificial dimension :)
PPS: Maybe what I want is one of these?

These slides: http://www.cs.illinois.edu/class/fa07/cs473ug/Lectures/lecture2.pdf call this "interval partitioning" (second part of the slides; I haven't found another reference to that term elsewhere) and give a proof that a greedy algorithm works: sort the items by start time; when processing an item, if you can put it in one of the "bins" already there, put it there; otherwise start a new bin and put the item there.
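The greedy algorithm from the slides is short enough to sketch. A minimal Python version, tracking each "bin" (a column in the planner's artificial x dimension) by the end time of its last event in a min-heap:

```python
from heapq import heappush, heappop

def partition_intervals(events):
    """Greedy interval partitioning: sort events by start time, then put
    each event into the first column whose last event has already ended;
    open a new column otherwise.

    events: list of (start, end) tuples.
    Returns a list of columns, each a list of events."""
    columns = []   # columns[i] = events rendered in column i
    heap = []      # (end time of column's last event, column index)
    for start, end in sorted(events):
        if heap and heap[0][0] <= start:
            _, idx = heappop(heap)   # reuse the column that frees up earliest
        else:
            idx = len(columns)       # no free column: open a new one
            columns.append([])
        columns[idx].append((start, end))
        heappush(heap, (end, idx))
    return columns
```

The number of columns returned is the maximum number of simultaneously overlapping events, which is exactly the width a daily view like Outlook's needs.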

How to detect similar objects in this picture?

I want to find patterns in image. Saying "to find patterns" I mean "to detect similar objects", thus these patterns shouldn't be some high-frequency info like noise.
For example, on this image I'd like to get pattern "window" with ROI/ellipse of each object:
I've read advice to use autocorrelation, FFT or DCT for this problem. As far as I've understood, autocorrelation and FFT are alternatives, not complementary.
First, I don't know if it is even possible to get such high-level info in the frequency domain.
As I have FFT implemented, I tried to use it. This is spectrogram:
Could you suggest how to further analyze this spectrogram to detect "window" objects with their spatial locations?
Do I need to find the brightest points/lines on the spectrogram?
Should the FFT be done on image chunks instead of the whole image?
If it's not possible to find such objects with this approach, what would you advise?
Thanks in advance.
P.S. Sorry for large image size.
Beware, this is not my cup of tea, so read with extreme prejudice. IIRC, SIFT/SURF + RANSAC methods are usually used for such tasks.
Identify key points of interest in the image (SIFT/SURF)
This will get you a list of 2D locations in your image with specific features (which you can handle as integer hash codes). I think SIFT (Scale-Invariant Feature Transform) is ideal for this. These methods work similarly to how human vision works (identify a specific change in some feature and "ignore" the rest of the image). So instead of matching all the pixels of the image, we cross-match only a few of them.
sort by occurrence
Each of the found SIFT points has a feature list. If we build a histogram of these features (counting how many similar or identical feature points there are), then we can group points by their occurrence count. The idea is that if there are n placements of an object in the image, each of its key points should be duplicated n times in the final image.
So if we have many points that each occur n times, it hints that there are n similar objects in the image. From those we select just these key points for the next step.
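A rough sketch of that occurrence-counting step, under the simplifying assumption that descriptors have already been quantized to integer hashes (real SIFT descriptors are 128-dimensional vectors compared with a distance threshold, not exact-match hashes):

```python
from collections import Counter, defaultdict

def group_repeated_features(keypoints, n):
    """Keep only key points whose (hashed) descriptor occurs exactly n
    times, i.e. once per suspected object placement.

    keypoints: list of (x, y, feature_hash) tuples, where feature_hash
    stands in for a quantized SIFT descriptor (an assumption made for
    this sketch). Returns {feature_hash: [(x, y), ...]}."""
    counts = Counter(h for _, _, h in keypoints)
    groups = defaultdict(list)
    for x, y, h in keypoints:
        if counts[h] == n:          # appears once per suspected object
            groups[h].append((x, y))
    return dict(groups)
```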
find object placements
Each object can have a different scale, position and orientation. Let's assume they have the same aspect ratio. Then the corresponding key points in each object should have the same relative properties across the objects (like the relative angle between key points, normalized distance, etc.).
So the task is to regroup our key points into objects so that all the objects have the same key points and the same relative properties.
This can be done by brute force (testing all the combinations and checking the properties), by RANSAC, or by any other robust-fitting method.
Usually we select one first key point (no matter which) and find two others that form the same angle and relative distance ratio in all of the objects,
so the angle is the same and |p1-p0| / |p2-p0| is also the same or close. While grouping, note that key points within an object are more likely to be close to each other, so we can order our search by distance from the first selected key point to decide which object a key point probably belongs to (trying the closest candidates first gives a high probability of finding the right combination fast). All the other points pi can be added similarly, one by one (using p0, p1, pi).
So I would start with the closest two key points, though this can sometimes be fooled by overlapping or touching mirrored objects, as a key point from a neighboring object can occasionally be closer than one from the object itself.
After such regrouping, just check that all the found objects have the same properties (aspect ratio). To visualize them you can find the OBB (Oriented Bounding Box) of each object's key points (which can also be used for the check).
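The angle / distance-ratio invariant used above can be sketched like this (hypothetical helper names; a real matcher would need tolerances tuned to key-point noise):

```python
import math

def triple_signature(p0, p1, p2):
    """Angle at p0 between p0->p1 and p0->p2, plus the distance ratio
    |p1-p0| / |p2-p0|. Both are invariant to scale, rotation and
    translation, so corresponding key-point triples in two copies of
    the same object yield the same signature."""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p0[0], p2[1] - p0[1]
    angle = math.atan2(by, bx) - math.atan2(ay, ax)
    angle = math.atan2(math.sin(angle), math.cos(angle))  # wrap to (-pi, pi]
    ratio = math.hypot(ax, ay) / math.hypot(bx, by)
    return angle, ratio

def same_object_shape(t1, t2, tol=1e-6):
    """True if two key-point triples could belong to scaled/rotated
    copies of the same object. Note the signed angle also rejects
    mirrored copies, matching the caveat above."""
    a1, r1 = triple_signature(*t1)
    a2, r2 = triple_signature(*t2)
    return abs(a1 - a2) < tol and abs(r1 - r2) < tol
```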

Efficiently detecting rectangular regions (maps) corresponding to a given point

We have a (w times h) canvas (width w and height h), which is used as a drawing area. We can define 'maps' or regions in the drawing area, on which the user can click to perform some pre-defined tasks. Each region is defined by a bounding rectangle. An image-map is activated when the user clicks inside it. Two regions may have overlapping rectangles. Whenever the user clicks on a point on the canvas, we are required to find the image-map(s) to which the point belongs and start the execution of the corresponding task. We can always scan a linear list to find the image-maps. But is there a better way, a data structure that could be used to store the image-maps so that we can figure out efficiently (in less than O(n) time) which image-maps are activated on a user click?
Yes - use any type of 2D spatial index. The most common is the quad-tree, which has O(log(n)) lookup complexity and is also quite quick to build. Implementations are available in all major languages; it is extensively used for all types of mapping applications.
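A minimal sketch of the idea in Python (simplified: each rectangle is stored at the smallest quadrant that fully contains it, so rectangles straddling a midline stay at the parent node; the max_depth cap is arbitrary):

```python
class QuadTree:
    """Point-in-rectangle index. A query only descends the one
    root-to-leaf path containing the point, checking the rectangles
    stored along it. Rectangles are (x0, y0, x1, y1), x0<=x1, y0<=y1."""

    def __init__(self, x0, y0, x1, y1, depth=0, max_depth=8):
        self.bounds = (x0, y0, x1, y1)
        self.depth, self.max_depth = depth, max_depth
        self.rects = []        # (rect, payload) pairs stored at this node
        self.children = None   # four sub-quadrants, created lazily

    def _quadrants(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        return [(x0, y0, mx, my), (mx, y0, x1, my),
                (x0, my, mx, y1), (mx, my, x1, y1)]

    def insert(self, rect, payload):
        if self.depth < self.max_depth:
            if self.children is None:
                self.children = [QuadTree(*q, depth=self.depth + 1,
                                          max_depth=self.max_depth)
                                 for q in self._quadrants()]
            for child in self.children:
                cx0, cy0, cx1, cy1 = child.bounds
                if (cx0 <= rect[0] and rect[2] <= cx1 and
                        cy0 <= rect[1] and rect[3] <= cy1):
                    child.insert(rect, payload)  # fits wholly in one quadrant
                    return
        self.rects.append((rect, payload))       # straddles the midlines

    def query(self, x, y):
        hits = [p for (x0, y0, x1, y1), p in self.rects
                if x0 <= x <= x1 and y0 <= y <= y1]
        if self.children is not None:
            for child in self.children:
                cx0, cy0, cx1, cy1 = child.bounds
                if cx0 <= x <= cx1 and cy0 <= y <= cy1:
                    hits += child.query(x, y)
        return hits
```

For image-maps, payload would be the task to execute; `query` returns every activated map for the clicked point.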
You may add some optimization to make this take less time on average, but the worst case will still be O(n), because it is possible that a user's click activates all n possible tasks.

Can I link actions between two d3 charts?

Very casual JavaScript user here. Hopefully this makes sense.
I am tracking 20 different series on a stacked area chart with nvd3.js. These series fluctuate between positive and negative values, some on a huge base and some on a small base. The result is that - when one of the really big series is below the x axis - it pushes everything else underneath too, and the positive series won't appear above the x axis until you filter out the bigger players using the key.
The technically inelegant but good looking solution I have come up with is to split all of my negative values into one array, and all of my positives into another. The top half of the page is a positive values graph, the bottom half is negative values and they line up pretty nicely.
The weakness with this approach is when you go to interact with it as an end user. If I filter out a series (by unchecking it in the key) or change the graph mode (with the type selector) or zoom in on a series (by clicking it so the graph refocuses to that series only) then it will only affect whichever graph you clicked on. I would like to adjust these three click events (and any others I've missed?) so that your action is synchronised across both graphs.
Is this achievable? Any reading material I can dig through where somebody has done something similar? I imagine linking two representations of one data set (e.g. a pie and a column chart) is vaguely analogous.

Counting object on image algorithm

I got a school task again. This time, my teacher gave me the task of creating an algorithm to count how many ducks are in a picture.
The picture is similar to this one:
I think I should use pattern recognition to find how many ducks are in it. But I don't know which pattern matches each duck.
I think that you can solve this problem by segmenting the ducks' beaks and counting the number of connected components in the binary image.
To segment the ducks' beaks, first convert the image to HSV color space and then perform a binarization using the hue component. Note that the hue of the ducks' beaks differs from that of other parts of the image.
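A toy sketch of that pipeline using only the standard library (the hue window for "beak orange" is a guess and would need tuning; a real implementation would use OpenCV's cvtColor, inRange and connectedComponents instead):

```python
import colorsys
from collections import deque

def count_beak_blobs(image, hue_lo=0.05, hue_hi=0.15):
    """Count 4-connected components of 'beak-colored' pixels.

    image: 2D list of (r, g, b) tuples with channels in 0..255.
    hue_lo..hue_hi selects orange-ish hues (an assumed range)."""
    h, w = len(image), len(image[0])
    # binarize on the hue channel
    mask = [[hue_lo <= colorsys.rgb_to_hsv(r/255, g/255, b/255)[0] <= hue_hi
             for r, g, b in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1                  # new blob: flood-fill it
                seen[y][x] = True
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count
```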
Here's one way:
Hough transform for circles:
Initialize an accumulator array indexed by (x, y, radius).
For each pixel:
compute the edge gradient (e.g. a Sobel operator will provide both magnitude and direction); if the magnitude exceeds some threshold, then:
increment every accumulator for which this edge could possibly lend evidence (only the (x, y) centres in the direction of the edge, only radii between min_duck_radius and max_duck_radius)
Now smooth and threshold the accumulator array, and the coordinates of highest accumulators show you where the heads are. The threshold may leap out at you if you histogram the values in the accumulators (there may be a clear difference between "lots of evidence" and "noise").
So that's very terse, but it can get you started.
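A compressed sketch of the voting loop, assuming the edge points and gradient directions have already been extracted (the Sobel step); min_duck_radius/max_duck_radius appear as r_min/r_max:

```python
import math
from collections import Counter

def hough_circles(edges, r_min, r_max):
    """Circle Hough transform over precomputed edge points.

    edges: list of (x, y, direction) where direction is the gradient
    angle in radians, as a Sobel operator would give. Each edge point
    votes for candidate centres along its gradient direction, one vote
    per radius in [r_min, r_max]. Returns the accumulator as a Counter
    keyed by (cx, cy, r); the peaks are the detected circles."""
    acc = Counter()
    for x, y, theta in edges:
        for r in range(r_min, r_max + 1):
            for sign in (1, -1):          # gradient may point in or out
                cx = round(x + sign * r * math.cos(theta))
                cy = round(y + sign * r * math.sin(theta))
                acc[(cx, cy, r)] += 1
    return acc
```

Smoothing and thresholding the accumulator, as described above, is left out of the sketch.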
It might be just because I'm working with SIFT right now, but to me it looks like it could be good for your problem.
It is an algorithm that matches the same object in two different pictures, where the object can have a different orientation and scale, and be viewed from a different perspective in each picture. It can also work when an object is partially hidden (as your ducks are) by another object.
I'd suggest finding a good clear picture of a rubber ducky ( :D ) and then using some SIFT implementation (VLFeat - a C library with SIFT but no visualization, SIFT++ - based on VLFeat but in C++, Rob Hess's in C with OpenCV...).
You should bear in mind that matching with SIFT (and anything else) is not perfect - so you might not get the exact number of rubber duckies in the picture.

What's the name of algorithm/field about positioning many small objects to form shapes?

Sorry, English is not my native language. I would like to know the name of the algorithm/field about positioning small objects to form shapes.
I don't know what's the term for it, so let me give some examples.
e.g.1.
In cartoons, sometimes there will be a swarm of insects forming a skeleton head in the air.
e.g.2.
In the wars of the 1700s, infantry units were a bunch of men standing together, forming columns or ranks, changing shapes as the battle raged on.
e.g.3.
In the opening ceremonies of the Olympics, there will often be a bunch of dancers forming various symbols on the field.
So basically: numerous small objects begin in arbitrary positions and move to new positions such that together they form a shape in 2D or 3D.
What is such technique called?
In graphics, this would normally be called a "particle system" (and Googling for that should yield quite a few results that are at least reasonably relevant).
If you assume that the dancers/soldiers don't interfere when moving, then you can view the problem as a maximum matching problem.
For each person, you know their starting location, and you know the shape of the final pattern. You probably want to minimize the total time it takes to form the final shape from the start shape.
You can determine whether it's possible to go from the start state to the final state in time T by forming a bipartite graph. For each person and final position, if the person can reach the position in <= T, add an edge from the person to that position. Then run a maximum matching algorithm to see if everyone can find some position in the final locations within the time constraint.
Do a binary search on the time T and you'll have the minimum amount of time to go from one state to another.
http://en.wikipedia.org/wiki/Matching_(graph_theory)#Maximum_matchings_in_bipartite_graphs
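The steps above can be sketched like so: a minimal augmenting-path maximum matching (Kuhn's algorithm) plus the binary search on T, assuming everyone moves in a straight line at the same speed:

```python
import math

def max_bipartite_matching(adj, n_left, n_right):
    """Augmenting-path maximum matching (Kuhn's algorithm).
    adj[u] = list of right-side vertices reachable from left vertex u."""
    match_r = [-1] * n_right   # match_r[v] = left vertex matched to v
    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False
    return sum(try_augment(u, set()) for u in range(n_left))

def min_formation_time(starts, targets, speed=1.0):
    """Minimum time T so that every person can reach some distinct
    target: binary search on T, where T is feasible iff the bipartite
    graph with an edge (person, target) whenever distance/speed <= T
    has a perfect matching."""
    dists = [[math.dist(s, t) / speed for t in targets] for s in starts]
    lo, hi = 0.0, max(max(row) for row in dists)
    def feasible(T):
        adj = [[j for j, d in enumerate(row) if d <= T] for row in dists]
        return max_bipartite_matching(adj, len(starts), len(targets)) == len(starts)
    for _ in range(50):         # bisect down to float precision
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

With collision avoidance or differing speeds the model no longer applies directly; this only covers the non-interfering case the answer assumes.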
