Maintaining a list of regions without overlaps - algorithm

I have a list of integer axis aligned cuboids that is being built and then processed (a dirty region system).
Currently this will often have overlaps, with some coordinates getting processed many times as a result (although still far less in total than the process-everything-because-one-cell-changed approach). What I want is a simple way to prevent such overlaps when adding a new region to the list.
Due to the size of the data (iirc about 100 million cells), even though the coordinates are integers, I want to avoid a bool array marking every coordinate as up to date/dirty. On the other hand, the actual number of regions in the list will generally be pretty small (most of the time only covering a fraction of the data set, with individual regions spanning thousands of cells).
#include <vector>

struct Region
{
    int x, y, z; // corner coordinate
    int w, h, d; // size
};

std::vector<Region> regions; // the dirty-region list (container type assumed)

void addRegion(Region region)
{
    regions.push_back(region);
}
So my current thinking is, in addRegion, to go through all the regions, find the overlapping ones, and split them up appropriately. However, even in 2D this seems tricky to get right, so is there a known algorithm for this sort of thing?

You might be able to make use of an R-tree or an R-tree variant, which is designed for indexing multidimensional data and supports fast intersection tests; and given the size of your dataset, you might instead want to use a spatial database.
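If you do go with the splitting approach described in the question, the usual trick is to subtract the new region from each existing region it overlaps, producing at most six disjoint leftover boxes (two per axis). A minimal sketch, using the Region struct above; subtractRegion is an illustrative name, not an established API:

#include <algorithm>
#include <vector>

// Returns the parts of `a` that do not overlap `b`, as at most six disjoint boxes.
std::vector<Region> subtractRegion(const Region& a, const Region& b)
{
    int ax2 = a.x + a.w, ay2 = a.y + a.h, az2 = a.z + a.d;
    int bx2 = b.x + b.w, by2 = b.y + b.h, bz2 = b.z + b.d;

    // No overlap: keep `a` whole.
    if (a.x >= bx2 || ax2 <= b.x ||
        a.y >= by2 || ay2 <= b.y ||
        a.z >= bz2 || az2 <= b.z)
        return { a };

    std::vector<Region> out;

    // Slabs below/above the overlap along z (full x/y extent of `a`).
    if (b.z > a.z)  out.push_back({ a.x, a.y, a.z, a.w, a.h, b.z - a.z });
    if (bz2 < az2)  out.push_back({ a.x, a.y, bz2, a.w, a.h, az2 - bz2 });
    int oz1 = std::max(a.z, b.z), oz2 = std::min(az2, bz2);

    // Slabs in front of/behind the overlap along y, within the overlapping z range.
    if (b.y > a.y)  out.push_back({ a.x, a.y, oz1, a.w, b.y - a.y, oz2 - oz1 });
    if (by2 < ay2)  out.push_back({ a.x, by2, oz1, a.w, ay2 - by2, oz2 - oz1 });
    int oy1 = std::max(a.y, b.y), oy2 = std::min(ay2, by2);

    // Slabs left/right of the overlap along x, within the overlapping y and z ranges.
    if (b.x > a.x)  out.push_back({ a.x, oy1, oz1, b.x - a.x, oy2 - oy1, oz2 - oz1 });
    if (bx2 < ax2)  out.push_back({ bx2, oy1, oz1, ax2 - bx2, oy2 - oy1, oz2 - oz1 });
    return out;
}

addRegion would then replace every overlapping entry with its leftover pieces before appending the new region, keeping the list overlap-free at the cost of growing it by up to six entries per overlap.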

Related

How to plot variables with possibly wildly varying values?

I want to build an application that does something equivalent to running lsof in a loop (maybe changing its output format, because string processing may mean it is not real-time enough) and then associates each line (entry) with the iteration in which it was present, which I will refer to as frames from here on, since that makes the rest easier to follow. My intention is that showing the times at which files are held open by applications can reveal something about their structure, while not having a big impact on their execution, which is often a problem. One problem I have is processing the output, which would be a table relating frames to entries, and I am already anticipating wildly variable entry lengths. That runs into the classic problem of representing geometry at very different scales: the smaller values become vanishingly small while the bigger ones become giant, and fragmentation makes it even worse. So my question is: do plotting libraries deal with this problem, and if so, how?
The easiest and most well-established technique for showing both small and large values in reasonable detail is a logarithmic scale. Instead of plotting raw values, plot their logarithms. This is notoriously problematic if you can have zero or even negative values, but as I understand your situation all your lengths would be strictly positive, so this should work.
Another statistical solution you could apply is to plot ranks instead of raw values. Take all the observed values and put them in a sorted list. When plotting any single data point, instead of plotting the value itself, look that value up in the sorted list (binary search works, since the list is sorted) and plot the index at which you found it.
This is a monotonic transformation, so small values map to small indices and big values to big indices. On the other hand, it completely discards the actual magnitudes; only the relative comparisons matter.
If this is too radical, you could consider using it as an ingredient for something more tuneable. You could experiment with a linear combination, i.e. plot
a*x + b*log(x) + c*rank(x)
then tweak a, b and c till the result looks pleasing.
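A minimal sketch of the rank idea and the blended coordinate above, in C++ for concreteness; buildRankIndex, rankOf and plotValue are illustrative names, and it assumes strictly positive values so that log(x) is defined:

#include <algorithm>
#include <cmath>
#include <vector>

// Build the sorted list once from all observed values.
std::vector<double> buildRankIndex(std::vector<double> values)
{
    std::sort(values.begin(), values.end());
    return values;
}

// Rank of a single value: its index in the sorted list, found by binary search.
double rankOf(const std::vector<double>& sorted, double x)
{
    return double(std::lower_bound(sorted.begin(), sorted.end(), x) - sorted.begin());
}

// Blended plotting coordinate, as suggested above; tweak a, b and c to taste.
double plotValue(const std::vector<double>& sorted, double x,
                 double a, double b, double c)
{
    return a * x + b * std::log(x) + c * rankOf(sorted, x);
}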

What is a good data structure for storing rectangles when I want to efficiently search for all areas containing at least a given size?

So I have a non-overlapping set of rectangles and I want to efficiently determine where a rectangle of a given size can fit. Oh, also it will need to be reasonably efficient at updating as I will be “allocating” the space based on some other constraints once I find the possible valid locations.
The “complicated part” is that the rectangles can touch. For example, a rectangle at (0,0) that is 100 units wide and 50 units tall, plus a second rectangle at (0,50) that is 50x50, allows a 50-wide by 80-tall rectangle to fit with its corner anywhere from (0,0) through (0,20). Finding a fit may involve “merging” more than two rectangles.
(note: I'll be starting with a small number of adjacent rectangles, approximately 3, and removing rectangular areas as I "allocate" them. I expect the vast majority of these allocations will not exactly cover an existing rectangle, and will leave me with 2 or more new rectangles.)
At first I thought I could keep two “views” of my rectangles, one preferring to break on the y-axis to keep the widest possible rectangles and another that breaks on the x-axis to keep the tallest possible, and then I could do...um...something clever to search.
Then I figured “um, people have been working on this stuff for a long time and just because I can’t figure out how to construct the right google query it doesn’t mean this isn’t some straightforward application of quad trees or r-lists I have somehow forgotten to know about”
So is there already a good solution to this problem?
(So what am I really doing? I'm laser cutting features into the floor of a box. Features like 'NxM circles 1.23" in diameter with 0.34" separations'. The floor starts as a rectangle with small rectangles already removed from the corners for supports. I'm currently keeping a list of unallocated rectangles sorted by y with x as the tie breaker, and in some limited cases I can do a merge between 2 rectangles in that list if it produces a large enough result to fit my current target into. That doesn't really work all that well. I could also just place the features manually, but I would rather write a program for it.)
(Also: how many “things” am I doing it to? So far my boxes have had 20 to 40 “features” to place, and computers are pretty quick so some really inefficient algorithm may well work, but this is a hobby project and I may as well learn something interesting as opposed to crudely lashing some code together)
Ok, so absent a good answer to this, I came at it from another angle.
I took a look at all my policies and came up with a pretty short list: "allocate anywhere it fits", "allocate at a specific x,y position", and "allocate anywhere with y>(specific value)".
I decided to test with a pretty simple data structure: a list of non-overlapping rectangles representing space that is allocatable. Not sorted, merged, or organized in any specific way. The closest thing to interesting is that I track extents on the rectangles to make retrieving min/max X or Y quick.
I made myself a little function that checks an allocation at a specific position and returns a list of rectangles blocking the allocation, all trimmed to the intersection with the prospective allocation (and produces a new free list if applicable). I used this as a primitive to implement all 3 policies.
"allocate at a specific x,y position" is trivial: use the primitive, and if you see no blocking rectangles the allocation is successful.
I implemented "allocate anywhere" as "allocate with y>N" where N is "minimum Y" from the free list.
"allocate where y>N" starts with x = min-X from the free list and checks for an allocation there. If it finds no blockers, it is done. If it finds blockers, it moves x to the max-X of the blocker list. If that places the right edge of the prospective allocation past max-X for the free list, then x is set back to min-X for the free list and y is set to the minimum of all the max-Y's in all the blocker lists encountered since the last change to y.
For my usage patterns I also get some mileage from remembering the size of the last failed allocation (with N = min-Y), and fast-failing any request that is at least as wide and as tall as the last failure.
Performance is fast enough for my usage patterns (free list starting with one to three items, allocations in the tens to forties).
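For what it's worth, here is a minimal C++ sketch of that primitive under one plausible reading: the "blockers" are the parts of the prospective allocation not covered by any free rectangle, trimmed to the allocation by construction. All names here are illustrative; the original code wasn't posted.

#include <algorithm>
#include <vector>

struct Rect { int x, y, w, h; };   // corner + size, like the 3D Region earlier

// 2D box subtraction: the parts of `a` not covered by `b`, as at most 4 strips.
static std::vector<Rect> subtract(const Rect& a, const Rect& b)
{
    int ax2 = a.x + a.w, ay2 = a.y + a.h;
    int bx2 = b.x + b.w, by2 = b.y + b.h;
    if (a.x >= bx2 || ax2 <= b.x || a.y >= by2 || ay2 <= b.y)
        return { a };                                               // no overlap, keep `a` whole
    std::vector<Rect> out;
    if (b.y > a.y)  out.push_back({ a.x, a.y, a.w, b.y - a.y });    // strip below the overlap
    if (by2 < ay2)  out.push_back({ a.x, by2, a.w, ay2 - by2 });    // strip above the overlap
    int oy1 = std::max(a.y, b.y), oy2 = std::min(ay2, by2);
    if (b.x > a.x)  out.push_back({ a.x, oy1, b.x - a.x, oy2 - oy1 });  // strip to the left
    if (bx2 < ax2)  out.push_back({ bx2, oy1, ax2 - bx2, oy2 - oy1 });  // strip to the right
    return out;
}

// The primitive: try `alloc` against the free list. Whatever part of `alloc`
// is not covered by free space comes back as the blockers (already trimmed to
// `alloc`). An empty result means the allocation fits.
std::vector<Rect> checkAllocation(const std::vector<Rect>& freeList, const Rect& alloc)
{
    std::vector<Rect> blockers = { alloc };
    for (const Rect& f : freeList) {
        std::vector<Rect> next;
        for (const Rect& piece : blockers)
            for (const Rect& left : subtract(piece, f))
                next.push_back(left);
        blockers.swap(next);
        if (blockers.empty()) break;    // fully covered by free space
    }
    return blockers;
}

On success, the new free list is simply the old one with `alloc` carved out of every rectangle it intersects, using the same subtract() helper.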

Performing different tasks for different data items in OpenCL?

In summary, I'm looking for ways to deal with a situation where the very first step in the calculation is a conditional branch between two computationally expensive code paths.
I'm essentially trying to implement a graphics filter that operates on an image and a mask - the mask is a bitmap array the same size as the image, and the filter performs different operations according to the value of the mask. So I basically want to do something like this for each pixel:
if(mask == 1) {
    foo();
} else {
    bar();
}
where both foo and bar are fairly expensive operations. As I understand it, when I run this code on the GPU it will have to calculate both branches for every pixel. (This gets even more expensive if there are more than two possible values for the mask.) Is there any way to avoid this?
One option I can think of would be to, in the host code, sort all the pixels into two 1-dimensional arrays based on the value of the mask at that point, then run entirely different kernels on them, and reconstruct the image from the two datasets afterwards. The problem with this is that, in my case, I want to run the filter iteratively, and both the image and the mask change with each iteration (the mask is actually calculated from the image). If I'm splitting the image into two buckets in the host code, I have to transfer the image and mask from the GPU on every iteration, and then the new buckets back to the GPU, introducing a new bottleneck to replace the old one.
Is there any other way to avoid this bottleneck?
Another approach might be to do a simple bucket sort within each work-group using the mask.
So add a local memory array and an atomic counter for each value of the mask. First read a pixel (or a set of pixels might be better) for each work item, increment the appropriate atomic count, and write the pixel address into that location in the array.
Then perform a work-group barrier.
Then as a final stage assign some set of work-items, maybe a multiple of the underlying vector size, to each of those arrays and iterate through it. Your operations will then be largely efficient, barring some loss at the ends, and if you look at enough pixels per work-item you may have very little loss of efficiency even if you assign the entire group to one mask value and then the other in turn.
Given that your description only has two values of the mask, fitting two arrays into local memory should be pretty simple and scale well.
Push the demanding task of a thread into shared/local memory (synchronization slows the process) and execute the light ones until all the light ones finish (so the slow sync latency is hidden by this), then execute the heavier ones.
if(mask == 1) {
    uploadFoo();   // heavy: upload this work item's foo() task to a __local object[] queue
} else {
    processBar();  // compute the light work first, then check local memory for a queued foo()
    downloadFoo(); // pull a queued foo() task from local memory and run it, if any exists
}
using a producer-consumer approach, maybe.

Geohashes - Why is interleaving index values necessary?

I have had a look at this post about geohashes. According to the author, the final step in calculating the hash is interleaving the x and y index values. But is this really necessary? Is there a proper reason not to just concatenate these values, as long as the hash table is built according to that altered indexing rule?
From the wiki page
Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (and gradually lose precision).
If you simply concatenated the x and y coordinates, then users would have to take a lot more care when trying to reduce precision, making sure to remove exactly the right number of characters from both the x and the y coordinate.
There is a related (and more important) reason than arbitrary precision: Geohashes with a common prefix are close to one another. The longer the common prefix, the closer they are.
54.321 -2.345 has geohash gcwm48u6
54.322 -2.346 has geohash gcwm4958
(See http://geohash.org to try this)
This feature enables fast lookup of nearby points (though there are some complications), and only works because we interleave the two dimensions to get a sort of approximate 2D proximity metric.
As the wikipedia entry goes on to explain:
When used in a database, the structure of geohashed data has two advantages. First, data indexed by geohash will have all points for a given rectangular area in contiguous slices (the number of slices depends on the precision required and the presence of geohash "fault lines"). This is especially useful in database systems where queries on a single index are much easier or faster than multiple-index queries. Second, this index structure can be used for a quick-and-dirty proximity search - the closest points are often among the closest geohashes.
Note that the converse is not always true - if two points happen to lie on either side of a subdivision (e.g. either side of the equator) then they may be extremely close but have no common prefix. Hence the complications I mentioned earlier.
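For concreteness, here is a minimal sketch of the interleaving step itself, as a plain bit-interleave (Morton/Z-order) of two already-quantized coordinates. interleaveBits is an illustrative name, and a real geohash then encodes the interleaved bits in base 32:

#include <cstdint>

// Interleave the bits of two 32-bit indices (x in the even positions, y in the
// odd positions) into a single 64-bit Z-order / Morton key. Because the bits of
// both axes alternate, chopping trailing characters (i.e. trailing bits) off the
// encoded key lowers the precision of x and y together, which is what makes
// geohash prefixes behave like shrinking bounding boxes.
std::uint64_t interleaveBits(std::uint32_t x, std::uint32_t y)
{
    std::uint64_t key = 0;
    for (int i = 0; i < 32; ++i) {
        key |= static_cast<std::uint64_t>((x >> i) & 1u) << (2 * i);
        key |= static_cast<std::uint64_t>((y >> i) & 1u) << (2 * i + 1);
    }
    return key;
}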

Data filtering or better LINQ query?

I am using the new WPF toolkit's Chart to plot large data sets. I also have a crosshair tracker that follows the mouse when it's over the chart area to tell exactly what is the value of the nearest data point (see Yahoo! Finance charts).
I use the following code to find the closest data point that is lower than (or equal to) where the mouse is currently hovering (the nasty detail about the chart is that it actually interpolates the data to tell you the EXACT value under your mouse, even though the mouse is located between the data points):
TimeDataPoint point = mainSeries.Find(
    new Predicate<TimeDataPoint>(
        delegate(TimeDataPoint p) {
            return xValue > p.Date && !mainSeries.Exists(new Predicate<TimeDataPoint>(
                delegate(TimeDataPoint middlePoint) {
                    return middlePoint.Date > p.Date && xValue > middlePoint.Date;
                }));
        }));
[Here, mainSeries is simply a List<TimeDataPoint>]
This works very well for relatively small data sets, but once I go up to 12,000+ points (and this number will increase rapidly), the code above slows to a standstill (it runs through the data on the order of 12,000² times).
I am not very good at constructing queries so I am wondering if it is possible to use a better LINQ query to do this.
EDIT: Another idea, inspired by Randolpho's comment, is this: I will search for all points that are lower than the given value (at most n of them, here 12,000+) and then select the Max<> of those (also O(n)). This should produce the same result with only on the order of n operations per lookup, and thus should be at least a little bit faster...
My other alternative is to actually filter the data set and maintain an upper bound on the number of points depending on the level of details the user wants to see. I would rather not go down that road if there's a possibility of having a more efficient query.
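For reference, the O(n) idea from the EDIT, sketched in C++ for concreteness (the original code is C#; the struct and function names here are illustrative stand-ins):

#include <vector>

struct TimeDataPoint { double Date; double Value; };   // stand-in for the C# type

// Single pass: keep the point with the largest Date that is still below xValue.
// O(n) per lookup instead of the O(n^2) Find/Exists combination above.
const TimeDataPoint* closestPointBelow(const std::vector<TimeDataPoint>& series,
                                       double xValue)
{
    const TimeDataPoint* best = nullptr;
    for (const TimeDataPoint& p : series) {
        if (p.Date < xValue && (best == nullptr || p.Date > best->Date))
            best = &p;
    }
    return best;   // nullptr if no point lies below xValue
}

If the series is already sorted by date, a binary search would reduce each lookup to O(log n).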
Pre-compute the closest data points based on the known resolution of display/chart. Then, when you hover over a point, it's a simple lookup of the x/y coordinates against the known pre-computed value.
For performance reasons, do your pre-computation in a separate thread and do not allow the display of those values until the computation is completed. Re-compute every time the size of the chart is changed.
Bottom line: there is no LINQ query that will be fast enough to run on every mouse-over for large data sets. It just can't be done. You're looking at order N^2 no matter what. So pre-compute it and cache it, so you only do your computations once.
This is an intriguing idea but wouldn't I still need to do a look-up of x/y among 12000+ pairs? Could you elaborate on how I should store the pre-computed x/y pairs for a fast look-up? For example, I have data points at (200,300) and (250, 300) and user's mouse is at (225, 300). – Alexandra
Well, I guess that would depend on the graph. Based on your code and your mention of Yahoo Finance Charts, I'm assuming your data only varies by horizontal position, i.e. for a given X value, you are computing the display data.
In that case, you can use a simple Dictionary<int, TimeDataPoint> as your cache. The key is the transformed X coordinate (i.e. in the coordinate space of your display graph), the value is the pre-computed TimeDataPoint. The dictionary would have a record for every X coordinate in your display graph, so a 400-pixel-wide graph has 400 pre-computed data points.
If your data varies against both axes, you could instead use Dictionary<System.Windows.Point, TimeDataPoint> in pretty much the same way, but this would increase the number of items in your Dictionary considerably: a 400 by 300 graph would have 120,000 entries, so the tradeoff is a higher memory footprint.
Pre-calculating your data is the tricky part; it'd have to be done differently from the way you're currently doing it. I'm going to assume here that xValue in your example is an interpolation of a Date based on the X value, since it's compared to p.Date.
This might work:
private Dictionary<int, TimeDataPoint> BuildCache(List<TimeDataPoint> mainSeries)
{
    // Assumes mainSeries is sorted by Date in ascending order.
    int xPrevious = 0;
    int xCurrent = 0;
    Dictionary<int, TimeDataPoint> cache = new Dictionary<int, TimeDataPoint>();
    foreach (var p in mainSeries)
    {
        xCurrent = XFromDate(p.Date);
        // Map every X pixel from the previous data point up to this one to this point.
        for (int val = xPrevious; val < xCurrent; val++)
        {
            cache.Add(val, p);
        }
        xPrevious = xCurrent;
    }
    // Make sure the final data point's own X coordinate is cached as well.
    if (mainSeries.Count > 0)
    {
        cache[xCurrent] = mainSeries[mainSeries.Count - 1];
    }
    return cache;
}
XFromDate would extract the X coordinate for a particular date. I'll leave doing that up to you. :)
