Qsort comparison - go
I'm converting C++ code to Go, but I'm having difficulty understanding this comparison function:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <iostream>
using namespace std;
typedef struct SensorIndex
{ double value;
int index;
} SensorIndex;
int comp(const void *a, const void* b)
{ SensorIndex* x = (SensorIndex*)a;
SensorIndex* y = (SensorIndex*)b;
return abs(y->value) - abs(x->value);
}
int main(int argc , char *argv[])
{
SensorIndex *s_tmp;
s_tmp = (SensorIndex *)malloc(sizeof(SensorIndex)*200);
double q[200] = {8.48359,8.41851,-2.53585,1.69949,0.00358129,-3.19341,3.29215,2.68201,-0.443549,-0.140532,1.64661,-1.84908,0.643066,1.53472,2.63785,-0.754417,0.431077,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256,-0.123256};
for( int i=0; i < 200; ++i ) {
s_tmp[i].value = q[i];
s_tmp[i].index = i;
}
qsort(s_tmp, 200, sizeof(SensorIndex), comp);
for( int i=0; i<200; i++)
{
cout << s_tmp[i].index << " " << s_tmp[i].value << endl;
}
}
I expected that the "comp" function would sort from the highest (absolute) value to the lowest, but in my environment (gcc, 32-bit) the result is:
1 8.41851
0 8.48359
2 -2.53585
3 1.69949
11 -1.84908
5 -3.19341
6 3.29215
7 2.68201
10 1.64661
14 2.63785
12 0.643066
13 1.53472
4 0.00358129
9 -0.140532
8 -0.443549
15 -0.754417
16 0.431077
17 -0.123256
18 -0.123256
19 -0.123256
20 -0.123256
...
Moreover, one thing that seems strange to me is that executing the same code with online services gives different values (cpp.sh, C++98):
0 8.48359
1 8.41851
5 -3.19341
6 3.29215
2 -2.53585
7 2.68201
14 2.63785
3 1.69949
10 1.64661
11 -1.84908
13 1.53472
4 0.00358129
8 -0.443549
9 -0.140532
12 0.643066
15 -0.754417
16 0.431077
17 -0.123256
18 -0.123256
19 -0.123256
20 -0.123256
...
Any help?
This behavior is caused by using abs, a function that works on int, and passing it double arguments. The doubles are being implicitly converted to int, truncating the fractional part before the comparison. Essentially, you take the original number, strip off the sign, then strip off everything to the right of the decimal point, and compare those values. So 8.123 and -8.9 are both converted to 8, and compare equal. Since the inputs are reversed for the subtraction, the ordering is in descending order by magnitude.
Your cpp.sh output reflects this: all the values with a magnitude between 8 and 9 appear first, then the 3-4s, then the 2-3s, the 1-2s, and finally the values with magnitude less than 1.
If you wanted to fix this to actually sort in descending order in general, you'd need a comparison function that properly used the double-friendly fabs function, e.g.
int comp(const void *a, const void* b)
{ SensorIndex* x = (SensorIndex*)a;
SensorIndex* y = (SensorIndex*)b;
double diff = fabs(y->value) - fabs(x->value);
if (diff < 0.0) return -1;
return diff > 0;
}
Update: On further reading, it looks like std::abs from <cmath> has worked with doubles for a long time, but std::abs for doubles was only added to <cstdlib> (where the integer abs functions dwell) in C++17. And the implementers got this stuff wrong all the time, so different compilers would behave differently at random.

In any event, both the answers given here are right: if you haven't included <cmath> and you're on a pre-C++17 compiler, you should only have access to integer-based versions of std::abs (or ::abs from math.h), which would truncate each value before the comparison.

And even if you were using the correct std::abs, returning the result of double subtraction as an int would drop fractional components of the difference, making any values with a magnitude difference of less than 1.0 appear equal. Worse, depending on the specific comparisons performed and their ordering (since not all values are compared to each other), the consequences of this effect could chain: comparison ordering changes could make 1.0 appear equal to 1.6, which would in turn appear equal to 2.5, even though 1.0 would be correctly identified as less than 2.5 if they were compared to each other. In theory, as long as each number is within 1.0 of every other number, the comparisons might evaluate as if they're all equal to each other (a pathological case, yes, but smaller runs of such errors would definitely happen).
Point is, the only way to figure out the real intent of this code is to figure out the exact compiler version and C++ standard it was originally compiled under and test it there.
There is a bug in your comparison function: you return an int, which means you lose the distinction between element values whose absolute difference is less than 1!
int comp(const void* a, const void* b)
{
SensorIndex* x = (SensorIndex*)a;
SensorIndex* y = (SensorIndex*)b;
// what about differences between 0.0 and 1.0?
return abs(y->value) - abs(x->value);
}
You can fix it like this:
int comp(const void* a, const void* b)
{
    SensorIndex* x = (SensorIndex*)a;
    SensorIndex* y = (SensorIndex*)b;
    // requires <cmath> for the double overload of std::abs
    if (std::abs(y->value) < std::abs(x->value))
        return -1;
    if (std::abs(x->value) < std::abs(y->value))
        return 1;
    return 0; // equal magnitudes must compare equal for qsort
}
A more modern (and safer) way to do this would be to use std::vector and std::sort:
// use a vector for dynamic arrays
std::vector<SensorIndex> s_tmp;
for(int i = 0; i < 200; ++i) {
s_tmp.push_back({q[i], i});
}
// use std::sort
std::sort(std::begin(s_tmp), std::end(s_tmp), [](SensorIndex const& a, SensorIndex const& b){
return std::abs(b.value) < std::abs(a.value);
});
Related
Optimizing bit-waste for custom data encoding
I was wondering what's a good solution to make a custom data structure take the least amount of space possible, and I've been searching around without finding anything.

The general idea is I may have some kind of data structure with a lot of different variables: integers, booleans, etc. With booleans, it's fairly easy to use bitmasks/flags. For integers, perhaps I only need to use 10 of the numbers for one of the integers, and 50 for another. I would like to have some function encode the structure without wasting any bits. Ideally I would be able to pack them side by side in an array, without any padding.

I have a vague idea that I would have to have a way of enumerating all the possible permutations of values of all the variables, but I'm unsure where to start with this.

Additionally, though this may be a bit more complicated, what if I have a bunch of restrictions, such as not caring about certain variables if other variables meet certain criteria? This reduces the number of permutations, so there should be a way of saving some bits here as well?

Example: Say I have a server for an online game, containing many players. The player struct stores a lot of different variables: level, stats, and a bunch of flags for which quests the player has cleared.

struct Player {
    int level;    // max is 100
    int strength; // max is int
    int           // max is 500
    /* ... */
    bool questFlag30;
    bool questFlag31;
    bool questFlag32;
    /* ... */
};

and I want to have a function encodedData encode(std::vector<Player> players) and a function decodeData which returns a vector from the encoded data.
This is what I came up with; it's not perfect, but it's something:

#include <vector>
#include <iostream>
#include <bitset>
#include <cstdint>
#include <cmath>    // std::pow
#include <assert.h>

/* Data structure for packing multiple variables, without padding */
struct compact_collection
{
    std::vector<bool> data;

    /* Returns a uint32_t since we don't want to store the length of each variable */
    uint32_t query_bits(int index, int length)
    {
        std::bitset<32> temp;
        for (int i = index; i < index + length; i++)
            temp[i - index] = data[i];
        return temp.to_ulong();
    };

    void add_bits(int32_t value, int32_t bits)
    {
        assert(std::pow(2, bits) > value); // value must fit in 'bits' bits
        auto a = std::bitset<32>(value).to_string();
        for (int i = 32 - bits; i < 32; i++)
            data.insert(data.begin(), (a[i] == '1'));
    };
};

int main()
{
    compact_collection myCollection;
    myCollection.add_bits(45, 6);
    std::cout << myCollection.query_bits(0, 6);
    std::cin.get();
    return 0;
}
Get a number value from Vector positions
I'm new here and actually I've got a problem in my mind, and it's like this: I get an input of a vector of any size, but for this case, let's take this one:

vetor = {1, 2, 3, 4}

Now, all I want to do is to take these numbers and sum each one (considering its units, tens, hundreds, thousands) and register the result into an integer variable, for this case int vec_value. Considering the vector stated above, the answer should be: vec_value = 4321.

I will leave the main.cpp attached to the post; however, I will tell you how I calculated the result, but it gave me the wrong answer.

vetor[0] = 1
vetor[1] = 2
vetor[2] = 3
vetor[3] = 4

the result should be = (1*10^0) + (2*10^1) + (3*10^2) + (4*10^3) = 1 + 20 + 300 + 4000 = 4321.

The program is giving me the solution as 4320, and if I change the values randomly, the answer follows the new values, but still with wrong numbers. If anyone could take a look at my code to see what I'm doing wrong, I'd appreciate it a lot! Thanks. There's a link to a picture at the end of the post showing an example of a wrong result. Keep in mind that sometimes the program gives me the right answer (which leaves me more confused).

Code:

#include <iostream>
#include <ctime>
#include <cstdlib>
#include <vector>
#include <cmath>

using namespace std;

int main()
{
    vector<int> vetor;
    srand(time(NULL));
    int lim = rand() % 2 + 3; // the minimum size must be 3 and the maximum must be 4
    int value;
    for (int i = 0; i < lim; i++) {
        value = rand() % 8 + 1; // giving random values to each position of the vector
        vetor.push_back(value);
        cout << "\nPos [" << i << "]: " << vetor[i]; // just to keep in mind the elements inside the vector
    }
    int vec_value = 0;
    for (int i = 0; i < lim; i++) {
        vec_value += vetor[i] * pow(10, i); // the formula to sum each element with the corresponding units, tens, hundreds or thousands
    }
    cout << "\n\nValor final: " << vec_value; // to see what result the program gives
    return 0;
}

Example of the program
Try this for the main loop:

int power = 1;
for (int i = 0; i < lim; i++) {
    vec_value += vetor[i] * power;
    power *= 10;
}

This way, all the computations are done in integers, and you are not affected by floating point rounding.
Insert into host_vector using thrust
I'm trying to insert one value into the third location in a host_vector using thrust.

static thrust::host_vector<int *> bins;
int * p;
bins.insert(3, 1, p);

But I'm getting errors:

error: no instance of overloaded function "thrust::host_vector<T, Alloc>::insert [with T=int *, Alloc=std::allocator<int *>]" matches the argument list
        argument types are: (int, int, int *)
        object type is: thrust::host_vector<int *, std::allocator<int *>>

Has anyone seen this before, and how can I solve this?

I want to use a vector to pass information into the GPU. I was originally trying to use a vector of vectors to represent spatial cells that hold different numbers of data, but learned that wasn't possible with thrust. So instead, I'm using a vector bins that holds my data, sorted by the spatial cell (the first 3 values might correspond to the first cell, the next 2 to the second cell, the next 0 to the third cell, etc.). The values held are pointers to particles, and represent the numbers of particles in the spatial cell (which is not known before runtime).
As noted in the comments, thrust::host_vector is modelled directly on std::vector, and the operation you are trying to use requires an iterator for the position argument, which is why you get a compilation error. You can see this if you consult the relevant documentation:

http://en.cppreference.com/w/cpp/container/vector/insert
https://thrust.github.io/doc/classthrust_1_1host__vector.html#a9bb7c8e26ee8c10c5721b584081ae065

A complete working example of the code snippet you showed would look like this:

#include <iostream>
#include <thrust/host_vector.h>

int main()
{
    thrust::host_vector<int *> bins(10, reinterpret_cast<int *>(0));
    int * p = reinterpret_cast<int *>(0xdeadbeef);
    bins.insert(bins.begin() + 3, 1, p);

    auto it = bins.begin();
    for (int i = 0; it != bins.end(); ++it, i++) {
        int* v = *it;
        std::cout << i << " " << v << std::endl;
    }
    return 0;
}

Note that this requires C++11 language features to be enabled in nvcc (so use CUDA 8.0):

~/SO$ nvcc -std=c++11 -arch=sm_52 thrust_insert.cu
~/SO$ ./a.out
0 0
1 0
2 0
3 0xdeadbeef
4 0
5 0
6 0
7 0
8 0
9 0
10 0
How to partly sort arrays on CUDA?
Problem

Provided I have two arrays:

const int N = 1000000;
float A[N];
myStruct *B[N];

The numbers in A can be positive or negative (e.g. A[N] = {3,2,-1,0,5,-2}). How can I make the array A partly sorted (all positive values first, they need not be sorted, then the negative values) (e.g. A[N] = {3,2,5,0,-1,-2} or A[N] = {5,2,3,0,-2,-1}) on the GPU? The array B should be changed according to A (A is keys, B is values).

Since the scale of A and B can be very large, I think the sort algorithm should be implemented on the GPU (especially on CUDA, because I use this platform). Surely I know thrust::sort_by_key can do this work, but it does much extra work since I do not need the arrays A and B to be sorted entirely.

Has anyone come across this kind of problem?

Thrust example

thrust::sort_by_key(thrust::device_ptr<float>(A),
                    thrust::device_ptr<float>(A + N),
                    thrust::device_ptr<myStruct>(B),
                    thrust::greater<float>());
Thrust's documentation on Github is not up to date. As @JaredHoberock said, thrust::partition is the way to go since it now supports stencils. You may need to get a copy from the Github repository:

git clone git://github.com/thrust/thrust.git

Then run scons doc in the Thrust folder to get an updated documentation, and use these updated Thrust sources when compiling your code (nvcc -I/path/to/thrust ...).

With the new stencil partition, you can do:

#include <thrust/partition.h>
#include <thrust/execution_policy.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>

struct is_positive
{
    __host__ __device__
    bool operator()(const int &x)
    {
        return x >= 0;
    }
};

thrust::partition(thrust::host, // if you want to test on the host
                  thrust::make_zip_iterator(thrust::make_tuple(keyVec.begin(), valVec.begin())),
                  thrust::make_zip_iterator(thrust::make_tuple(keyVec.end(), valVec.end())),
                  keyVec.begin(),
                  is_positive());

This returns:

Before: keyVec = 0 -1 2 -3 4 -5 6 -7 8 -9
        valVec = 0  1 2  3 4  5 6  7 8  9

After:  keyVec = 0 2 4 6 8 -5 -3 -7 -1 -9
        valVec = 0 2 4 6 8  5  3  7  1  9

Note that the two partitions are not necessarily sorted. Also, the order may differ between the original vectors and the partitions. If this is important to you, you can use thrust::stable_partition:

stable_partition differs from partition in that stable_partition is guaranteed to preserve relative order. That is, if x and y are elements in [first, last) such that pred(x) == pred(y), and if x precedes y, then it will still be true after stable_partition that x precedes y.
If you want a complete example, here it is:

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/partition.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>

struct is_positive
{
    __host__ __device__
    bool operator()(const int &x)
    {
        return x >= 0;
    }
};

void print_vec(const thrust::host_vector<int>& v)
{
    for (size_t i = 0; i < v.size(); i++)
        std::cout << " " << v[i];
    std::cout << "\n";
}

int main()
{
    const int N = 10;
    thrust::host_vector<int> keyVec(N);
    thrust::host_vector<int> valVec(N);

    int sign = 1;
    for (int i = 0; i < N; ++i)
    {
        keyVec[i] = sign * i;
        valVec[i] = i;
        sign *= -1;
    }

    // Copy host to device
    thrust::device_vector<int> d_keyVec = keyVec;
    thrust::device_vector<int> d_valVec = valVec;

    std::cout << "Before:\n keyVec = ";
    print_vec(keyVec);
    std::cout << " valVec = ";
    print_vec(valVec);

    // Partition key-val on device
    thrust::partition(thrust::make_zip_iterator(thrust::make_tuple(d_keyVec.begin(), d_valVec.begin())),
                      thrust::make_zip_iterator(thrust::make_tuple(d_keyVec.end(), d_valVec.end())),
                      d_keyVec.begin(),
                      is_positive());

    // Copy result back to host
    keyVec = d_keyVec;
    valVec = d_valVec;

    std::cout << "After:\n keyVec = ";
    print_vec(keyVec);
    std::cout << " valVec = ";
    print_vec(valVec);
}

UPDATE

I made a quick comparison with the thrust::sort_by_key version, and the thrust::partition implementation does seem to be faster (which is what we could naturally expect). Here is what I obtain on NVIDIA Visual Profiler, with N = 1024 * 1024, with the sort version on the left and the partition version on the right. You may want to do the same kind of tests on your own.
How about this?

1. Count how many positive numbers there are to determine the inflexion point.
2. Evenly divide each side of the inflexion point into groups (negative groups are all the same length, but a different length to the positive groups; these groups are the memory chunks for the results).
3. Use one kernel call (one thread) per chunk pair.
4. Each kernel swaps any out-of-place elements in the input groups into the desired output groups. You will need to flag any chunks that have more swaps than the maximum, so that you can fix them during subsequent iterations.
5. Repeat until done.

Memory traffic is swaps only (from original element position to sorted position). I don't know if this algorithm sounds like anything already defined...
You should be able to achieve this in thrust simply with a modification of your comparison operator:

struct my_compare
{
    __device__ __host__
    bool operator()(const float x, const float y) const
    {
        // Non-negative keys order before negative ones. The comparator
        // must be a strict weak ordering, so it returns false for two
        // keys on the same side of zero (the original !((x<0)&&(y>0))
        // returned true for equal keys, which sort is not allowed).
        return (x >= 0.0f) && (y < 0.0f);
    }
};

thrust::sort_by_key(thrust::device_ptr<float>(A),
                    thrust::device_ptr<float>(A + N),
                    thrust::device_ptr<myStruct>(B),
                    my_compare());
Fast block placement algorithm, advice needed?
I need to emulate the window placement strategy of the Fluxbox window manager.

As a rough guide, visualize randomly sized windows filling up the screen one at a time, where the rough size of each results in an average of 80 windows on screen without any window overlapping another.

If you have Fluxbox and Xterm installed on your system, you can try the xwinmidiarptoy BASH script to see a rough prototype of what I want happening. See the xwinmidiarptoy.txt notes I've written about it, explaining what it does and how it should be used.

It is important to note that windows will close, and the space that closed windows previously occupied becomes available once more for the placement of new windows.

The algorithm needs to be an Online Algorithm, processing data "piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start."

The Fluxbox window placement strategy has three binary options which I want to emulate:

- Windows build horizontal rows or vertical columns (potentially)
- Windows are placed from left to right or right to left
- Windows are placed from top to bottom or bottom to top

Differences between the target algorithm and a window-placement algorithm: the coordinate units are not pixels. The grid within which blocks will be placed will be 128 x 128 units. Furthermore, the placement area may be further shrunk by a boundary area placed within the grid.

Why is the algorithm a problem? It needs to operate to the deadlines of a real-time thread in an audio application. At this moment I am only concerned with getting a fast algorithm; don't concern yourself with the implications of real-time threads and all the hurdles in programming that brings. And although the algorithm will never place a window which overlaps another, the user will be able to place and move certain types of blocks, so overlapping windows will exist.
The data structure used for storing the windows and/or free space needs to be able to handle this overlap.

So far I have two choices, for which I have built loose prototypes:

1) A port of the Fluxbox placement algorithm into my code. The problem with this is that the client (my program) gets kicked out of the audio server (JACK) when I try placing the worst-case scenario of 256 blocks using the algorithm. This algorithm performs over 14000 full (linear) scans of the list of blocks already placed when placing the 256th window. For a demonstration of this I created a program called text_boxer-0.0.2.tar.bz2 which takes a text file as input and arranges it within ASCII boxes. Issue make to build it. A little unfriendly; use --help (or any other invalid option) for a list of command line options. You must specify the text file by using the option.

2) My alternative approach, only partially implemented. This approach uses a data structure for each area of rectangular free unused space (the list of windows can be entirely separate, and is not required for testing of this algorithm). The data structure acts as a node in a doubly linked list (with sorted insertion), as well as containing the coordinates of the top-left corner, and the width and height. Furthermore, each block data structure also contains four links which connect to each immediately adjacent (touching) block on each of the four sides.

IMPORTANT RULE: Each block may only touch one block per side. This is a rule specific to the algorithm's way of storing free unused space and bears no relation to how many actual windows may touch each other.

The problem with this approach is that it's very complex. I have implemented the straightforward cases where 1) space is removed from one corner of a block, and 2) neighbouring blocks are split so that the IMPORTANT RULE is adhered to.

The less straightforward case, where the space to be removed can only be found within a column or row of boxes, is only partially implemented: if one of the blocks to be removed is an exact fit for width (i.e. column) or height (i.e. row), then problems occur. And don't even mention the fact that this only checks columns one box wide and rows one box tall.

I've implemented this algorithm in C, the language I am using for this project (I've not used C++ for a few years and am uncomfortable using it after having focused all my attention on C development; it's a hobby). The implementation is 700+ lines of code (including plenty of blank lines, brace lines, comments, etc.). The implementation only works for the horizontal-rows + left-right + top-bottom placement strategy.

So I've either got to add some way of making these 700+ lines of code work for the other 7 placement strategy options, or I'm going to have to duplicate those 700+ lines of code for the other seven options. Neither of these is attractive: the first, because the existing code is complex enough; the second, because of bloat.

The algorithm is not even at a stage where I can use it in the real-time worst-case scenario, because of missing functionality, so I still don't know if it actually performs better or worse than the first approach. The current state of the C implementation of this algorithm is freespace.c. I use gcc -O0 -ggdb freespace.c to build, and run it in an xterm sized to at least 124 x 60 chars.

What else is there? I've skimmed over and discounted:

- Bin packing algorithms: their emphasis on optimal fit does not match the requirements of this algorithm.
- Recursive bisection placement algorithms: sounds promising, but these are for circuit design. Their emphasis is optimal wire length.

Both of these, especially the latter, require all elements to be placed/packed to be known before the algorithm begins.

What are your thoughts on this? How would you approach it? What other algorithms should I look at? Or even what concepts should I research, seeing as I've not studied computer science/software engineering? Please ask questions in comments if further information is needed.

Further ideas developed since asking this question: some combination of my "alternative algorithm" with a spatial hashmap for identifying whether a large window to be placed would cover several blocks of free space.
I would consider some kind of spatial hashing structure. Imagine your entire free space is gridded coarsely; call the cells blocks. As windows come and go, they occupy certain sets of contiguous rectangular blocks. For each block, keep track of the largest unused rectangle incident to each corner, so you need to store 2*4 real numbers per block. For an empty block, the rectangles at each corner have size equal to the block. Thus, a block can only be "used up" at its corners, and so at most 4 windows can sit in any block.

Now each time you add a window, you have to search for a rectangular set of blocks for which the window will fit, and when you do, update the free corner sizes. You should size your blocks so that a handful (~4x4) of them fit into a typical window. For each window, keep track of which blocks it touches (you only need to keep track of extents), as well as which windows touch a given block (at most 4, in this algorithm). There is an obvious tradeoff between the granularity of the blocks and the amount of work per window insertion/removal.

When removing a window, loop over all blocks it touches, and for each block, recompute the free corner sizes (you know which windows touch it). This is fast since the inner loop is at most length 4.

I imagine a data structure like

struct block {
    int free_x[4];    // 0 = top left,    1 = top right,
    int free_y[4];    // 2 = bottom left, 3 = bottom right
    int n_windows;    // number of windows that occupy this block
    int window_id[4]; // IDs of windows that occupy this block
};

block blocks[NX][NY];

struct window {
    int id;
    int used_block_x[2]; // 0 = first index of used block,
    int used_block_y[2]; // 1 = last index of used block
};

Edit: Here is a picture: It shows two example blocks. The colored dots indicate the corners of the block, and the arrows emanating from them indicate the extents of the largest free rectangle from that corner.
You mentioned in the edit that the grid on which your windows will be placed is already quite coarse (127x127), so the block sizes would probably be something like 4 grid cells on a side, which probably wouldn't gain you much. This method is suitable if your window corner coordinates can take on a lot of values (I was thinking they would be pixels), but not so much in your case. You can still try it, since it's simple. You would probably want to also keep a list of completely empty blocks so that if a window comes in that is larger than 2 block widths, then you look first in that list before looking for some suitable free space in the block grid.
After some false starts, I have eventually arrived here. Here the use of data structures for storing rectangular areas of free space has been abandoned. Instead, there is a 2D array with 128 x 128 elements which achieves the same result, but with much less complexity.

The following function scans the array for an area width * height in size. At the first position it finds, it writes the top-left coordinates to where resultx and resulty point.

_Bool freespace_remove(freespace* fs, int width, int height,
                       int* resultx, int* resulty)
{
    int x = 0;
    int y = 0;
    const int rx = FSWIDTH - width;
    const int by = FSHEIGHT - height;
    *resultx = -1;
    *resulty = -1;
    char* buf[height];

    for (y = 0; y < by; ++y)
    {
        x = 0;
        char* scanx = fs->buf[y];
        while (x < rx)
        {
            while (x < rx && *(scanx + x))
                ++x;
            int w, h;
            for (h = 0; h < height; ++h)
                buf[h] = fs->buf[y + h] + x;
            _Bool usable = true;
            w = 0;
            while (usable && w < width)
            {
                h = 0;
                while (usable && h < height)
                    if (*(buf[h++] + w))
                        usable = false;
                ++w;
            }
            if (usable)
            {
                for (w = 0; w < width; ++w)
                    for (h = 0; h < height; ++h)
                        *(buf[h] + w) = 1;
                *resultx = x;
                *resulty = y;
                return true;
            }
            x += w;
        }
    }
    return false;
}

The 2D array is zero-initialized. Any areas in the array where the space is used are set to 1. This structure and function work independently of the actual list of windows occupying the areas marked with 1s.

The advantages of this method are its simplicity. It uses only one data structure: an array. The function is short, and should not be too difficult to adapt to handle the remaining placement options (here it only handles Row Smart + Left to Right + Top to Bottom). My initial tests also look promising on the speed front.

Though I don't think this would be suitable for a window manager placing windows on, for example, a 1600 x 1200 desktop with pixel accuracy, for my purposes I believe it is going to be much better than any of the previous methods I have tried.
Compilable test code here: http://jwm-art.net/art/text/freespace_grid.c (in Linux I use gcc -ggdb -O0 freespace_grid.c to compile)
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FSWIDTH 128
#define FSHEIGHT 128

#ifdef USE_64BIT_ARRAY
#define FSBUFBITS 64
#define FSBUFWIDTH 2
typedef uint64_t fsbuf_type;
#define TRAILING_ZEROS( v ) __builtin_ctzl(( v ))
#define LEADING_ONES( v )   __builtin_clzl(~( v ))
#else
#ifdef USE_32BIT_ARRAY
#define FSBUFBITS 32
#define FSBUFWIDTH 4
typedef uint32_t fsbuf_type;
#define TRAILING_ZEROS( v ) __builtin_ctz(( v ))
#define LEADING_ONES( v )   __builtin_clz(~( v ))
#else
#ifdef USE_16BIT_ARRAY
#define FSBUFBITS 16
#define FSBUFWIDTH 8
typedef uint16_t fsbuf_type;
#define TRAILING_ZEROS( v ) __builtin_ctz( 0xffff0000 | ( v ))
#define LEADING_ONES( v )   __builtin_clz(~( v ) << 16)
#else
#ifdef USE_8BIT_ARRAY
#define FSBUFBITS 8
#define FSBUFWIDTH 16
typedef uint8_t fsbuf_type;
#define TRAILING_ZEROS( v ) __builtin_ctz( 0xffffff00 | ( v ))
#define LEADING_ONES( v )   __builtin_clz(~( v ) << 24)
#else
#define FSBUFBITS 1
#define FSBUFWIDTH 128
typedef unsigned char fsbuf_type;
#define TRAILING_ZEROS( v ) (( v ) ? 0 : 1)
#define LEADING_ONES( v )   (( v ) ? 1 : 0)
#endif
#endif
#endif
#endif

static const fsbuf_type fsbuf_max  = ~(fsbuf_type)0;
static const fsbuf_type fsbuf_high = (fsbuf_type)1 << (FSBUFBITS - 1);

typedef struct freespacegrid
{
    fsbuf_type buf[FSHEIGHT][FSBUFWIDTH];
    _Bool left_to_right;
    _Bool top_to_bottom;
} freespace;

void freespace_dump(freespace* fs)
{
    int x, y;
    for (y = 0; y < FSHEIGHT; ++y)
    {
        for (x = 0; x < FSBUFWIDTH; ++x)
        {
            fsbuf_type i = FSBUFBITS;
            fsbuf_type b = fs->buf[y][x];
            for (; i != 0; --i, b <<= 1)
                putchar(b & fsbuf_high ? '#' : '/');
            /*
            if (x + 1 < FSBUFWIDTH)
                putchar('|');
            */
        }
        putchar('\n');
    }
}

freespace* freespace_new(void)
{
    freespace* fs = malloc(sizeof(*fs));
    if (!fs)
        return 0;
    int y;
    for (y = 0; y < FSHEIGHT; ++y)
        memset(&fs->buf[y][0], 0, sizeof(fsbuf_type) * FSBUFWIDTH);
    fs->left_to_right = true;
    fs->top_to_bottom = true;
    return fs;
}

void freespace_delete(freespace* fs)
{
    if (!fs)
        return;
    free(fs);
}

/* would be private function: */
void fs_set_buffer(fsbuf_type buf[FSHEIGHT][FSBUFWIDTH],
                   unsigned x, unsigned y1, unsigned xoffset,
                   unsigned width, unsigned height)
{
    fsbuf_type v;
    unsigned y;
    for (; width > 0 && x < FSBUFWIDTH; ++x)
    {
        if (width < xoffset)
            v = (((fsbuf_type)1 << width) - 1) << (xoffset - width);
        else if (xoffset < FSBUFBITS)
            v = ((fsbuf_type)1 << xoffset) - 1;
        else
            v = fsbuf_max;
        for (y = y1; y < y1 + height; ++y)
        {
#ifdef FREESPACE_DEBUG
            if (buf[y][x] & v)
                printf("**** over-writing area ****\n");
#endif
            buf[y][x] |= v;
        }
        if (width < xoffset)
            return;
        width -= xoffset;
        xoffset = FSBUFBITS;
    }
}

_Bool freespace_remove(freespace* fs, unsigned width, unsigned height,
                       int* resultx, int* resulty)
{
    unsigned x, x1, y;
    unsigned w, h;
    unsigned xoffset, x1offset;
    unsigned tz; /* trailing zeros */
    fsbuf_type* xptr;
    fsbuf_type mask = 0;
    fsbuf_type v;
    _Bool scanning = false;
    _Bool offset = false;
    *resultx = -1;
    *resulty = -1;

    for (y = 0; y < (unsigned)FSHEIGHT - height; ++y)
    {
        scanning = false;
        xptr = &fs->buf[y][0];
        for (x = 0; x < FSBUFWIDTH; ++x, ++xptr)
        {
            if (*xptr == fsbuf_max)
            {
                scanning = false;
                continue;
            }
            if (!scanning)
            {
                scanning = true;
                x1 = x;
                x1offset = xoffset = FSBUFBITS;
                w = width;
            }
retry:
            if (w < xoffset)
                mask = (((fsbuf_type)1 << w) - 1) << (xoffset - w);
            else if (xoffset < FSBUFBITS)
                mask = ((fsbuf_type)1 << xoffset) - 1;
            else
                mask = fsbuf_max;
            offset = false;
            for (h = 0; h < height; ++h)
            {
                v = fs->buf[y + h][x] & mask;
                if (v)
                {
                    tz = TRAILING_ZEROS(v);
                    offset = true;
                    break;
                }
            }
            if (offset)
            {
                if (tz)
                {
                    x1 = x;
                    w = width;
                    x1offset = xoffset = tz;
                    goto retry;
                }
                scanning = false;
            }
            else
            {
                if (w <= xoffset) /***** RESULT! *****/
                {
                    fs_set_buffer(fs->buf, x1, y, x1offset, width, height);
                    *resultx = x1 * FSBUFBITS + (FSBUFBITS - x1offset);
                    *resulty = y;
                    return true;
                }
                w -= xoffset;
                xoffset = FSBUFBITS;
            }
        }
    }
    return false;
}

int main(int argc, char** argv)
{
    int x[1999];
    int y[1999];
    int w[1999];
    int h[1999];
    int i;
    freespace* fs = freespace_new();
    for (i = 0; i < 1999; ++i) /* the original "++i, ++u" referenced an undeclared u */
    {
        w[i] = rand() % 18 + 4;
        h[i] = rand() % 18 + 4;
        freespace_remove(fs, w[i], h[i], &x[i], &y[i]);
        /*
        freespace_dump(fs);
        printf("w:%d h:%d x:%d y:%d\n", w[i], h[i], x[i], y[i]);
        if (x[i] == -1)
            printf("not removed space %d\n", i);
        getchar();
        */
    }
    freespace_dump(fs);
    freespace_delete(fs);
    return 0;
}

The above code requires one of USE_64BIT_ARRAY, USE_32BIT_ARRAY, USE_16BIT_ARRAY, USE_8BIT_ARRAY to be defined; otherwise it falls back to using only the high bit of an unsigned char for storing the state of grid cells.

The function fs_set_buffer will not be declared in the header, and will become static within the implementation when this code gets split between .h and .c files. A more user-friendly function hiding the implementation details will be provided for removing used space from the grid.

Overall, this implementation is faster without optimization than my previous answer with maximum optimization (using GCC on 64-bit Gentoo, optimization options -O0 and -O3 respectively).

Regarding USE_NNBIT_ARRAY and the different bit sizes, I used two different methods of timing the code, each making 1999 calls to freespace_remove. Timing main() using the Unix time command (and disabling any output in the code) seemed to prove my expectation correct: that higher bit sizes are faster. On the other hand, timing individual calls to freespace_remove (using gettimeofday) and comparing the maximum time taken over the 1999 calls seemed to indicate that lower bit sizes were faster. This has only been tested on a 64-bit system (Intel Dual Core II).