What is the difference between the 1D, 2D and 3D bin packing problems?

What does increasing the dimensions produce? Every box or object will always have 3 dimensions: length, width and height in addition to its weight. So what do the dimensions refer to?

I'm no expert on the matter, so read with prejudice. The dimensionality refers to the dimensionality of the packed items and of the space we work with. Bin packing itself is about 2 core problems:
place as many items as possible into a confined space (bin)
divide items among multiple bins so that as little space as possible is wasted
While #2 is more or less the same for any dimensionality, #1 gets significantly harder, as for each item we need to fit:
1D: x
2D: x, y, a_xy
3D: x, y, z, a_xy, a_yz, a_zx
4D: x, y, z, w, a_xy, a_xz, a_xw, a_yz, a_yw, a_zw
where x, y, z, ... are position coordinates and a_xy, a_yz, a_zx, ... are rotation angles (about the major planes).
So for n items, mp possible positions, ma possible angles and a single bin we have:
1D: O(n · mp)
2D: O(n · mp^2 · ma)
3D: O(n · mp^3 · ma^3)
4D: O(n · mp^4 · ma^6)
As you can see, this grows fast with dimensionality, and even 2D is very hard. To improve speed, heuristics are usually used (like precomputed placement patterns, aligning to some major side of the object, limiting angular positions, etc.), and/or different approaches like field- or gravity-simulation-based ones.
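To make problem #2 concrete, here is a minimal sketch of the classic first-fit decreasing heuristic for the 1D case; the function name and bin representation are my own choices for illustration, not from any specific library:

    def first_fit_decreasing(sizes, capacity):
        """Pack 1D items into as few bins as possible (heuristic, not optimal)."""
        free = []                    # free[i] = remaining space in bin i
        packed = []                  # packed[i] = item sizes placed in bin i
        for size in sorted(sizes, reverse=True):   # biggest items first
            for i, f in enumerate(free):
                if size <= f:        # place in the first existing bin it fits
                    free[i] -= size
                    packed[i].append(size)
                    break
            else:                    # no existing bin fits: open a new one
                free.append(capacity - size)
                packed.append([size])
        return packed

    print(first_fit_decreasing([7, 5, 4, 3, 2, 2], capacity=10))
    # -> [[7, 3], [5, 4], [2, 2]]

The higher-dimensional variants replace the single "does it fit" test with the position/rotation search described above, which is where the combinatorial blow-up comes from.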

Related

How to quickly pack spheres in 3D?

I'm looking for an algorithm for random close packing of spheres in 3D. The trick is that I'd like to pack spheres around a certain number of existing spheres. So for example, given somewhere between 100 and 1000 spheres in 3D (which have fixed positions and sizes; they may overlap, and may be different sizes), I'd like to pack spheres (all same size, positions can be chosen freely) around them (with no overlaps).
The metric for packing quality is the packing density or void fraction. Essentially I'd like the fixed spheres and the packed spheres to occupy a compact volume of space (e.g. roughly spherical, or packed in layers around the fixed spheres) with as few voids in it as possible.
Is there an off-the-shelf algorithm that does this? How would you approach it in a way that balances calculation speed with packing quality?
UPDATE: detail on packing density: this depends on what volume is chosen for the calculation. For this, we're looking to pack a certain number of layers of spheres around the fixed ones. Form a surface of points which lie at exactly a distance d from the surface of the closest fixed sphere; the packing density should be calculated within the volume enclosed by that surface. It's convenient if d is some multiple of the size of the packed spheres. (Assume we can place as many free spheres as needed to fill that volume; any excess ones can be placed anywhere.)
The fixed and all the variable spheres are all pretty similar sizes (let's say within a 2x range from smallest to largest). In practice the degree of overlap of the fixed spheres is also limited: no fixed sphere is closer than a certain distance (around 0.2-0.3 diameters) to any other fixed sphere (so it is guaranteed that they are spread out, and/or only overlap a few neighbors rather than all overlapping each other).
Use a lattice where neighboring points are separated by the diameter of the fill spheres. Any lattice shape meeting the above definition will suffice.
Choose the translation and rotation of the lattice that minimize the offsets of the fixed spheres' centers from lattice points; this produces the world transform.
Fixed Pass 1:
Create a list of all lattice points within each fixed sphere's radius + the diameter of the fill spheres.
For these, keep the positional (origin - point) difference vectors in a list.
Flag the listed lattice points for removal.
Lattice Pass 1:
Combine the distance vectors of any overlapping fixed spheres, i.e. re-base their origin to the overlap point (either a true overlap, or one extended to the fill radius). Store the value on one side and flag it on the other, to permit multiple overlaps.
This is where a decision is needed:
Option 1 - optimize space over time (computationally slow):
Add points out to the adjusted origin radius + fill radius. Then iterate over the lattice points, moving one point at a time away from the others until all spacing conditions are met. If the lattice points implement spring logic, an optimal solution is produced, given enough iterations (N^2 + N). Stop here; done.
Option 2 - pull the remaining points of the lattice in to fill the void:
Warp the lattice near each overlap point (or near the origin if no overlap exists), over as large a region as needed, pulling the points in to fill the gap.
Lattice Pass 2:
Re-add points flagged as removed that are now missing, i.e. those with no other point within fill radius + 1 and not near a fixed sphere (its radius + fill radius). This should be a small number of points.
Lattice Pass 3:
Adjust all lattice positions to move closer to the proper grid spacing. This monotonically decreases the distances between points, limited to >= radius1 + radius2.
Repeat 3-4 (or more) times, applying a tiny random bias offset (-1 to 1 pixel max per dimension) to the points created in the first pass, to avoid any equal-spacing conflicts after the warp. If no suitable gap is created, the process may settle on a poorly optimized solution.
Each fill sphere is centered on a lattice grid point.
I can see many areas for improvement and optimization, but the point was to provide a clear, somewhat fast algorithm that is good, but not guaranteed optimal.
Note the difference between options 1 and 2:
Option 1 creates spheres that collide with other spheres and requires all fill spheres to move multiple times to resolve the collisions.
Option 2 only creates new spheres in empty spaces and moves the rest inward to adapt, resulting in much faster convergence, since there are no collisions to resolve.
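As a rough illustration of the lattice idea (not the full multi-pass algorithm above), here is a minimal sketch that generates a cubic lattice around the fixed spheres and removes points that collide with them. All names, and the cubic (rather than densest) lattice choice, are my own simplifications:

    import numpy as np

    def lattice_fill(fixed_centers, fixed_radii, fill_radius, layers=3):
        """Place fill spheres of radius fill_radius on a cubic lattice around
        the fixed spheres, skipping lattice points that overlap a fixed sphere.
        A cubic lattice is used for simplicity; HCP/FCC would pack denser."""
        centers = np.asarray(fixed_centers, dtype=float)
        radii = np.asarray(fixed_radii, dtype=float)
        step = 2.0 * fill_radius                 # neighbor spacing = fill diameter
        pad = radii.max() + layers * step
        lo = centers.min(axis=0) - pad
        hi = centers.max(axis=0) + pad
        axes = [np.arange(l, h + step, step) for l, h in zip(lo, hi)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
        # distance from every lattice point to every fixed sphere center
        d = np.linalg.norm(grid[:, None, :] - centers[None, :, :], axis=-1)
        # keep points clearing every fixed sphere (distance >= r_fixed + r_fill)
        keep = (d >= radii[None, :] + fill_radius).all(axis=1)
        # keep only points within `layers` shells of the nearest fixed surface
        near = (d - radii[None, :]).min(axis=1) <= layers * step
        return grid[keep & near]

    fills = lattice_fill([(0, 0, 0), (3, 0, 0)], [1.0, 1.2], fill_radius=0.5)
    print(len(fills), "fill spheres placed")

The warping/relaxation passes would then nudge these points toward the fixed spheres to close the gaps the rigid lattice leaves around them.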

Optimally filling a 3D sphere with smaller spheres

I'm trying to optimally fill a 3D spherical volume with "particles" (represented by 3D XYZ vectors) that need to maintain a specific distance from each other, while attempting to minimize the amount of free space present in-between them.
There's one catch though: the particles themselves may fall on the boundary of the spherical volume; they just can't exist outside of it. Ideally, I'd like to maximize the number of particles that fall on this boundary (which makes this a kind of spherical packing problem, I suppose) and then fill the rest of the volume inwards.
Are there any kinds of algorithms out there that can solve this sort of thing? It doesn't need to be exact, but the key here is that the density of the final solution needs to be reasonably accurate (+/- ~5% of a "perfect" solution).
There is no single formula which fills a sphere optimally with n spheres. On this Wikipedia page you can see the optimal configurations for n <= 12. For the optimal configurations for n <= 500 you can view this site. As you can see on these sites, different numbers of spheres have different optimal symmetry groups.
Your constraints are a bit vague, so it's hard to say for sure, but I would try a field approach for this. First see:
Computational complexity and shape nesting
Path generation for non-intersecting disc movement on a plane
How to implement a constraint solver for 2-D geometry?
and sub-links where you can find some examples of this approach.
Now the algo:
place N particles randomly inside the sphere
N should be safely low, so it is smaller than your solution's particle count.
start field simulation
use your solution rules to create attractive and repulsive forces and drive your particles via Newton/d'Alembert physics. Do not forget to add friction (so movement will stop over time) and the sphere volume boundary.
stop when your particles stop moving
i.e. stop if max(|particle_velocity|) < threshold.
now check if all particles are correctly placed
i.e. not breaking any of your rules. If yes, then remember this placement as a solution and try again from #1 with N+1 particles. If not, stop and use the last correct solution.
To speed this up, you can add more particles at a time instead of N+1, similarly to binary search (add 32 particles while you can, then just 16, ...). Also, you do not need to use random locations in #1 for the later runs; you can let the already-placed particles start where they were in the last run's solution.
How to determine the accuracy of the solution is an entirely different matter. As you did not provide exact rules, we can only guess. I would try to estimate the ideal particle density and compute the ideal particle count based on the sphere volume. You can use this also as the initial guess for N, and then compare it with the final N.
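A minimal sketch of the field-simulation loop described above, assuming pairwise repulsion between overlapping particles, a weak pull toward the sphere center (at the origin) as the attractive force, and damped Euler integration; all parameter names and values are illustrative guesses, not tuned:

    import numpy as np

    def relax_particles(n, sphere_radius, min_dist, steps=5000, dt=0.02,
                        friction=0.9, seed=0):
        """Relax n particles inside a sphere until they (roughly) stop moving."""
        rng = np.random.default_rng(seed)
        # random start positions, uniform inside the sphere
        pos = rng.standard_normal((n, 3))
        pos *= (rng.random((n, 1)) ** (1 / 3)) * sphere_radius \
               / np.linalg.norm(pos, axis=1, keepdims=True)
        vel = np.zeros_like(pos)
        for _ in range(steps):
            diff = pos[:, None, :] - pos[None, :, :]
            dist = np.linalg.norm(diff, axis=-1)
            np.fill_diagonal(dist, np.inf)
            # repulsion between particles closer than min_dist
            overlap = np.clip(min_dist - dist, 0.0, None)
            force = (diff / dist[..., None] * overlap[..., None]).sum(axis=1)
            force -= 0.01 * pos                   # weak attraction toward center
            vel = friction * (vel + dt * force)   # damped Euler step
            pos += dt * vel
            # enforce the sphere volume boundary by projecting back inside
            r = np.linalg.norm(pos, axis=1, keepdims=True)
            out = r[:, 0] > sphere_radius
            pos[out] *= sphere_radius / r[out]
            if np.abs(vel).max() < 1e-5:          # particles stopped moving
                break
        return pos

    pos = relax_particles(n=50, sphere_radius=5.0, min_dist=1.0)

The outer loop (increase N, re-run, keep the last valid placement) wraps around this function as described in the steps above.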

3D FFT decomposition into 2D FFTs

Basically I am solving the diffusion equation in 3D using FFT, and one of the ways to parallelise this is to decompose the 3D FFT into 2D FFTs.
As described in this paper: https://cmb.ornl.gov/members/z8g/csproject-report.pdf
The way to decompose a 3d fft would be by doing:
2d fft in xy direction
global transpose
1d fft in z direction
Basically, my problem is that I am not sure how to do this global transpose (I assume it means transposing a 3D array). Has anyone come across this? Thanks a lot.
Think of a 3d cube with nx*ny*nz elements. The 3d FFT of these elements is mathematically 3 stages of 1-d FFTs, one along each axis:
Do ny*nz transforms along the X axis, each transform handles nx elements
nx*nz transforms along the Y axis
nx*ny transforms along the Z axis
More generally, an N-dimensional FFT (N > 1) can be composed of many 1-d FFTs along one axis, followed by an (N-1)-dimensional FFT over the remaining axes.
If the signal is real and you have an FFT that can return the half spectrum, then stage 1 would be about half as expensive (a real FFT is cheaper). The remaining stages need to be complex, but they only need about half as many transforms. So the cost is roughly half.
If your 1d FFT can read input elements that are strided and pack the output into a contiguous buffer, then you end up doing a transposition at each stage.
This is how kissfft performs multi-dimensional FFTs.
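To see that the staged decomposition really equals a full 3-d FFT, here is a quick numpy check (numpy is just a convenient stand-in; the same identity holds for kissfft or FFTW):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((4, 5, 6)) + 1j * rng.standard_normal((4, 5, 6))

    # three stages of 1-d FFTs, one along each axis of the cube
    staged = np.fft.fft(np.fft.fft(np.fft.fft(a, axis=0), axis=1), axis=2)

    # identical (up to rounding) to the full 3-d FFT
    assert np.allclose(staged, np.fft.fftn(a))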
P.S. When I need to get a mental picture of higher dimensions, I think of:
sheets of paper with matrices of numbers (2d), in folders of numbered papers (3d), in numbered filing cabinets (4d), in numbered rooms (5d), in numbered buildings (6d), and so on... That way I can visualize the "filing cabinet" dimension.
The "global transposition" mentioned in the paper is not a mathematical operation, but a rearrangement of data between the distributed memory machines.
The data calculated on one machine in step 1 has to be transferred to all other machines, and vice versa, for the next step. It has nothing to do with a matrix transposition.
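A minimal mpi4py sketch of such a global transpose, assuming a slab decomposition along z, all-complex data, and nx and nz divisible by the number of ranks (mpi4py and this particular array layout are my choice; the paper's code may organize it differently):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    p, rank = comm.Get_size(), comm.Get_rank()

    nx = ny = nz = 8                    # assume nx and nz divisible by p
    rng = np.random.default_rng(rank)
    local = rng.standard_normal((nz // p, ny, nx)) + 0j   # this rank's z-slab

    # step 1: 2-d FFT in the xy planes this rank already owns completely
    local = np.fft.fft2(local, axes=(1, 2))

    # step 2: global transpose. Cut the local slab into p blocks along x,
    # send block j to rank j, then stack the received blocks along z, so
    # each rank ends up owning full z-columns for a subset of x.
    send = np.ascontiguousarray(
        local.reshape(nz // p, ny, p, nx // p).transpose(2, 0, 1, 3))
    recv = np.empty_like(send)
    comm.Alltoall(send, recv)
    local = recv.reshape(nz, ny, nx // p)   # now an x-slab with full z extent

    # step 3: 1-d FFT along z completes the distributed 3-d FFT
    local = np.fft.fft(local, axis=0)

The one collective call (Alltoall) is exactly the all-machines-to-all-machines data rearrangement the paper calls the global transpose.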

Find rectangle in 2d space

I have a set of rectangles of various sizes in 2D space. The number of rectangles may change dynamically from 10 to 100,000, and their positions, as well as their sizes, are often updated.
Which spatial structure would you recommend for finding the rectangle at a given point (x, y)? Assume that the search operation is also performed very often (on mouse move, for example). If you could give a reference comparing various spatial indexing algorithms, or compare their search/build/update performance here, that would be lovely.
I would suggest an R-tree. It is primarily designed for rectangles (or N-dimensional axis-aligned boxes).
Use a quadtree (http://en.wikipedia.org/wiki/Quadtree).
Determine all possible X and Y values at which rectangles start and end. Then build a quadtree upon these values. In each leaf of the quadtree, store which rectangles overlap the coordinate ranges of the leaf. Finding which rectangles contain a query point is then just a matter of finding the leaf containing that coordinate.
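A minimal quadtree sketch along those lines (each rectangle is stored in every leaf it overlaps, and a point query descends to a single leaf); the class names and the max-items/max-depth split rule are my own choices:

    class Rect:
        def __init__(self, x0, y0, x1, y1, data=None):
            self.x0, self.y0, self.x1, self.y1, self.data = x0, y0, x1, y1, data

        def contains(self, x, y):
            return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

        def intersects(self, o):
            return not (o.x1 < self.x0 or o.x0 > self.x1 or
                        o.y1 < self.y0 or o.y0 > self.y1)

    class QuadTree:
        def __init__(self, bounds, max_items=8, max_depth=10):
            self.bounds, self.max_items, self.max_depth = bounds, max_items, max_depth
            self.items, self.children = [], None

        def insert(self, rect):
            if self.children is not None:
                for child in self.children:     # a rect may span several children
                    if child.bounds.intersects(rect):
                        child.insert(rect)
                return
            self.items.append(rect)
            if len(self.items) > self.max_items and self.max_depth > 0:
                self._split()

        def _split(self):
            b = self.bounds
            mx, my = (b.x0 + b.x1) / 2, (b.y0 + b.y1) / 2
            quads = [Rect(b.x0, b.y0, mx, my), Rect(mx, b.y0, b.x1, my),
                     Rect(b.x0, my, mx, b.y1), Rect(mx, my, b.x1, b.y1)]
            self.children = [QuadTree(q, self.max_items, self.max_depth - 1)
                             for q in quads]
            for item in self.items:             # redistribute into the children
                self.insert(item)
            self.items = []

        def query(self, x, y):
            """Return all rectangles containing point (x, y)."""
            if self.children is not None:
                for child in self.children:
                    if child.bounds.contains(x, y):
                        return child.query(x, y)
            return [r for r in self.items if r.contains(x, y)]

    tree = QuadTree(Rect(0, 0, 100, 100))
    tree.insert(Rect(10, 10, 40, 40, data="a"))
    tree.insert(Rect(30, 30, 60, 60, data="b"))
    print([r.data for r in tree.query(35, 35)])   # -> ['a', 'b']

For the frequent-update case in the question, deletion plus re-insertion of the moved rectangles is the simplest approach; an R-tree handles that pattern natively.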

Area divide algorithm

Is there any algorithm to divide an area into n sub-regions, where each sub-region might have a different area?
To put the problem statement formally: suppose you have a rectangular plot. How would you divide the region into n rectangles? The sum of the areas of these sub-rectangles must equal the area of the original rectangular plot (so there wouldn't be any overlaps between the rectangles).
The area of each of these n smaller rectangles is given beforehand.
The restriction is on the width of each sub-rectangle.
This subdivision has to be displayed on, say, a computer screen, which is divided into pixels. So I don't want any dimension of a sub-rectangle to be smaller than a pixel (or maybe 10), as that would be of no use to display.
I was looking at a rectangle packing algorithm here, but it seems to waste space, which I don't want. Does any algorithm exist to solve this problem?
Backtracking doesn't seem to be a good fit here, as only the sub-rectangles' areas are specified, not their dimensions. Or is it?
The integral of a function is the area bounded by the limits, the curve of the function, and the x-axis. Define one side of the rectangle as the x-axis, then find the boundaries for the others. There are plenty of numerical integration libraries around in the language of your choice.
EDIT: some difficulties in trying to illustrate in words...
Assuming, at least, that the containing rectangle is at least as large in area as the sum of the areas of the sub-regions, and that there is no required order of containment:
1. Contain the largest sub-region first, with edges on the axes.
2. Pick the next smaller sub-region.
3. Create the function (integral) to calculate the free area as seen from each axis.
4. With windows/limits equal to the lengths of the sub-region's sides (those facing the axes), slide these windows along the axes away from the origin.
5. Create the function for finding the free space bounded by the outside arms of the cross formed by the windows as they slide along the axes. Efficiency in the use of space is found where the free space is minimal (differentiation).
6. Rotate the sub-region by 90 degrees and repeat from step 3.
7. Place the sub-region in the orientation and location that is most efficient.
8. Repeat from step 2. Stop when the sliding windows report negative free space for the entire domain (allocated space overlaps the placeholder made by the windows).
In theory, this will systematically try to squeeze in sub-regions. Sketch and pseudocode to follow if time permits.
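A different, simpler tactic for the exact-areas case (where the sub-areas sum to the plot's area, as in the question) is a treemap-style slice-and-dice layout: recursively split the list of areas into two groups and cut the rectangle proportionally, alternating cut direction. A minimal sketch, with the function name and the greedy split rule my own:

    def slice_and_dice(x, y, w, h, areas, vertical_cut=True):
        """Split rectangle (x, y, w, h) into len(areas) rectangles whose areas
        match `areas` exactly. Precondition: sum(areas) == w * h."""
        if len(areas) == 1:
            return [(x, y, w, h)]
        total = sum(areas)
        # split the list into two non-empty groups of roughly equal total area
        acc, i = 0.0, 0
        while i < len(areas) - 1 and acc + areas[i] <= total / 2:
            acc += areas[i]
            i += 1
        i = max(i, 1)
        f = sum(areas[:i]) / total      # fraction of the rectangle for group 1
        if vertical_cut:
            a = slice_and_dice(x, y, w * f, h, areas[:i], False)
            b = slice_and_dice(x + w * f, y, w * (1 - f), h, areas[i:], False)
        else:
            a = slice_and_dice(x, y, w, h * f, areas[:i], True)
            b = slice_and_dice(x, y + h * f, w, h * (1 - f), areas[i:], True)
        return a + b

    # 100 x 60 plot split into four rectangles with areas summing to 6000
    print(slice_and_dice(0, 0, 100, 60, [1200, 1800, 600, 2400]))

The minimum-width restriction is not handled here; one would add a check rejecting cuts that produce a side below the minimum and re-balance the split accordingly.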
