Estimating the feasibility of filling the Main-Cube's interior surfaces with sub-cubes without volume interference - algorithm

I have a question that I think is genuinely hard to compute, and it is part of my project: estimating, at the very start of the process, the feasibility of laying out small boxes on the interior surfaces of a big empty box (the "Main-Cube"). I have a big extruded-cut cube (an empty box, which we call the "Main-Cube", shown in the figure below) and a variable number of small solid boxes (the quantity is predefined by the end user as input data; we call them "sub-cubes") that must stick to the interior surfaces (mated/projected/connected to a surface). I want to know when this problem certainly has no answer (because the small boxes crowd the six interior faces of the Main-Cube), so that the end user is told up front that no feasible answer can exist (a feasible answer means no volume interference/violation) and does not waste time on further computation.
For example, the Main-Cube dimensions are 1000 mm x 1000 mm x 1000 mm.
The total Main-Cube volume (ignoring the 10 mm wall/mantle thickness) is approximately 1 cubic meter, or 1,000,000,000 cubic millimeters.
I also computed the total volume of the small boxes, but the fraction
(total volume of all small boxes) / (total Main-Cube volume)
is not really useful.
I also tried summing the minimum face area of each small box and computing the fraction
(sum of the minimum face areas of all small boxes) / (6 x 1000 mm x 1000 mm),
but because of the shared edges of the Main-Cube this is not useful either.
So any idea or concept is welcome :) that can guarantee that, with this much used area, definitely no solution can be found unless some small boxes are removed and the check is repeated.
Figure: crowding scenario.
Figure: low dispersion, simple scenario.
P.S.: As you can see in the table below (containing the small- and big-box dimensions), roughly 44% of the interior surface area and 26% of the volume is filled (used by 90 small boxes), yet the problem now rarely has a feasible solution, and it is really hard to say whether increasing from 90 to 100 boxes still leaves a feasible solution; neither 26% nor 44% captures this.
Specifications Table
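For reference, here is a minimal sketch of the two necessary-condition checks described above (volume fraction and minimum-face-area fraction). Both can only prove that a layout is certainly infeasible when a fraction exceeds 1; as noted above, fractions below 1 prove nothing. The box dimensions in the example are illustrative only.

#include <stdio.h>

typedef struct { double l, w, h; } Box;          /* sub-cube dimensions in mm */

static double min2(double a, double b) { return a < b ? a : b; }

/* Smallest face area of a box: the face that would be mated to a wall. */
static double min_face_area(Box b)
{
    return min2(b.l * b.w, min2(b.l * b.h, b.w * b.h));
}

int main(void)
{
    double inner = 1000.0;                        /* interior edge length of the Main-Cube [mm] */
    double wall_area = 6.0 * inner * inner;       /* total interior surface area */
    double volume = inner * inner * inner;        /* interior volume */

    Box boxes[] = { {200, 150, 100}, {300, 100, 100} };   /* example sub-cubes */
    int n = (int)(sizeof boxes / sizeof boxes[0]);

    double area_sum = 0.0, vol_sum = 0.0;
    for (int i = 0; i < n; i++) {
        area_sum += min_face_area(boxes[i]);
        vol_sum  += boxes[i].l * boxes[i].w * boxes[i].h;
    }

    printf("area fraction   = %.3f\n", area_sum / wall_area);
    printf("volume fraction = %.3f\n", vol_sum / volume);

    /* Necessary conditions only: exceeding 1 means certainly infeasible. */
    if (area_sum > wall_area || vol_sum > volume)
        printf("certainly no feasible layout\n");
    return 0;
}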

Related

Formula for procedurally generating the location of planets in a game

I want to develop a game where the universe is at most a 65536 x 65536 grid. I do not want the universe to be random; what I want is for it to be procedurally generated according to location. What it should generate is a number from 0 to 15.
0 means empty space. Most of the universe (probably 50-80%) is empty space.
1 - 9 a planet of that technology level
10-15 various anomalies (black hole, star, etc.)
Given an X address in the range 0x8000-0xFFFF (negative), 0, or 1-0x7FFF (positive), and the same range for the Y address, the function returns a number from 0 to 15. Presumably this would place planets nearer to 0,0 more plentifully than those at a greater distance.
The idea being that the function is called with the two values and returns the planet number. I used to have a function to do this, but it has gotten lost over various moves.
While the board could be that big, considering how easy it would be to get lost, I'll probably cut the size to 1200 in both directions, -600 to +600. Even that would be huge.
I've tried a number of times, but I've come to the conclusion that I lack the math skills to do this. It's probably no more than 10 lines. As it is intended to be multiplayer, it'll probably be either a PHP application on the back end or a desktop application connecting to a server.
Any help would be appreciated. I can probably read any commonly used programming language you might use.
Paul Robinson
See How to draw sky chart? for the planetary position math. Pay special attention to the image with equations; you can use it to compute the period of your planet from its distance and mass relative to the system's central mass. For a simple circular orbit, just match the centripetal force with gravity, as I did here:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
So for example:
G = 6.67384e-11;                               // gravitational constant [m^3 kg^-1 s^-2]
v = sqrt(G*M/a);                               // circular orbital speed at radius a around central mass M
T = sqrt((4.0*M_PI*M_PI*a*a*a)/(G*(m+M)));     // orbital period (Kepler's third law)
pos = (a,0,0);                                 // start position (3D vector)
vel = (0,sqrt(G*M/a),0);                       // start velocity, perpendicular to pos
The distribution of planets and their sizes follows specific (empirically obtained) rules (that is one of the reasons we are still looking for a 10th planet). I can't remember the name of the rule, but from a quick look on Google Images, the plot from here can be used too:
Distribution of the planets in the solar system according to their mass and their distance from the Sun; the distances (X-axis) are in AU and the masses (Y-axis) in yotta units (10^24).
Jupiter's mass is M = 1.898e27 kg, so the mass units are 10^24 kg.
So just match your PRNG output to such a curve and be done with it.
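Along those lines, here is a minimal sketch of a deterministic, location-based generator for the original question, using a simple integer hash so the same (x, y) always yields the same cell. The hash constants, the Manhattan-distance falloff, and all thresholds are illustrative assumptions, not taken from any particular library; they would need tuning to match the desired distribution curve.

#include <stdint.h>
#include <stdlib.h>

/* Deterministic hash of a coordinate pair; the odd multipliers are arbitrary
   mixing constants, not from any standard. */
static uint32_t hash2d(int16_t x, int16_t y)
{
    uint32_t h = (uint32_t)(uint16_t)x * 374761393u
               + (uint32_t)(uint16_t)y * 668265263u;
    h ^= h >> 13;
    h *= 1274126177u;
    return h ^ (h >> 16);
}

/* Returns 0 (empty space), 1-9 (planet of that tech level) or 10-15 (anomaly).
   Non-empty cells become rarer with distance from (0,0). */
int cell_at(int16_t x, int16_t y)
{
    uint32_t h = hash2d(x, y);
    uint32_t dist = (uint32_t)abs(x) + (uint32_t)abs(y);          /* Manhattan distance */
    uint32_t occupied_pct = dist / 32 < 35 ? 40 - dist / 32 : 5;  /* 40% near origin, 5% far out */

    if (h % 100 >= occupied_pct)
        return 0;                           /* empty space (60-95% of cells) */
    if ((h >> 8) % 100 < 80)
        return 1 + (int)((h >> 16) % 9);    /* planet, level 1-9 */
    return 10 + (int)((h >> 16) % 6);       /* anomaly */
}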

Reverse-Step Algorithm / Anti-Banding Algorithm for Terrain?

I took the data from here (Nasa - Topography) and here (Nasa - Bathymetric) and used it to create a 3D model of the Earth's entire surface (both above and below water).
Here's what I got:
As you can see, it is super jagged.
The problem is that because I'm using greyscale images, I only have 512 distinct levels to work with (256 x 2, one image above and one below sea level). When going from the ocean floor to the highest peak, you're obviously going to hit more than 512 distinct elevations. So it's basically an unwanted step function.
Had they used all RGB channels this wouldn't be a problem, but then the image wouldn't be very "human-readable"
Smoothing in general is a possibility, but not a great one, because it will drastically lower the quality of cliffs, peaks, canyons, etc.
Here's the thing: we know that each pixel is within (maxHeight-minHeight)/512 (= maxoffset) of the actual correct value, since, as stated, it has essentially gone through an unwanted step function. Of course, mathematically a step function is irreversible - however, that doesn't stop us from trying!
Here are some of my thoughts on how this might work:
Find the average height of surrounding pixels, for some radius. Calculate the difference between this pixel's current value and the calculated average. Do nothing with this value yet.
While calculating this, store which pixel has the greatest difference.
Then, "normalize" all values such that this greatest difference is (maxHeight-minHeight)/512: the maxoffset: the max that a pixel could be off. Due to outliers, this "normalization" shouldn't be linear, but such that the average is 85% (or something) of this maxoffset.
Peaks (pixels that are higher than all surrounding pixels) and Basins (same idea except lower) get excluded from this process, as they'll be outliers and shouldn't change much anyhow (or undergo a process of their own).
That might not work. Alternatively, I could still use basic "average smoothing" with the following rules (a sketch follows after the list):
No smoothing of peaks (pixels higher than all surrounding pixels), basins (the same idea, except lower), or cliffs (this is much more difficult and may not happen, but the idea is to check whether pixels have a drop on one side and roughly same-height pixels on the other side for some distance).
If the pixel has significantly more pixels around the same height than not, give greater weight to those nearly-same-height pixels.
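As a sketch of those rules (this is only an illustration under assumptions of mine: an in/out height grid of fixed size, a tolerance tol equal to the maxoffset described above, and made-up weights for the near-same-height neighbours):

#include <math.h>

#define W 512
#define H 512

/* One pass of constrained smoothing: local peaks and basins are left
   untouched, and neighbours within `tol` of the centre value get a larger
   weight than the rest.  Border pixels are simply not modified here. */
void smooth_pass(double in[H][W], double out[H][W], double tol)
{
    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            double c = in[y][x];
            int higher = 0, lower = 0;
            double sum = c, wsum = 1.0;            /* centre pixel, weight 1 */

            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    double n = in[y + dy][x + dx];
                    if (n > c) higher++;
                    if (n < c) lower++;
                    double w = (fabs(n - c) <= tol) ? 2.0 : 0.5;  /* favour near-same-height pixels */
                    sum  += w * n;
                    wsum += w;
                }
            }

            if (higher == 0 || lower == 0)
                out[y][x] = c;                     /* peak or basin: keep as-is */
            else
                out[y][x] = sum / wsum;
        }
    }
}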
I'm also looking into finding better data, but I'm not confident that I will, because I require bathymetric data and most GPS APIs are topography-exclusive. In any case, this is an interesting problem nonetheless, and I'm curious whether there are already some good algorithms for it.

Kalman Filter on a set of points belonging to the same object?

Let's say you're tracking a set of 20 segments with the same length belonging to the same 3D plane.
To visualize, imagine that you're drawing a set of segments of length 10 cm randomly on a sheet of paper. And make someone move this sheet in front of the camera.
Let's say those segments are represented by two points A and B.
Let's assume we manage to track A_t and B_t for all the segments. The tracked points aren't stable from frame to frame resulting in occasional jitter which might be solved by a Kalman filter.
My questions are concerning the state vector:
A Kalman filter for A and B of each segment (with 20 segments this results in 40 KFs) is an obvious solution, but it looks too heavy (knowing that this should run in real time).
Since all the tracked points have the same properties (they belong to the same 3D plane and have the same length), isn't it possible to create one big KF with all those variables?
Thanks.
Runtime: keep in mind that the Kalman equations involve matrix multiplications and one inversion, so having 40 states means having some 40x40 matrices. That will always take longer to compute than running 40 one-state filters, where your matrices are 1x1 (scalar). Anyway, running the big filter only makes sense if you know of a mathematical relationship between your states (i.e. correlation); otherwise its output is the same as running the 40 one-state filters.
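For scale, here is a minimal sketch of one such one-state (scalar) filter under a constant-position model; q and r are the process and measurement noise variances, and all names are illustrative. Running many of these independent scalar filters, one per state, is the lightweight option described above.

typedef struct {
    double x;   /* state estimate */
    double p;   /* estimate variance */
    double q;   /* process noise variance */
    double r;   /* measurement noise variance */
} Kalman1D;

/* One predict/update cycle of a scalar, constant-position Kalman filter:
   the predicted state is the previous estimate, its variance grows by q,
   then the measurement z is blended in via the Kalman gain k. */
double kalman1d_step(Kalman1D *kf, double z)
{
    kf->p += kf->q;                          /* predict */

    double k = kf->p / (kf->p + kf->r);      /* Kalman gain (1x1 "matrix") */
    kf->x += k * (z - kf->x);                /* update state with innovation */
    kf->p *= (1.0 - k);                      /* update variance */
    return kf->x;
}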
With the information given, that's really hard to tell. E.g. if your segments always form a polyline, you could describe that differently, in contrast to knowing nothing about the shape.

How to imitate water on a landscape

I have a double array that contains the ground height and the water height of each 'block' of land, and I am trying to create a function move_water() that will mutate this array so that repeated calls of the function will imitate water moving along the terrain...
My first instinct was:
For each block, look at the nearby 4 other blocks and compare water levels.
Give 1/2 of the water from the middle block to the other 4 blocks (split evenly, but only if they are lower).
This doesn't really work very well, though, and creates some weird wave patterns, as the water level on any given block seems to oscillate between two values.
The water simulation doesn't have to be perfect; I just want it to flow to the lowest point.
Since you say it doesn't have to be perfect, updating in steps defined by how much water has moved might not be a problem, even though the time it takes for half the water to move will vary with the slope and the amount of water. It may still look odd that half of a large amount of water on a steep slope takes the same amount of time as a smaller amount on a gentler slope, but your method may still have potential.
It's not clear to me whether you update one block per call or all of them on each call to move_water; I'm going to assume it's not just one, because that would look odd.
Assuming you process all the blocks, your rule will give different results depending on the order in which you process them. If you just process them in order of increasing x coordinate, I can imagine why you might see unnatural waves (a lower block can gain from one block, then give to another block, then gain again). If, on the other hand, you processed the highest points first, or processed in order of the largest height difference, you might get better results.
You need to consider the combined height of the land and water, and I would suggest trying moving half of the height difference, not half of the total water.
If you haven't already done this, you might find it helps to consider 1 dimension, flat terrain, placing different amounts of water in the block to start - just to make it easier to work out what's happening.
Finally, just moving water to 4 of the surrounding blocks will look a bit odd if you mean up, down, left, and right without water moving diagonally. Once you've got the flow working well in one dimension, consider moving to all 8 nearby blocks in the 2D case (assuming the blocks are in a rectangular grid).
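Putting those suggestions together, here is a minimal sketch of one update step under some assumptions of mine: separate ground[] and water[] grids, flow limited to half of the combined-height difference toward each lower 4-neighbour, and a delta buffer so the result does not depend on the order the blocks are processed in (an alternative to sorting by height as suggested above). The grid size and array names are illustrative.

#define N 64

double ground[N][N];   /* terrain height per block */
double water[N][N];    /* water depth per block    */

/* One step: move water downhill based on the combined surface height
   (ground + water), at most half the height difference and never more
   water than the block actually holds. */
void move_water(void)
{
    static double delta[N][N];
    static const int dx[4] = { 1, -1, 0, 0 };
    static const int dy[4] = { 0, 0, 1, -1 };

    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            delta[y][x] = 0.0;

    for (int y = 0; y < N; y++) {
        for (int x = 0; x < N; x++) {
            double surface = ground[y][x] + water[y][x];
            double avail = water[y][x];
            for (int i = 0; i < 4 && avail > 0.0; i++) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= N || ny >= N) continue;
                double nsurf = ground[ny][nx] + water[ny][nx];
                if (nsurf >= surface) continue;              /* only flow downhill */
                double flow = (surface - nsurf) * 0.5 / 4.0; /* half the gap, shared over 4 sides */
                if (flow > avail) flow = avail;
                delta[y][x]   -= flow;
                delta[ny][nx] += flow;
                avail -= flow;
            }
        }
    }

    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            water[y][x] += delta[y][x];
}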
If you are not concerned about erosion or where the sources of the water are located, then I'd go with the simple solution you got from your last question. You'd have to build a one-dimensional array from your landscape, and after you have the new mean (see my answer there), you run through your two-dimensional array and adjust the heights that fall below that mean value.

What's a good way to generate random clusters and paths?

I'm toying around with writing a random map generator, and am not quite sure how to randomly generate realistic landscapes. I'm working with these sorts of local-scale maps, which presents some interesting problems.
One of the simplest cases is the forest:
                    Sparse   Medium   Dense
Typical trees        50%      70%      80%
Massive trees         —       10%      20%
Light undergrowth    50%      70%      50%
Heavy undergrowth     —       20%      50%
Trees and undergrowth can exist in the same space, so an average sparse forest is 25% typical trees with light undergrowth, 25% typical trees only, 25% light undergrowth only, and 25% open space. Medium and dense forests will take a bit more thinking, but that's not where my problem lies either, since it's all evenly dispersed.
My problem lies in generating clusters and paths, while keeping the percentage constraints. Marshes are a good example of this:
                    Moor   Swamp
Shallow bog          20%    40%
Deep bog              5%    20%
Light undergrowth    30%    20%
Heavy undergrowth    10%    20%
Deep bog squares are usually clustered together and surrounded by an irregular ring of shallow bog squares.
An additional map element, a hedgerow, may also be present, as well as a path of open ground snaking through the bog. Both of these types of map elements (clusters and paths) present problems, as the total composition of the map should contain X% of each element, but the element is not evenly distributed. Other elements, such as streams, ponds, and quicksand, need either cluster- or path-type generation as well.
What technique can I use to generate realistic maps given these constraints?
I'm using C#, FYI (but this isn't a C#-specific question.)
Realistic "random" distribution is often done using Perlin Noise, which can be used to give a distribution with "clumps" like you mention. It works by summing/combining multiple layers of linearly interpolated values from random data points. Each layer (or "octave") has twice as many data points as the last, and confined to a narrower range of values. The result is "realistic" looking random texture.
Here is a beautiful demonstration of the theory behind Perlin Noise by Hugo Elias.
Here is the first thing I found on Perlin Noise in C#.
What you can do is generate a Perlin Noise image and set a "threshold", where anything above a value is "on" and everything below it is "off". What you will end up with is clumps where things are above the threshold, which look irregular and awesome. Simply assign the ones above the threshold to where you want your terrain feature to be.
Here is a demonstration of a program generating a Perlin noise bitmap and then adjusting the cut-off threshold over time. A clear "clumping" is visible. It could be just what you wanted.
Notice that, with a high threshold, very few points are above it and the result is sparse. But as the threshold lowers, those points "grow" into clumps (by the nature of Perlin noise), some of these clumps join each other, and the result is something very natural and terrain-like.
Note that you could also set the "clump factor", or the tendency of features to clump, by setting the "turbulence" of your Perlin Noise function, which basically causes peaks and valleys of your PN function to be accentuated and closer together.
Now, where to set the threshold? The higher the threshold, the lower the percentage of the feature on the final map. The lower the threshold, the higher the percentage. You can mess around with them. You could probably get exact percentages by fiddling around with a little math (it seems that the distribution of values follows a Normal Distribution; I could be wrong). Tweak it until it's just right :)
EDIT: As pointed out in the comments, you can find the exact percentage by creating a cumulative histogram (an index of what percentage of the map lies under each threshold) and picking the threshold that gives you the percentage you need.
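A minimal sketch of that histogram-based threshold search, assuming the noise has already been generated into a flat array of values in [0, 1) (the function name and bucket count are made up for illustration):

#include <stddef.h>

/* Find a threshold such that roughly `coverage` (0..1) of the noise values
   lie above it, by walking a fixed-size histogram from the top down. */
double threshold_for_coverage(const double *noise, size_t n, double coverage)
{
    enum { BUCKETS = 1024 };
    size_t hist[BUCKETS] = { 0 };

    for (size_t i = 0; i < n; i++) {
        int b = (int)(noise[i] * BUCKETS);
        if (b < 0) b = 0;
        if (b >= BUCKETS) b = BUCKETS - 1;
        hist[b]++;
    }

    size_t want = (size_t)(coverage * (double)n);   /* samples wanted above the threshold */
    size_t above = 0;
    for (int b = BUCKETS - 1; b >= 0; b--) {
        above += hist[b];
        if (above >= want)
            return (double)b / BUCKETS;             /* lower edge of this bucket */
    }
    return 0.0;                                     /* coverage ~1: everything is "on" */
}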
The coolest thing here is that you can create features that clump around certain other features (like your marsh features) trivially here -- just use the same Perlin Noise map twice -- the second time, lowering the threshold. The first one will be clumpy, and the second one will be clumpy around the same areas, but with the clumps enlarged (refer to the flash animation posted earlier).
As for other features like hedgerows, you could try modeling simple random walk lines that have a higher tendency to go straight than turn, and place them anywhere randomly on your perlin-based map.
Samples
Here is a sample 50x50 tile Sparse Forest Map. The undergrowth is colored brown and the trees are colored blue (sorry) to make it clear which is which.
For this map I didn't tune the threshold to hit exactly 50%; I only set the threshold at 50% of the maximum. Statistically this averages out to about 50%, but it might not be exact enough for your purposes; see the earlier note for how to hit an exact percentage.
Here is a demo of your marsh features (not including undergrowth, for clarity), with shallow marsh in grey and deep marsh in black:
This is just 50x50, so there are some artifacts from that, but you can see how easily you can make the shallow marsh "grow" from the deep marsh -- simply by adjusting the threshold on the same Perlin map. For this one, I eyeballed the threshold level to give the most eye-pleasing results, but for your own purposes, you could do what was mentioned before.
Here is a marsh map generated from the same Perlin noise map, but stretched out over a 250x250 tile map instead:
I've never done this sort of thing, but here are some thoughts.
You can obtain clusters by biasing random selection toward locations on the grid that are close to existing elements of that type. Assign a default value of 1 to all squares. For each square containing an existing clustered element, add a clustering value to its adjacent squares (the higher the clustering value, the stronger the clustering will be). Then draw the next element of that type at random from the resulting probability distribution over all the squares.
For paths, you could have a similar procedure, except that paths would be extended step-wise (the probability of a path square is finite at squares next to the end of the path and zero everywhere else). Directional paths could be done by increasing the probability of selection in the direction of the path. Meandering paths could have a direction that changes over the course of the random extension (new_direction = mf * old_direction + (1 - mf) * rand_direction, where mf is a momentum factor between 0 and 1); a sketch follows below.
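As a sketch of the meandering-path rule above, treating the direction as an angle and blending it with a random angle by the momentum factor mf (the grid size, step length, and the specific angle blend are illustrative assumptions):

#include <math.h>
#include <stdlib.h>

#define W 50
#define H 50

/* Carve a meandering path into map[][] with a momentum-weighted random walk:
   new_direction = mf * old_direction + (1 - mf) * rand_direction. */
void carve_path(int map[H][W], int start_x, int start_y, double mf, int steps)
{
    double x = start_x, y = start_y;
    double angle = 2.0 * M_PI * rand() / (double)RAND_MAX;     /* initial heading */

    for (int i = 0; i < steps; i++) {
        int ix = (int)x, iy = (int)y;
        if (ix < 0 || iy < 0 || ix >= W || iy >= H)
            break;                                             /* walked off the map */
        map[iy][ix] = 1;                                       /* mark as open-ground path */

        double rand_angle = 2.0 * M_PI * rand() / (double)RAND_MAX;
        angle = mf * angle + (1.0 - mf) * rand_angle;          /* momentum blend */
        x += cos(angle);
        y += sin(angle);
    }
}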
To expand on academicRobot's comments, you could start with a default marsh or forest seed in some of the grid cells and let them grow from the source using correlated random numbers. For instance, a bog cell might have eight adjacent grid cells, each of which has a 90% probability of also being a bog and a 10% probability of being something else. You can let the ecosystem grow out from the seed and adjust the correlation until you get something that looks right. It's probably pretty easy to implement even in a spreadsheet.
You could start by reading the links here. I remember looking at a much better document; I will post it if I find it (it was also based on L-systems).
But that's on the general side; for the particular problem you face, I guess you should model it in terms of:
percentages
other rules (clusters and paths)
The point is that even though you don't know how to construct a map with the given properties, if you are able to evaluate those properties (clustering ratio, path niceness) and score maps on them, you can then brute-force or do some other traversal of the problem space.
If you still want to take a generative approach, then you will have to examine the generative rules a bit more closely; here's an idea that I would pursue:
create patterns of different terrains and terrain covers that have the required properties of 'clusterness', 'pathness', or uniformity
create the patterns in such a way that the values for deep bog are not discrete but carry a probability value; after the pattern has been created, you can normalize this probability so that it produces the required percentage of cover
mix different patterns together
You might have some success for certain types of area with a Voronoi pattern. I've never seen it used to create maps but I have seen it used in a number of similar fields.
