How do I synchronize scale and position of map and point layers in d3.js?

I've seen many example maps in d3 where points added to a map automatically align as expected, but in code I've adapted from http://bl.ocks.org/bycoffe/3230965 the points I've added do not line up with the map below.
Example here: https://naltmann.github.io/d3-geo-collision/
(the points should match up with some major US cities)
I'm pretty sure the difference is due to the code around scale/range, but I don't know how to unify them between the map and points.

Aligning the points geographically in your example will be challenging: first you project the points, and then you rescale the projected x,y values:
node.cx = xScale(projection(node.coordinates)[0]);
node.cy = yScale(projection(node.coordinates)[1]);
The ranges of the scales are interesting in that both limits of both ranges are negative; this might be an attempt to rectify the positioning of points given the cumulative nature of the forces acting on them:
.on('tick', function(e) {
  k = 10 * e.alpha;
  for (i = 0; i < nodes.length; i++) {
    nodes[i].x += k * nodes[i].cx;  // added every tick, so the offsets
    nodes[i].y += k * nodes[i].cy;  // accumulate without bound
  }
})
This is the crux of the problem: if we simply remove the scales, the points move farther and farther right and down, because each tick adds another multiple of cx and cy. With every tick the points drift further from recognizable geographic coordinates. That is fine when an entire set of geographic data undergoes the same transformation, but it breaks down when the background map does not undergo the same transformation too.
I'll note that if you want a map width of 1800 and a height of 900, you should set the Mercator projection's translate to [1800/2, 900/2] and its scale to something like 1800/Math.PI/2 (a scale of width/(2*pi) fits the full 360 degrees of longitude into the width).
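A minimal sketch of that configuration (d3.geoMercator is the v4+ name; in v3 it is d3.geo.mercator):

var width = 1800, height = 900;
var projection = d3.geoMercator()
    .translate([width / 2, height / 2]) // centre the map in the SVG
    .scale(width / Math.PI / 2);        // fit 360 degrees of longitude into `width`
// place points straight from the projection, with no extra x/y scales:
node.cx = projection(node.coordinates)[0];
node.cy = projection(node.coordinates)[1];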
The disconnection between geographic coordinates and force coordinates appears to be very difficult to rectify. Any solution for this particular layout and dimensions is likely to fail on different layouts and dimensions.
Instead, I'd suggest using only the projection to place coordinates, without cumulatively adding force offsets to each point. That is the short answer to your question.
For a longer answer, my first thought was to get rid of the collision function and use an anchor point linked to a floating point for each city, only drawing the floating point (using link distance to keep them close). This is likely a cleaner solution, but one that is unfortunately completely different than what you've attempted.
However, my second thought was to keep your example but remove the scales (and the cumulative offsets) and reduce the other forces to zero, so that the collision function can work without interference. Based on that, here's a demonstration of a possible solution.
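A minimal sketch of that idea (assuming the d3 v3 force layout used by the original example, a collide() helper like the one it defines, and an svg selection holding the city circles):

var force = d3.layout.force()
    .nodes(nodes)
    .gravity(0)   // no pull toward the centre
    .charge(0)    // no repulsion; collision alone separates the points
    .on('tick', function() {
      svg.selectAll('circle')
          .each(collide(0.5)) // resolve overlaps between neighbours
          // nudge each point back toward its projected anchor rather than
          // accumulating offsets:
          .attr('cx', function(d) { return d.x += (d.cx - d.x) * 0.1; })
          .attr('cy', function(d) { return d.y += (d.cy - d.y) * 0.1; });
    });
force.start();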

Related

Which way is my yarn oriented?

I have an image processing problem. I have pictures of yarn:
The individual strands are partly (but not completely) aligned. I would like to find the predominant direction in which they are aligned. In the center of the example image, this direction is around 30-34 degrees from horizontal. The result could be the average/median direction for the whole image, or just the average in each local neighborhood (producing a vector map of local directions).
What I've tried: I rotated the image in small steps (1 degree) and calculated statistics in the vertical vs horizontal direction of the rotated image (for example: standard deviation of summed rows or summed columns). I reasoned that when the strands are oriented exactly vertically or exactly horizontally the difference in statistics would be greatest, and so that angle of rotation is the correct direction in the original image. However, for at least several kinds of statistical properties I tried, this did not work.
I further thought that perhaps this wasn't working because there were too many different directions at the same time in the whole image, so I tried it in a small neighborhood. In this case, there is always a very clear preferred direction (different for each neighborhood), but it is not the direction the fibers really go... I can post my sample code but it is basically useless.
I keep thinking there has to be some kind of simple linear algebra/statistical property of the whole image, or some value derived from the 2D FFT that would give the correct direction in one step... but how?
What probably won't work: detecting individual fibers. They are not necessarily the same color, the image can shade from light to dark so edge detectors don't work well, and the image may not even be in focus sometimes. Because of that, it is not always even possible for a human to see individual fibers (see top-right in the example); they kind of have to be detected as a preferred direction in a statistical sense.
You might try doing this in the frequency domain. The output of a Fourier Transform is orientation dependent so, if you have some kind of oriented pattern, you can apply a 2D FFT and you will see a clustering around a specific orientation.
For example, making a greyscale out of your image and performing FFT (with ImageJ) gives this:
You can see a distinct cluster that is oriented orthogonally with respect to the orientation of your yarn. With some pre-processing on your source image, to remove noise and maybe enhance the oriented features, you can probably achieve a much stronger signal in the FFT. Once you have a cluster, you can use something like PCA to determine the vector for the major axis.
For info, this is a technique that is often used to enhance oriented features, such as fingerprints, by applying a selective filter in the FFT and then taking the inverse to obtain a clearer image.
An alternative approach is to try a bank of pre-built Gabor filters with a selection of orientations and frequencies and use the resulting features as a metric for identifying the most likely orientation. There is a scikit-image article that gives some examples here.
UPDATE
Just playing with ImageJ to give an idea of some possible approaches: I started with the FFT shown above, then, in the following image, performed these operations (clockwise from top left): Threshold => Close => Holefill => Erode x 3:
Finally, rather than using PCA, I calculated the spatial moments of the lower left blob using this ImageJ Plugin which handily calculates the orientation of the longest axis based on the 2nd order moment. The result gives an orientation of approximately -38 degrees (with respect to the X axis):
Depending on your frame of reference you can calculate the approximate average orientation of your yarn from this rather than from PCA.
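For reference, the orientation of the longest axis from second-order central moments (standard image-moment math, not specific to that plugin) is

$\theta = \tfrac{1}{2}\,\operatorname{atan2}\left(2\mu_{11},\; \mu_{20} - \mu_{02}\right)$

where $\mu_{pq} = \sum_{x}\sum_{y} (x-\bar{x})^p (y-\bar{y})^q\, I(x,y)$ are the central moments of the blob.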
I tried to use Gabor filters to enhance the orientations of your yarns. The parameters I used are:
phi = x*pi/16; % x = 1, 3, 5, 7
theta = 3;
sigma = 0.65*theta;
filterSize = 3;
The imaginary parts of the convolved images are shown below:
As you mentioned, most orientations lie between 30 and 34 degrees, so the filter with phi = 5*pi/16 (bottom left) yields the best contrast among the four.
I would consider using a Hough Transform for this type of problem, there is a nice write-up here.
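If you want something even simpler to prototype before a full Hough transform, a magnitude-weighted histogram of gradient directions is often enough for a "preferred direction in a statistical sense" (a hypothetical sketch in plain JavaScript over a flat greyscale array; this is a cheaper cousin of the Hough approach, not the Hough transform itself):

// gray: Float32Array of length w*h; returns degrees from horizontal
// (in image coordinates, i.e. with y pointing down)
function dominantAngle(gray, w, h, minMag) {
  var bins = new Float64Array(180);
  for (var y = 1; y < h - 1; y++) {
    for (var x = 1; x < w - 1; x++) {
      var gx = gray[y * w + x + 1] - gray[y * w + x - 1];
      var gy = gray[(y + 1) * w + x] - gray[(y - 1) * w + x];
      var mag = Math.sqrt(gx * gx + gy * gy);
      if (mag < minMag) continue;            // skip flat regions
      // the gradient points across the fibers, so rotate by 90 degrees
      var a = (Math.atan2(gy, gx) * 180 / Math.PI + 270) % 180;
      bins[Math.floor(a)] += mag;            // magnitude-weighted vote
    }
  }
  var best = 0;
  for (var i = 1; i < 180; i++) if (bins[i] > bins[best]) best = i;
  return best;
}

Running it over small neighborhoods instead of the whole image would give the local direction map you describe.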

distinguishing objects with opencv

I want to identify lego bricks for building a lego sorting machine (I use c++ with opencv).
That means I have to distinguish between objects which look very similar.
The bricks are coming to my camera individually on a flat conveyer. But they might lay in any possible way: upside down, on the side or "normal".
My approach is to teach the sorting machine the bricks by filming them with the camera in lots of different positions and rotations. The features of each and every view are calculated by the SURF algorithm.
void calculateFeatures(const cv::Mat& image,
std::vector<cv::KeyPoint>& keypoints,
cv::Mat& descriptors)
{
// detector == cv::SurfFeatureDetector(10)
detector->detect(image,keypoints);
// extractor == cv::SurfDescriptorExtractor()
extractor->compute(image,keypoints,descriptors);
}
If there is an unknown brick (the brick that I want to sort), its features also get calculated and matched with the known ones.
To find wrongly matched features I proceed as described in the book OpenCV 2 Cookbook:
with the matcher (=cv::BFMatcher(cv::NORM_L2)) the two nearest neighbours in both directions are searched
matcher.knnMatch(descriptorsImage1, descriptorsImage2,
matches1,
2);
matcher.knnMatch(descriptorsImage2, descriptorsImage1,
matches2,
2);
I check the ratio between the distances of the two found nearest neighbours. If the two distances are very similar, it's likely that a false match is involved.
// ratio test (run for both matches1 and matches2): reject a match when
// its best and second-best neighbours are nearly equidistant
for (auto it = matches1.begin(); it != matches1.end(); ) {
    if (it->size() < 2 || (*it)[0].distance / (*it)[1].distance > 0.65)
        it = matches1.erase(it);  // ambiguous -- throw away
    else
        ++it;
}
Finally, only symmetrical match pairs are accepted: matches in which not only n1 is the nearest neighbour to feature f1, but also f1 is the nearest neighbour to n1.
std::vector<cv::DMatch> symMatches;
for (const auto& m1 : matches1)
    for (const auto& m2 : matches2)
        if (m1[0].queryIdx == m2[0].trainIdx &&
            m2[0].queryIdx == m1[0].trainIdx)
            symMatches.push_back(m1[0]);  // good match
Now only pretty good matches remain. To filter out some more bad matches I check which matches fit the projection of img1 on img2 using the fundamental matrix.
std::vector<uchar> inliers(points1.size(), 0);
cv::findFundamentalMat(
    cv::Mat(points1), cv::Mat(points2), // matching points
    inliers,       // output match status (inlier or outlier)
    CV_FM_RANSAC,  // RANSAC method
    3,             // max distance to the epipolar line, in pixels
    0.99);         // confidence probability
std::vector<cv::DMatch> goodMatches;
// extract the surviving (inlier) matches
std::vector<uchar>::const_iterator itIn = inliers.begin();
std::vector<cv::DMatch>::const_iterator itM = allMatches.begin();
// for all matches
for ( ; itIn != inliers.end(); ++itIn, ++itM)
    if (*itIn)                        // it is a valid match
        goodMatches.push_back(*itM);
The result is pretty good. But in cases of extreme similarity, faults still occur.
In the picture above you can see that a similar brick is recognized well.
However in the second picture a wrong brick is recognized just as well.
Now the question is how I could improve the matching.
I had two different ideas:
The matches in the second picture trace back to features that really do fit, but only if the visual field is changed drastically. Since I have to compare a brick against many stored views anyway (at least as shown in figure three), I know that only a minimal change of the visual field should be allowed. The information about how strongly the visual field has changed should be hidden in the fundamental matrix. How can I read out of this matrix how far the position in the room has changed? The rotation and strong scaling should be of particular interest; if the brick happens to be filmed farther to the left, that shouldn't matter.
Second idea:
I calculated the fundamental matrix from 2 pictures and filtered out features that don't fit the projections. Shouldn't there be a way to do the same thing using three or more pictures (keyword: trifocal tensor)? That way the matching should become more stable. But I neither know how to do this with OpenCV nor could I find any information on it on Google.
I don't have a complete answer, but I have a few suggestions.
On the image analysis side:
It looks like your camera setup is pretty constant, so it is easy to separate the brick from the background. I also see your system finding features in the background; this is unnecessary. Set all non-brick pixels to black to remove them from the analysis.
When you have located just the brick, your first step should be to filter likely candidates based on size (i.e. the number of pixels in the brick). That way the example faulty match you show is already less likely.
You can take other features into account, such as the aspect ratio of the brick's bounding box, or the major and minor axes (eigenvectors of the covariance matrix of the central moments) of the brick, etc.
These simpler features will give you a reasonable first filter to limit your search space.
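As an illustration of those simple features, here's a hypothetical sketch (in JavaScript for brevity, since it's only arithmetic; mask is a flat w*h array with 1 for brick pixels, and at least one pixel is assumed set):

function shapeFeatures(mask, w, h) {
  var area = 0, sx = 0, sy = 0;
  var minX = w, maxX = -1, minY = h, maxY = -1;
  for (var y = 0; y < h; y++)
    for (var x = 0; x < w; x++)
      if (mask[y * w + x]) {
        area++; sx += x; sy += y;
        if (x < minX) minX = x; if (x > maxX) maxX = x;
        if (y < minY) minY = y; if (y > maxY) maxY = y;
      }
  var cx = sx / area, cy = sy / area;   // centroid
  var mxx = 0, myy = 0, mxy = 0;        // second-order central moments
  for (var y = 0; y < h; y++)
    for (var x = 0; x < w; x++)
      if (mask[y * w + x]) {
        mxx += (x - cx) * (x - cx);
        myy += (y - cy) * (y - cy);
        mxy += (x - cx) * (y - cy);
      }
  // eigenvalues of the 2x2 covariance matrix give the spread along the
  // major and minor axes
  var tr = (mxx + myy) / area;
  var det = (mxx * myy - mxy * mxy) / (area * area);
  var d = Math.sqrt(tr * tr / 4 - det);
  return {
    area: area,
    aspect: (maxX - minX + 1) / (maxY - minY + 1),
    majorAxis: Math.sqrt(tr / 2 + d),
    minorAxis: Math.sqrt(tr / 2 - d)
  };
}

Comparing these numbers against stored values for each brick type (with some tolerance) prunes most wrong candidates before any SURF matching happens.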
On the mechanical side:
If bricks are actually coming down a conveyor, you should be able to "straighten" them along a straight edge using something like a rod that lies at an angle across the belt, so that the bricks arrive at your camera in a more uniform orientation, like so.
Similar to the previous point, you could use something like a very loose brush suspended across the belt to topple bricks standing up as they pass.
Again both these points will limit your search space.

Path Tracing algorithm - Need help understanding key point

So the Wikipedia page for path tracing (http://en.wikipedia.org/wiki/Path_tracing) contains a naive implementation of the algorithm with the following explanation underneath:
"All these samples must then be averaged to obtain the output color. Note this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance-sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray - which is the only ray through which any radiance will be reflected - is zero. In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte-Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be 1)."
The part I'm having trouble understanding is the part in bold. I am familiar with PDFs but I am not quite sure how they fit into here. If we stick to the mirror example, what would be the PDF value we would divide by? Why? How would I go about finding the PDF value to divide by if I was using an arbitrary BRDF value such as a Phong reflection model or Cook-Torrance reflection model, etc? Lastly, why do we divide by the PDF instead of multiply? If we divide, don't we give more weight to a direction with a lower probability?
Let's assume that we have only materials without color (greyscale). Then, their BRDF at each point can be expressed as a single-valued function
float BRDF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit);
Here, phi and theta are the azimuth and zenith angles of the two rays under consideration. For pure Lambertian reflection, this function would look like this:
float lambertBRDF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit)
{
return albedo*1/pi*cos(theta_out);
}
albedo ranges from 0 to 1 and measures how much of the incoming light is re-emitted. The factor 1/pi ensures that the integral of the BRDF over all outgoing vectors does not exceed 1. With the naive approach of the Wikipedia article (http://en.wikipedia.org/wiki/Path_tracing), one can use this BRDF as follows:
Color TracePath(Ray r, depth) {
/* .... */
Ray newRay;
newRay.origin = r.pointWhereObjWasHit;
newRay.direction = RandomUnitVectorInHemisphereOf(normal(r.pointWhereObjWasHit));
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected*lambertBRDF(r.phi,r.theta,newRay.phi,newRay.theta,r.pointWhereObjWasHit);
}
As mentioned in the article and by Ross, this random sampling is unfortunate because it traces incoming directions (newRay's) from which little light is reflected with the same probability as directions from which there is lots of light. Instead, directions from which much light is reflected to the observer should be selected preferentially, so that the sample rate per contribution to the final color is equal over all directions. For that, one needs a way to generate random rays from a probability distribution. Let's say there exists a function that can do that; it takes as input the desired PDF (which, ideally, should be equal to the BRDF) and the incoming ray:
vector RandomVectorWithPDF(function PDF(p_i,t_i,p_o,t_o,point x), Ray incoming)
{
    // this function is responsible for creating random rays emanating from x
    // with the probability distribution PDF. Depending on the complexity of
    // PDF, this might be somewhat involved. It is possible, however, to do it
    // for Lambertian reflection (how exactly is math, not programming):
    vector randomVector;
    if(PDF == lambertBRDF)
    {
        float phi = uniformRandomNumber(0,2*pi);          // azimuth
        float rho = acos(sqrt(uniformRandomNumber(0,1))); // zenith angle
        randomVector = getVectorFromAzimuthZenithAndNormal(
                           phi, rho, normal(incoming.whereObjectWasHit));
    }
    else { /* deal with other PDFs */ }
    return randomVector;
}
The code in the TracePath routine would then simply look like this:
newRay.direction = RandomVectorWithPDF(lambertBRDF,r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected;
Because the bright directions are preferred in the choice of samples, you do not have to weight them again by applying the BRDF as a scaling factor to reflected. However, if PDF and BRDF are different for some reason, you would have to scale down the output whenever PDF > BRDF (if you picked too many samples from the respective direction) and enhance it when you picked too few.
In code:
newRay.direction = RandomVectorWithPDF(PDF,r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected*BRDF(...)/PDF(...);
The output is best, however, if BRDF/PDF is equal to 1.
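For reference, the estimator being built here is the standard Monte Carlo importance-sampling form (general background; note that in this answer's convention the cosine factor is folded into the BRDF):

$L_o \approx L_e + \frac{1}{N}\sum_{k=1}^{N} \frac{\mathrm{BRDF}(\omega_k)\,L_i(\omega_k)}{\mathrm{PDF}(\omega_k)}$

Dividing by the PDF does not give extra weight to improbable directions overall: a direction sampled half as often must count twice as much when it is sampled, so that its expected contribution to the average is unchanged.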
The question remains: why can't one always choose the perfect PDF, which is exactly equal to the BRDF? First, some random distributions are harder to compute than others. For example, if there were a slight variation in the albedo parameter, the algorithm would still do much better with non-naive sampling than with uniform sampling, but the correction term BRDF/PDF would be needed for the slight variations. Sometimes it might even be impossible to do at all. Imagine a colored object with different reflective behavior for red, green and blue: you could either render in three passes, one for each color, or use an average PDF that fits all color components approximately, but none perfectly.
How would one go about implementing something like Phong shading? For simplicity, I still assume that there is only one color component, and that the ratio of diffuse to specular reflection is 60% / 40% (the notion of ambient light makes no sense in path tracing). Then my code would look like this:
if(uniformRandomNumber(0,1)<0.6) //diffuse reflection
{
newRay.direction=RandomVectorWithPDF(lambertBRDF,r);
reflected = TracePath(newRay,depth+1)/0.6;
}
else //specular reflection
{
newRay.direction=RandomVectorWithPDF(specularPDF,r);
reflected = TracePath(newRay,depth+1)*specularBRDF/specularPDF/0.4;
}
return emittance + reflected;
Here specularPDF is a distribution with a narrow peak around the reflected ray (theta_in = theta_out, phi_in = phi_out + pi) for which a way to create random vectors is available, and specularBRDF returns the specular intensity from Phong's model (http://en.wikipedia.org/wiki/Phong_reflection_model).
Note how the PDFs are modified by 0.6 and 0.4 respectively.
I'm by no means an expert in ray tracing, but this seems to be classic Monte Carlo: you have lots of possible rays, you choose one uniformly at random, and then you average over lots of trials. The distribution you used to choose one of the rays was uniform (they were all equally likely), so you don't have to do any clever re-normalising.
However, perhaps there are lots of possible rays to choose from, but only a few of them lead to useful results. We therefore bias towards picking those 'useful' possibilities with higher probability, and then re-normalise (we are not choosing the rays uniformly any more, so we can't just take the average). This is importance sampling.
The mirror example is the following: only one possible ray will give a useful result. If we choose a ray at random, then the probability that we hit that useful ray is zero. This is a property of conditional probability on continuous spaces (it's not actually continuous, it's implicitly discretised by your computer, so it's not quite true...): the probability of hitting something specific when there are infinitely many things must be zero. Thus we would be re-normalising by something with probability zero. Standard definitions of conditional probability break down when we consider events with probability zero, and that is where the problem comes from.

Looking for a good world map generation algorithm [closed]

I'm working on a Civilization-like game and I'm looking for a good algorithm for generating Earth-like world maps. I've experimented with a few alternatives, but haven't hit on a real winner yet.
One option is to generate a heightmap using Perlin noise and add water at a level so that about 30% of the world is land. While Perlin noise (or similar fractal-based techniques) is frequently used for terrain and is reasonably realistic, it doesn't offer much in the way of control over the number, size and position of the resulting continents, which I'd like to have from a gameplay perspective.
A second option is to start with a randomly positioned one-tile seed (I'm working on a grid of tiles), determine the desired size for the continent and each turn add a tile that is horizontally or vertically adjacent to the existing continent until you've reached the desired size. Repeat for the other continents. This technique is part of the algorithm used in Civilization 4. The problem is that after placing the first few continents, it's possible to pick a starting location that's surrounded by other continents, and thus won't fit the new one. Also, it has a tendency to spawn continents too close together, resulting in something that looks more like a river than continents.
Does anyone happen to know a good algorithm for generating realistic continents on a grid-based map while keeping control over their number and relative sizes?
You could take a cue from nature and modify your second idea. Once you generate your continents (which are all about the same size), get them to randomly move and rotate and collide and deform each other and drift apart from each other. (Note: this may not be the easiest thing ever to implement.)
Edit: Here's another way of doing it, complete with an implementation — Polygonal Map Generation for Games.
I've created something similar to your first image in JavaScript. It's not super sophisticated, but it works:
http://jsfiddle.net/AyexeM/zMZ9y/
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>Untitled Document</title>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
  <style type="text/css">
    #stage {
      font-family: Courier New, monospace;
    }
    span {
      display: none;
    }
    .tile {
      float: left;
      height: 10px;
      width: 10px;
    }
    .water {
      background-color: #55F;
    }
    .earth {
      background-color: #273;
    }
  </style>
</head>
<body>
  <div id="stage">
  </div>
  <script type="text/javascript">
    var tileArray = new Array();
    var probabilityModifier = 0;
    var mapWidth = 135;
    var mapheight = 65;
    var tileSize = 10;
    var landMassAmount = 2; // scale of 1 to 5
    var landMassSize = 3;   // scale of 1 to 5
    $('#stage').css('width', (mapWidth * tileSize) + 'px');
    for (var i = 0; i < mapWidth * mapheight; i++) {
      var probability = 0;
      var probabilityModifier = 0;
      if (i < (mapWidth * 2) || i % mapWidth < 2 || i % mapWidth > (mapWidth - 3) || i > (mapWidth * mapheight) - ((mapWidth * 2) + 1)) {
        // make the edges of the map water
        probability = 0;
      }
      else {
        probability = 15 + landMassAmount;
        if (i > (mapWidth * 2) + 2) {
          // Conform the tile upwards and to the left to its surroundings
          var conformity =
            (tileArray[i - mapWidth - 1] == (tileArray[i - (mapWidth * 2) - 1])) +
            (tileArray[i - mapWidth - 1] == (tileArray[i - mapWidth])) +
            (tileArray[i - mapWidth - 1] == (tileArray[i - 1])) +
            (tileArray[i - mapWidth - 1] == (tileArray[i - mapWidth - 2]));
          if (conformity < 2) {
            tileArray[i - mapWidth - 1] = !tileArray[i - mapWidth - 1];
          }
        }
        // get the probability of what type of tile this would be based on its surroundings
        probabilityModifier = (tileArray[i - 1] + tileArray[i - mapWidth] + tileArray[i - mapWidth + 1]) * (19 + (landMassSize * 1.4));
      }
      var rndm = (Math.random() * 101);
      tileArray[i] = (rndm < (probability + probabilityModifier));
    }
    for (var i = 0; i < tileArray.length; i++) {
      if (tileArray[i]) {
        $('#stage').append('<div class="tile earth ' + i + '"> </div>');
      }
      else {
        $('#stage').append('<div class="tile water ' + i + '"> </div>');
      }
    }
  </script>
</body>
</html>
I'd suggest you back up and:
1. Think about what makes "good" continents.
2. Write an algorithm that can tell a good continental layout from a bad one.
3. Refine the algorithm so that you can quantify how good a layout is.
Once you have that in place, you can start to implement an algorithm shaped like this: generate crappy continents, then improve them.
For improvement you can try all sorts of standard optimization tricks, whether it's simulated annealing, genetic programming, or something completely ad hoc, like moving a randomly chosen edge square from wherever it is on the continent to the edge opposite the continent's center of mass. But the key is to be able to write a program that can tell good continents from bad ones. Start out with hand-drawn continents as well as your generated test continents, until you get something you like.
I wrote something similar to what you're after for an automated screensaver-style clone of Civilization 1. For the record I wrote this in VB.net but since you don't mention anything about language or platform in your question I'll keep it abstract.
The "map" specifies the number of continents, continent size variance (eg 1.0 would keep all continents with the same approximate land area, down to 0.1 would allow continents to exist with 1/10th the mass of the largest continent), maximum land area (as a percentage) to generate, and the central land bias. A "seed" is distributed randomly around the map for each continent, weighted towards the centre of the map as per the central bias (eg a low bias produces distributed continents more similar to Earth, where as a high central bias will resemble more of a Pangaea). Then for each iteration of growth, the "seeds" assign land tiles according to a distribution algorithm (more on that later) until a maximum land area has been reached.
The land distribution algorithm can be as precise as you want but I found more interesting results applying various genetic algorithms and rolling the dice. Conway's "Game of Life" is a really easy one to start out with. You'll need to add SOME globally aware logic to avoid things like continents growing into each other but for the most part things take care of themselves. The problem I found with more fractal-based approaches (which was my first inclination) was the results either looked too patterned, or lead to too many scenarios requiring hacky-feeling workaround rules to get a result which still didn't feel dynamic enough. Depending on the algorithm you use, you may want to apply a "blurring" pass over the result to eliminate things like abundant single-square ocean tiles and checkered coastlines. In the event something like a continent being spawned surrounded by several others and having nowhere left to grow, relocate the seed to a new point on the map and continue the growth passes. Yes, it can mean you sometimes end up with more continents than planned, but if it's really something you firmly don't want then another way to help avoid it is bias the growth algorithms so they favour growth in the direction with least proximity to other seeds. At worst (in my opinion anyway), you can flag a series as invalid when a seed has nowhere left to grow and generate a new map. Just make sure you set a maximum number of attempts so if anything unrealistic is specified (like fitting 50 even-weighted continents on a 10x10 board) it doesn't spend forever trying to find a valid solution.
I can't vouch for how Civ etc do it, and of course doesn't cover things like climate, land age etc but by playing around with the seed growth algorithm you can get pretty interesting results that resemble continents, archipelagos etc. You can use the same approach to produce 'organic' looking rivers, mountain ranges etc too.
Just thinking off the cuff here:
Pick some starting points, and assign each a randomly drawn (hoped-for) size. You can maintain separate size draws for planned continents and planned islands if you want.
Loop over the land elements; where one is not yet at its planned size, add one square to it. But the fun part is weighting the chance that each neighboring square will be the one added. Some things that might factor in:
- Distance to the nearest "other" land. Farther-is-better generates wide oceanic spaces; nearer-is-better makes narrow channels. You have to decide whether you're going to let bits merge as well.
- Distance from the seed. Nearer-is-better means compact land masses; farther-is-better means long, strung-out bits.
- Number of existing adjacent land squares. Weighting in favor of many adjacent squares gives you smooth coasts; preferring few gives you lots of inlets and peninsulas.
- Presence of "resource" squares nearby? Depends on the game rules, when you generate resource squares, and whether you want to make it easy.
- Will you allow bits to approach or join with the poles?
- ??? don't know what else
Continue until all land masses have reached their planned size or can't grow any more for some reason.
Notice that tuning the parameters of these weighting factors lets you tune the kind of world generated, which is a feature I liked about some of the Civs.
This way you'll need to do terrain generation on each bit separately.
You could try a diamond-square algorithm or Perlin noise to generate something like a height map. Then assign value ranges to what shows up on the map: if your "height" goes from 0 to 100, make 0-20 water, 20-30 beach, 30-80 grass and 80-100 mountains. I think Notch did something similar to this in Minicraft, but I'm not an expert; I'm just in a diamond-square mindset after finally getting it working.
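A minimal sketch of that idea (single-octave value noise rather than true Perlin or diamond-square, which is enough to show the banding step; all names are hypothetical):

// heights in 0..100, bilinearly interpolated from a coarse random lattice
function makeHeightMap(w, h, cell) {
  var gw = Math.ceil(w / cell) + 1, gh = Math.ceil(h / cell) + 1;
  var lattice = [];
  for (var i = 0; i < gw * gh; i++) lattice.push(Math.random() * 100);
  var map = new Array(w * h);
  for (var y = 0; y < h; y++)
    for (var x = 0; x < w; x++) {
      var gx = x / cell, gy = y / cell;
      var x0 = Math.floor(gx), y0 = Math.floor(gy);
      var fx = gx - x0, fy = gy - y0;
      var a = lattice[y0 * gw + x0],       b = lattice[y0 * gw + x0 + 1];
      var c = lattice[(y0 + 1) * gw + x0], d = lattice[(y0 + 1) * gw + x0 + 1];
      map[y * w + x] = (a * (1 - fx) + b * fx) * (1 - fy)
                     + (c * (1 - fx) + d * fx) * fy;
    }
  return map;
}
function terrainType(height) { // the banding described above
  if (height < 20) return 'water';
  if (height < 30) return 'beach';
  if (height < 80) return 'grass';
  return 'mountains';
}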
I think you can use a "dynamic programming" style approach here: solve small problems first, then combine the solutions smartly to solve the bigger problem.
A1 = [elliptical, rectangular, random, ...] // list of continents with area approximately A1
A2 = [elliptical, rectangular, random, ...] // list of continents with area approximately A2
A3 = [elliptical, rectangular, random, ...] // list of continents with area approximately A3
...
An = [elliptical, rectangular, random, ...] // list of continents with area approximately An
// note that "elliptical" means approximately elliptical in shape, and likewise for the other shapes
Choose one or more randomly from each of the lists A1...An. Now you have control over the number and area of the continents. You can use a genetic algorithm for positioning them as you see "fit" ;)
It would also be very good to take a look at some "Graph Layout Algorithms":
- Force Based Algorithms
- Genetic Algorithm for Graph Layout
You can modify these to suit your purpose.
I had an idea for map creation similar to the tectonic-plates answer. It went something like this:
1. Sweep through the grid squares, marking each square as "land" if rnd <= 0.292 (the actual percentage of dry land on planet Earth).
2. Migrate each land chunk one square toward its nearest larger neighbour. If neighbours are equidistant, go toward the larger chunk. If chunks are of equal size, choose one randomly.
3. If two land squares touch, group them into a chunk, moving all their squares as one from then on.
4. Repeat from step 2. Stop when all chunks are connected.
This is similar to how gravity works in a 3D space. It's pretty complicated. A simpler algorithm for your needs would work as follows (see the sketch after this answer):
1. Drop in n starter land squares at random x,y positions, at acceptable distances from each other. These are the seeds for your continents. (Use the Pythagorean theorem to ensure the seeds keep a minimum distance from one another.)
2. Spawn a land square from an existing land square in a random direction, if that direction is an ocean square.
3. Repeat step 2. Stop when land squares fill 30% of the total map size.
4. If continents are close enough to each other, drop in land bridges as desired to simulate a Panama-type effect.
5. Drop in smaller, random islands as desired for a more natural look.
6. For each extra "island" square you add, cut out inland sea and lake squares from the continents using the same algorithm in reverse. This will keep the land percentage at the desired amount.
Let me know how this works out. I've never tried it myself.
PS. I see this is similar to what you tried, except it sets up all the seeds at once, before beginning, so the continents will be far enough apart, and it stops when the map is sufficiently filled.
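A minimal sketch of the first three steps of that simpler algorithm (hypothetical names; the land-bridge and island steps are left out, and a retry cap should be added in real code in case the seeds don't fit):

function generate(w, h, nSeeds, minDist) {
  var map = new Array(w * h).fill(0); // 0 = ocean, 1 = land
  var land = [];
  // step 1: drop seeds, rejecting candidates too close to earlier seeds
  while (land.length < nSeeds) {
    var x = Math.floor(Math.random() * w), y = Math.floor(Math.random() * h);
    var farEnough = land.every(function(p) {
      return Math.hypot(p.x - x, p.y - y) >= minDist; // Pythagoras
    });
    if (farEnough) { land.push({x: x, y: y}); map[y * w + x] = 1; }
  }
  // steps 2-3: grow random land squares into adjacent ocean until 30% land
  var target = Math.floor(w * h * 0.3);
  var dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
  while (land.length < target) {
    var src = land[Math.floor(Math.random() * land.length)];
    var d = dirs[Math.floor(Math.random() * dirs.length)];
    var nx = src.x + d[0], ny = src.y + d[1];
    if (nx >= 0 && ny >= 0 && nx < w && ny < h && !map[ny * w + nx]) {
      map[ny * w + nx] = 1;
      land.push({x: nx, y: ny});
    }
  }
  return map;
}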
I haven't actually tried this, but it was inspired by David Johnstone's answer regarding tectonic plates. I tried implementing that idea myself in my old Civ project, and when it came to handling collisions I had another idea. Instead of generating tiles directly, each continent consists of nodes. Distribute mass to each node, then generate a series of "blob" continents using a 2D metaball approach. Tectonics and continental drift would be ridiculously easy to "fake" simply by moving the nodes around. Depending on how complex you want to go, you could even apply things like currents to drive the node movement, and generate mountain ranges that correspond to overlapping plate boundaries. It probably wouldn't add much to the gameplay side of things, but it could make for interesting map generation from a purely academic perspective :)
A good explanation of metaballs if you haven't worked with them before:
http://www.gamedev.net/page/resources/_//feature/fprogramming/exploring-metaballs-and-isosurfaces-in-2d-r2556
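A minimal sketch of the metaball idea (hypothetical names; each continent node is {x, y, mass}, and drifting a continent is just moving its nodes):

function metaballLand(nodes, w, h, threshold) {
  var map = new Array(w * h);
  for (var y = 0; y < h; y++)
    for (var x = 0; x < w; x++) {
      var field = 0;
      nodes.forEach(function(n) {
        var d2 = (x - n.x) * (x - n.x) + (y - n.y) * (y - n.y);
        field += n.mass / (d2 + 1); // +1 avoids division by zero
      });
      map[y * w + x] = field > threshold ? 1 : 0; // 1 = land
    }
  return map;
}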
Here's what I'm thinking, since I'm about to implement something like this for a game in development:
The world is divided into regions; the size of the world determines how many. For this example, we'll assume a medium-sized world with 6 regions. Each region breaks into 9 grid zones, and each grid zone breaks into 9 grids. (This is not for character movement, but merely for map creation.) Grids are for biomes, grid zones are for overarching land features (continent vs. ocean), and regions are for overall climate. The grids break down into tiles.
Randomly generated, the regions get assigned logical climate sets. Grid zones get randomly assigned to, for instance, ocean or land. Grids get assigned biomes randomly, with modifiers based on their grid zones and climate: forest, desert, plains, glacial, swamp or volcanic.
Once all those basics are assigned, it's time to blend them together, using a random percentage-based function that fills in tile sets. For example, if you have a forest biome next to a desert biome, you have an algorithm that decreases the likelihood that a tile will be "foresty" and increases the likelihood that it will be "deserty". So, about halfway between them, you'll see a sort of blended effect combining the two biomes for a somewhat smooth transition.
Transitions from one grid zone to the next would probably take a little more work to ensure logical landmass formations. For example, where a biome from one grid zone touches a biome from another, you'd want more than a simple proximity-based switching percentage. Say there are 50 tiles from the center of a biome to its edge, and 50 more from that edge to the center of the next biome; that would logically give a 100% change from one biome to the next. As the tiles get nearer to the border of the two biomes, the percentage narrows to around 60% or so; it would, I think, be unwise to give too much probability of crossing biomes far from the border, but you'll want the border itself to be somewhat blended. For grid zones, the percentage change would be much more pronounced: instead of dropping to around 60%, it would only drop to around 80%.
A secondary check would then have to be performed to ensure that there's not a random water tile in the middle of a land biome next to the ocean without some logic to it: either connect that water tile to the ocean mass with a channel to explain it, or remove it altogether. Land in a water-based biome is easier to explain, using rock outcrops and such.
I'd place fractal terrain according to some layout that you know "works" (e.g. a 2x2 grid, diamond, etc., with some jitter), but with a Gaussian distribution damping the peaks down towards the edges of the continent centers. Place the water level lower, so that most of it is land until you get near the edges.

Raytracing (LoS) on 3D hex-like tile maps

Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be extruded into a cube to extrapolate from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here is an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile by ((y+z) mod 2) * tileSize/2.0 to the right of the position it would have in a "sane" Cartesian system. For the non-jagged rows, that yields 0; for rows where (y+z) mod 2 is 1, it yields half a tile.
I'm working in C# 4 targeting .NET Framework 4.0, but I don't really need specific code, just the algorithm to solve this weird geometric/mathematical problem. I have been trying for several days to solve it, to no avail; and trying to draw the whole thing on paper to "visualize" it didn't help either :(
Thanks in advance for any answer
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to try to work this entirely in cell index space is doomed to failure (though I'll admit it's just what I think and I look forward to being proved wrong).
So you need to define a function to map from cartesian coordinates to cell indices. This is straightforward, if a little tricky. First, decide whether point(0,0) is the bottom-left corner of cell(0,0), its centre, or some other point. Since it makes the explanations easier, I'll go with the bottom-left corner. Observe that any point(x, floor(y)==0) maps to cell(floor(x), 0). Indeed, any point(x, even(floor(y))) maps to cell(floor(x), floor(y)).
Here, I invent the boolean function even, which returns True if its argument is an even integer. I'll use odd next: any point(x, odd(floor(y))) maps to cell(floor(x - 0.5), floor(y)).
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0); and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
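A sketch of the two mappings in 2D (JavaScript for brevity; the C# version is a direct translation), assuming non-negative indices, the bottom-left-corner convention, and odd rows shifted right by half a tile. Generalising to 3D replaces row % 2 with (y + z) % 2, as in the question:

function pointToCell(px, py) {
  var row = Math.floor(py);
  var col = (row % 2 === 0) ? Math.floor(px)        // even row: no shift
                            : Math.floor(px - 0.5); // odd row: shifted right
  return { col: col, row: row };
}
function cellToPoint(col, row) { // bottom-left corner of the cell
  return { x: (row % 2 === 0) ? col : col + 0.5, y: row };
}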
Perhaps you can avoid all the complex math if you look at your problem in another way:
I see that you only shift your blocks (in alternating rows) along the first axis, by half the block size. If you split each block in half along this axis, the example above becomes (with the shifts) a simple 9x4x4 Cartesian coordinate system with regularly stacked blocks. Doing the raytracing then becomes much simpler and less error-prone.
