Calculate the size of windows tiling - algorithm

I have a number of windows and I'd like to tile them to cover the entire working area of the screen. If there are fewer windows, the individual windows are bigger. The windows are almost squares -- an example is 800x585. They always scale with a fixed ratio.
In this example I only have 4 windows, so my calculation should figure out that filling the screen is done with a 2x2 grid.
In this example I have 8 windows, but instead of 4 columns x 2 rows (which would leave a huge gap underneath the 2nd row because of the fixed ratio) the windows are divided into a 3x3 grid with one empty spot.
The basic idea is to leave as little uncovered screen space as possible. I'm trying to do this in AutoIt, but if someone can explain this in C# or Python I am equally happy :)

A brute force algorithm in pseudo-code:
Begin:
    Let n be the number of windows.
    Find s such that:
        the square root of s is a positive integer
        s >= n
    Let wasted-area = the actual wasted area in the square grid of s slots.
    Let x = square root of s
    Let y = square root of s
    For each (i, j) where:
        i and j are positive integers
        i * j = n        (i.e. i and j are factors of n)
        Let a = the actual wasted area of the rectangular grid (i, j)
        When a < wasted-area then
            set x to i
            set y to j
            set wasted-area to a
    Next (i, j)
    Tile screen with (x, y)
End
Note that if some assumptions can be made about the ratio of the window and the ratio of the screen, then some pairs of factors can be excluded. If no assumptions can be made, then brute force is as good as I can do. Someone with a stronger math background might do better.
Keeping in mind that on a real computer n may seldom be large in an absolute sense, brute force probably is acceptable for many situations.
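A direct translation of the brute force into Python might look like this (a sketch: `best_grid` is a hypothetical name, and `wasted` assumes each window shrinks uniformly to fit its grid cell while keeping the fixed aspect ratio):

```python
import math

def best_grid(n, screen_w, screen_h, win_ratio):
    """Brute-force search for the grid (cols, rows) that wastes the
    least screen area when tiling n windows of fixed aspect ratio.
    win_ratio = window width / window height, e.g. 800 / 585."""
    def wasted(cols, rows):
        cell_w = screen_w / cols
        cell_h = screen_h / rows
        # Fixed ratio: the window shrinks to whichever dimension binds.
        w = min(cell_w, cell_h * win_ratio)
        h = w / win_ratio
        return screen_w * screen_h - n * w * h

    # Candidates: the smallest square grid with >= n slots, plus every
    # factor pair (i, j) with i * j = n.
    side = math.ceil(math.sqrt(n))
    candidates = [(side, side)]
    candidates += [(i, n // i) for i in range(1, n + 1) if n % i == 0]
    return min(candidates, key=lambda c: wasted(*c))
```

For the examples in the question (a 1600x1200 screen, 800x585 windows), `best_grid(4, 1600, 1200, 800/585)` picks a 2x2 grid and `best_grid(8, 1600, 1200, 800/585)` picks 3x3 rather than 4x2.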

Related

How many paths of length n with the same start and end point can be found on a hexagonal grid?

Given this question, what about the special case when the start point and end point are the same?
Another change in my case is that we must move at every step. How many such paths can be found and what would be the most efficient approach? I guess this would be a random walk of some sort?
My thinking so far is that, since we must always return to our starting point, reasoning about n/2 might be easier. At every step, except at step n/2, we have 6 choices. At n/2 we have a different number of choices depending on whether n is even or odd. We also have a different number of choices depending on where we are (what previous choices we made). For example, if n is even and we went straight out, we only have one choice at n/2: going back. But if n is even and we didn't go straight out, we have more choices.
It is all the cases at this turning point that I have trouble getting straight.
Am I on the right track?
To be clear, I just want to count the paths. So I guess we are looking for some conditioned permutation?
This version of the combinatorial problem looks like it actually has a short formula as an answer.
Nevertheless, the general version, both this one and the original question's, can be solved by dynamic programming in O(n^3) time and O(n^2) memory.
Consider a hexagonal grid which spans at least n steps in all directions from the target cell.
Introduce a coordinate system, so that every cell has coordinates of the form (x, y).
Let f(k, x, y) be the number of ways to arrive at cell (x, y) from the starting cell after making exactly k steps.
These can be computed either recursively or iteratively:
f(k, x, y) is just the sum of f(k-1, x', y') for the six neighboring cells (x', y').
The base case is f(0, xs, ys) = 1 for the starting cell (xs, ys), and f(0, x, y) = 0 for every other cell (x, y).
The answer for your particular problem is the value f(n, xs, ys).
The general structure of an iterative solution is as follows:
let f be an array [0..n][-n-1..n+1][-n-1..n+1] (all inclusive) of integers
f[0][*][*] = 0
f[0][xs][ys] = 1
for k = 1, 2, ..., n:
    for x = -n, ..., n:
        for y = -n, ..., n:
            f[k][x][y] = f[k-1][x-1][y] +
                         f[k-1][x][y-1] +
                         f[k-1][x+1][y] +
                         f[k-1][x][y+1]
answer = f[n][xs][ys]
OK, I cheated here: the solution above is for a rectangular grid, where the cell (x, y) has four neighbors.
The six neighbors of a hexagon depend on how exactly we introduce a coordinate system.
I'd prefer other coordinate systems than the one in the original question.
This link gives an overview of the possibilities, and here is a short summary of that page on StackExchange, to protect against link rot.
My personal preference would be axial coordinates.
Note that, if we allow standing still instead of moving to one of the neighbors, that just adds one more term, f[k-1][x][y], to the formula.
The same goes for using triangular, rectangular, or hexagonal grid, for using 4 or 8 or some other subset of neighbors in a grid, and so on.
If you want to arrive to some other target cell (xt, yt), that is also covered: the answer is the value f[n][xt][yt].
Similarly, if you have multiple start or target cells, and you can start and finish at any of them, just alter the base case or sum the answers in the cells.
The general layout of the solution remains the same.
This obviously works in time n * (2n+1) * (2n+1) * number-of-neighbors, which is O(n^3) for any constant number of neighbors (4 or 6 or 8...) a cell may have in our particular problem.
Finally, note that, at step k of the main loop, we need only two layers of the array f: f[k-1] is the source layer, and f[k] is the target layer.
So, instead of storing all layers for the whole time, we can store just two layers, as we don't need more: one for odd k and one for even k.
Using only two layers is as simple as changing all f[k] and f[k-1] to f[k%2] and f[(k-1)%2], respectively.
This lowers the memory requirement from O(n^3) down to O(n^2), as advertised in the beginning.
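Here is the two-layer iterative DP in Python for the actual hexagonal grid, using axial coordinates so that each cell has the six neighbor offsets listed below (a sketch; `closed_hex_walks` is a hypothetical name, and it answers the closed-walk version of the question, with xs = ys = 0):

```python
def closed_hex_walks(n):
    """Number of length-n walks on a hexagonal grid that start and end
    at the same cell, always moving to a neighbor at each step.
    Uses axial coordinates and keeps only two layers of the DP table,
    so memory is O(n^2)."""
    # The six axial-coordinate neighbor offsets of a hex cell.
    NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    size = 2 * n + 3                      # room for |x|, |y| <= n + 1
    off = n + 1                           # shift coordinates to indices
    prev = [[0] * size for _ in range(size)]
    prev[off][off] = 1                    # base case: origin, 0 steps
    for _ in range(n):
        cur = [[0] * size for _ in range(size)]
        for x in range(-n, n + 1):
            for y in range(-n, n + 1):
                s = 0
                for dx, dy in NEIGHBORS:
                    s += prev[x - dx + off][y - dy + off]
                cur[x + off][y + off] = s
        prev = cur
    return prev[off][off]
```

For small n this matches direct enumeration: 1 empty walk, no closed walk of length 1, 6 out-and-back walks of length 2, 12 triangles of length 3.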
For a more mathematical solution, here are some steps that would perhaps lead to one.
First, consider the following problem: what is the number of ways to go from (xs, ys) to (xt, yt) in n steps, each step moving one square north, west, south, or east?
To arrive from x = xs to x = xt, we need H = |xt - xs| steps in the right direction (without loss of generality, let it be east).
Similarly, we need V = |yt - ys| steps in another right direction to get to the desired y coordinate (let it be south).
We are left with k = n - H - V "free" steps, which can be split arbitrarily into pairs of north-south steps and pairs of east-west steps.
Obviously, if k is odd or negative, the answer is zero.
So, for each possible split k = 2h + 2v of "free" steps into horizontal and vertical steps, what we have to do is construct a path of H+h steps east, h steps west, V+v steps south, and v steps north. These steps can be done in any order.
The number of such sequences is a multinomial coefficient, and is equal to n! / (H+h)! / h! / (V+v)! / v!.
To finally get the answer, just sum these over all possible h and v such that k = 2h + 2v.
This solution calculates the answer in O(n) if we precalculate the factorials, also in O(n), and consider all arithmetic operations to take O(1) time.
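As a sketch, the square-grid formula in Python (`square_grid_walks` is a hypothetical name; factorials are recomputed for clarity rather than precalculated, so this version is not the full O(n) variant):

```python
from math import factorial

def square_grid_walks(n, xs, ys, xt, yt):
    """Number of n-step walks from (xs, ys) to (xt, yt) on a square
    grid, each step moving north, south, east, or west; sums the
    multinomial coefficients n! / (H+h)! / h! / (V+v)! / v! over all
    splits k = 2h + 2v of the free steps."""
    H, V = abs(xt - xs), abs(yt - ys)
    k = n - H - V                  # "free" steps, used up in pairs
    if k < 0 or k % 2 != 0:
        return 0
    total = 0
    for h in range(k // 2 + 1):    # k = 2h + 2v
        v = k // 2 - h
        total += (factorial(n)
                  // factorial(H + h) // factorial(h)
                  // factorial(V + v) // factorial(v))
    return total
```

For instance, there are 4 closed walks of length 2 from any cell (one per direction), and 36 of length 4.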
For a hexagonal grid, a complicating feature is that there is no such clear separation into horizontal and vertical steps.
Still, given the starting cell and the number of steps in each of the six directions, we can find the final cell, regardless of the order of these steps.
So, a solution can go as follows:
Enumerate all possible partitions of n into six summands a1, ..., a6.
For each such partition, find the final cell.
For each partition where the final cell is the cell we want, add multinomial coefficient n! / a1! / ... / a6! to the answer.
Just so, this takes O(n^6) time and O(1) memory.
By carefully studying the relations between different directions on a hexagonal grid, perhaps we can actually consider only the partitions which arrive at the target cell, and completely ignore all other partitions.
If so, this solution can be optimized into at least some O(n^3) or O(n^2) time, maybe further with decent algebraic skills.

Optimal ice cream cone filling process on a conveyor belt with arbitrary distributed cones [closed]

I have an algorithmic question that arose from a real-life production problem.
Setting. Empty ice cream cones are randomly distributed across a moving conveyor belt. The batcher equipment has a hosepipe that can move above the belt within some limits (considerably smaller than the length of the belt). In order to fill an empty cone, the hosepipe is placed right above the cone and locked on it for some time until the filling process is over. This means that the cone must remain in the hosepipe reach area while the filling is in progress. After it's done, the hosepipe can move on to another cone. Clearly, if the hosepipe is not fast enough and the filling process takes some time, the system will miss some cones when they are numerous or inconveniently placed. So the problem is to fill as many cones as possible by scheduling the order of filling beforehand.
Formally we have as input:
U — speed of the belt
V — speed of the hosepipe
W — width of the belt
L — length of the belt
P — length of the hosepipe reach area
T — time of the filling process
cones — array of coordinates of cones on the belt
An output is ideally the list of cones to fill successively that ensures the maximum number of filled cones. Or at least an estimation of the maximal number of cones that are possible to fill.
Any suggestions on how to tackle this problem would be greatly appreciated!
Optimal cone-filling with a forward-only dispenser
Let's suppose the conveyor belt moves right-to-left. Below I'll describe a way to formulate and solve the problem in a way that fills the maximum possible number of cones, under the assumption that the dispenser never moves to the left faster than the conveyor belt does. For n cones, the basic algorithm has a very loose (see later) upper bound of O(n^3) time and O(n^2) space -- this should be feasible for up to 1000 cones or so. If you have more cones than this, you can break them into blocks of at most this size and simply process each block one after the other. There's also a way to relax the never-move-left-fast restriction somewhat, and thus potentially fill more cones, without the whole problem becoming exponential-time -- I'll describe this later.
Let's suppose that all cones have positive x co-ordinates, and that the hosepipe reach area, which initially extends from x = 0 leftwards to x = -P, moves rightwards over the cones, which themselves remain stationary. So at time t, the hosepipe reach area will extend from x = U * t leftwards to x = U * t - P. When describing the position of the dispenser, I'll always use the same (that is, absolute) co-ordinate system; we'll ensure that it remains a valid position (inside the hosepipe reach area) by ensuring that at any time t, its x location is between U * t - P and U * t. Notice that a (time, cone ID) pair is enough to completely determine the positions of both the hosepipe reach area and the dispenser, if we interpret it to mean that the dispenser is directly over the given cone at the given time. (Later this will help in simplifying the description of the system state.) Finally, I'll call any motion of the dispenser that does not decrease its absolute x co-ord (this includes any backward motion, relative to its enclosure, that is lower in speed than U, and also no motion at all) a "forward" motion, and any that does a "backward" motion.
Dynamic programming formulation
Sort the cones by increasing x position, breaking ties arbitrarily. Let (x_i, y_i) be the position of the ith cone in this sorted order.
Let e(i) be the earliest time at which we could feasibly position the dispenser over cone i if it was the only cone we cared about, and the dispenser was already "waiting" at the correct vertical position (namely, y_i) at the rightmost end of the hosepipe reach area: this is simply x_i / U.
Let m(i, j) be the time needed to move the dispenser from cone i to cone j, assuming that it's possible to do so without having to wait for either one to "scroll into view": this can easily be calculated for any pair (i, j) from their co-ordinates and the speeds V and U (this remains true even if the dispenser can simultaneously move at arbitrary speeds V_x and V_y in the x and y directions).
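For concreteness, a small Python sketch of e(i) and m(i, j), under a simplifying assumption that the dispenser's absolute speed is V (if V is instead relative to the moving enclosure, only this distance/speed computation changes, not the DP that uses it; the names `e` and `m` mirror the text):

```python
import math

def e(i, cones, U):
    """Earliest time cone i (at absolute position cones[i] = (x, y))
    scrolls under the rightmost edge of the reach area: x_i / U."""
    return cones[i][0] / U

def m(i, j, cones, V):
    """Time to move the dispenser from cone i to cone j, assuming the
    dispenser travels in a straight line at absolute speed V."""
    dx = cones[j][0] - cones[i][0]
    dy = cones[j][1] - cones[i][1]
    return math.hypot(dx, dy) / V
```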
Now we come to the function that is the key to efficient solution of this problem:
Let f(i, j) be the earliest time at which we could finish filling cone i such that we have filled exactly j cones so far (including this one, so 1 <= j <= i), or infinity if this is not feasible. Let g(i, j) be a helper function that is defined the same way, except that we allow the last cone-filling step to push the dispenser too far to the left (you'll see why in a minute). We can calculate g(i, j) and, more importantly, f(i, j) as follows:
g(i, j) = max(e(i), minimum of f(k, j-1) + m(k, i) over all k s.t. j <= k < i) + T
f(i, j) = IF U * g(i, j) - P <= x_i THEN g(i, j) ELSE infinity
What a mess! Let's go part by part.
The f(k, j-1) + m(k, i) term is the smallest amount of time it takes to fill j-1 cones, ending with cone k, then move the dispenser to cone i. The max(e(i), ...) around this ensures that, if the movement implied by the above term would cause the dispenser to move too far to the right (i.e., to some x-co-ord > U * t), it won't be taken. Instead, we'll move the dispenser to (U * t, y_i) -- that is, to the correct y co-ord for cone i and as far right as possible -- then wait for cone i to scroll in (and thus appear directly below the dispenser) at time e(i). Regardless of which of these actions we take, it then takes a further T time units to fill cone i.
(Technically, the above calculation assumes that, if it's possible to move the dispenser to (x_i, y_i) by some given time t, then it's also possible to move it to (U * t < x_i, y_i) by that same time at the latest. But since our starting x location is <= U * t, the only way this could fail to hold is if the function describing the time needed to move between 2 given points violates the Triangle Inequality -- something which doesn't happen when the hosepipe moves relative to its enclosure at a constant speed V, or independently in 2 directions at constant speeds V_x and V_y, or indeed uses any non-crazy drive system.)
What about the left edge of the hosepipe reach area? U * g(i, j) - P is the position of the left edge of this area at the time g(i, j). Since that time is the earliest possible time that we could have finished the task of filling j cones, the last of which is cone i, that expression gives the leftmost possible position that the left edge of the hosepipe reach area could be in when the task is completed. So if that position is still to the left of x_i, it means we can feasibly fill cone i after those j-1 earlier cones -- but if it isn't, we know that trying to do so will force the dispenser too far left (this might happen while trying to move to cone i, or while filling it -- it doesn't matter). So in the latter case we slam the time cost associated with task f(i, j) all the way to infinity, guaranteeing it won't be used as part of the solution to any larger subproblem.
Time and space usage
Calculating any particular f(i, j) value takes O(n) time, so calculating all O(n^2) of these values takes O(n^3) time. However in practice, we will hardly ever need to consider all possible values of k less than i in the above minimum. In addition to ensuring that the sequence of movements implied by f(i, j) remains feasible, the max(e(i), ...) is also the key to a big practical speedup: as soon as we happen on a k that causes the e(i) term to "kick in" (become the larger of the two terms compared by max()), it will remain the best feasible option -- since any subsequent k that purports to allow a faster completion of the task necessarily involves pushing the dispenser too far to the right in the final step. That means that we don't need to try any of those other k values: e(i) is indeed the real minimum.
If all we wanted to calculate was the minimum time needed to fill some given number of cones, we could actually do it in just O(n) space, by making use of the fact that when calculating f(i, j), we only ever access previous values of f() having second argument equal to j-1. But since what we actually really want is the sequence of actions corresponding to such a minimum time, we will need to record a table of predecessors p[i][j], and this does require O(n^2) space.
Pseudocode
Sort cone[1 .. n] by increasing x co-ord.
Compute e[i] for all 1 <= i <= n.
Set f[i][1] = e[i] + T for all 1 <= i <= n.
Set f[i][j] = infinity for all 1 <= i <= n, 2 <= j <= i.
maxCones = 0.
bestTime = infinity.

# Compute f(i, j) for all i, j.
For j from 2 up to n:
    For i from j up to n:
        g = infinity.            # Best time for f(i, j) so far.
        For k from j up to i-1:
            z = f[k][j-1] + m(k, i) + T.
            If z < g:
                p[i][j] = k.
                If z < e[i] + T:
                    g = e[i] + T.
                    Break out of innermost (k) loop.
                Else:
                    g = z.
        If U * g - P <= cone[i].x:
            f[i][j] = g.
            If maxCones < j or (maxCones == j and g < bestTime):
                maxCones = j.    # New record!
                bestI = i.
                bestTime = g.
        Else:
            f[i][j] = infinity.

# Trace back through p[][] to find the corresponding action sequence.
For j from maxCones down to 1:
    fill[j] = bestI.
    bestI = p[bestI][j].
After running this, maxCones will contain the maximum number of cones that can feasibly be filled, and if this is >= 1, then fill[1] through fill[maxCones] will contain a corresponding sequence of maxCones cone IDs (positions in the sorted sequence) to fill, and the total time needed will be in bestTime.
Possible enhancements
The above algorithm only solves the problem optimally under the restriction that the dispenser never moves backwards "too fast". This could be quite restrictive in practice: For example, a pattern of cones like the following
X X X X
X X X X
will cause the dispenser to make a long vertical move between every cone it fills (assuming it's able to fill all of them). Filling several cones in the same row and only then moving to the other row would save a lot of time.
The difficulty in solving the problem optimally without the restriction above is that it starts looking very much like certain NP-hard problems, like the Euclidean TSP problem. I don't have time to look for a formal reduction, but I'm confident that the unrestricted version of your problem is NP-hard, so the best we can hope to do with a polynomial-time algorithm is to look for good heuristics. To that end:
The DP solution above basically finds, for each cone i, the best way to fill j cones in total, ending at cone i and using only other cones to its left. We can solve a slightly more general problem by breaking the sorted sequence of cones into contiguous blocks of b cones, and then finding, for each cone i, the best way to fill j cones in total that ends at cone i and uses only the cones that are either (a) in an earlier block (these cones must be to the left of i) or (b) in the same block as i (these cones aren't, necessarily). The only solutions overlooked by this approach are those that would require us to fill a cone in some block and afterwards fill a cone in an earlier block (this includes, in particular, all solutions where we fill a cone in some block, a cone in a different block, and then another cone in the first block again -- at least one of the two moves between blocks must be a move to a previous block).
Obviously, if we pick b = n then this will find the overall optimum (in a million years), but b doesn't need to be anywhere near this large to get an optimal solution. Using a variation of the O(n^2*2^n) DP algorithm for solving TSP to assist in computing within-block optimal paths, choosing b = 10, say, would be quite feasible.
One more suggestion is that instead of fixing the block size at exactly b, cones could first be more intelligently split into blocks of size at most b, that is, in such a way that the (unknown) optimal solution seldom needs to fill a cone in a previous block. In fact, provided that it's possible to heuristically score breakpoint "quality" (e.g. by using the minimum distance between any pair of points in 2 blocks), calculating a blocking pattern that maximises the score can easily be done in O(bn) time, using a (different) DP!

Labelling a grid using n labels, where every label neighbours every other label

I am trying to create a grid with n separate labels, where each cell is labelled with one of the n labels such that all labels neighbour (edge-wise) all other labels somewhere in the grid (I don't care where). Labels are free to appear as many times as necessary, and I'd like the grid to be as small as possible. As an example, here's a grid for five labels, 1 to 5:
3 2 4
5 1 3
2 4 5
While generating this by hand is not too bad for small numbers of labels, it appears to be very hard to generate a grid of reasonable size for larger numbers and so I'm looking to write a program to generate them, without having to resort to a brute-force search. I imagine this must have been investigated before, but the closest I've found are De Bruijn tori, which are not quite what I'm looking for. Any help would be appreciated.
EDIT: Thanks to Benawii for the following improved description:
"Given an integer n, generate the smallest possible matrix where for every pair (x,y) where x≠y and x,y ∈ {1,...,n} there exists a pair of adjacent cells in the matrix whose values are x and y."
You can experiment with a simple greedy algorithm.
I don't think I'm able to give you a strict mathematical proof, at least because the question is not strictly defined, but the algorithm is quite intuitive.
First, if you have 1...K numbers (K labels) then you need at least K*(K-1)/2 adjacent cells (connections) for full coverage. A matrix of size NxM generates (N-1)*M+(M-1)*N=2*N*M-(N+M) connections.
Since you didn't mention what you understand by 'smallest matrix', let's assume you mean the area. In that case it is obvious that, for a given area, a square matrix generates the larger number of connections, because it has more 'inner' cells adjacent to 4 others. For example, for area 16 the matrix 4x4 is better than 2x8. 'Better' is intuitive -- more connections means more chances to reach the goal. So let's target square matrices and expand them if needed. The above formula becomes 2*N*(N-1).
Then we can experiment with the following greedy algorithm:
For input number K find the N such that 2*N*(N-1)>K*(K-1)/2. A simple school equation.
Keep an adjacency matrix M, set M[i][i]=1 for all i, and 0 for the rest of the pairs.
Initialize a resulting matrix R of size NxN, fill with 'empty value' markers, for example with -1.
Start from top-left corner and iterate right-down:
for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j)
        // pick a value for R[i][j] as described in step 5
For each such R[i][j] (which is -1 at this point), find a value that 'fits best'. Again, 'fit best' is an intuitive definition; here we mean a value that contributes a new, unused connection. To that end, build the set S of numbers already filled into the neighboring cells; its size is at most 2 (the upper and left neighbors). Then find the first k such that M[x][k]=0 for both numbers x in S. If there is no such k, try to make at least one new connection; if even that is impossible, then both neighbors are completely covered, so place some number that is still missing connections -- preferably the one in the 'worst situation', i.e. the x with the smallest Sum(M[x][i]). The same 'worst situation' tie-break applies whenever there are several candidates to choose from.
After setting the value for R[i][j] don't forget to mark the new connections with numbers x from S - M[R[i][j]][x] = M[x][R[i][j]] = 1.
If the matrix is filled and there are still unmarked connections in M then append another row to the matrix and continue. If all the connections are found before the end then remove extra rows.
You can check this algorithm and see what will happen. Step 5 is the place for playing around, particularly in guessing which one to choose in equal situation (several numbers can be in equally 'worst situation').
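A Python sketch of steps 1-6 (`label_grid` is a hypothetical name, and the 'fit best' tie-breaking here is just one of the many choices step 5 leaves open, so the output grid may differ from the example below):

```python
def label_grid(k):
    """Greedy sketch of the algorithm above: fill an N x N grid with
    labels 1..k so that every pair of labels ends up adjacent
    somewhere, appending extra rows until all pairs are covered."""
    n = 1
    while 2 * n * (n - 1) < k * (k - 1) / 2:   # step 1
        n += 1
    covered = [[i == j for j in range(k)] for i in range(k)]  # step 2
    rows = []

    def missing(c):            # connections label c still needs
        return sum(1 for y in range(k) if not covered[c][y])

    def pairs_left():
        return any(not covered[i][j] for i in range(k) for j in range(i))

    row_count = 0
    while row_count < n or pairs_left():       # step 6: extend if needed
        row = []
        for j in range(n):
            s = set()
            if row:
                s.add(row[-1])                 # left neighbor
            if rows:
                s.add(rows[-1][j])             # upper neighbor
            # step 5: prefer the label adding the most new connections,
            # break ties toward the label in the 'worst situation'
            def gain(c):
                return sum(1 for x in s if not covered[c][x])
            best = max(range(k), key=lambda c: (gain(c), missing(c), -c))
            for x in s:
                covered[best][x] = covered[x][best] = True
            row.append(best)
        rows.append(row)
        row_count += 1
    return [[c + 1 for c in row] for row in rows]
```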
Example:
for K=6 we need 15 connections:
N=4, we need 4x4 square matrix. The theory says that 4x3 matrix has 17 connections, so it can possibly fit, but we will try 4x4.
Here is the output of the algorithm above:
1234
5615
2413
36**
I'm not sure if you can do by 4x3, maybe yes... :)

Approximation of a common divisor closest to some value?

Say we have two numbers (not necessarily integers) x1 and x2, and the user inputs a number y. What I want to find is a number y' close to y so that x1 % y' and x2 % y' are very small (smaller than 0.02, for example, but let's call this number LIMIT). In other words, I don't need an optimal algorithm, but a good approximation.
I thank you all for your time and effort, that's really kind!
Let me explain what the problem is in my application : say, a screen size is given, with a width of screenWidth and a height of screenHeight (in pixels). I fill the screen with squares of a length y'. Say, the user wants the square size to be y. If y is not a divisor of screenWidth and/or screenHeight, there will be non-used space at the sides of the screen, not big enough to fit squares. If that non-used space is small (e.g. one row of pixels), it's not that bad, but if it's not, it won't look good. How can I find common divisors of screenWidth and screenHeight?
I don't see how you can ensure that x1%y' and x2%y' are both below some value - if x1 is prime, nothing is going to be below your limit (if the limit is below 1) except x1 (or very close) and 1.
So the only answer that always works is the trivial y'=1.
If you are permitting non-integer divisors, then just pick y'=1/(x1*x2), since the remainder is always 0.
Without restricting the common divisor to integers, it can be anything, and the whole 'greatest common divisor' concept goes out the window.
x1 and x2 are not very large, so a simple brute force algorithm should be good enough.
Divide x1 and x2 by y and compute the floor and ceiling of the results. This gives four numbers: x1f, x1c, x2f, x2c.
Choose the one of these numbers closest to the exact value of x1/y (for x1f, x1c) or x2/y (for x2f, x2c). Let it be x1f, for example. Set y' = x1/x1f. If both x1%y' and x2%y' are not greater than the limit, success (y' is the best approximation). Otherwise add x1f - 1 to the pool of four numbers (or x2f - 1, or x1c + 1, or x2c + 1), choose another closest number and repeat.
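A Python sketch of that search (hypothetical names; `rem` measures the distance to the nearest multiple, which sidesteps the floating-point pitfall where `x % d` for a near-exact divisor returns almost `d` instead of almost 0, and the candidate pool is capped so the search always terminates):

```python
import heapq
import math

def rem(a, d):
    """Distance from a to the nearest multiple of d."""
    r = a % d
    return min(r, d - r)

def approx_common_divisor(x1, x2, y, limit, max_tries=1000):
    """Find y' near y with rem(x1, y') <= limit and rem(x2, y') <=
    limit, trying candidate divisors x1/count and x2/count in order of
    how far count is from the ideal counts x1/y and x2/y."""
    targets = {1: x1 / y, 2: x2 / y}
    heap, seen = [], set()

    def push(which, count):
        if count >= 1 and (which, count) not in seen:
            heapq.heappush(heap, (abs(targets[which] - count), which, count))

    for which in (1, 2):
        push(which, math.floor(targets[which]))
        push(which, math.ceil(targets[which]))
    while heap and len(seen) < max_tries:
        _, which, count = heapq.heappop(heap)
        if (which, count) in seen:
            continue
        seen.add((which, count))
        cand = (x1 if which == 1 else x2) / count
        if rem(x1, cand) <= limit and rem(x2, cand) <= limit:
            return cand
        push(which, count - 1)
        push(which, count + 1)
    return None                 # nothing acceptable near y
```

For example, with x1 = 800, x2 = 585, y = 100 and limit 0.02, the search walks outward until it reaches y' = 5, their greatest common divisor.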
You want to fit the maximum amount of evenly spaced squares inside a fixed area. It's possible to find the optimal solution for your problem with some simple math.
Let's say you have a region with width = W and height = H, and you are trying to fit squares with sides of length = x. The maximum numbers of squares horizontally and vertically, which I will call max_hor and max_vert respectively, are max_hor = floor(W/x) and max_vert = floor(H/x). If you draw all the squares side by side, without any spacing, there will be a remainder in each row and each column. Let's call the horizontal/vertical remainders rest_w and rest_h respectively.
Note that rest_w = W - max_hor*x and rest_h = H - max_vert*x.
What you want is to divide rest_w and rest_h equally, generating small horizontal and vertical spaces of sizes space_w and space_h between the squares.
Note that space_w = rest_w/(max_hor + 1) and space_h = rest_h/(max_vert + 1).
Is that the number you are looking for?
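In Python, the whole computation is just (a sketch; `layout` is a hypothetical name):

```python
def layout(W, H, x):
    """Counts and even spacing for squares of side x in a W x H
    region, per the formulas above."""
    max_hor, max_vert = int(W // x), int(H // x)
    rest_w, rest_h = W - max_hor * x, H - max_vert * x
    space_w = rest_w / (max_hor + 1)
    space_h = rest_h / (max_vert + 1)
    return max_hor, max_vert, space_w, space_h
```

For instance, squares of side 30 in a 100 x 80 region fit 3 x 2, with 2.5 pixels of horizontal spacing and about 6.67 of vertical spacing.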
I believe I made a mistake, but I don't see why. Based on Phil H's answer, I decided to restrict to integer values, but multiply x1 and x2 by a power of 10. Afterwards, I'd divide the common integer divisors by that number.
Online, I found a common factors calculator. Experimenting with it made me realize it wouldn't give me any common divisors... I tried multiple cases (x1 = 878000 and x2 = 1440000 and some others), and none of them had good results.
In other words, you probably have to multiply with very high numbers to achieve results, but that would make the calculation very, very slow.
If anyone has a solution to this problem, that would be awesome. For now though, I decided to take advantage of the fact that screenWidth and screenHeight are good numbers to work with, since they are the dimension of a computer screen. 900 and 1440 have more than enough common divisors, so I can work with that...
Thank you all for your answers on this thread and on my previous thread about an optimal algorithm for this problem.

What's the algorithm behind minesweeper generation

Well, I have been through many sites teaching how to solve it, but was wondering how to create it. I am not interested much in the coding aspects of it, but wanted to know more about the algorithms behind it. For example, when the grid is generated with 10 mines or so, I would use some random function to distribute them across the grid, but then how do I set the numbers associated with them and decide which box should be opened? I couldn't frame any generic algorithm for how I would go about doing that.
Perhaps something along the lines of:
grid = [n, m]  // initialize all cells to 0
for k = 1 to number_of_mines
    get random mine_x and mine_y where grid(mine_x, mine_y) is not a mine
    for x = -1 to 1
        for y = -1 to 1
            if x = 0 and y = 0 then
                grid[mine_x, mine_y] = -number_of_mines  // negative value = mine
            else if (mine_x + x, mine_y + y) is inside the grid then
                increment grid[mine_x + x, mine_y + y] by 1
That's pretty much it...
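The same single-pass idea in Python (a sketch; mines are marked with -1 rather than -number_of_mines, and the bounds checks are explicit):

```python
import random

def make_grid(rows, cols, n_mines):
    """Place n_mines mines (marked -1) and maintain each non-mine
    cell's neighboring-mine count as mines are placed, following the
    pseudocode above."""
    grid = [[0] * cols for _ in range(rows)]
    placed = 0
    while placed < n_mines:
        r = random.randrange(rows)
        c = random.randrange(cols)
        if grid[r][c] == -1:
            continue                      # already a mine, pick again
        grid[r][c] = -1                   # its stale count is discarded
        placed += 1
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if ((dr, dc) != (0, 0) and 0 <= rr < rows
                        and 0 <= cc < cols and grid[rr][cc] != -1):
                    grid[rr][cc] += 1
    return grid
```

As noted below for the demo, asking for more mines than cells would loop forever, so real code should validate n_mines <= rows * cols first.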
** EDIT **
Because this algorithm could lead into creating a board with some mines grouped too much together, or worse very dispersed (thus boring to solve), you can then add extra validation when generating mine_x and mine_y number. For example, to ensure that at least 3 neighboring cells are not mines, or even perhaps favor limiting the number of mines that are too far from each other, etc.
** UPDATE **
I've taken the liberty of playing a little with JS Bin and came up with a functional Minesweeper game demo. This is simply to demonstrate the algorithm described in this answer. I did not optimize the randomness of the generated mine positions, so some games could be impossible or too easy. Also, there is no validation of how many mines there are in the grid, so you can actually create a 2 by 2 grid with 1000 mines... but that will only lead to an infinite loop :) Enjoy!
You just seed the mines and after that, you traverse every cell and count the neighbouring mines.
Or you set every counter to 0 and with each seeded mine, you increment all neighbouring cells counters.
If you want to place m mines on N squares, and you have access to a random number generator, you just walk through the squares and, for each square, compute (# mines remaining)/(# squares remaining), placing a mine if your random number is at or below that value.
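This sequential method always yields exactly m mines, uniformly distributed; a sketch in Python (`place_mines` is a hypothetical name):

```python
import random

def place_mines(n_squares, m_mines):
    """One pass over the squares: place each mine with probability
    (mines remaining) / (squares remaining).  When the two counts are
    equal the probability reaches 1, so exactly m_mines are placed."""
    mines = []
    remaining = m_mines
    for i in range(n_squares):
        if random.random() < remaining / (n_squares - i):
            mines.append(i)
            remaining -= 1
    return mines
```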
Now, if you want to label every square with the number of adjacent mines, you can just do it directly:
count(x, y) = sum(
    for i = -1 to 1
        for j = -1 to 1
            1 if (x+i, y+j) contains a mine
            0 otherwise
)
or if you prefer you can start off with an array of zeros and increment each by one in the 3x3 square that has a mine in the center. (It doesn't hurt to number the squares with mines.)
This produces a purely random and correctly annotated minesweeper game. Some random games may not be fun games, however; selecting random-but-fun games is a much more challenging task.
