How many paths of length n with the same start and end point can be found on a hexagonal grid?

Given this question, what about the special case when the start point and end point are the same?
Another change in my case is that we must move at every step. How many such paths can be found and what would be the most efficient approach? I guess this would be a random walk of some sort?
My thinking so far: since we must always return to our starting point, it might be easier to think in terms of n/2. At every step, except at step n/2, we have 6 choices. At step n/2 we have a different number of choices depending on whether n is even or odd, and also depending on where we are (what previous choices we made). For example, if n is even and we went straight out, we only have one choice at n/2: going back. But if n is even and we didn't go straight out, we have more choices.
It is all the cases at this turning point that I have trouble getting straight.
Am I on the right track?
To be clear, I just want to count the paths. So I guess we are looking for some kind of constrained permutation?

This version of the combinatorial problem looks like it actually has a short formula as an answer.
Nevertheless, the general version, both of this question and of the original one, can be solved by dynamic programming in O(n^3) time and O(n^2) memory.
Consider a hexagonal grid which spans at least n steps in all directions from the target cell.
Introduce a coordinate system, so that every cell has coordinates of the form (x, y).
Let f(k, x, y) be the number of ways to arrive at cell (x, y) from the starting cell after making exactly k steps.
These can be computed either recursively or iteratively:
f(k, x, y) is just the sum of f(k-1, x', y') for the six neighboring cells (x', y').
The base case is f(0, xs, ys) = 1 for the starting cell (xs, ys), and f(0, x, y) = 0 for every other cell (x, y).
The answer for your particular problem is the value f(n, xs, ys).
The general structure of an iterative solution is as follows:
let f be an array [0..n][-n-1..n+1][-n-1..n+1] (all inclusive) of integers
f[0][*][*] = 0
f[0][xs][ys] = 1
for k = 1, 2, ..., n:
    for x = -n, ..., n:
        for y = -n, ..., n:
            f[k][x][y] = f[k-1][x-1][y] +
                         f[k-1][x][y-1] +
                         f[k-1][x+1][y] +
                         f[k-1][x][y+1]
answer = f[n][xs][ys]
OK, I cheated here: the solution above is for a rectangular grid, where the cell (x, y) has four neighbors.
The six neighbors of a hexagon depend on how exactly we introduce a coordinate system.
I'd prefer other coordinate systems than the one in the original question.
This link gives an overview of the possibilities, and here is a short summary of that page on StackExchange, to protect against link rot.
My personal preference would be axial coordinates.
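For concreteness, here is a minimal Python sketch of this DP in axial coordinates, keeping only the previous layer at each step (the memory optimization discussed further down). The function name and the use of dictionaries instead of arrays are my choices, not from any particular source:

from collections import defaultdict

# The six neighbor offsets of a hex cell (q, r) in axial coordinates.
HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def count_closed_walks(n, start=(0, 0)):
    """Number of n-step paths on a hex grid from start back to start."""
    prev = defaultdict(int)
    prev[start] = 1                      # base case: f(0, start) = 1
    for _ in range(n):
        cur = defaultdict(int)
        for (q, r), ways in prev.items():
            for dq, dr in HEX_DIRS:      # sum over the six neighbors
                cur[(q + dq, r + dr)] += ways
        prev = cur
    return prev[start]

print(count_closed_walks(2))  # 6: step out to any neighbor, step back
print(count_closed_walks(3))  # 12: the closed walks around small triangles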
Note that, if we allow standing still instead of moving to one of the neighbors, that just adds one more term, f[k-1][x][y], to the formula.
The same goes for using a triangular, rectangular, or hexagonal grid, for using 4 or 8 or some other subset of neighbors in a grid, and so on.
If you want to arrive to some other target cell (xt, yt), that is also covered: the answer is the value f[n][xt][yt].
Similarly, if you have multiple start or target cells, and you can start and finish at any of them, just alter the base case or sum the answers in the cells.
The general layout of the solution remains the same.
This works in n * (2n+1) * (2n+1) * (number of neighbors) operations, which is O(n^3) for any constant number of neighbors (4 or 6 or 8...) a cell may have in our particular problem.
Finally, note that, at step k of the main loop, we need only two layers of the array f: f[k-1] is the source layer, and f[k] is the target layer.
So, instead of storing all layers for the whole time, we can store just two layers, as we don't need more: one for odd k and one for even k.
Using only two layers is as simple as changing all f[k] and f[k-1] to f[k%2] and f[(k-1)%2], respectively.
This lowers the memory requirement from O(n^3) down to O(n^2), as advertised in the beginning.
For a more mathematical solution, here are some steps that would perhaps lead to one.
First, consider the following problem: what is the number of ways to go from (xs, ys) to (xt, yt) in n steps, each step moving one square north, west, south, or east?
To arrive from x = xs to x = xt, we need H = |xt - xs| steps in the right direction (without loss of generality, let it be east).
Similarly, we need V = |yt - ys| steps in another right direction to get to the desired y coordinate (let it be south).
We are left with k = n - H - V "free" steps, which can be split arbitrarily into pairs of north-south steps and pairs of east-west steps.
Obviously, if k is odd or negative, the answer is zero.
So, for each possible split k = 2h + 2v of "free" steps into horizontal and vertical steps, what we have to do is construct a path of H+h steps east, h steps west, V+v steps south, and v steps north. These steps can be done in any order.
The number of such sequences is a multinomial coefficient, and is equal to n! / (H+h)! / h! / (V+v)! / v!.
To finally get the answer, just sum these over all possible h and v such that k = 2h + 2v.
This solution calculates the answer in O(n) if we precalculate the factorials, also in O(n), and consider all arithmetic operations to take O(1) time.
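A short Python sketch of this computation; it recomputes factorials for brevity instead of precalculating them, and the names are mine:

from math import factorial

def count_paths_rect(n, xs, ys, xt, yt):
    """Paths of n unit steps (N/E/S/W) from (xs, ys) to (xt, yt)."""
    H, V = abs(xt - xs), abs(yt - ys)
    k = n - H - V                     # the "free" steps
    if k < 0 or k % 2 != 0:
        return 0
    total = 0
    for h in range(k // 2 + 1):       # split k = 2h + 2v
        v = k // 2 - h
        total += factorial(n) // (factorial(H + h) * factorial(h)
                                  * factorial(V + v) * factorial(v))
    return total

print(count_paths_rect(2, 0, 0, 0, 0))  # 4: out and back in each direction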
For a hexagonal grid, a complicating feature is that there is no such clear separation into horizontal and vertical steps.
Still, given the starting cell and the number of steps in each of the six directions, we can find the final cell, regardless of the order of these steps.
So, a solution can go as follows:
Enumerate all possible partitions of n into six summands a1, ..., a6.
For each such partition, find the final cell.
For each partition where the final cell is the cell we want, add multinomial coefficient n! / a1! / ... / a6! to the answer.
As is, this takes O(n^6) time and O(1) memory.
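A direct, if slow, Python sketch of that enumeration, reusing the axial direction vectors from the earlier sketch; the composition generator is a helper of my own:

from math import factorial

HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def compositions(n, parts):
    """All ordered splits of n into `parts` nonnegative summands."""
    if parts == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, parts - 1):
            yield (first,) + rest

def count_paths_hex(n, target=(0, 0)):
    total = 0
    for counts in compositions(n, 6):               # a1 + ... + a6 = n
        end = (sum(a * d[0] for a, d in zip(counts, HEX_DIRS)),
               sum(a * d[1] for a, d in zip(counts, HEX_DIRS)))
        if end == target:                           # partition hits the target
            coeff = factorial(n)                    # multinomial n!/a1!.../a6!
            for a in counts:
                coeff //= factorial(a)
            total += coeff
    return total

print(count_paths_hex(2))  # 6, matching the DP above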
By carefully studying the relations between different directions on a hexagonal grid, perhaps we can actually consider only the partitions which arrive at the target cell, and completely ignore all other partitions.
If so, this solution can be optimized into at least some O(n^3) or O(n^2) time, maybe further with decent algebraic skills.

Related

Finding closest pair of points in the plane with non-distinct x-coordinates in O(n log n)

Most of the implementations of the algorithm to find the closest pair of points in the plane that I've seen online have one of two deficiencies: either they fail to meet an O(n log n) runtime, or they fail to accommodate the case where some points share an x-coordinate. Is a hash map (or equivalent) required to solve this problem optimally?
Roughly, the algorithm in question is (per CLRS Ch. 33.4):
For an array of points P, create additional arrays X and Y such that X contains all points in P, sorted by x-coordinate and Y contains all points in P, sorted by y-coordinate.
Divide the points in half - drop a vertical line so that you split X into two arrays, XL and XR, and divide Y similarly, so that YL contains all points left of the line and YR contains all points right of the line, both sorted by y-coordinate.
Make recursive calls for each half, passing XL and YL to one and XR and YR to the other, and finding the minimum distance, d in each of those halves.
Lastly, determine if there's a pair with one point on the left and one point on the right of the dividing line with distance smaller than d; through a geometric argument, we find that we can adopt the strategy of just searching through the next 7 points for every point within distance d of the dividing line, meaning the recombination of the divided subproblems is only an O(n) step (even if it looks O(n^2) at first glance).
This has some tricky edge cases. One way people deal with this is sorting the strip of points of distance d from the dividing line at every recombination step (e.g. here), but this is known to result in an O(n log^2 n) solution.
Another way people deal with edge cases is by assuming each point has a distinct x-coordinate (e.g. here): note the snippet in closestUtil which adds to Pyl (or YL as we call it) if the x-coordinate of a point in Y is <= the line, or to Pyr (YR) otherwise. Note that if all points lie on the same vertical line, this would result in us writing past the end of the array in C++, as we write all n points to YL.
So the tricky bit when points can have the same x-coordinate is dividing the points in Y into YL and YR depending on whether a point p in Y is in XL or XR. The pseudocode in CLRS for this is (edited slightly for brevity):
for i = 1 to Y.length
    if Y[i] in X_L
        Y_L.length = Y_L.length + 1
        Y_L[Y_L.length] = Y[i]
    else
        Y_R.length = Y_R.length + 1
        Y_R[Y_R.length] = Y[i]
However, outside of pseudocode, if we're working with plain arrays, we don't have a magic function that can determine whether Y[i] is in X_L in O(1) time. If we're assured that all x-coordinates are distinct, sure: we know that anything with an x-coordinate less than the dividing line is in XL, so with one comparison we know which array to partition any point p in Y into. But in the case where x-coordinates are not necessarily distinct (e.g. in the case where they all lie on the same vertical line), do we require a hash map to determine whether a point in Y is in XL or XR and successfully break down Y into YL and YR in O(n) time? Or is there another strategy?
Yes, there are at least two approaches that work here.
The first, as Bing Wang suggests, is to apply a rotation. If the angle is sufficiently small, this amounts to breaking ties by y coordinate after comparing by x, no other math needed.
The second is to adjust the algorithm on G4G to use a linear-time partitioning algorithm to divide the instance, and a linear-time sorted merge to conquer it. Presumably this was not done because the author valued the simplicity of sorting relative to the previously mentioned algorithms in most programming languages.
Kleinberg & Tardos suggest annotating each point with its position (index) in X.
You could do this in O(n) time, or, if you really, really want to, you could do it "for free" during the sorting operation.
With this annotation, you can do your O(1) partitioning, and then take the position pr of the right-most point in Xl in O(1), using it to determine whether a point in Y goes in Yl (position <= pr) or Yr (position > pr). This does not require an extra data structure like a hash map, but it does require that those same positions are used in X and Y.
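A small Python sketch of this annotation idea, with my own naming. Each point becomes a tuple (x, y, i), where i is its index in the globally x-sorted order; since every recursive X is a contiguous slice of that order, comparing annotations against pr works at every level:

def annotate(points):
    """Sort once by (x, y) and tag each point with its index in X."""
    X = [(x, y, i) for i, (x, y) in enumerate(sorted(points))]
    Y = sorted(X, key=lambda p: p[1])      # y-sorted, annotations intact
    return X, Y

def split(X, Y):
    """One divide step in O(n). Y is partitioned by comparing each
    point's annotation against pr, the position of the right-most
    point of X_L, so equal x-coordinates cause no trouble."""
    mid = len(X) // 2
    XL, XR = X[:mid], X[mid:]
    pr = XL[-1][2]
    YL = [p for p in Y if p[2] <= pr]      # keeps y-order
    YR = [p for p in Y if p[2] > pr]
    return XL, XR, YL, YR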
NB:
It is not immediately obvious to me that the partitioning of Y is the only problem that arises when multiple points have the same coordinate on the x-axis. It seems to me that the proof that the number of comparisons necessary across partitions is linear breaks down, but I have seen only the proof that you need only 15 comparisons, not the proof for the stricter 7-point version, so I cannot be sure.

What is a sorting algorithm that is robust to a faulty comparison?

I want to sort a list of n items with a comparison sort. However, one of the comparisons made by the algorithm will be flipped from what it's supposed to be. Specifically, there is one pair of items for which the comparator function consistently gives the wrong result.
What is an efficient O(n log n) sorting algorithm that will be robust to this faulty comparison? By robust, I mean that every item is off by at most k spots from its true position, for some reasonably small k.
If possible, I'd like it to be robust in the worst case (faulty comparison chosen adversarially), but I'll settle for robust in the average case.
An example robust algorithm (that's not efficient) would be to make all n*(n-1)/2 pairwise comparisons, and place each item by how many of the comparisons it won. Then, no matter which comparison the adversary flips, each item's index will be off by no more than k=1.
An example of a NON-robust algorithm is quicksort, because the adversary could just choose the largest item to be on the wrong side of the first pivot, making it on average n/2 spots off from its correct index.
TL;DR: It's possible to modify quicksort to get the following guarantee: in (expected) time O(n log n), we can do one of the following, depending on which comparison is flipped.
Perfectly sort the array.
Perfectly sort the array, except that an adjacent pair of items somewhere in the array is swapped.
Perfectly sort the array, except that three consecutive items in the array, which can be identified, are permuted.
This guarantees a maximum displacement of 2, which is as good as is theoretically possible.
I mulled over this problem for a couple of hours and everything I'm doing connects back to tournaments.
I'd like to begin by trying to reframe the question as follows. If you have a set of n items and you know the "true" results of the comparisons between them, you can represent that result as a directed graph with one node per item and edges indicating when one item compares less than another. This type of digraph is called a "tournament," since you can think of it as encoding the result of a round-robin tournament where each player plays each other player.
In the case of an honest comparator, our tournament will be acyclic, and in particular it will have the following key property: there's exactly one node of each outdegree 0, 1, 2, ..., n - 1. The idea here is that the smallest element will have outdegree n - 1 (it's smaller than everything else), while the largest element will have outdegree 0 (it's bigger than everything else). And in fact, there's a theorem that a tournament is acyclic if and only if each node in the tournament has a different outdegree. Another useful fact: in an acyclic tournament, there's an edge from U to V if and only if outdeg(U) > outdeg(V).
In the case of a "dishonest comparator," we essentially start with an acyclic tournament, then flip a single edge. Your question asked about doing approximate sorting based on this comparator, but I'd like to step back and ask a different question, which I think can then be used to answer yours more precisely. In what cases can you figure out which edge was flipped? If we can do that, then we can do even better than approximate sorting - we can "unflip" the edge and sort perfectly. On the other hand, in which cases can you not figure out which edge was flipped, and when that happens, how far from sorted will we end up? That corresponds to having to do an approximate sort because we can't recover the original ordering.
Here's a useful fact:
Theorem: Begin with an acyclic tournament and flip a single edge. Then it's possible to determine which edge was flipped if and only if the outdegrees of the two endpoints of the flipped edge originally differ by at least three.
To prove this, we'll show both directions of implication.
First, suppose that we flip an edge between two nodes X and Y whose outdegrees differ by one. When we're done, we're left with a tournament where all nodes have different outdegrees (all other nodes have their outdegrees unchanged, and if we flipped the edge (X, Y), then X and Y swap outdegrees because one goes up by one and one goes down by one). We're now left with another acyclic tournament. And in particular, we can't tell which edge we flipped, because we could have just as well flipped any edge between any pair of nodes whose outdegrees differ by one. Any sorted ordering we get from this comparator will therefore have those two items out of place, giving a maximum displacement of at least 1.
Next, suppose we flip an edge between nodes X and Y where outdeg(X) = k+1 and outdeg(Y) = k-1. We now have outdeg(X) = k = outdeg(Y), and to begin with there must have been some other node Z with outdegree k as well. So at this point, we have three nodes of outdegree k (namely, X, Y, and Z), and we know that we must have flipped one of the three edges between them. But we can't tell which one it was: flipping the XY edge, or the XZ edge, or the YZ edge would all give back acyclic tournaments. So in this case, there's no way to undo the transform.
An important note for this particular case: it corresponds to the comparator creating a tournament with exactly one cycle containing the nodes X, Y, and Z; specifically, the cycle takes the form X, Z, Y, X. The problem is that we can't tell whether the original ordering was (X, Z, Y), (Z, Y, X), or (Y, X, Z), and so we'd have a maximum displacement of at least 2.
And finally, suppose that we have two nodes X and Y and flip the edge XY in the case where outdeg(X) = k, outdeg(Y) = m, and k ≥ m + 3. We're now left with a tournament in which two nodes have outdegree k - 1 and two nodes have outdegree m + 1. But of those four nodes, it's guaranteed that there's exactly one pair of them that can be flipped back to produce an acyclic tournament. One way to see this: take the four nodes that now have repeated outdegrees; call them X and Y (as above) and also W and Z, and suppose we have the cycle X, W, Z, Y, X, where the only flipped edge from the original is (Y, X). What will this cycle look like? Well, since (X, W), (W, Z), and (Z, Y) are edges in the tournament that weren't flipped, back in the original tournament we have outdeg(X) > outdeg(W) > outdeg(Z) > outdeg(Y). That means that we have to have X and W having outdegree k - 1 in the new graph and Z and Y having outdegree m + 1 in the new graph. Therefore, only flipping the edge from Y to X will increase the degree of one of the degree-(k-1) nodes back up to k while also decreasing the degree of one of the degree-(m+1) nodes down to m.
Summarizing:
Theorem: The faulty comparator will either
Behave as a real comparator, in which case we swapped two adjacent elements in the original sequence and we will never know which.
Have exactly one cycle of length three of elements whose original ordering can never be known, or
Have a cycle of length four, in which case we can identify which comparison is reversed.
With this in mind, it seems reasonable to reframe your problem in the following way:
Goal: Design an algorithm that, in time O(n log n), does one of the following things to a list of n elements given a faulty comparator that returns the wrong result when comparing two fixed elements X and Y against one another:
Perfectly sort the list.
Perfectly sort the list, except with two adjacent items swapped.
Perfectly sort the list, except with three adjacent items permuted.
Here's one possible algorithm that does this in expected O(n log n) time that's based on quicksort. The basic idea is the following: we run more or less a regular quicksort, at each point in time checking to see whether we found a triangle. If not, then either we're in case (1) or case (2). If we do find a triangle, we see whether we can identify which comparison got reversed. If we can, then we rerun quicksort, except that we "fix" the comparator in this broken case. If we can't, then we're in case (3) and just finish quicksort as usual.
The specific technique we'll use to detect a triangle works like this. Begin with a regular, vanilla quicksort: pick a pivot, partition the array into things less than the pivot and things bigger than the pivot, then recursively sort the two smaller subarrays. However, after doing so, we do one additional step: assuming the subarray we're sorting has three or more elements in it, look at the pivot p and the element just before and just after it (call those s, p, g for "smaller," "pivot," and "greater"). Then if the comparator says s < p < g < s, we've found a triangle. And in fact, we have something stronger.
Suppose that at some point in quicksort the comparator does indeed compare X and Y, the mismatched items. We're assuming X < Y, but that the comparator incorrectly reports that Y < X. The only way that two items can be compared in quicksort is if one of them is a pivot element at a time when the other is in the current subarray. Without loss of generality, let's assume that X was the pivot, and that Y was compared against it.
What should happen here, assuming the comparator was honest, is that Y would be found to be larger than X, and therefore would be placed into the "bigger" subarray. But because the comparator is a lying liar who lies, instead Y gets placed into the "smaller" subarray. If we then recursively sort the "smaller" subarray and the "bigger" subarray, think about where Y will end up. It's in the "smaller" subarray but is actually bigger than X, which means it'll compare larger than everything in that "smaller" subarray. Consequently, Y will appear just before X. Now, look at the items in the "bigger" subarray. There are two options. The first is that in the "real" ordering, there's at least one value between X and Y. That value would then appear in the "bigger" subarray because it's larger than X, and in particular the first element of the "bigger" subarray would compare smaller than Y. That would mean that Y, then X, then the item immediately after X after sorting would form a triangle. The other option is that X and Y are adjacent in the true sorted ordering, in which case we'd never find out (as mentioned above). This, combined with the above insight, means that
Theorem: Suppose we run quicksort, and after recursively sorting the left and right subarrays we look at the three items consisting of the pivot, the item just before it, and the item just after it to see if they form a triangle. Then if this algorithm detects a triangle, a triangle exists. Moreover, if this algorithm does not detect a triangle, then either (1) no triangle exists or (2) a triangle does exist, but the comparator was never applied to the bad pair (X, Y) and so the sorted order is correct.
With all this said and done, we can state the full algorithm that, in expected O(n log n) time, sorts the array as best as is possible.
function modifiedQuicksort(array, comparator):
    if array has length 0 or 1, return.
    pick a random pivot element from the array.
    use the comparator to form subarrays smaller and greater based on
        how elements compare against the pivot.
    recursively apply modifiedQuicksort to those two arrays.
    if the comparator finds a triangle formed from the last element of
        smaller, the pivot, and the first element of greater, report those
        three items as a triangle.
    return smaller, pivot, greater.

function sortAsBestWeCan(array, comparator):
    run modifiedQuicksort(array, comparator)
    if it didn't report a triangle, return the result of the call.
    otherwise, it reported a triangle A, B, C.
    for each other item D:
        if comparator(A, D) and comparator(D, B) or
           comparator(B, D) and comparator(D, C) or
           comparator(C, D) and comparator(D, A):
            you have found a 4-cycle from A, B, C, and D.
            detect which comparison is reversed.
            use that knowledge plus the comparator and your favorite
                O(n log n)-time sorting algorithm to perfectly sort
                the input array.
    otherwise, those three items are the only triangle, and the
        array is sorted as well as it can be. return it.
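Here is a minimal Python sketch of the triangle-detecting part, under assumptions of my own: distinct items, a comparator less(a, b) meaning "a comes before b", and make_faulty as a stand-in for the broken comparator from the question. It demonstrates the detection step only, not the full recovery in sortAsBestWeCan:

import random

def modified_quicksort(items, less, found):
    """Quicksort that appends any triangle (s, pivot, g) it sees to `found`."""
    if len(items) <= 1:
        return items
    i = random.randrange(len(items))
    pivot, rest = items[i], items[:i] + items[i + 1:]
    smaller = modified_quicksort([x for x in rest if less(x, pivot)], less, found)
    greater = modified_quicksort([x for x in rest if not less(x, pivot)], less, found)
    if smaller and greater:
        s, g = smaller[-1], greater[0]
        if less(g, s):                  # s < pivot and pivot < g, yet g < s
            found.append((s, pivot, g))
    return smaller + [pivot] + greater

def make_faulty(bad_pair):
    """True order is a < b, except `bad_pair` is consistently reversed."""
    def less(a, b):
        if {a, b} == set(bad_pair):
            return not (a < b)
        return a < b
    return less

found = []
result = modified_quicksort(list(range(20)), make_faulty((3, 12)), found)
print(result)  # 12 lands just before 3 whenever the bad pair got compared
print(found)   # e.g. [(12, 3, 4)] if a triangle was detected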
I think I've thought up a solution.
First, do a pass with any decent sorting algorithm you want (like quicksort), which should, at worst, result in only one item that's significantly far from where it should be.
Then, choose a width h that's at least 5.
For i from 0 to n-h, we look at the group of h items at positions i, i+1, ..., i+h-1. We make all h*(h-1)/2 pairwise comparisons in that group, and rearrange the items by how many comparisons each won. We then increment i and move on to the next group.
Afterwards, we do the same thing, but going backwards from i=n-h to i=0.
These two extra passes will bubble the displaced item up or down into the correct area, using the extra comparisons within each group of h to override the single faulty comparison.
The final number of comparisons will be O(n log n) + n*h*(h-1)/2. Not sure how much better you can do.
This method also works (I think) for more than one faulty comparison. All you need to do is make sure that h is large enough to override those faulty comparisons.
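A rough Python sketch of those two extra passes, under the stated assumptions (h at least 5, distinct hashable items, and less(a, b) answering "should a come before b?"); the function names are mine:

def repair_passes(arr, less, h=5):
    """Within each window of width h, redo all pairwise comparisons and
    reorder by wins, so a single faulty comparison gets outvoted."""
    def reorder(window):
        wins = {x: 0 for x in window}
        for i, a in enumerate(window):
            for b in window[i + 1:]:
                if less(a, b):
                    wins[a] += 1        # a should come before b
                else:
                    wins[b] += 1
        return sorted(window, key=lambda x: -wins[x])

    n = len(arr)
    for i in range(n - h + 1):          # forward pass
        arr[i:i + h] = reorder(arr[i:i + h])
    for i in range(n - h, -1, -1):      # backward pass
        arr[i:i + h] = reorder(arr[i:i + h])
    return arr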

Picking the "spread" from the points on a line

I'm facing an algorithmic problem described as follows: given a line from 0 to N (really big N), a list of X points on said line, and a number Z (0 <= Z <= X), pick Z of the X points so as to maximize the distance between the two closest chosen points. The brute-force solution in O(n^2) doesn't seem that difficult, but I'm looking for something more sophisticated that can be done in O(n log n) time. Any clues, solutions, or advice is much appreciated.
Edit: answering the question in the first post: it is the minimal distance (between the two closest points) that has to be maximized.
One easy approach is O(X log N).
First, sort the points.
Next observe that if you already know the minimum distance (call it d) between the points, it's O(X) to see if there's a way of picking Z points all of which are at least distance d apart: take the left-most element, then the next that's at least distance d away, then the next that's at least distance d away from that, and so on. If by the time you've got to the end of the array you have at least Z points, then you have a solution, and if you don't, there is no solution.
Now, you can use a binary search on [0, N] to find the largest d with a solution.
The sort is O(X log X), the binary search takes O(log N) trials, and each trial is O(X). Overall, that's O(X log X + X log N), but since N >= X, that simplifies to O(X log N).
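A short Python sketch of this approach, assuming integer coordinates and Z >= 2 (names are mine):

def max_min_spread(points, Z):
    """Largest achievable minimum gap when choosing Z of the points."""
    pts = sorted(points)

    def feasible(d):
        # Greedy O(X) check: can we pick Z points pairwise >= d apart?
        count, last = 1, pts[0]
        for p in pts[1:]:
            if p - last >= d:
                count += 1
                last = p
        return count >= Z

    lo, hi = 0, pts[-1] - pts[0]        # binary search d over [0, N]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(mid):
            lo = mid                    # d = mid works, try larger
        else:
            hi = mid - 1
    return lo

print(max_min_spread([0, 4, 5, 10], 3))  # 5: pick 0, 5, 10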

Robot moving in a grid: possible paths and time complexity?

I'm not able to understand why, for the problem below, the number of paths is (x+y)!/(x!y!). I understand it comes from choosing x items out of a path of x+y items, but why is it not (choosing x items out of x+y) plus (choosing y items out of x+y)? Why does it have to be only x?
A robot is located at the top-left corner of an m x n grid (marked 'Start' in the diagram below). The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below). How many possible paths are there?
Are all of these paths unique ?
How do I determine that ?
And what would be the time complexity for the backtracking algorithm ?
This is somewhat based on Mukul Joshi's answer, but hopefully a little clearer.
To go from 0,0 to x,y, you need to move right exactly x times and down exactly y times.
Let each right movement be represented by a 0 and a down movement by a 1.
Let a string of 0s and 1s then indicate a path from 0,0 to x,y. This path will contain x 0s and y 1s.
Now we want to count all such strings. This is equivalent to counting the number of permutations of any string containing x 0s and y 1s. This string is a multiset (each element can appear more than once), thus we want a multiset permutation, which can be calculated as n!/(m1!m2!...mk!) where n is the total number of characters, k is the number of unique characters, and mi is the number of times the ith unique character is repeated. Since there are x+y characters in total, 0 is repeated x times, and 1 is repeated y times, we get (x+y)!/(x!y!).
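In Python this is a one-liner with math.factorial, and since there are only two distinct characters it also equals a binomial coefficient (math.comb):

from math import comb, factorial

def grid_paths(x, y):
    """Multiset permutations of x 0s (right moves) and y 1s (down moves)."""
    return factorial(x + y) // (factorial(x) * factorial(y))

print(grid_paths(3, 2))  # 10
print(comb(3 + 2, 3))    # 10 as well, choosing positions for the 0s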
Time Complexity:
The time complexity of backtracking / brute force would involve having to explore all of these paths. Think of it as a tree, with there being (x+y)!/x!y! leaves. I might be wrong, but I think the number of nodes in trees with a branching factor > 1 can be represented as the big-O of the number of leaves, thus we end up with O((x+y)!/x!y!) nodes, and thus the same time complexity.
OK, here is a solution to the problem, laid out so that it is easier to follow.
First of all, let us decide on a solution algorithm. For every cell we will count the number of possible paths from it to the end. The algorithm visits cells and writes into each the sum of the values of the cells to its right and below. We do this because the robot can either move down and follow any of the downward paths, or move right and follow any of the rightward paths, so adding the two gives the total number of different paths. (That these paths are all distinct is straightforward to prove; I can do it in the comments if you want.)
The initial value is 1 for the bottom-right cell (finish), because there is only one way to get there from itself (not moving at all). A cell that doesn't exist (e.g. the cell below a bottommost cell) has a value of 0.
Building the cell values one by one results in Pascal's triangle, whose value in cell (x, y) is (x + y)! / (x! y!), where x is the horizontal distance from the finish and y the vertical one.
As for complexity, we have x * y iterations over grid cells, and each iteration takes constant time. If you don't want to use the tabulation, you can use the formula mentioned above and get O(x + y) instead of O(x * y).
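A small Python sketch of this tabulation, filling the grid backwards from the finish cell (names are illustrative):

def count_paths_grid(m, n):
    """ways[i][j] = number of right/down paths from (i, j) to the finish."""
    ways = [[0] * n for _ in range(m)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if i == m - 1 and j == n - 1:
                ways[i][j] = 1                              # the finish cell itself
            else:
                down = ways[i + 1][j] if i + 1 < m else 0   # missing cells count as 0
                right = ways[i][j + 1] if j + 1 < n else 0
                ways[i][j] = down + right
    return ways[0][0]

print(count_paths_grid(3, 4))  # 10 == (2+3)!/(2!3!)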
Well here is the explanation.
No matter how you go, the path to the destination has to cross m rows and n columns.
Say you represent a row move by 1 and a column move by 0. Your path is then a string of m+n characters, but it can contain only m 1s and n 0s.
If you had m+n distinct characters, the number of permutations would be (m+n)!, but when you have repeating characters it becomes (m+n)!/(m!n!). Refer to this.
Of course the paths will all be unique. Test it on a 4x3 grid and you can see it.
You don't add "How many ways can I distribute my X moves?" to "How many ways can I distribute my Y moves?" for two reasons:
The distribution of X moves and Y moves are not independent. For each configuration of X moves, there is only 1 possible configuration of Y moves.
If they were independent, you wouldn't add them, you would multiply them. For example, if I have X different color shirts and Y different color pants, there are X * Y different combinations of shirts and pants.
Note that for #1 there is nothing special about X - I could just have easily chosen Y and said: "The distribution of Y moves and X moves are not independent. For each configuration of Y moves, there is only 1 possible configuration of X moves." Which is why, as others have pointed out, counting the number of ways to distribute your Y moves gives the same result as counting the number of ways to distribute your X moves.

Algorithm for finding path combinations?

Imagine you have a dancing robot in n-dimensional euclidean space starting at origin P_0 = (0,0,...,0).
The robot can make m types of dance moves D_1, D_2, ..., D_m
D_i is an n-vector of integers (D_i_1, D_i_2, ..., D_i_n)
If the robot makes dance move i, then its position changes by D_i:
P_{t+1} = P_t + D_i
The robot can make any of the dance moves as many times as he wants and in any order.
Let a k-dance be defined as a sequence of k dance moves.
Clearly there are m^k possible k-dances.
We are interested to know the set of possible end positions of a k-dance, and for each end position, how many k-dances end at that location.
One way to do this is as follows:
P0 = (0, 0, ..., 0)
S[0][P0] = 1
for I in 1 to k:
    for J in 1 to m:
        for P in S[I-1]:
            S[I][P + D_J] += S[I-1][P]
Now S[k][Q] will tell you how many k-dances end at position Q
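For concreteness, a Python rendering of the pseudocode above, keeping each S[I] as a dictionary from positions to counts so only reachable positions are stored:

from collections import defaultdict

def k_dance_counts(moves, k, n):
    """moves: the m dance vectors D_i, each of length n.
    Returns {end position: number of k-dances ending there}."""
    cur = defaultdict(int)
    cur[(0,) * n] = 1                             # S[0][P0] = 1
    for _ in range(k):
        nxt = defaultdict(int)
        for pos, ways in cur.items():
            for d in moves:                       # S[I][P + D_J] += S[I-1][P]
                nxt[tuple(p + dj for p, dj in zip(pos, d))] += ways
        cur = nxt
    return dict(cur)

print(k_dance_counts([(1, 0), (0, 1)], 3, 2))
# {(3, 0): 1, (2, 1): 3, (1, 2): 3, (0, 3): 1}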
Assume that n, m, |D_i| are small (less than 5) and k is less than 40.
Is there a faster way? Can we calculate S[k][Q] "directly" somehow with some sort of linear algebra related trick? or some other approach?
You could create an adjacency matrix that would contain dance-move transitions in your space (the part of it that's reachable in k moves, otherwise it would be infinite). Then, the P_0 row of the k-th power of this matrix contains the S[k] values.
The matrix in question quickly gets enormous, something like (k*(max(D_i_j)-min(D_i_j)))^n rows (every dimension can be halved if Q is close to the origin), but that's true for your S table as well.
Since dance moves are interchangeable, you can assume that for i < j the robot makes all of its D_i moves before its D_j moves, thus reducing the number of combinations to actually calculate.
If you keep track of the number of times each dance move was made, calculating the total number of combinations is easy, as sketched below.
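For instance (the same enumerate-and-sum idea as in the hexagonal-grid answer near the top; helper names are mine):

from math import factorial

def count_dances_to(moves, k, target):
    """Split k among the m move types; each split that lands on `target`
    contributes the multinomial k!/(c1!...cm!) orderings."""
    def splits(total, parts):
        if parts == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in splits(total - first, parts - 1):
                yield (first,) + rest

    dims = len(target)
    answer = 0
    for counts in splits(k, len(moves)):
        end = tuple(sum(c * d[i] for c, d in zip(counts, moves))
                    for i in range(dims))
        if end == target:
            coeff = factorial(k)
            for c in counts:
                coeff //= factorial(c)
            answer += coeff
    return answer

print(count_dances_to([(1, 0), (0, 1)], 3, (2, 1)))  # 3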
Since the 1-dimensional problem is closely related to the subset sum problem, you could probably take a similar approach - find all of the combinations of dance vectors that add together to have the correct first coordinate with exactly k moves; then take that subset of combinations and check to see which of those have the right sum for the second, and take the subset which matches both and check it for the third, and so on.
In this way, at least the extremely painful O(n^k) step only has to perform very simple additions. It will indeed find all of the vectors which hit a given value.
