Solving vector equations in Mathematica

I'm trying to figure out how to use Mathematica to solve systems of equations where some of the variables and coefficients are vectors. A simple example would be something like
A + t V == t P,   with |P| == Q,
where I know A, V, and the magnitude of P, and I have to solve for t and the direction of P. (Basically, given two rays A and B, where I know everything about A but only the origin and magnitude of B, figure out what the direction of B must be such that it intersects A.)
Now, I know how to solve this sort of thing by hand, but that's slow and error-prone, so I was hoping I could use Mathematica to speed things along and error-check me. However, I can't see how to get Mathematica to symbolically solve equations involving vectors like this.
I've looked in the VectorAnalysis package, without finding anything there that seems relevant; meanwhile the Linear Algebra package only seems to have a solver for linear systems (which this isn't, since I don't know t or P, just |P|).
I tried doing the simpleminded thing: expanding the vectors into their components (pretend they're 3D) and solving them as if I were trying to equate two parametric functions,
Solve[
{ Function[t, {Bx + Vx*t, By + Vy*t, Bz + Vz*t}][t] ==
Function[t, {Px*t, Py*t, Pz*t}][t],
Px^2 + Py^2 + Pz^2 == Q^2 } ,
{ t, Px, Py, Pz }
]
but the "solution" it spits out is a huge, unreadable mess of coefficients. It also forces me to expand out each of the dimensions I feed it.
What I want is a nice symbolic solution in terms of dot products, cross products, and norms, but I can't see how to tell Solve that some of the coefficients are vectors instead of scalars.
Is this possible? Can Mathematica give me symbolic solutions on vectors? Or should I just stick with No.2 Pencil technology?
(Just to be clear, I'm not interested in the solution to the particular equation at top -- I'm asking if I can use Mathematica to solve computational geometry problems like that generally without my having to express everything as an explicit matrix of {Ax, Ay, Az}, etc.)

With Mathematica 7.0.1.0
Clear[A, V, P];
A = {1, 2, 3};
V = {4, 5, 6};
P = {P1, P2, P3};
Solve[A + V t == P, P]
outputs:
{{P1 -> 1 + 4 t, P2 -> 2 + 5 t, P3 -> 3 (1 + 2 t)}}
Typing out P = {P1, P2, P3} can be annoying if the array or matrix is large.
Clear[A, V, PP, P];
A = {1, 2, 3};
V = {4, 5, 6};
PP = Array[P, 3];
Solve[A + V t == PP, PP]
outputs:
{{P[1] -> 1 + 4 t, P[2] -> 2 + 5 t, P[3] -> 3 (1 + 2 t)}}
Matrix vector inner product:
Clear[A, xx, bb];
A = {{1, 5}, {6, 7}};
xx = Array[x, 2];
bb = Array[b, 2];
Solve[A.xx == bb, xx]
outputs:
{{x[1] -> 1/23 (-7 b[1] + 5 b[2]), x[2] -> 1/23 (6 b[1] - b[2])}}
Matrix multiplication:
Clear[A, BB, d];
A = {{1, 5}, {6, 7}};
BB = Array[B, {2, 2}];
d = {{6, 7}, {8, 9}};
Solve[A.BB == d]
outputs:
{{B[1, 1] -> -(2/23), B[2, 1] -> 28/23, B[1, 2] -> -(4/23), B[2, 2] -> 33/23}}
The dot product has a built-in infix notation: just use a period for the dot.
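For instance, a quick check of the built-in infix dot (output shown as a comment):
{1, 2, 3}.{a, b, c}
(* a + 2 b + 3 c *)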
I do not think the cross product does, however. This is how you use the Notation package to make one: "X" will become our infix form of Cross. I suggest copying the example from the Notation, Symbolize and InfixNotation tutorial. Also use the Notation Palette, which helps abstract away some of the Box syntax.
Clear[X]
Needs["Notation`"]
Notation[NotationTemplateTag[
    RowBox[{"x_", " ", "X", " ", "y_", " "}]] \[DoubleLongLeftRightArrow]
  NotationTemplateTag[RowBox[{" ",
     RowBox[{"Cross", "[",
       RowBox[{"x_", ",", "y_"}], "]"}]}]]]
{a, b, c} X {x, y, z}
outputs:
{-c y + b z, c x - a z, -b x + a y}
The above looks horrible but when using the Notation Palette it looks like:
Clear[X]
Needs["Notation`"]
Notation[x_ X y_\[DoubleLongLeftRightArrow]Cross[x_, y_]]
{a, b, c} X {x, y, z}
I have run into some quirks using the Notation package in past versions of Mathematica, so be careful.

I don't have a general solution for you by any means (MathForum may be the better way to go), but there are some tips that I can offer you. The first is to do the expansion of your vectors into components in a more systematic way. For instance, I would solve the equation you wrote as follows.
rawSol = With[{coords = {x, y, z}},
Solve[
Flatten[
{A[#] + V[#] t == P[#] t & /@ coords,
Total[P[#]^2 & /@ coords] == P^2}],
Flatten[{t, P /@ coords}]]];
Then you can work with the rawSol variable more easily. Next, because you are referring to the vector components in a uniform way (always matching the Mathematica pattern v_[x|y|z]), you can define rules that will aid in simplifying them. I played around a bit before coming up with the following rules:
vectorRules =
{forms___ + vec_[x]^2 + vec_[y]^2 + vec_[z]^2 :> forms + vec^2,
forms___ + c_. v1_[x]*v2_[x] + c_. v1_[y]*v2_[y] + c_. v1_[z]*v2_[z] :>
forms + c v1\[CenterDot]v2};
These rules will simplify the relationships for vector norms and dot products (cross-products are left as a likely painful exercise for the reader). EDIT: rcollyer pointed out that you can make c optional in the rule for dot products, so you only need two rules for norms and dot products.
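As a quick sanity check (an example I added, not from the original post), the rules collapse a hand-expanded norm and dot product:
A[x]^2 + A[y]^2 + A[z]^2 + A[x] V[x] + A[y] V[y] + A[z] V[z] //. vectorRules
(* A^2 + A \[CenterDot] V *)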
With these rules, I was immediately able to simplify the solution for t into a form very close to yours:
In[3] := t /. rawSol //. vectorRules // Simplify // InputForm
Out[3] = {(A \[CenterDot] V - Sqrt[A^2*(P^2 - V^2) +
(A \[CenterDot] V)^2])/(P^2 - V^2),
(A \[CenterDot] V + Sqrt[A^2*(P^2 - V^2) +
(A \[CenterDot] V)^2])/(P^2 - V^2)}
Like I said, it's not a complete way of solving these kinds of problems by any means, but if you're careful about casting the problem into terms that are easy to work with from a pattern-matching and rule-replacement standpoint, you can go pretty far.

I've taken a somewhat different approach to this issue: I've made some definitions that let you expand and simplify expressions written directly in terms of symbolic vectors, dot products (CenterDot), and cross products.
Patterns that are known to be vector quantities may be specified using vec[_]; patterns that have an OverVector[] or OverHat[] wrapper (symbols with a vector or hat over them) are assumed to be vectors by default.
The definitions are experimental and should be treated as such, but they seem to work well. I expect to add to this over time.
Here are the definitions. They need to be pasted into a Mathematica notebook cell and converted to StandardForm to see them properly.
Unprotect[vExpand,vExpand$,Cross,Plus,Times,CenterDot];
(* vec[pat] determines if pat is a vector quantity.
vec[pat] can be used to define patterns that should be treated as vectors.
Default: Patterns are assumed to be scalar unless otherwise defined *)
vec[_]:=False;
(* Symbols with a vector hat, or vector operations on vectors are assumed to be vectors *)
vec[OverVector[_]]:=True;
vec[OverHat[_]]:=True;
vec[u_?vec+v_?vec]:=True;
vec[u_?vec-v_?vec]:=True;
vec[u_?vec\[Cross]v_?vec]:=True;
vec[u_?VectorQ]:=True;
(* Placeholder for matrix types *)
mat[a_]:=False;
(* Anything not defined as a vector or matrix is a scalar *)
scal[x_]:=!(vec[x]\[Or]mat[x]);
scal[x_?scal+y_?scal]:=True;scal[x_?scal y_?scal]:=True;
(* Scalars times vectors are vectors *)
vec[a_?scal u_?vec]:=True;
mat[a_?scal m_?mat]:=True;
vExpand$[u_?vec\[Cross](v_?vec+w_?vec)]:=vExpand$[u\[Cross]v]+vExpand$[u\[Cross]w];
vExpand$[(u_?vec+v_?vec)\[Cross]w_?vec]:=vExpand$[u\[Cross]w]+vExpand$[v\[Cross]w];
vExpand$[u_?vec\[CenterDot](v_?vec+w_?vec)]:=vExpand$[u\[CenterDot]v]+vExpand$[u\[CenterDot]w];
vExpand$[(u_?vec+v_?vec)\[CenterDot]w_?vec]:=vExpand$[u\[CenterDot]w]+vExpand$[v\[CenterDot]w];
vExpand$[s_?scal (u_?vec\[Cross]v_?vec)]:=Expand[s] vExpand$[u\[Cross]v];
vExpand$[s_?scal (u_?vec\[CenterDot]v_?vec)]:=Expand[s] vExpand$[u\[CenterDot]v];
vExpand$[Plus[x__]]:=vExpand$/@Plus[x];
vExpand$[s_?scal,Plus[x__]]:=Expand[s](vExpand$/@Plus[x]);
vExpand$[Times[x__]]:=vExpand$/@Times[x];
vExpand[e_]:=e//.e:>Expand[vExpand$[e]]
(* Some simplification rules *)
(u_?vec\[Cross]u_?vec):=\!\(\*OverscriptBox["0", "\[RightVector]"]\);
(u_?vec+\!\(\*OverscriptBox["0", "\[RightVector]"]\)):=u;
0v_?vec:=\!\(\*OverscriptBox["0", "\[RightVector]"]\);
\!\(\*OverscriptBox["0", "\[RightVector]"]\)\[CenterDot]v_?vec:=0;
v_?vec\[CenterDot]\!\(\*OverscriptBox["0", "\[RightVector]"]\):=0;
(a_?scal u_?vec)\[Cross]v_?vec :=a u\[Cross]v;u_?vec\[Cross](a_?scal v_?vec ):=a u\[Cross]v;
(a_?scal u_?vec)\[CenterDot]v_?vec :=a u\[CenterDot]v;
u_?vec\[CenterDot](a_?scal v_?vec) :=a u\[CenterDot]v;
(* Stealing behavior from Dot *)
Attributes[CenterDot]=Attributes[Dot];
Protect[vExpand,vExpand$,Cross,Plus,Times,CenterDot];

Related

Solve linear system of equations with symbolic expressions

Hi, I am trying to solve a linear system of equations with Mathematica. I have 18 equations and 18 unknowns, and the coefficient matrix has full rank. All entries are symbolic, since I am trying to solve the problem analytically. Unfortunately, Mathematica never finishes the evaluation. I have prepared a minimal working example:
n = 18
A = Table[AA[i, j], {i, 1, n}, {j, 1, n}];
A // MatrixForm
x = Table[xx[i], {i, 1, n}]
b = Table[bb[i], {i, 1, n}]
MatrixRank[A]
sol = Timing[Solve[{A.x == b}, x, Reals]]
A.x == b //. sol[[2]][[1]] // Simplify
For n = 2, 3, 4, ... everything works perfectly well. But with n = 10 ... nothing works anymore.
Why does Mathematica have such problems solving this?
Is there a way to solve this problem?
Thanks for help,
Andreas
You simply need more memory:
The symbolic solution involves n+1 determinants; here is an estimate of the memory needed.
bc[n_] := (A = Det[Array[a, {n, n}]];ByteCount[A])
ListLogPlot[
t = Table[ {n, (n + 1) bc[n] /1024^3 // N} , {n,2,10}], Joined -> True]
Extrapolating to n = 18, we can see you'll need only about 10^8 gigabytes...
(that's 1000x more than the largest supercomputers have, for anyone not getting the point)
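To see where the blow-up comes from: by Cramer's rule, each component of the symbolic solution is a ratio of determinants, and the determinant of a fully symbolic n x n matrix expands into n! terms. A quick check:
Length[Det[Array[a, {3, 3}]]]
(* 6, i.e. 3! terms; at n = 18 each determinant has 18! ~ 6.4*10^15 terms *)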

Mathematica: Tangent of Two Curves

I asked this question yesterday but am not sure I made clear what I was looking for. Say I have two curves defined as f[x_]:=... and g[x_]:=... as shown below. I want to use Mathematica to determine the abscissa at which the common tangent touches each curve, and to store the value for each curve separately. Perhaps this is really a trivial task, but I do appreciate the help. I am an intermediate Mathematica user, but this is one I haven't been able to find a solution to elsewhere.
f[x_] := x^2
g[x_] := (x - 2)^2 + 3
sol = Solve[(f[x1] - g[x2])/(x1 - x2) == f'[x1] == g'[x2], {x1, x2}, Reals]
(* ==> {{x1 -> 3/4, x2 -> 11/4}} *)
eqns = FlattenAt[{f[x], g[x], f'[x1] x + g[x2] - f'[x1] x2 /. sol}, 3];
Plot[eqns, {x, -2, 4}, Frame -> True, Axes -> None]
Please note that there will be many functions f and g for which you won't find a solution in this way. In that case you will have to resort to numerical problem solving methods.
You just need to solve a system of simultaneous equations:
The common tangent line is y = a x + b.
The common slope is a = f'(x1) = g'(x2)
The common points satisfy a x1 + b = f(x1) and a x2 + b = g(x2).
Depending on the nature of the functions f and g this may have no, one, or many solutions.
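For the f and g defined above, that system can be handed to Solve directly; a minimal sketch (the names a, b, x1, x2 are mine):
Solve[{a == f'[x1], a == g'[x2], a x1 + b == f[x1], a x2 + b == g[x2]}, {a, b, x1, x2}, Reals]
(* {{a -> 3/2, b -> -(9/16), x1 -> 3/4, x2 -> 11/4}} *)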

Trying to get Mathematica to approximate an integral

I am trying to get Mathematica to approximate an integral that is a function of various parameters. I don't need it to be extremely precise -- the answer will be a fraction, and 5 digits would be nice, but I'd settle for as few as 2.
The problem is that there is a symbolic integral buried in the main integral, and I can't use NIntegrate on it since it's symbolic.
F[x_, c_] := (1 - (1 - x)^c)^c;
a[n_, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
MyIntegral[n_,c_] :=
NIntegrate[Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}],{x,0,1}]
Mathematica starts hanging when n is greater than 2 and c is greater than 3 or so (generally as both n and c get a little higher).
Are there any tricks for rewriting this expression so that it can be evaluated more easily? I've played with different WorkingPrecision and AccuracyGoal and PrecisionGoal options on the outer NIntegrate, but none of that helps the inner integral, which is where the problem is. In fact, for the higher values of n and c, I can't even get Mathematica to expand the inner derivative, i.e.
Expand[D[a[4,6,y],y]]
hangs.
I am using Mathematica 8 for Students.
If anyone has any tips for how I can get M. to approximate this, I would appreciate it.
Since you only want a numerical output (or that's what you'll get anyway), you can convert the symbolic integration into a numerical one using just NIntegrate as follows:
Clear[a,myIntegral]
a[n_Integer?Positive, c_Integer?Positive, x_] :=
a[n, c, x] = (1 - (1 - a[n - 1, c, x])^c)^c;
a[0, c_Integer, x_] = x;
myIntegral[n_, c_] :=
NIntegrate[D[a[n, c, y], y]*y/(1 - a[n, c, x]), {x, 0, 1}, {y, x, 1},
WorkingPrecision -> 200, PrecisionGoal -> 5]
This is much faster than performing the integration symbolically. Here's a comparison:
yoda:
myIntegral[2,2]//Timing
Out[1]= {0.088441, 0.647376595...}
myIntegral[5,2]//Timing
Out[2]= {1.10486, 0.587502888...}
rcollyer:
MyIntegral[2,2]//Timing
Out[3]= {1.0029, 0.647376}
MyIntegral[5,2]//Timing
Out[4]= {27.1697, 0.587503006...}
(* Obtained with WorkingPrecision->500, PrecisionGoal->5, MaxRecursion->20 *)
Jand's function has timings similar to rcollyer's. Of course, as you increase n, you will have to increase your WorkingPrecision way higher than this, as you've experienced in your previous question. Since you said you only need about 5 digits of precision, I've explicitly set PrecisionGoal to 5. You can change this as per your needs.
To codify the comments, I'd try the following. First, to eliminate infinite recursion with regard to the variable n, I'd rewrite your functions as
F[x_, c_] := (1 - (1-x)^c)^c;
(* see note below *)
a[n_Integer?Positive, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
that way n==0 will actually be a stopping point. The ?Positive form is a PatternTest, and useful for applying additional conditions to the parameters. I suspect the issue is that NIntegrate is re-evaluating the inner Integrate for every value of x, so I'd pull that evaluation out, like
MyIntegral[n_,c_] :=
With[{ int = Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}] },
NIntegrate[int,{x,0,1}]
]
where With is one of several scoping constructs specifically for creating local constants.
Your comments indicate that the inner integral takes a long time; have you tried simplifying the integrand, since it is a derivative of a times a function of a? It seems like the result of a chain-rule expansion to me.
Note: as per Yoda's suggestion in the comments, you can add a caching, or memoization, mechanism to a. Change its definition to
d:a[n_Integer?Positive, c_, x_] := d = F[a[n - 1, c, x], c];
The trick here is that in d:a[ ... ], d is a named pattern that is used again in d = F[...], caching the value of a for those particular parameter values.
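A quick way to see the caching at work (an illustrative check I added; the symbolic argument t is arbitrary):
Clear[a];
F[x_, c_] := (1 - (1 - x)^c)^c;
d : a[n_Integer?Positive, c_, x_] := d = F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
a[3, 2, t];  (* the first call caches a[1, 2, t], a[2, 2, t], a[3, 2, t] *)
DownValues[a] // Length
(* 5: the two general definitions plus the three cached values *)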

Using Mathematica to Automatically Reveal Matrix Structure

I spend a lot of time looking at larger matrices (10x10, 20x20, etc) which usually have some structure, but it is difficult to quickly determine the structure of them as they get larger. Ideally, I'd like to have Mathematica automatically generate some representation of a matrix that will highlight its structure. For instance,
(A = {{1, 2 + 3 I}, {2 - 3 I, 4}}) // StructureForm
would give
{{a, b}, {Conjugate[b], c}}
or even
{{a, b + c I}, {b - c I, d}}
is acceptable. A somewhat naive implementation
StructureForm[M_?MatrixQ] :=
MatrixForm @ Module[
{pos, chars},
pos = Reap[
Map[Sow[Position[M, #1], #1] &, M, {2}], _,
Union[Flatten[#2, 1]] &
][[2]]; (* establishes equality relationship *)
chars = CharacterRange["a", "z"][[;; Length @ pos ]];
SparseArray[Flatten[Thread /@ Thread[pos -> chars] ], Dimensions[M]]
]
works only for real numeric matrices, e.g.
StructureForm @ {{1, 2}, {2, 3}} == {{a, b}, {b, c}}
Obviously, I need to define what relationships I think may exist (equality, negation, conjugate, negative conjugate, etc.), but I'm not sure how to establish that these relationships exist, at least in a clean manner. And, once I have the relationships, the next question is how to determine which is the simplest, in some sense? Any thoughts?
One possibility that comes to mind is for each pair of elements generate a triple relating their positions, like {{1,2}, Conjugate, {2,1}} for A, above, then it becomes amenable to graph algorithms.
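A rough sketch of that idea (relationTriples is a name I made up; only the conjugate relation is checked here, so equal real entries would also show up as conjugate pairs):
relationTriples[m_?MatrixQ] :=
 Cases[Tuples[Position[m, _, {2}, Heads -> False], 2],
  {p_, q_} /; p =!= q && Extract[m, p] == Conjugate[Extract[m, q]] :> {p, Conjugate, q}]
relationTriples[{{1, 2 + 3 I}, {2 - 3 I, 4}}]
(* {{{1, 2}, Conjugate, {2, 1}}, {{2, 1}, Conjugate, {1, 2}}} *)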
Edit: Incidentally, my inspiration is from the Matrix Algorithms series (1, 2) by Stewart.
We can start by defining the relationships that we want to recognize:
ClearAll@relationship
relationship[a_ -> sA_, b_ -> sB_] /; b == a := b -> sA
relationship[a_ -> sA_, b_ -> sB_] /; b == -a := b -> -sA
relationship[a_ -> sA_, b_ -> sB_] /; b == Conjugate[a] := b -> SuperStar[sA]
relationship[a_ -> sA_, b_ -> sB_] /; b == -Conjugate[a] := b -> -SuperStar[sA]
relationship[_, _] := Sequence[]
The form in which these relationships are expressed is convenient for the definition of structureForm:
ClearAll@structureForm
structureForm[matrix_?MatrixQ] :=
Module[{values, rules, pairs, inferences}
, values = matrix // Flatten // DeleteDuplicates
; rules = Thread[Rule[values, CharacterRange["a", "z"][[;; Length@values]]]]
; pairs = rules[[#]]& /@ Select[Tuples[Range[Length@values], 2], #[[1]] < #[[2]]&]
; inferences = relationship @@@ pairs
; matrix /. inferences ~Join~ rules
]
In a nutshell, this function checks each possible pair of values in the matrix inferring a substitution rule whenever a pair matches a defined relationship. Note how the relationship definitions are expressed in terms of pairs of substitution rules in the form value -> name. Matrix values are assigned letter names, proceeding from left-to-right, top-to-bottom. Redundant inferred relationships are ignored assuming a precedence in that same order.
Beware that the function will run out of names after it finds 26 distinct values -- an alternate name-assignment strategy will be needed if that is an issue. Also, the names are being represented as strings instead of symbols. This conveniently dodges any unwanted bindings of the single-letter symbol names. If symbols are preferred, it would be trivial to apply the Symbol function to each name.
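For example, that conversion is just a post-processing rule (a small sketch):
structureForm[{{1, 2}, {2, 3}}] /. s_String :> Symbol[s]
(* {{a, b}, {b, c}} *)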
Here are some sample uses of the function:
In[31]:= structureForm @ {{1, 2 + 3 I}, {2 - 3 I, 4}}
Out[31]= {{"a", "b"}, {SuperStar["b"], "d"}}
In[32]:= $m = a + b I /. a | b :> RandomInteger[{-2, 2}, {10, 10}];
$m // MatrixForm
$m // structureForm // MatrixForm
Have you tried looking at the eigenvalues? The eigenvalues reveal a great deal of information on the structure and symmetry of matrices and are standard in statistical analysis of datasets. For e.g.,
Hermitian/symmetric matrices have real eigenvalues (see the quick check after this list).
Positive semi-definite matrices have non-negative eigenvalues, and vice versa.
Rotation matrices have complex eigenvalues.
Circulant matrices have eigenvalues that are simply the DFT of the first row. The beauty of circulant matrices is that every circulant matrix has the same set of eigenvectors. In some cases, these results (circulant) can be extended to Toeplitz matrices.
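For instance, a quick check of the first property on the Hermitian matrix A from the question:
Eigenvalues[{{1, 2 + 3 I}, {2 - 3 I, 4}}]
(* {(5 + Sqrt[61])/2, (5 - Sqrt[61])/2}, up to ordering -- both real, as expected *)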
If you're dealing with matrices that are random (an experimental observation can be modeled as a random matrix), you could also read up on random matrix theory, which relates the distributions of eigenvalues to the underlying symmetries in the matrix and the statistical distributions of elements. Specifically,
The eigenvalue distribution of symmetric/Hermitian Gaussian matrices is a semicircle (Wigner's semicircle law).
Eigenvalue distributions of Wishart matrices (if A is a random Gaussian matrix, W=AA' is a Wishart matrix) are given by the Marcenko-Pastur distribution
Also, the differences (spacings) between the eigenvalues also convey information about the matrix.
I'm not sure if the structure that you're looking for is like a connected graph within the matrix or something similar... I presume random matrix theory (which is more general and vast than those links will ever tell you) has some results in this regard.
Perhaps this is not really what you were looking for, but AFAIK there is no one-stop solution to getting the structure of a matrix. You'll have to use multiple tools to nail it down, and if I were to do it, eigenvalues would be my first pick.

Mathematica: reconstruct an arbitrary nested list after Flatten

What is the simplest way to map an arbitrarily funky nested list expr to a function unflatten so that expr == unflatten @@ Flatten @ expr?
Motivation:
Compile can only handle full arrays (something I just learned -- but not from the error message), so the idea is to use unflatten together with a compiled version of the flattened expression:
fPrivate=Compile[{x,y},Evaluate@Flatten@expr];
f[x_?NumericQ,y_?NumericQ]:=unflatten@@fPrivate[x,y]
Example of a solution to a less general problem:
What I actually need to do is to calculate all the derivatives for a given multivariate function up to some order. For this case, I hack my way along like so:
expr=Table[D[x^2 y+y^3,{{x,y},k}],{k,0,2}];
unflatten=Module[{f,x,y,a,b,sslot,tt},
tt=Table[D[f[x,y],{{x,y},k}],{k,0,2}] /.
{Derivative[a_,b_][_][__]-> x[a,b], f[__]-> x[0,0]};
(Evaluate[tt/.MapIndexed[#1->sslot[#2[[1]]]&,
Flatten[tt]]/. sslot-> Slot]&) ]
Out[1]= {x^2 y + y^3, {2 x y, x^2 + 3 y^2}, {{2 y, 2 x}, {2 x, 6 y}}}
Out[2]= {#1, {#2, #3}, {{#4, #5}, {#5, #7}}} &
This works, but it is neither elegant nor general.
Edit: Here is the "job security" version of the solution provided by aaz:
makeUnflatten[expr_List]:=Module[{i=1},
Function@Evaluate@ReplaceAll[
If[ListQ[#1],Map[#0,#1],i++]&@expr,
i_Integer-> Slot[i]]]
It works a charm:
In[2]= makeUnflatten[expr]
Out[2]= {#1,{#2,#3},{{#4,#5},{#6,#7}}}&
You obviously need to save some information about list structure, because Flatten[{a,{b,c}}]==Flatten[{{a,b},c}].
If ArrayQ[expr], then the list structure is given by Dimensions[expr] and you can reconstruct it with Partition. E.g.
expr = {{a, b, c}, {d, e, f}};
dimensions = Dimensions[expr]
{2,3}
unflatten = Fold[Partition, #1, Reverse[Drop[dimensions, 1]]]&;
expr == unflatten @ Flatten[expr]
(The Partition man page actually has a similar example called unflatten.)
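The same Fold works for deeper arrays too; a quick check (my own example):
expr3 = Array[a, {2, 3, 4}];
expr3 == Fold[Partition, Flatten[expr3], Reverse[Rest[Dimensions[expr3]]]]
(* True *)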
If expr is not an array, you can try this:
expr = {a, {b, c}};
indexes = Module[{i=0}, If[ListQ[#1], Map[#0, #1], ++i]& @ expr]
{1, {2, 3}}
slots = indexes /. {i_Integer -> Slot[i]}
{#1, {#2, #3}}
unflatten = Function[Release[slots]]
{#1, {#2, #3}} &
expr == unflatten @@ Flatten[expr]
I am not sure what you are trying to do with Compile. It is used when you want to evaluate procedural or functional expressions very quickly on numerical values, so I don't think it is going to help here. If repeated calculations of D[f,...] are impeding your performance, you can precompute and store them with something like
Table[d[k]=D[f,{{x,y},k}],{k,0,kk}];
Then just call d[k] to get the kth derivative.
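For example, with the f from the question and kk = 2:
f = x^2 y + y^3;
Table[d[k] = D[f, {{x, y}, k}], {k, 0, 2}];
d[1]
(* {2 x y, x^2 + 3 y^2} *)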
I just wanted to update the excellent solutions by aaz and Janus. It seems that, at least in Mathematica 9.0.1.0 on Mac OSX, the assignment (see aaz's solution)
{i_Integer -> Slot[i]}
fails. If, however, we use
{i_Integer :> Slot[i]}
instead, we succeed. The same holds, of course, for the ReplaceAll call in Janus's "job security" version.
For good measure, I include my own function.
unflatten[ex_List, exOriginal_List] :=
Module[
{indexes, slots, unflat},
indexes =
Module[
{i = 0},
If[ListQ[#1], Map[#0, #1], ++i] &@exOriginal
];
slots = indexes /. {i_Integer :> Slot[i]};
unflat = Function[Release[slots]];
unflat @@ ex
];
(* example *)
expr = {a, {b, c}};
expr // Flatten // unflatten[#, expr] &
It might seem a little like a cheat to use the original expression in the function, but as aaz points out, we need some information from the original expression. You don't need all of it, but passing the whole expression is the simplest way to get a single function that can unflatten anything.
My application is similar to Janus's: I am parallelizing calls to Simplify for a tensor. Using ParallelTable I can significantly improve performance, but I wreck the tensor structure in the process. This gives me a quick way to reconstruct my original tensor, simplified.
