I have a collection of points displayed in a graphic:
(image: http://img69.imageshack.us/img69/874/plc1k1lrqynuyshgrdegvfy.jpg)
I'd like to know if there is any command that will connect them automatically along the x and y axes. This is better understood by looking at the following picture:
(image: http://img341.imageshack.us/img341/5926/tr53exnkpeofcuiw40koyks.jpg)
(I am not asking how to implement the algorithm myself!).
Thanks
I suspect the answer is no, there's no such built-in command. It would be interesting to write something to do that, though: given a list of points, output the corresponding lines. I guess that would just be a matter of:
For each unique x-coordinate, get the list of y-coordinates for points with that x-coordinate and make a line from the min to the max y-coordinate. Then repeat for the y-coordinates.
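In case a concrete starting point helps, here is that recipe sketched in Python (all names here are mine, nothing built in):

from collections import defaultdict

def grid_lines(points):
    """For each shared coordinate, make one line spanning min..max on the other axis."""
    lines = []
    for group_axis, span_axis in ((0, 1), (1, 0)):
        groups = defaultdict(list)
        for p in points:
            groups[p[group_axis]].append(p[span_axis])
        for fixed, coords in groups.items():
            lo, hi = min(coords), max(coords)
            if lo != hi:                      # skip isolated points
                if group_axis == 0:
                    lines.append(((fixed, lo), (fixed, hi)))
                else:
                    lines.append(((lo, fixed), (hi, fixed)))
    return lines

print(grid_lines([(0, 0), (0, 2), (1, 0), (1, 2)]))
# -> [((0, 0), (0, 2)), ((1, 0), (1, 2)), ((0, 0), (1, 0)), ((0, 2), (1, 2))]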
If you do that, it would be interesting to post it here as a follow-up. Or if you want to make that the question, I'm sure you'll get some nice solutions.
I vote for dreeves' suggestion. It doesn't use a "built-in" function, but it's a one-liner using functional programming and level specifications. An implementation is:
gridify[pts : {{_?NumericQ, _?NumericQ} ...}] :=
  Map[Line, GatherBy[pts, #] & /@ {First, Last}, {2}]
Some of what you are looking for is in the ComputationalGeometry package. In particular, ConvexHull will give you the outer points listed in counterclockwise order, at which point you can use Line to connect them. The inner paths are trickier, and I don't think there is an exact match. A DelaunayTriangulation comes closest: it breaks your list of points up into sets of triangles. I don't know of a built-in function that would break it into rectangles, though.
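If you ever want the same building blocks outside Mathematica, scipy.spatial exposes the equivalent primitives; a small illustration (the sample points are made up):

from scipy.spatial import ConvexHull, Delaunay
import numpy as np

pts = np.array([[0, 0], [0, 2], [1, 0], [1, 2], [0.5, 1]])
hull = ConvexHull(pts)    # hull.vertices: outer points, counterclockwise
tri = Delaunay(pts)       # tri.simplices: triangles as index triples
print(hull.vertices)
print(tri.simplices)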
These days I am trying to reproduce the shock spectrum of a single-degree-of-freedom system using SymPy. The problem reduces to finding the maximum value of a function. Here are two cases I cannot figure out how to do.
The first one is
from sympy import *

tau, t, t_r, omega, p0 = symbols('tau,t,t_r,omega,p0', positive=True)
h = expand(sin(omega*(t - tau)))
f = simplify(integrate(p0*tau/t_r*h, (tau, 0, t_r)) + integrate(p0*h, (tau, t_r, t)))
The final goal is to obtain the maximum absolute value of f (the variable is t). The direct way is
df = diff(f, t)
sln = solve(simplify(df), t)
simplify(f.subs(t, sln[1]))
Here is the result (posted as an image in the original question); I tried many ways, but I cannot simplify it any further.
Therefore, I tried another way. Because I need the maximum absolute value, and abs(f) peaks at the same location as the square of f, we can work with the square of f first.
df = expand_trig(diff(expand(f)**2, t))
sln = solve(df, t)
simplify(f.subs(t, sln[2]))
It seems the answer is almost the same, just in another form.
The expected answer is a sinc function plus a constant (shown as an image in the original question).
Therefore, the question is how to reach that final form.
The second one may be a little harder. The question reduces to finding the maximum value of f = sin(pi*t/t_r) - T/(2*t_r)*sin(2*pi/T*t), in which t_r and T are two parameters. The maximum sits on a different peak as the ratio of t_r to T changes, and I cannot find a way to solve this in SymPy. Any suggestions? The answer is shown in the following figure (image in the original question).
The problem is the log(exp(I*omega*t_r/2)) term. SymPy is not reducing this to I*omega*t_r/2. SymPy doesn't simplify this because in general, log(exp(x)) != x, but rather log(exp(x)) = x + 2*pi*I*n for some integer n. But in this case, if you replace log(exp(I*omega*t_r/2)) with omega*t_r/2 or omega*t_r/2 + 2*pi*I*n, it will be the same, because it will just add a 2*pi*I*n inside the sin.
I couldn't figure out any functions that force this simplification, but the easiest way is to just do a substitution:
In [18]: print(simplify(f.subs(t,sln[1]).subs(log(exp(I*omega*t_r/2)), I*omega*t_r/2)))
p0*(omega*t_r - 2*sin(omega*t_r/2))/(omega**2*t_r)
That looks like the answer you are looking for, except for the absolute value bars (I'm not sure where those should come from).
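Putting the question's setup together with that substitution, the whole computation looks something like this (a sketch; the index into sln that picks the right stationary point may differ between SymPy versions):

from sympy import *

tau, t, t_r, omega, p0 = symbols('tau,t,t_r,omega,p0', positive=True)

# Duhamel-type integral: ramp load up to t_r, then constant load p0
h = expand(sin(omega*(t - tau)))
f = simplify(integrate(p0*tau/t_r*h, (tau, 0, t_r)) +
             integrate(p0*h, (tau, t_r, t)))

# stationary point of f in t
sln = solve(simplify(diff(f, t)), t)
fmax = f.subs(t, sln[1])

# force log(exp(I*omega*t_r/2)) -> I*omega*t_r/2 by substitution
print(simplify(fmax.subs(log(exp(I*omega*t_r/2)), I*omega*t_r/2)))
# -> p0*(omega*t_r - 2*sin(omega*t_r/2))/(omega**2*t_r)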
Those who are familiar with XPath know that some axes, such as preceding::, are reverse axes. And if you put a positional predicate on an expression built with a reverse axis, you may be counting backward instead of forward. E.g.
$foo/preceding-sibling::*[1]
returns the preceding sibling element just before $foo, not the first preceding sibling element (in document order).
But then you encounter variations where this rule seems to be broken, depending on how far removed the positional predicate is from the reverse axis. E.g.
($foo/preceding-sibling::*)[1]
counts forward from the beginning of the document, not backward from $foo.
Today I was writing some code where I had an expression like
$foo/preceding::bar[not(parent::baz)][1]
I wanted to be counting backwards from $foo. But was my positional predicate too far removed from the preceding:: axis? Had the expression lost its reverse direction before I added the [1]? I thought it probably wouldn't work, so I changed it to
$foo/preceding::bar[not(parent::baz)][last()]
but then I wasn't really sure of the direction, so I put in parentheses to make sure:
($foo/preceding::bar[not(parent::baz)])[last()]
However, the extra parentheses are a bit confusing, and I thought the expression might be less efficient, if it really has to count from the beginning of the (large) input document instead of backward from $foo. Was it really necessary to do it this way?
Finally I tested the original expression, and found to my surprise that it worked! So the intervening [not(parent::baz)] had not caused the expression to lose its reverse direction after all.
That problem was solved, but I've come to the point where I'd like to get a better handle on when I can expect the reverse direction of an axis to apply. My question is: At what point(s) does an XPath expression using a reverse axis lose its reverse direction?
I believe I've found the answer now, so I'll answer my own question. But I couldn't find the answer on SO, and it's something that has bothered me long enough that it was worth asking and answering here.
The best answer I found was in an old email by Evan Lenz.
It's worth reading in full, as an explanation of how XPath works in this regard, and how the XPath 1.0 spec shows us the answer. But the executive summary is in this rule:
Step ::= AxisSpecifier NodeTest Predicate*
         | AbbreviatedStep
The Step production defines the syntax of a location step, and it's only within a location step that the reverse direction of an axis applies.
Any syntax that comes between the axis and a positional predicate, other than the nodetest and predicates, will break the chain and the direction will revert to forward.
This explains why, if you put parentheses around a preceding::foo and append a positional predicate outside the parentheses, the positional predicate ignores the direction of the preceding:: axis.
It also explains why my first attempt in my code today worked, despite my expectations: you can put as many predicates after a NodeTest as you want, and the direction of the axis will still apply to all of them.
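A quick way to see both behaviors is to run the expressions through any XPath 1.0 engine; for example, a small experiment with Python's lxml (the toy XML here is made up for the test):

from lxml import etree

root = etree.fromstring("<root><a/><b/><c/><foo/></root>")
foo = root.find("foo")

# predicate inside the location step: counts backward from foo
print(foo.xpath("preceding-sibling::*[1]")[0].tag)    # -> 'c'

# parentheses close the step; the predicate now counts in document order
print(foo.xpath("(preceding-sibling::*)[1]")[0].tag)  # -> 'a'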
I was given a brain puzzle from lonpos.cc as a present. I was curious how many different solutions there were, and I quite enjoy writing algorithms and code, so I started writing an application to brute-force it.
The puzzle looks like this : http://www.lonpos.cc/images/LONPOSdb.jpg / http://cdn100.iofferphoto.com/img/item/191/498/944/u2t6.jpg
It's a board of 20x14 "points", and all puzzle pieces can be flipped and rotated. I wrote an application in which each piece (and the puzzle itself) is represented like this:
01010
00100
01110
01110
11111
01010
Now my application so far is reasonably simple.
It takes the list of pieces and a blank board, pops off piece #0, flips it in every direction, and for each orientation tries to place it at every x and y coordinate. If it successfully places the piece, it passes a copy of the new "board" to a recursive call with the remaining pieces, which tries all combinations for those.
Explained in pseudocode:
bruteForce(Board base, List pieces) {
    Piece piece = pieces.pop();
    for (orientation in piece.allFlipsAndRotations()) {
        for (y = 0; y < base.height; y++) {
            for (x = 0; x < base.width; x++) {
                if (canPlace(orientation, x, y)) {
                    Board newBoard = base.clone();
                    newBoard.placePiece(orientation, x, y);
                    bruteForce(newBoard, pieces);  // pieces now excludes the placed one
                }
            }
        }
    }
}
Now I'm trying to find out ways to make this quicker. Things I've thought of so far:
Making it solve in parallel - Implemented, now using 4 threads.
Sorting the pieces, and only trying to place the pieces that will fit in the x,y space we're trying to fill (i.e. if we're on the bottom row and only have 4 "points" between our position and the bottom, don't try the pieces that are 8 high).
Not duplicating the board, instead using placePiece and removePiece or something like it.
Checking for "invalid" boards, aka if a piece is impossible to reach (boxed in completely).
Anyone have any creative ideas on how I can do this quicker? Or any way to mathematically calculate how many different combinations there are?
I don't see any obvious way to do things fast, but here are some tips that might help.
First off, if you ignore the bumps, you have a 6x4 grid to fill with 1x2 blocks. Each of the blocks has 6 positions where it can have a bump or a hole. Therefore, you're trying to find an arrangement of the blocks such that at each edge, a bump is matched with a hole. Also, you can represent the pieces much more efficiently using this information.
Next, I'd recommend trying all ways to place a block in a specific spot, rather than all places to put a specific block anywhere. This will reduce the number of false trails you go down.
This looks like the exact cover problem: you basically want to cover all fields on the board with your given pieces. I can recommend Dancing Links, published by Donald Knuth. The paper contains a clear worked example for the pentomino problem, which should give you a good idea of how it works.
You basically set up a system that keeps track of all possible ways to place a specific block on the board. Placing a block covers a set of positions on the field, and those positions can't be used by any other block, so all placements that overlap them are erased from the problem before the next block is placed. The dancing-links structure allows for fast erasing and restoring of possibilities during backtracking.
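If you want to experiment with exact cover before building the full dancing-links structure, there is a well-known compact formulation of Knuth's Algorithm X in Python using dictionaries of sets instead of linked nodes (here, columns would be your board cells and rows your candidate piece placements):

def solve(X, Y, solution=None):
    """X maps each column to the set of rows covering it; Y maps each row to its columns."""
    if solution is None:
        solution = []
    if not X:
        yield list(solution)
        return
    col = min(X, key=lambda c: len(X[c]))   # branch on the most constrained column
    for row in list(X[col]):
        solution.append(row)
        removed = select(X, Y, row)
        yield from solve(X, Y, solution)
        deselect(X, Y, row, removed)
        solution.pop()

def select(X, Y, row):
    # remove every column the row covers, and every row that clashes with it
    removed = []
    for c in Y[row]:
        for r in X[c]:
            for c2 in Y[r]:
                if c2 != c:
                    X[c2].remove(r)
        removed.append(X.pop(c))
    return removed

def deselect(X, Y, row, removed):
    # restore columns and rows in reverse order
    for c in reversed(Y[row]):
        X[c] = removed.pop()
        for r in X[c]:
            for c2 in Y[r]:
                if c2 != c:
                    X[c2].add(r)

# tiny usage example: rows 'A' and 'B' together cover columns 1..4 exactly
# Y = {'A': [1, 2], 'B': [3, 4], 'C': [2, 3]}
# X = {c: {r for r in Y if c in Y[r]} for c in (1, 2, 3, 4)}
# list(solve(X, Y))  ->  [['A', 'B']]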
I'm trying to solve the following problem:
I'm analyzing an image, and from this analysis I obtain a set of segments
I want to know the best-fit intersection point of these lines
I'm using OpenCV's cvSolve function for this. For reasonably good input everything works fine.
The problem is that a single bad segment in the input can make the result very different from the expected one.
Details:
The upper-left image shows the "lonely" purple lines influencing the result (all lines are used as input).
The upper-right image shows how a single purple line (the other one removed) can influence the result.
The lower-left image shows what we want: the intersection of the lines as expected (both purple lines eliminated).
The lower-right image shows how the other purple line (the first one removed) can influence the result.
As you can see, just two lines make the result completely different from the expected one. Any ideas on how to avoid this are appreciated.
Thanks,
Iulian
The algorithm you are using finds, as described in the link, the least-squares solution to the problem. This means that if there are multiple intersection points, the result will be an average (for a reasonable definition of average) of the real solutions.
I would try an iterative solution: if the error of the current solution is too large, remove from the set of segments the one farthest from the solution, and iterate until the error is acceptably small. This should discard, one at a time, the segments belonging to other intersection points, and converge on the point with the most lines nearby.
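A sketch of that loop in Python, assuming you already have a least-squares routine solve(segments) like the one the question uses, plus a point_to_line_distance helper (both names are hypothetical):

def robust_intersection(segments, solve, point_to_line_distance, tol=1.0):
    """Drop the worst-fitting segment until the largest residual is acceptable."""
    segments = list(segments)
    while len(segments) > 2:
        point = solve(segments)
        residuals = [point_to_line_distance(point, s) for s in segments]
        if max(residuals) < tol:
            break
        segments.pop(residuals.index(max(residuals)))   # discard the outlier
    return solve(segments)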
A general answer to this kind of problem is the RANSAC algorithm (there is a question dealing with this), though it has a few disadvantages; for example, you need to estimate things like "the expected number of outliers" beforehand. Another problem I see with your sample is that removing the two green lines also results in a pretty good fit, so this might be a more general issue.
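A bare-bones RANSAC sketch for this problem, reusing the same hypothetical solve and point_to_line_distance helpers from the sketch above:

import random

def ransac_intersection(segments, solve, point_to_line_distance,
                        n_iter=100, inlier_tol=2.0):
    """Keep the intersection hypothesis supported by the most segments."""
    best_point, best_inliers = None, []
    for _ in range(n_iter):
        sample = random.sample(segments, 2)      # minimal sample: two segments
        point = solve(sample)
        inliers = [s for s in segments
                   if point_to_line_distance(point, s) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_point, best_inliers = point, inliers
    return solve(best_inliers)                   # refit on all inliers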
You can solve this with the SVD. Suppose line1 = (x1,y1)-(x2,y2), line2 = (x2,y2)-(x3,y3), and so on.
Let Ax = b, where:

A = [ -(y2-y1)  (x2-x1)
      -(y3-y2)  (x3-x2)
      ...               ]   --> (n x 2)

x = transpose([s t])        --> (2 x 1)

b = [ -(y2-y1)*x1 + (x2-x1)*y1
      -(y3-y2)*x2 + (x3-x2)*y2
      ...                      ]   --> (n x 1)
Example (MATLAB code):
% each line given by two endpoints per row: [x1 y1; x2 y2]
line1 = [0,10; 5,10];
line2 = [10,0; 10,5];
line3 = [0,0; 5,5];
% each row of A is a line normal [-(dy) (dx)]
A = [-(line1(2,2)-line1(1,2)), (line1(2,1)-line1(1,1));
     -(line2(2,2)-line2(1,2)), (line2(2,1)-line2(1,1));
     -(line3(2,2)-line3(1,2)), (line3(2,1)-line3(1,1))];
% b(i) is the normal of line i dotted with its first endpoint
b = [(line1(1,1)*A(1,1)) + (line1(1,2)*A(1,2));
     (line2(1,1)*A(2,1)) + (line2(1,2)*A(2,2));
     (line3(1,1)*A(3,1)) + (line3(1,2)*A(3,2))];
% least-squares solution via the SVD
[U, D, V] = svd(A);
bprime = U'*b;
y = [bprime(1)/D(1,1); bprime(2)/D(2,2)];
x = V*y
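For comparison, here is a NumPy sketch of the same computation, using the same three segments as the MATLAB example (np.linalg.lstsq would give the identical result in one call):

import numpy as np

# each segment given by two endpoints: [[x1, y1], [x2, y2]]
segments = np.array([[[0, 10], [5, 10]],
                     [[10, 0], [10, 5]],
                     [[0,  0], [5,  5]]], dtype=float)

p1, p2 = segments[:, 0], segments[:, 1]
d = p2 - p1                                  # direction vectors
A = np.column_stack([-d[:, 1], d[:, 0]])     # normals [-(dy), dx]
b = np.sum(A * p1, axis=1)                   # -(dy)*x1 + dx*y1 per row

# least-squares intersection via the SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
point = Vt.T @ ((U.T @ b) / s)
print(point)                                 # best-fit intersection [s, t]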
I just started working with Mathematica (5.0), and while the manual has been helpful, I'm not entirely sure I've been using (Full)Simplify correctly. I am using the program to check my work on a derived transform for changing between reference frames, which consisted of multiplying a trio of relatively large square matrices.
A colleague and I each did the work by hand, separately, to make sure there were no mistakes. We hoped to get a third check from the program, which seemed that it would be simple enough to ask. The hand calculations took some time due to matrix size, but we came to the same conclusions. The fact that we had the same answer made me skeptical when the program produced different results.
I've checked and double checked my inputs.
I am definitely using . (Dot) on the matrices for correct matrix multiplication.
FullSimplify made no difference.
Neither did combinations with TrigReduce, or expanding algebraically before simplifying.
I've taken entries from the final matrix and tried to simplify them in isolation, to no avail, so the problem isn't due to the use of matrices.
I've also tried to multiply the first two matrices, simplify, and then multiply that with the third matrix; however, this produced the same results as before.
I thought Simplify automatically descended into all levels of an expression, so I didn't need to worry about mapping it. Yet even where zeros are expected in the output matrix there are leftover terms, and where we expect terms there are only near misses, plus a host of sine and cosine terms that do not reduce.
Is there a technique for coaxing better results out of Simplify, in contrast to solely calling Simplify with its defaults?
If there are assumptions on parameter ranges you will want to feed them to Simplify. The following simple examples will indicate why this might be useful.
In[218]:= Simplify[a*Sqrt[1 - x^2] - Sqrt[a^2 - a^2*x^2]]
Out[218]= a Sqrt[1 - x^2] - Sqrt[-a^2 (-1 + x^2)]
In[219]:= Simplify[a*Sqrt[1 - x^2] - Sqrt[a^2 - a^2*x^2],
Assumptions -> a > 0]
Out[219]= 0
Assuming this and other responses miss the mark, if you could provide an example that in some way shows the possibly bad behavior, that would be very helpful. Disguise it howsoever necessary in order to hide proprietary features: bleach out watermarks, file down registration numbers, maybe dress it in a moustache.
Daniel Lichtblau
Wolfram Research
As you didn't give many details to chew on, I can only offer a few tips:
Mma 5 is pretty old; the current version is 8. If you have access to someone with version 8, you might ask them to try it and see whether that makes a difference. You could also try WolframAlpha online (http://www.wolframalpha.com/), which understands some (all?) Mma syntax.
Have you tried comparing your own and Mma's results numerically? Generate a Table of differences for various parameter values, or use Plot. If the differences are negligible (use Chop to cut off small residuals), the results are probably equivalent.
Cheers -- Sjoerd