The ComputeShortestPath function of D* Lite looks like this:
while ( U.TopKey() < CalculateKey(Sstart) OR rhs(Sstart) != g(Sstart) )
In my experience with the algorithm, the first condition (U.TopKey() < CalculateKey(Sstart)) is true during the very first execution of ComputeShortestPath, when we find the initial route. Afterwards, however, I've found that it becomes false, and it's the second condition that is true and makes the function run when we replan...
However, I wonder if that might be a mistaken conclusion on my part?
I've been experiencing issues when Sstart is not in the neighborhood of a cell that has become an obstacle: since Sstart is not updated, ComputeShortestPath doesn't execute, because both parts of its condition are false.
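For reference, the loop guard itself can be sketched as follows (a minimal Python sketch, not an implementation of D* Lite; the names key_less, g, rhs, and U follow the paper's pseudocode, and keys are the usual [k1, k2] pairs compared lexicographically):

```python
# Sketch of the ComputeShortestPath loop guard from D* Lite.
# Keys are (k1, k2) pairs; the priority queue U and the g/rhs tables
# are assumed to exist elsewhere and are not shown here.

def key_less(a, b):
    """D* Lite compares keys (k1, k2) lexicographically."""
    return a[0] < b[0] or (a[0] == b[0] and a[1] < b[1])

def should_expand(top_key, start_key, g_start, rhs_start):
    """Run the loop while the best queued key is still smaller than the
    start's key, OR while the start vertex is locally inconsistent
    (rhs(Sstart) != g(Sstart))."""
    return key_less(top_key, start_key) or rhs_start != g_start
```

As the question observes, after a replan it is usually the rhs(Sstart) != g(Sstart) disjunct that keeps the loop running, and if neither disjunct holds the loop body never executes.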
I have encountered an unexpected problem while trying to implement the AC-3 algorithm in MATLAB. It's an AI algorithm that achieves local consistency for networks of binary constraints.
Well, firstly, let me explain this to you. In general, what I am trying to do here is check whether an "arc" (xi,xj) of variables already exists in a FIFO queue called Q. If it does, I do nothing. If it does not already exist, I insert the arc at the back of Q (FIFO-style) and use break to exit the loop. I hope that is pretty clear so far.
To add this arc (xi,xj) to Q, I use a replicated matrix Q2, structured just like Q, which holds all the arcs of type (xi,xj) the whole time, while the initial Q has arcs deleted from it through the iterations of a for loop, following the theory of the AC-3 algorithm.
Here is the code where the problem is located:
for m=1:length(Q2)
    alreadyExists = any(cellfun(@(x) isequal(x, Q2{m}), Q));
    if ~alreadyExists
        fprintf('\nRe-entering in Q arc(x%d,x%d)...\n', Q2{m}(1), Q2{m}(2));
        Q{end+1} = Q2{m};
        re_enteringQ_cntr = re_enteringQ_cntr + 1;
        break;
    end
end
So what I am doing in the line:
alreadyExists = any(cellfun(@(x) isequal(x, Q2{m}), Q));
is comparing the arc Q2{m} against every element x of Q, each of which is also an arc.
And here is the problem, found using the profile viewer in MATLAB: it is really weird. About ~70 million calls to this specific line in nearly 12 minutes of wall-clock time?! I pressed Ctrl+C to stop the program, as it looked like it would run for a very long time. This cannot be right, of course; it is a bottleneck, and I have not found a solution so far.
To bring you up to speed: in every function and .m file of my whole algorithm I preallocate all the matrices, cell arrays, and vectors I use, in order to boost the speed.
Any ideas, any thoughts and/or feedback would be really appreciated.
Thank you in advance for your time.
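One likely culprit is that the cellfun/isequal membership test scans the whole queue on every check, making the queue quadratic overall. A common fix is to keep a separate set of queued arcs alongside the FIFO, so membership is O(1). A minimal sketch of the idea (in Python rather than MATLAB; the function names are mine):

```python
# Sketch: a FIFO of arcs plus a set mirroring its contents, so the
# "already in Q?" test is a hash lookup instead of a linear scan.
from collections import deque

def make_queue():
    return deque(), set()

def push_arc(q, members, arc):
    """Enqueue arc (xi, xj) only if it is not already queued.
    Returns True if the arc was actually added."""
    if arc not in members:
        q.append(arc)
        members.add(arc)
        return True
    return False

def pop_arc(q, members):
    """Dequeue from the front (FIFO) and drop it from the mirror set."""
    arc = q.popleft()
    members.discard(arc)
    return arc
```

In MATLAB the same idea can be approximated with a containers.Map keyed by a string encoding of the arc, or a logical matrix indexed by (xi, xj), instead of scanning a cell array with cellfun.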
I need to run many, many tests of the form v<0, where v is a vector (a relatively short one). I am currently doing it with
all(v<0)
Is there a faster way?
Not sure which one will be faster (that may depend on the machine and Matlab version), but here are some alternatives to all(v<0):
~any(v>0)
nnz(v>=0)==0 %// Or ~nnz(v>=0)
sum(v>=0)==0 %// Or ~sum(v>=0)
isempty(find(v>0, 1)) %// Or isempty(find(v>0))
I think the issue is that the comparison is evaluated for all elements of the array first, and only then is the result tested. That is, for the test any(v<0), I believe MATLAB does the following:
Step 1: compute v<0 for every element of v
Step 2: search through the results of step 1 for a true value
So even if the first element of v is less than zero, the comparison is first computed for all elements, wasting a lot of time. I think this is also true for any of the alternative solutions offered above.
I don't know of a faster way to do it easily, but wish I did. In some cases, breaking the array v up into smaller chunks and testing incrementally could speed things up, particularly if the condition is common. For example:
function result = anyLessThanZero(v)
    w = v(:);
    result = true;
    for i = 1:numel(w)
        if ( w(i) < 0 )
            return;
        end
    end
    result = false;
end
but that can be very inefficient if the condition is rare. (If you were really to do this, there is probably a better way than the above to handle an arbitrary condition, not just <0, but I show it this way to keep it clear.)
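The incremental, early-exit idea above can be sketched in Python as well (illustrative only; the function names and chunk size are mine, and a generator with any() short-circuits at the first hit, unlike a fully vectorized comparison):

```python
# Sketch: early-exit tests for "any element < 0".

def any_less_than_zero(v):
    """Generator version: any() stops at the first negative element,
    so the comparison is not evaluated for the rest of the vector."""
    return any(x < 0 for x in v)

def any_less_than_zero_chunked(v, chunk=64):
    """Chunked version, mirroring the incremental testing in the
    MATLAB loop above: test a block at a time so a hit near the
    front exits early."""
    for start in range(0, len(v), chunk):
        if any(x < 0 for x in v[start:start + chunk]):
            return True
    return False
```

As with the MATLAB version, this wins when negative elements are common and near the front, and loses to the vectorized test when they are rare.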
If I have code for some function f (which takes one input, for simplicity), I need to decide whether the input x affects the output f(x), i.e., whether f is a constant function as defined below.
Define f to be a constant function if the output of f is invariant w.r.t. x. This must hold for ALL inputs. For example, if we have f(x) = 0^x, it may output 0 for all inputs except x = 0, where it may output an error, so this f is not a constant function.
I can only do static analysis of the code and assume the code is Java source for simplicity.
Is this possible?
This is obviously at least as hard as solving the Halting Problem (proof left as an exercise), so the answer is "no", this is not possible.
It is almost certainly possible. In most cases. Where there aren't weird things going on.
For normal functions, the ordinary, useful kind that actually return values rather than doing their own little thing, yes.
For a simple function, not recursive, no nastiness of that sort, doing it manually, I would probably make the static-analysis equivalent of a sign chart, where I examine the code and determine every value of x that might possibly be a boundary condition or such (e.g. the code has if (x < 0) somewhere in it, so I check the function for values of x near 0). If this sort of attempt is doomed to fail please tell me before I try to use it on something.
Using brute force to grind away at it could work, unless you are working with quadruple-precision x values or something similarly sized, because then brute force could take years. Although at that point it's not really static analysis anymore.
Static-analysis generally really means having a computer tell you by looking at the code, not you looking at it yourself (at least not very much). Algorithms exist for doing this in many languages, wikipedia has such a list, including some free or even open source.
The ultimate proof that something can be done is for it to have been done already.
Since you'd call a non-terminating function non-constant, here's the reduction from your problem to the halting problem:
void does_it_halt(...);
int f(int x) {
if(x == 1) {
does_it_halt();
}
return 0;
}
Asking if f is constant is equivalent to asking if does_it_halt halts. Therefore, what you're asking for is impossible, since the halting problem is undecidable.
I'm reading about visual programming languages these days. So I've thought up two "paradigms". In both of them, you have one start point, and several end points.
Now, you could either begin at the start point or move in reverse from the end points (the order of end points is known).
Beginning from the start point feels weird, because you can have "splits" in the data flow. Say I have an integer, and this integer is needed by two functions simultaneously. Bad. I don't want to get into concurrent coding. At least not yet. Or should I?
Beginning at the end points feels much better. You start at the first end point, check whatever is needed, and evaluate that. I believe this is lazy evaluation. But the problem comes when you have multiple inputs: how do you decide the order in which to evaluate them?
Can you point me to some articles/papers/something on the internet? Or maybe tell me a few keywords to look for?
If I get what you mean, using the same integer in two functions is exactly that: you just use it twice; no need to bring concurrency in. If the 'implementation' you were thinking about destroyed input values, you could take a copy before using it.
int i = 2;
int j = fun1(i);
int k = fun2(i);
int res = fun3(j, k);
would become:
i = 2[A]
|
Clone[B]
/ \
/ \
/ \
i_1 i_2
| |
fun1[C] fun2[D]
| |
j k
\ /
\ /
\ /
fun3[E]
|
res
But there's no need of concurrency in order to evaluate the graph. You can just evaluate 'parallel' branches left to right (as indicated by the A-B-C-... labelling - see also here).
Top-down (aka from start to end), left-to-right feels more natural than bottom-up, provided bottom-up actually has a well defined meaning. Regarding the latter point, assuming you do have results for the program, you can't always compute the inputs: think about what happens when funXXX are not injective (for example fun1(x) = x*x) and thus not invertible.
I hope I'm not completely misinterpreting your train of thought.
Moving forward, what you want is the topological sort of your dependency graph - that is, an order in which to execute nodes such that you never execute a node before its dependencies. This assumes, naturally, that there are no cycles in your graph.
Moving backwards, what you're doing is recursively resolving the graph. Starting with the end node, for each dependency that is not yet calculated, you recursively invoke the procedure on that node, until all input values are evaluated. This has the advantage that you never process nodes that aren't required by a particular end state.
Which of the two approaches is best depends somewhat on what precisely you're doing.
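Both orders can be sketched concretely (a minimal Python sketch; the graph encoding, function names, and the evaluate callback are my own choices, and the graph is assumed acyclic):

```python
# Sketch of the two evaluation orders for a dependency graph.
# deps maps each node to the list of nodes it depends on.
from collections import deque

def topo_order(deps):
    """Forward order via Kahn's algorithm: every node appears after
    all of its dependencies."""
    indeg = {n: len(d) for n, d in deps.items()}
    dependents = {n: [] for n in deps}
    for n, d in deps.items():
        for m in d:
            dependents[m].append(n)
    ready = deque(n for n, k in indeg.items() if k == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    return order

def demand_eval(node, deps, evaluate, cache=None):
    """Backward (demand-driven) resolution: starting from an end node,
    recursively evaluate only the dependencies it actually needs,
    caching results so shared inputs (like the cloned integer above)
    are computed once."""
    if cache is None:
        cache = {}
    if node not in cache:
        args = [demand_eval(d, deps, evaluate, cache) for d in deps[node]]
        cache[node] = evaluate(node, args)
    return cache[node]
```

With the i/j/k/res graph from the earlier diagram, topo_order gives an order where i precedes fun1 and fun2, which precede fun3, while demand_eval starting at res pulls in exactly those four nodes and nothing else.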
FIRST PROBLEM
I have timed how long it takes to compute the following statements (where V[x] is a time-intensive function call):
Alice = Table[V[i], {i, 1, 300}, {1000}];
Bob = Transpose[Table[Table[V[i], {i, 1, 300}], {1000}]];
Chris_pre = Table[V[i], {i, 1, 300}];
Chris = Transpose[Table[Chris_pre, {1000}]];
Alice, Bob, and Chris are identical matrices computed in 3 slightly different ways. I find that Chris is computed 1000 times faster than Alice and Bob.
It is not surprising that Alice is computed 1000 times slower because, naively, the function V must be called 1000 more times than when Chris is computed. But it is very surprising that Bob is so slow, since he is computed identically to Chris except that Chris stores the intermediate step Chris_pre.
Why does Bob evaluate so slowly?
SECOND PROBLEM
Suppose I want to compile a function in Mathematica of the form
f(x)=x+y
where "y" is a constant fixed at compile time (which I prefer not to replace directly in the code with its numerical value, because I want to be able to change it easily). If y's actual value is y = 7.3, and I define
f1=Compile[{x},x+y]
f2=Compile[{x},x+7.3]
then f1 runs 50% slower than f2. How do I make Mathematica replace "y" with "7.3" when f1 is compiled, so that f1 runs as fast as f2?
EDIT:
I found an ugly workaround for the second problem:
f1=ReleaseHold[Hold[Compile[{x},x+z]]/.{z->y}]
There must be a better way...
You probably should've posted these as separate questions, but no worries!
Problem one
The problem with Alice is of course what you expect. The problem with Bob is that the inner Table is evaluated once per iteration of the outer Table. This is clearly visible with Trace:
Trace[Table[Table[i, {i, 1, 2}], {2}]]
{
Table[Table[i,{i,1,2}],{2}],
{Table[i,{i,1,2}],{i,1},{i,2},{1,2}},{Table[i,{i,1,2}],{i,1},{i,2},{1,2}},
{{1,2},{1,2}}
}
Line breaks added for emphasis, and yeah, the output of Trace on Table is a little weird, but you can see it. Clearly Mathematica could optimize this better, knowing that the outside table has no iterator, but for whatever reason, it doesn't take that into account. Only Chris does what you want, though you could modify Bob:
Transpose[Table[Evaluate[Table[V[i],{i,1,300}]],{1000}]]
This looks like it actually outperforms Chris by a factor of two or so, because it doesn't have to store the intermediate result.
Problem two
There's a simpler solution with Evaluate, though I expect it won't work with all possible functions to be compiled (i.e. ones that really should be Held):
f1 = Compile[{x}, Evaluate[x + y]];
You could also use a With:
With[{y=7.3},
f1 = Compile[{x}, x + y];
]
Or if y is defined elsewhere, use a temporary:
y = 7.3;
With[{z = y},
f1 = Compile[{x}, x + z];
]
I'm not an expert on Mathematica's scoping and evaluation mechanisms, so there could easily be a much better way, but hopefully one of those does it for you!
Your first problem has already been explained, but I want to point out that ConstantArray was introduced in Mathematica 6 to address this issue. Prior to that time Table[expr, {50}] was used for both fixed and changing expressions.
Since the introduction of ConstantArray there is clear separation between iteration with reevaluation, and simple duplication of an expression. You can see the behavior using this:
ConstantArray[Table[Pause[1]; i, {i, 5}], {50}] ~Monitor~ i
It takes five seconds to loop through Table because of Pause[1], but after that loop is complete it is not reevaluated and the 50 copies are immediately printed.
First problem
Have you checked the output of the Chris_pre computation? You will find that it is not a large matrix at all, since you're trying to store an intermediate result in a pattern rather than a variable: the underscore makes Mathematica parse Chris_pre as a pattern. Try ChrisPre instead. Then all the timings are comparable.
Second problem
Compile has a number of tricky restrictions on its use. One issue is that you cannot refer to global variables. The With construct that was already suggested is the standard way around this. If you want to learn more about Compile, check out Ted Ersek's tricks:
http://www.verbeia.com/mathematica/tips/Tricks.html