Calculating an algorithm's execution time in Prolog

I am solving the Bridge and Torch puzzle in Prolog. As I am trying different search methods, I need to measure the computation time, i.e. the start time and the end time of solving the problem with each algorithm.
Can I access the system time from Prolog? How?

I calculated the overall time in a predicate using get_time/1, like this:
print_solution :-
    get_time(T1),
    init(State),
    solve(State, Solution, EndState),
    writeln('Start state:'),
    writeln(State),
    writeln('Solution:'),
    writeln(Solution),
    writeln('Final state:'),
    writeln(EndState),
    get_time(T2),
    DeltaT is T2 - T1,
    format('time: ~w s~n', [DeltaT]).   % get_time/1 returns seconds, not ms

Using get_time/1 to assess the execution time of algorithms does not measure the execution time of the algorithm in isolation, but wall-clock time affected by your entire machine's process load. E.g., if your OS decides to run some process in the background, this can already change the time delta considerably.
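If you want to exclude other processes from the measurement, you can instead query the CPU time of the Prolog process with statistics/2. A minimal sketch (SWI-Prolog; the runtime key reports milliseconds, and init/1 and solve/3 are the predicates from the question):

timed_solve :-
    statistics(runtime, [T1|_]),        % CPU time of this process so far
    init(State),
    solve(State, _Solution, _EndState),
    statistics(runtime, [T2|_]),
    DeltaT is T2 - T1,
    format('CPU time: ~w ms~n', [DeltaT]).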
SWI-Prolog has dedicated support for tracking the time spent in all called predicates. For this you need to (1) import library(statistics) and (2) wrap the predicate you want to profile in profile/1.
For example:
:- use_module(library(statistics)).

run_test :-
    profile(my_predicate(MyArgument)).
Hope this helps!

Related

Prove that we can decide whether a Turing machine takes at least 100 steps on some input

We know that the problem "Does this Turing machine take at least this finite number of steps on that input?" is decidable, because it will always answer yes or no: yes if the machine reaches the given number of steps, and no if it halts before that.
Now here is my doubt: suppose it halts before reaching that many steps, i.e. the input either (1) got accepted or (2) got rejected; or maybe (3) it doesn't halt but goes into an infinite loop. In case (3), how can we be sure that it will always stay in that loop?
What I mean is: if it doesn't run forever but comes out of the loop at some point, then it might cross the asked number of steps, and the decision could be made then which was not possible earlier. If so, how can we conclude that the problem is decidable, when, stuck in a loop, we won't be able to say anything about the outcome?
(I already more or less answered your question when I edited it.)
The thing is, the decision procedure (a Turing machine, an algorithm, or any other equivalent formalism) that takes as inputs a Turing machine M, a number N and an input X, and returns yes or no, has total control over how it executes M on X. It simulates it step by step. So it can run one step of M(X), increment an instruction counter, compare it to N and, as soon as the given number of steps is reached, stop and return yes. At that point, the simulated machine M need not be in a final state, and the full computation M(X) could very well diverge. We don't care, because we only run the first N steps.
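For concreteness, here is a minimal sketch in Prolog of such a bounded simulation, assuming a hypothetical step/2 predicate that performs one transition of M on its current configuration (and fails once M has halted):

runs_at_least(_Config, 0).              % 0 steps: trivially yes
runs_at_least(Config, N) :-
    N > 0,
    step(Config, Config1),              % one step of M(X); fails if M halted
    N1 is N - 1,
    runs_at_least(Config1, N1).

This always terminates: it runs at most N steps of M, so it decides the question even when the full computation M(X) diverges.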
Most likely the conditional structures were not debugged/developed enough, so multiple conditions often conflicted with each other; the error reporting was not definitive either, so semi-abstract notions like "decidable" and "undecidable" were used.
As a loose example: years ago I wrote a "64-bit ROM memory" simulator in VBScript. As I tried to manage the memory cells, where I/O read/write locations were assigned, I used many formulas and conditions for decimal-to-binary conversion, all the operations, indexing, etc.
I also ran into bugs because the conditions were not perfect. Why? Because the program had some unresolved, somewhat arbitrary results that could have ended up in:
print.debug "decidable"
On Error Resume h
h:
print.debug "undecidable"
This was an example with a clear scope and a debatable result.
To return to your question: "so how do we conclude that it's decidable??"
Wikipedia:
The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). With this model, Turing was able to answer two questions in the negative:
Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)?
Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol?
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem').

Modelica events and hybrid modelling

I would like to understand the general idea behind hybrid modelling (in particular state events) from a numerical point of view (although I am not a mathematician :)). Given the following Modelica model:
model BouncingBall
  constant Real g = 9.81;
  Real h(start=1);
  Real v(start=0);
equation
  der(h) = v;
  der(v) = -g;
  when h < 0 then
    reinit(v, -pre(v));
  end when;
end BouncingBall;
I understand the concept of when and reinit.
The equations in the when statement are only active when the condition becomes true, right?
Let's assume that the ball would hit the floor at exactly 2 s. Since I am using a multi-step solver, does that mean that the solver "goes beyond 2 seconds" and recognizes that h < 0 (let's assume at simulation time = 2.5 s, h = -0.7)? What does "the time for the event is searched using a crossing function" mean? Is there a simple explanation (example)?
Is the solver now going back? Taking a smaller step size?
What does the pre() operator mean in that context?
noEvent(): "Expressions are taken literally instead of generating crossing functions. Since there is no crossing function, there is no requirement that the expression can be evaluated beyond the event limit." What does that mean? Given the same example with the bouncing ball: the solver detects at time 2.5 that h = -0.7. What's the difference with and without noEvent()?
Yes, the body of when is only executed at events.
Simple view: The solver takes steps, and then uses a continuous extension to generate a (smooth) interpolation formula for the previous step. That interpolation formula can be used to generate a plot, and also for finding the first point where h has crossed zero (likely 2.000000001). An event iteration is then done at that interpolated point - and afterwards the solver is restarted.
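A simplified worked example (using the model's actual initial conditions rather than the hypothetical 2 s crossing): during free fall, h(t) = 1 - (g/2)t^2, so the exact crossing is at t* = sqrt(2/g) ≈ 0.4515 s. If the solver's step went from t = 0.4 to t = 0.5, the crossing function h changes sign over that step, and a root finder (e.g., bisection or the secant method) is applied to the step's interpolation formula to locate t*; the event iteration then happens at that point.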
I wouldn't say that the solver goes back. It takes a partial step and then continues forward. Some solvers need to reduce the step-size a lot after the event - others don't.
pre(x) is set to the value of x before the event.
noEvent(h<0) basically means: evaluate the expression as written, without all the bells and whistles of crossing functions. You cannot write "when noEvent(h<0) then".
There are many additional points:
If you are familiar with Sturm-sequences or control theory you might realize that it is not necessary to interpolate a formula to determine if it crossed zero or not in an interval (and some tools use that). The fact that the function is not necessarily smooth makes it a bit more complicated, and also means that derivative-tests cannot be used.
How much the solver is reset depends on the kind of solver. One-step solvers (Runge-Kutta) can be restarted directly as if virtually nothing happened, whereas multi-step solvers (BDF/Adams - such as dassl/lsodar/cvode) need to start with lower order and smaller step-size.

Prolog's CLP over Finite Domains library performance

I'm programming a task scheduler/planner in Prolog, and for that I'm planning to use the CLP(FD) library (on SWI-Prolog). I was wondering how powerful finite domains are for solving scheduling problems, and what the impact on CPU load would be if I use them.
The scheduling problem would be based on the assumptions stated on page 10 of this paper: "Constraint-Based Scheduling". In fact my tasks/activities will be very heterogeneous (some will be preemptable while others won't) and the resources will have different capacities. Right now, I'm just working on a simple case (non-preemptive, disjunctive scheduling) and I've come up with something like this:
/* Non-preemptive, disjunctive scheduling. *******************************/
:- use_module(library(clpfd)).

planner :-
    /* 'S' stands for start point.
       'E' stands for end point. */
    set(a1, S1, E1),
    set(a2, S2, E2),
    set(a3, S3, E3),
    interval(intersection, [S1,E1], [S2,E2], []),  % Tests whether activities
    interval(intersection, [S2,E2], [S3,E3], []),  % intersect. If they do,
    interval(intersection, [S3,E3], [S1,E1], []),  % backtracking occurs and
    (...).                                         % an alternative solution
                                                   % will be looked for.

/* A set of times in which activity A executes (non-preemptive). */
set(A, [S], [E]) :-
    /* 'A' is the activity.
       'R' is the release point and 'D' the deadline point.
       'Lst' stands for Latest Start Time.
       'Eet' stands for Earliest End Time. */
    preemptable(A, no),
    rd(A, R, D),
    p(A, P),
    Lst is D - P,
    Eet is R + P,
    S in R..D,
    E in R..D,
    S #=< Lst,
    E #>= Eet,
    S #< E,
    P #= E - S,
    indomain(S),
    indomain(E).
set(_A, [], []).  /* When the activity can't be scheduled. */
It does work, and it's really fast (instantaneous, in fact). But this is just a simple case with three activities; in my final program I'll have hundreds of them, and the scheduling problem will be much more complex than this.
Thanks for your advice!
In general, CLP(FD) is a suitable and well-established way to solve such kinds of problems. Note however that there are many different ways to model your problem even within library(clpfd): You can for example use the global constraints serialized/2 or cumulative/1 to express it. Other Prolog systems will often give you much better performance than SWI-Prolog, but the way you model your problem and search for solutions typically affects performance much more than the optimizations of any specific implementation.
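For example, a minimal sketch using serialized/2 in SWI-Prolog (the durations and domain bounds are made up for illustration):

:- use_module(library(clpfd)).

three_tasks(Starts) :-
    length(Starts, 3),
    Durations = [2,4,1],            % processing times
    Starts ins 0..20,               % allowed start window
    serialized(Starts, Durations),  % pairwise non-overlap of the tasks
    label(Starts).

Querying three_tasks(Starts) then yields, e.g., Starts = [0, 2, 6].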

State propagation during backtracking in Prolog

Let's assume that I have a simple program in Prolog which searches through a certain state space:
search(State, State) :-
    is_solution(State).
search(State, Solution) :-
    generate(State, NewState),
    search(NewState, Solution).
And I know that:
generate(State, NewState) produces at least one NewState for any given State
the whole state space is finite
I want to modify the search predicate to ensure that it always terminates in finite time. So I write something like:
search(State, Solution) :-
    empty_memory(EmptyMem),
    add(State, EmptyMem, Memory),
    search(State, Memory, Solution).

search(State, _, State) :-
    is_solution(State).
search(State, Memory, Solution) :-
    generate(State, NewState),
    \+ exist(NewState, Memory),
    add(NewState, Memory, NewMemory),
    search(NewState, NewMemory, Solution).
This works, but it loses computed states during backtracking, so I end up with a search tree whose height can reach the size of the state space.
Is there any way to propagate a state during backtracking without losing computed information? I want a whole search tree with O(space_size) nodes. Is that possible?
EDIT:
It seems that I should use assertz/1 (or asserta/1) to dynamically create new clauses which will serve as a global memory.
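A minimal sketch of that idea (predicate names are illustrative; is_solution/1 and generate/2 are as above):

:- dynamic visited/1.

search_asserting(State, Solution) :-
    retractall(visited(_)),
    assertz(visited(State)),
    search_mem(State, Solution).

search_mem(State, State) :-
    is_solution(State).
search_mem(State, Solution) :-
    generate(State, NewState),
    \+ visited(NewState),
    assertz(visited(NewState)),   % survives backtracking
    search_mem(NewState, Solution).

Since asserted clauses are not retracted on backtracking, each state is expanded at most once, giving the desired O(space_size) bound.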
In SICStus Prolog, you can use the blackboard to store information across backtracking: see Blackboard Primitives in the manual. Use bb_put(Key, Value) to store something on the blackboard, and bb_get(Key, Value) to retrieve it. Note that the blackboard is defined per module.
The cleanest solution will likely be to use a Prolog system that supports tabling, like B-Prolog, Ciao, XSB, or YAP. In that case, you would simply declare the generate/2 predicate as tabled.
Instead of using assert, why not generate all possible states with findall(N, generate(S, N), All)? This eliminates the need for backtracking and explicates the search-space tree, which can then be preorder-traversed while passing along the visited-so-far states as an additional argument.
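A sketch of this approach (illustrative; it threads the visited set through a preorder traversal so each state is expanded at most once):

solutions(State, Solutions) :-
    dfs(State, [], _Visited, Solutions).

dfs(State, Visited, Visited, []) :-
    memberchk(State, Visited), !.             % already seen: prune
dfs(State, Visited0, Visited, Solutions) :-
    findall(N, generate(State, N), Children), % explicate the children
    (   is_solution(State)
    ->  Solutions = [State|Solutions1]
    ;   Solutions = Solutions1
    ),
    dfs_list(Children, [State|Visited0], Visited, Solutions1).

dfs_list([], Visited, Visited, []).
dfs_list([S|Ss], Visited0, Visited, Solutions) :-
    dfs(S, Visited0, Visited1, Solutions1),
    dfs_list(Ss, Visited1, Visited, Solutions2),
    append(Solutions1, Solutions2, Solutions).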

What algorithm for a scheduling program?

I have this problem of scheduling tasks. Each task has a suggested start time T (it needs to start within [T-10, T+10]), takes L minutes to complete, and uses a number of resources [R1, R2, ...]. While a resource is being used, no other task can use it. Given that only the start time is flexible, my goal is to schedule the tasks so that each can access the resources it needs, or to point out all the conflicts that need resolving.
Which algorithm can I use for this purpose? Thank you.
Since you've tagged this as prolog, I recommend implementing it in constraint logic programming (CLP) and using the algorithms built into your CLP implementation. Partial example:
:- use_module(library(clpfd)).
on_time([]).
on_time([Task|Tasks]) :-
    Task = task(TSuggested,TActual,_L,_Rs),
    TActual #>= TSuggested - 10,
    TActual #=< TSuggested + 10,
    on_time(Tasks).
Another predicate would check that no two tasks use the same resource concurrently:
nonoverlap(R, Task1, Task2) :-
    Task1 = task(_,T1,L1,Rs1),
    Task2 = task(_,T2,L2,Rs2),
    (   ( member(R, Rs1), member(R, Rs2) )
    ->  T2 #> T1 + L1   % start Task2 after Task1 has finished
        #\/             % OR
        T1 #> T2 + L2   % start Task1 after Task2 has finished
    ;   true            % non-conflicting, do nothing
    ).
Finally, call labeling on all the constrained variables to give them consistent values. This uses CLP(FD), which works for integer time units. CLP(R) does the same for real-valued time but is slightly more complicated. The links are for SWI-Prolog, but SICStus and ECLiPSe have similar libraries.
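A hedged sketch of how the pieces could be tied together (plan/2, task_start/2 and the traversal helpers are illustrative names, not library predicates):

task_start(task(_,TActual,_,_), TActual).

plan(Tasks, Resources) :-
    on_time(Tasks),
    all_nonoverlap(Tasks, Resources),
    maplist(task_start, Tasks, Starts),
    labeling([], Starts).               % assign consistent start times

all_nonoverlap([], _).
all_nonoverlap([Task|Tasks], Resources) :-
    maplist(pair_nonoverlap(Resources, Task), Tasks),
    all_nonoverlap(Tasks, Resources).

pair_nonoverlap(Resources, Task1, Task2) :-
    maplist(resource_nonoverlap(Task1, Task2), Resources).

resource_nonoverlap(Task1, Task2, R) :-
    nonoverlap(R, Task1, Task2).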
Scheduling problems like this are often best addressed using either Constraint Programming (CP) or Mixed Integer Programming (MIP). Both are declarative approaches, so you only need to focus on the properties of your problem and let a specialized engine handle the underlying algorithm. More information can be found on Wikipedia:
http://en.wikipedia.org/wiki/Constraint_programming
http://en.wikipedia.org/wiki/Linear_programming
If your constraints or your problem domain will scale out, you should also take a look at imperfect algorithms, such as:
Metaheuristics such as tabu search and simulated annealing. There are a couple of open source implementations out there, such as Drools Planner.
Genetic algorithms, such as JGap.
