ECLiPSe CLP: Pause between subresults found by search/6 in the ic library

(This question regards search/6.)
I was wondering if there is a way (other than manual tracing) to pause the execution of search/6 every time a new solution for a single variable is found.
I would like to accomplish this to further investigate what is happening during search in constrained models.
For example, if you are trying to solve the classic sudoku problem, and you have written a set of constraints and a print method for your board, it can be useful to print the board after setting the constraints, but before searching, in order to evaluate the strength of your constraints. However, once search is called to solve the sudoku, you don't really have an overview of the individual results being built underneath unless you do a trace.
It would be very useful if something along the following lines were possible:
(this is just an abstract example)
% Let's imagine this is a (very poorly) constrained sudoku board
?- problem(Sudoku),constraint(Sudoku),print(Sudoku).
[[1,3,_,2,_,_,7,4,_],
[_,2,5,_,1,_,_,_,_],
[4,8,_,_,6,_,_,5,_],
[_,_,_,7,8,_,2,1,_],
[5,_,_,_,9,_,3,7,_],
[9,_,_,_,3,_,_,_,5],
[_,4,_,_,_,6,8,9,_],
[_,5,3,_,_,1,4,_,_],
[6,_,_,_,_,_,_,_,_]]
Now for the search:
?- problem(Sudoku),constraint(Sudoku),search_pause(Sudoku,BT),print(Sudoku,BT).
[[1,3,6,2,_,_,7,4,_],
[_,2,5,_,1,_,_,_,_],
[4,8,_,_,6,_,_,5,_],
[_,_,_,7,8,_,2,1,_],
[5,_,_,_,9,_,3,7,_],
[9,_,_,_,3,_,_,_,5],
[_,4,_,_,_,6,8,9,_],
[_,5,3,_,_,1,4,_,_],
[6,_,_,_,_,_,_,_,_]]
Board[1,3] = 6
Backtracks = 1
more ;

Using existing Visualization tools
Have a look at the Visualization Tools Manual. You can get the kind of matrix display you want by adding a viewable_create/2 annotation to your code, and launching a Visualisation Client from TkECLiPSe's Tools-menu.
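For example, a minimal sketch of what this could look like for the sudoku board above, assuming problem/1 and constraint/1 set up the board as a nested list of ic domain variables as in the question (the viewable name sudoku is just an illustrative choice):
?- lib(viewable),
   problem(Board),
   constraint(Board),
   viewable_create(sudoku, Board),   % make the matrix visible to a visualization client
   term_variables(Board, Vars),
   search(Vars, 0, first_fail, indomain, complete, []).
With a visualisation client attached before the search starts, you can watch the domains shrink as propagation and labelling proceed.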
Using your own instrumented search routine
You can replace the indomain_xxx choice methods in search/6 with a user-defined one where you can print information before and/or after propagation.
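For example (a sketch only; it assumes that, as described in the search/6 documentation, a choice-method atom that is not one of the built-in names refers to a user-defined predicate called with the variable to be labelled, so check the exact calling convention in your ECLiPSe version):
:- lib(ic).
% Behaves like indomain/1, but reports every value it is about to try,
% so each labelling step and the propagation it triggers become visible.
traced_indomain(X) :-
    ( var(X) ->
        get_min(X, Min),
        printf("trying %w for %w%n", [Min, X]),
        (   X = Min                          % try the smallest value first ...
        ;   X #\= Min, traced_indomain(X)    % ... or exclude it on backtracking and retry
        )
    ;
        true
    ).
% ?- problem(Board), constraint(Board), term_variables(Board, Vars),
%    search(Vars, 0, first_fail, traced_indomain, complete, []).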
If that is not enough, you can replace the whole built-in search/6 with your own, which is not too difficult, see e.g. the ECLiPSe Tutorial chapter on tree search or my answer to this question.
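As an illustration of the second option, here is a minimal hand-rolled labelling loop in the spirit of the search_pause/2 idea from the question (a sketch; the names label_pause/1 and pause/0 are made up here):
:- lib(ic).
label_pause(Vars) :-
    ( foreach(X, Vars) do
        indomain(X),                      % choose a value (leaves a choice point)
        printf("assigned %w%n", [X]),
        pause                             % stop until the user presses return
    ).
pause :-
    write("press return to continue "),
    flush(output),
    get_char(_).
% ?- problem(Board), constraint(Board), term_variables(Board, Vars), label_pause(Vars).
A backtrack count in the style of the hypothetical search_pause/2 could be added on top of this with a non-logical counter (setval/getval), incremented whenever a value choice is undone.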
Tracing using data-driven facilities
Using ECLiPSe's data-driven control facilities, you can quite easily display information when certain things happen to your variables. In the simplest case you do something on variable instantiation:
?- suspend(printf("X was instantiated to %w%n",[X]), 1, X->inst),
   writeln(start), X=3, writeln(end).
start
X was instantiated to 3
end
Based on this idea, you can write code that allows you to follow labeling and propagation steps even when they happen inside a black-box search routine. See the link for details.
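For instance, a small sketch along those lines, attaching such a suspension to every variable of the board before calling the normal, unmodified search/6 (trace_instantiations/1 is just an illustrative name):
:- lib(ic).
trace_instantiations(Board) :-
    term_variables(Board, Vars),
    ( foreach(X, Vars) do
        suspend(printf("instantiated to %w%n", [X]), 1, X->inst)
    ).
% ?- problem(Board), constraint(Board),
%    trace_instantiations(Board),
%    term_variables(Board, Vars),
%    search(Vars, 0, first_fail, indomain, complete, []).
Each suspended goal is woken exactly when its variable gets bound, including re-instantiations after backtracking, so you see every labelling step even though search/6 itself is untouched.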

Related

Visual Studio - Is there any way to search a subset of code that can be executed?

Not sure if this is possible, but given the way we can easily navigate to implementations of method calls (or at least a choice of possible implementations) and can even syntax-highlight code coverage: is there any way to perform a 'search' of, or get an overview of, all the code that CAN be run in a given highlighted section?
I.e., if I highlight this code:
CallThirdParty(); // this function calls five other functions from classes X Y and Z
WriteToDatabase(); // no child function calls
PerformReconciliation(); // this function calls fourteen other functions from class A
Could I run a search on code that would be in classes X, Y, Z and A? Or at least get a view of all the code that would/could be run for that snippet?
Forgive me if it doesn't make much sense, but I think this would be absolutely awesome, especially when jumping into a project you aren't familiar with!
For Visual Studio for the question purposes but I'd be interested in any IDE / plugin that accomplishes this.
The Code Map might do what you're looking for. As the article says, it can help you:
Understand the overall architecture of a .NET application.
Analyze dependencies surfaced in that architecture by progressively drilling into the details.
Understand and analyze the impact of a proposed change to the code by building a dependency map from a specific code element.
The Call Hierarchy may help too. As the article says, the
Call Hierarchy enables you to navigate through your code by displaying all calls to and from a selected method, property, or constructor. This enables you to better understand how code flows and to evaluate the effects of changes to code. You can examine several levels of code to view complex chains of method calls and additional entry points to the code, which enables you to explore all possible execution paths.
Additionally, you could always debug and step through the code to see what it does under different circumstances and then look at the call stack etc. to follow your calls and variables through.

How to pass functions as arguments to other functions in Julia without sacrificing performance?

EDIT to try to address @user2864740's edit and comment: I am wondering if there is any information particularly relevant to 0.4rc1/rc2, or in particular a strategy or suggestion from one of the Julia developers more recent than those cited below (particularly @StefanKarpinski's Jan 2014 answer in #6 below). Thanks.
Please see e.g.
https://groups.google.com/forum/#!topic/julia-users/pCuDx6jNJzU
https://groups.google.com/forum/#!topic/julia-users/2kLNdQTGZcA
https://groups.google.com/forum/#!msg/julia-dev/JEiH96ofclY/_amm9Cah6YAJ
https://github.com/JuliaLang/julia/pull/10269
https://github.com/JuliaLang/julia/issues/1090
Can I add type information to arguments that are functions in Julia?
Performance penalty using anonymous function in Julia
(As a fairly inexperienced Julia user) my best synthesis of this information, some of which seems to be dated, is that the best practice is either "avoid doing this" or "use FastAnonymous.jl."
I'm wondering what the bleeding edge latest and greatest way to handle this is.
[Longer version:]
In particular, suppose I have a big hierarchy of functions. I would like to be able to do something like
function transform(function_one::Function{from A to B},
                   function_two::Function{from B to C},
                   function_three::Function{from A to D})
    function::Function{from Set{A} to Dict{C,D}}(set_of_As::Set{A})
        Dict{C,D}([function_two(function_one(a)) => function_three(a)
                   for a in set_of_As])
    end
end
Please don't take the code too literally. This is a narrow example of a more general form of transformation I'd like to be able to do regardless of the actual specifics of the transformation, BUT I'd like to do it in such a way that I don't have to worry (too much) about checking the performance (that is, beyond the normal worries I'd apply in any non-function-with-function-as-parameter case) each time I write a function that behaves this way.
For example, in my ideal world, the correct answer would be "so long as you annotate each input function with @anon before you call this function with those functions as arguments, then you're going to do as well as you can without tuning to the specific case of the concrete arguments you're passing."
If that's true, great--I'm just wondering if that's the right interpretation, or if not, if there is some resource I could read on this topic that is closer to a "logically" presented synthesis than the collection of links here (which are more a stream of collective consciousness or history of thought on this issue).
The answer is still "use FastAnonymous.jl," or create "functor types" manually (see NumericFuns.jl).
If you're using julia 0.4, FastAnonymous.jl works essentially the same way that official "fast closures" will eventually work in base julia. See https://github.com/JuliaLang/julia/issues/11452#issuecomment-125854499.
(FastAnonymous is implemented in a very different way on julia 0.3, and has many more weaknesses.)

How do I reinstate constraints collected with copy_term/3 in SICStus Prolog?

The documentation says that
copy_term(+Term, -Copy, -Body) makes a copy of Term in which all variables have been replaced by new variables that occur nowhere outside the newly created term. If Term contains attributed variables, Body is unified with a term such that executing Body will reinstate equivalent attributes on the variables in Copy.
I'm previously affirming numerical CLP(R) constraints over some variables, and at some point I collect these constraints using copy_term/3. Later, when I try to reinstate the constraints using call(Body), I get an "Instantiation error" in arguments of the form [nfr:resubmit_eq(...)].
Here's a simplified example that demonstrates the problem:
:- use_module(library(clpr)).
?- {Old >= 0, A >= 0, A =< 10, NR = Old + Z, Z = Old*(A/D)}, copy_term(Old, New, CTR), call(CTR).
Results in:
Instantiation error in argument 1 of '.'/2
! goal: [nfr:resubmit_eq([v(-1.0,[_90^ -1,_95^1,_100^1]),v(1.0,[_113^1])])]
My question is: how do I reinstate the constraints in Body over New? I haven't been able to find concrete examples.
copy_term/3 is a relatively new built-in predicate; it was first introduced in SICStus around 2006. Its motivation was to replace the semantically cumbersome call_residue/2, which originated in SICStus 0.6 of 1987, with a cleaner and more efficient interface that splits the functionality in two:
call_residue_vars(Goal, Vars), which is like call(Goal) and upon success unifies Vars with a list of variables (in unspecified order) that are attached to constraints and were created or affected in Goal.
copy_term(Term, Copy, Body), which is like copy_term/2 and upon success unifies Body with a term that reinstates the actual constraints involved. Originally, Body was a goal that could be executed directly. However, many systems that adopted this interface (like SWI, YAP) switched to using a list of goals instead. This simplifies frequent operations since you have less defaultyness, but at the expense of making reinstatement more complex: you need to use maplist(call, Goals).
Most of the time, these two built-in predicates will be used together. You are using only one, which makes me a bit suspicious. You first need to figure out which variables are involved, and only then can you copy them. Typically you will use call_residue_vars/2 for that. If you are copying only a couple of variables (as in your example), you are effectively projecting the constraints onto these variables. This may or may not be your intention.
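To make the two-step pattern concrete, here is a sketch in SICStus flavour, where Body is a goal that can be called directly (copy_with_constraints/4 is just an illustrative name):
:- use_module(library(clpr)).
copy_with_constraints(Goal, Vars, Copy, Body) :-
    call_residue_vars(Goal, Vars),   % collect the variables that carry constraints
    copy_term(Vars, Copy, Body).     % copy them together with those constraints
% ?- copy_with_constraints({X >= 0, X + Y =:= 10}, Vars, Copy, Body),
%    call(Body).                     % reinstates the constraints on the variables in Copy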
This is simply a bug in CLPR, which is unsupported. We lost touch with the CLPR supplier a long time ago.

How can one get a list of Mathematica's built-in global rewrite rules?

I understand that over a thousand built-in rewrite rules in Mathematica populate the global rules table by default. Is there any way to get Mathematica to give a full or even partial list of those rules?
The best way is to get a job at Wolfram Research.
Failing that, I think that for things not completely compiled into the kernel you can recover most of the rules/definitions. Look at
Attributes[fn]
where fn is the command that you're interested in. If it returns
{Protected, ReadProtected}
then there's something you can get a look at (although often it's just a MakeBoxes (formatting) definition or an AutoLoad/Stub-type definition). To see what's there, run
Unprotect[fn];
ClearAttributes[fn, ReadProtected];
??fn
Quite often you'll have to run an example of the command to load it if it was a stub. You'll also have to dig down from the user-facing commands to the back-end implementations.
Eventually you'll most likely reach a core command that is compiled into the kernel that you can not see the details of.
I previously mentioned this in tips for creating Graph diagrams and it got a mention in What is in your Mathematica tool bag?.
A good example, with a nice bite-sized and digestible bit of code, is Experimental`AngularSlider[], mentioned in Circular/Angular slider. I'll leave it up to you to look at the code produced.
Another example is something like BoxWhiskerChart, where you need to call it once in order to load all of the code. Then you see that BoxWhiskerChart proceeds to call Charting`iBoxWhiskerChart, which you'll have to unprotect to look at, etc.

Cross version line matching

I'm considering how to do automatic bug tracking, and as part of that I'm wondering what is available to match source code line numbers (or, more accurately, numbers mapped from instruction pointers via something like addr2line) in one version of a program to the same line in another. (Assume everything is in some kind of source control and is available to my code.)
The simplest approach would be to use a diff tool/lib on the files and do some math on the line number spans; however, this has some limitations:
It doesn't handle cross-file motion.
It might not play well with lines that get changed.
It doesn't look at the information available in the intermediate versions.
It provides no way to manually patch up lines when the diff tool gets things wrong.
It's kinda clunky.
Before I start diving into developing something better:
What already exists to do this?
What features do similar system have that I've not thought of?
Why do you need to do this? If you use decent source version control, you should have access to old versions of the code, so you can simply provide a link to that and people can see the bug in its original place. In fact, the main problem I see with this system is that the bug may have already been fixed, but your automatic line-tracking code will still point to a line and say there's a bug there. It seems this system would be a pain to build and not provide a whole lot of help in practice.
My suggestion is: instead of trying to track line numbers, which, as you observed, can quickly get out of sync as the software changes, you should decorate each assertion (or other line of interest) with a unique identifier.
Assuming you're using C, in the case of assertions, this could be as simple as changing something like assert(x == 42); to assert(("check_x", x == 42)); -- this is functionally identical, due to the semantics of the comma operator in C and the fact that a string literal will always evaluate to true.
Of course this means that you need to identify a priori those items that you wish to track. But given that there's no generally reliable way to match up source line numbers across versions (by which I mean that for any mechanism you could propose, I believe I could propose a situation in which that mechanism does the wrong thing) I would argue that this is the best you can do.
Another idea: If you're using C++, you can make use of RAII to track dynamic scopes very elegantly. Basically, you have a Track class whose constructor takes a string describing the scope and adds this to a global stack of currently active scopes. The Track destructor pops the top element off the stack. The final ingredient is a static function Track::getState(), which simply returns a list of all currently active scopes -- this can be called from an exception handler or other error-handling mechanism.
