What is the Z3Py FreshBool() function?

What is the difference between the z3.Bool() and z3.FreshBool() functions?
My Z3 code in Python fails when I use Bool() (the solver returns unsat when it shouldn't), but works fine when I use FreshBool() (the desired behaviour is observed).
I cannot give out details of the code here, as it is part of an ongoing assignment in a course offered at my college.
If this information isn't sufficient, I can try to recreate the problem in unrelated code to provide you with a better sample to work on.
Thanks

If you use FreshBool(), z3 creates a brand-new variable that does not exist elsewhere in the system. When you use Bool and give it a name, you get the same variable every time that name is used.
To wit, consider:
from z3 import *
# These two are *different* variables even though they have the same name
a = FreshBool("a")
b = FreshBool("a")
s = Solver()
s.add(a != b)
print(s.check())
This will print:
sat
since the variables are different. (And hence can have different values in the model.)
But if you try:
from z3 import *
# These two are the *same* variable, since they have the same name
a = Bool("a")
b = Bool("a")
s = Solver()
s.add(a != b)
print(s.check())
Then you'll get:
unsat
because a and b are the same variable since they have the same name.
Most end-user code should simply use Bool, since that gives the usual intended semantics: you refer to variables by the names you gave them when you created them. But when developing libraries, you might want to create a temporary variable that is guaranteed not to clash with any other variable in the system. In those cases, use FreshBool. Note that in this latter case the string you provide is used only as a prefix. If you add print(s.model()) at the end of the first program, it'll print:
sat
[a!0 = True, a!1 = False]
showing the internally created "fresh" names.
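For illustration of the library use case, here is a small sketch; the helper name label is made up for this example and is not part of z3py. The point is that FreshBool lets a helper mint a new Boolean on every call without ever clashing with the caller's variables:
from z3 import *

# Hypothetical helper, for illustration only: attach a fresh Boolean "label"
# to a sub-formula. FreshBool guarantees the label cannot clash with any
# variable the caller has declared, however often the helper is called.
def label(solver, formula):
    lbl = FreshBool("lbl")   # internally named lbl!0, lbl!1, ...
    solver.add(lbl == formula)
    return lbl

x, y = Ints("x y")
s = Solver()
l1 = label(s, x > y)
l2 = label(s, x > y)         # a second, distinct label for the same formula
s.add(l1, Not(l2))           # both labels equal the same formula, so...
print(s.check())             # ...this prints: unsat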
z3 provides similar pairs for other sorts as well, such as Int() vs FreshInt(), Real() vs FreshReal(), etc.; they are intended to be used in exactly the same manner.
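For instance, the earlier Bool example carries over to integers unchanged; a minimal sketch (the printed model values are only an example):
from z3 import *

# Two *different* integer variables that merely share the prefix "i"
i = FreshInt("i")
j = FreshInt("i")
s = Solver()
s.add(i != j)
print(s.check())   # sat
print(s.model())   # e.g. [i!0 = 0, i!1 = 1]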

Related

Local declaration of (built-in) Lua functions to reduce overhead

It is often said that one should re-declare (certain) Lua functions locally, as this reduces the overhead.
But what is the exact rule / principle behind this? How do I know for which functions this should be done and for which it is superfluous? Or should it be done for EVERY function, even your own?
Unfortunately I can't figure it out from the Lua manual.
The principle is that every time you write table.insert, for example, the Lua interpreter looks up the "insert" entry in the table called table. Actually, it means _ENV.table.insert (_ENV is where the "global variables" live in Lua 5.2+; Lua 5.1 has something similar, but it's not called _ENV). The interpreter looks up the string "table" in _ENV and then looks up the string "insert" in that table. That is two table lookups every time you call table.insert, before the function actually gets called.
But if you put the function in a local variable, the interpreter gets it directly from the local variable, which is faster. It still has to look it up once, when you fill in the local variable.
It is superfluous if you only call the function once within the scope of the local variable, but that is pretty rare. There is no reason to do it for functions which are already declared as local. It also makes the code harder to read, so typically you won't do it except when it actually matters (in code that runs a lot of times).
My favourite tool for speeding things up in Lua is to place all the usable functions for a type in the __index field of its metatable.
A common example of this is the string datatype: it has all the string functions available as methods through the __index table of its metatable.
Therefore you can do things like this directly on a string...
print(('istaqsinaayok'):upper():reverse())
-- Output: KOYAANISQATSI
The logic above: the method lookup on a string value fails directly, so the __index metamethod is consulted for that method instead.
I like to implement the same behaviour for the number datatype...
-- do debug.setmetatable() only once; it affects all numbers, present and future
math.pi = debug.setmetatable(math.pi, {__index = math})
-- From now on numbers behave like objects ;-)
-- Let's output pi, but without using math.pi directly this time
print((180):rad()) -- computing pi with the method rad()
-- Output: 3.1415926535898
The logic ("if the key does not exist, look it up via __index") is only one step behind a local variable, imho.
Another example that works with this method...
-- koysenv.lua
_G = setmetatable(_G,
  { -- Metamethods
    __index = {}, -- Table constructor
    __name = 'Global Environment'
  })
-- Reference what's in _G into __index
for key, value in pairs(_G) do
  getmetatable(_G)['__index'][key] = value
end
-- Remove everything that is now in __index from _G
for key, value in pairs(getmetatable(_G)['__index']) do
  _G[key] = nil
end
return _G
When loaded as the last require, it moves everything in _G into the freshly created __index table.
After that _G looks totally empty ;-P
...but the environment keeps working as if nothing had happened.
To add to what @user253751 already said:
Code Quality
Lua is a very flexible language. Other languages require you to import the parts of the standard library you use; Lua doesn't. Lua usually provides one global environment, which should not be polluted. If you play with the environment _ENV (setfenv/getfenv on Lua 5.1 / LuaJIT), you'll still want to be able to access the Lua libraries. For that purpose you may want to localize them before changing the environment; you can then use your "clean" environment for your module / API table / class / whatever. Another option here is to use metatables; metatable chains may quickly get hairy though, and they are likely to harm performance, since a failed table lookup is required each time to trigger the indexing metamethods. Localizing otherwise global variables can thus be seen as a way of importing them; to give a minimal and rough example:
local print = print -- localize ("import") everything we need first
_ENV = {} -- set environment to clean table for module
function hello() -- this writes to _ENV instead of _G
print("Hello World!")
end
hello() -- inside the environment, all variables set here are accessible
return _ENV -- "export" the API table
Performance
Very minor nitpick: local variables aren't strictly always faster. In extreme cases (e.g. lots of upvalues), indexing a table (which doesn't need an upvalue if it's the environment, the string metatable or the like) may actually be faster.
I imagine that localizing variables is also required for many of the optimizations in optimizing compilers such as LuaJIT to be applicable; otherwise Lua can guarantee very little about the code. A global like print might well be overwritten somewhere in a deep code path, so the indexing operation has to be repeated every time; for a local, on the other hand, the interpreter has far more guarantees regarding its scope. It is thus able to detect constants that are only written once, on initialization for instance; for globals, very little code analysis is possible.

Using the output of functions in Mathematica for further computation

Mathematica has a bevy of useful functions (Solve, NDSolve, etc.). These functions output in a very strange manner, i.e. {{v -> 2.05334*10^-7}}. The major issue is that there does not appear to be any way to use the output of these functions in the program; that is to say, all of these appear to be terminal functions where the output is for human viewing only.
I have tried multiple methods (Part, /., etc.) to get the output of these functions into variables so the program can use them for further steps, but nothing works. The documentation says it can be done, but nothing it lists actually functions. For example, if I try to use /. to move values into a variable, it continues to treat the variable I assigned to as empty and does symbolic math with it instead of seeing the value. If I try to access the result by part, e.g. with [[1]], it says the variable is not that deep.
The only method I have found is to put the later steps in separate blocks and copy-paste the output to continue evaluation. Is there any way to get the output of these functions into variables programmatically?
Solve etc. produce a list of replacement rules. So you need to apply these rules to the pattern to be replaced. For instance
solutions = x /. Solve[x^2 == 3, x]
gives you all the solutions in a list.
Here is a quick way to get variable names for the solutions:
x1 = solutions[[1]]
x2 = solutions[[2]]

Mathematica Module versus With or Block - Guideline, rule of thumb for usage?

Leonid wrote in chapter iv of his book: "... Module, Block and With. These constructs are explained in detail in Mathematica Book and Mathematica Help, so I will say just a few words about them here. ..."
From what I have read (been able to find), I am still in the dark. For packaged functions I (simply) use Module, because it works and I know the construct. It may not be the best choice, though. It is not entirely clear to me (from the documentation) when, where or why to use With (or Block).
Question: is there a rule of thumb / guideline on when to use Module, With or Block (for functions in packages)? Are there limitations compared to Module? The docs say that With is faster. I want to be able to defend my choice of Module (or another construct).
A more practical difference between Block and Module can be seen here:
Module[{x}, x]
Block[{x}, x]
(*
-> x$1979
x
*)
So if you wish to return, e.g., x, you can use Block. For instance,
Plot[D[Sin[x], x], {x, 0, 10}]
does not work; to make it work, one could use
Plot[Block[{x}, D[Sin[x], x]], {x, 0, 10}]
(of course this is not ideal, it is simply an example).
Another use is something like Block[{$RecursionLimit = 1000},...], which temporarily changes $RecursionLimit (Module would not have worked as it renames $RecursionLimit).
One can also use Block to block the evaluation of something, e.g.
Block[{Sin}, Sin[.5]] // Trace
(*
-> {Block[{Sin},Sin[0.5]],Sin[0.5],0.479426}
*)
i.e., it returns Sin[0.5], which is only evaluated after the Block has finished executing. This is because Sin inside the Block is just a symbol, rather than the sine function. You could even do something like
Block[{Sin = Cos[#/4] &}, Sin[Pi]]
(*
-> 1/Sqrt[2]
*)
(use Trace to see how it works). So you can use Block to locally redefine built-in functions, too:
Block[{Plus = Times}, 3 + 2]
(*
-> 6
*)
As you mentioned there are many things to consider and a detailed discussion is possible. But here are some rules of thumb that I apply the majority of the time:
Module[{x}, ...] is the safest and may be needed if either
There are existing definitions for x that you want to avoid breaking during the evaluation of the Module, or
There is existing code that relies on x being undefined (for example code like Integrate[..., x]).
Module is also the only choice for creating and returning a new symbol. In particular, Module is sometimes needed in advanced Dynamic programming for this reason.
If you are confident there aren't important existing definitions for x or any code relying on it being undefined, then Block[{x}, ...] is often faster. (Note that, in a project entirely coded by you, being confident of these conditions is a reasonable "encapsulation" standard that you may wish to enforce anyway, and so Block is often a sound choice in these situations.)
With[{x = ...}, expr] is the only scoping construct that injects the value of x inside Hold[...]. This is useful and important. With can be either faster or slower than Block depending on expr and the particular evaluation path that is taken. With is less flexible, however, since you can't change the definition of x inside expr.
Andrew has already provided a very comprehensive answer. I would just summarize by noting that Module is for defining local variables that can be redefined within the scope of a function definition, while With is for defining local constants, which can't be. You also can't define a local constant based on the definition of another local constant you have set up in the same With statement, or have multiple symbols on the LHS of a definition. That is, the following does not work.
With[{{a, b} = OptionValue /@ {opt1, opt2}}, ...]
I tend to set up complicated function definitions with a Module enclosing a With. I set up all the local constants I can first inside the With, e.g. the Length of the data passed to the function if I need that, then other local variables as needed. The reason is that With is a little faster if you genuinely do have constants, not variables.
I'd like to mention that the official documentation on the difference between Block and Module is available at http://reference.wolfram.com/mathematica/tutorial/BlocksComparedWithModules.html.

Fixing Combinatorica redefinition of Element

My code relies on version of Element which works like MemberQ, but when I load Combinatorica, Element gets redefined to work like Part. What is the easiest way to fix this conflict? Specifically, what is the syntax to remove Combinatorica's definition from DownValues? Here's what I get for DownValues[Element]
{HoldPattern[Combinatorica`Private`a_List \[Element] {Combinatorica`Private`index___}] :>
   Combinatorica`Private`a[[Combinatorica`Private`index]],
 HoldPattern[Private`x_ \[Element] Private`list_List] :>
   MemberQ[Private`list, Private`x]}
If your goal is to prevent Combinatorica from installing the definition in the first place, you can achieve this result by loading the package for the first time thus:
Block[{Element}, Needs["Combinatorica`"]]
However, this will almost certainly make any Combinatorica features that depend upon the definition fail (which may or may not be of concern in your particular application).
You can do several things. Let us introduce a convenience function
ClearAll[redef];
SetAttributes[redef, HoldRest];
redef[f_, code_] := (Unprotect[f]; code; Protect[f])
If you are sure about the order of definitions, you can do something like
redef[Element, DownValues[Element] = Rest[DownValues[Element]]]
If you want to delete definitions based on the context, you can do something like this:
redef[Element,
  DownValues[Element] =
    DeleteCases[DownValues[Element],
      rule_ /; Cases[rule,
          x_Symbol /; (StringSplit[Context[x], "`"][[1]] === "Combinatorica"),
          Infinity, Heads -> True] =!= {}]]
You can also use a softer way - reorder definitions rather than delete:
redef[Element, DownValues[Element] = RotateRight[DownValues[Element]]]
There are many other ways of dealing with this problem. Another one (which I already recommended) is to use UpValues, if this is suitable. The last one I want to mention here is to make a kind of custom dynamic scoping construct based on Block, and wrap it around your code. I personally find it the safest variant, in case you want strictly your definition to apply (because it does not care about the order in which the various definitions could have been created: it removes all of them and adds just yours). It is also safer in that, outside those places where you want your definitions to apply (by "places" I mean parts of the evaluation stack), other definitions will still apply, so this seems to be the least intrusive way. Here is how it may look:
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
ClearAll[elemExec];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
Example of use:
In[10]:= elemExec[Element[1,{1,2,3}]]
Out[10]= True
Edit:
If you need to automate the use of Block, here is an example package to show one way how this can be done:
BeginPackage["Test`"]
var;
f1;
f2;
Begin["`Private`"];
(* Implementations of your functions *)
var = 1;
f1[x_, y_List] := If[Element[x, y], x^2];
f2[x_, y_List] := If[Element[x, y], x^3];
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
(* The following part of the package is defined at the start and you don't
touch it any more, when adding new functions to the package *)
mainContext = StringReplace[Context[], x__ ~~ "Private`" :> x];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
postprocessDefs[context_String] :=
  Map[
    ToExpression[#, StandardForm,
      Function[sym, DownValues[sym] =
        DownValues[sym] /.
          Verbatim[RuleDelayed][lhs_, rhs_] :> (lhs :> elemExec[rhs])]] &,
    Select[Names[context <> "*"],
      ToExpression[#, StandardForm, DownValues] =!= {} &]];
postprocessDefs[mainContext];
End[]
EndPackage[]
You can load the package and look at the DownValues for f1 and f2, for example:
In[17]:= DownValues[f1]
Out[17]= {HoldPattern[f1[Test`Private`x_,Test`Private`y_List]]:>
Test`Private`elemExec[If[Test`Private`x\[Element]Test`Private`y,Test`Private`x^2]]}
The same scheme will also work for functions not in the same package. In fact, you could separate the bottom part (the code-processing package) into a package of its own, import it into any other package where you want to inject Block into your functions' definitions, and then just call something like postprocessDefs[mainContext], as above. You could make the function that creates the definitions inside Block (elementDef here) an extra parameter to a generalized version of elemExec, which would make this approach more modular and reusable.
If you want to be more selective about the functions where you want to inject Block, this can also be done in various ways. In fact, the whole Block-injection scheme can be made cleaner then, but it will require slightly more care when implementing each function, while the above approach is completely automatic. I can post the code which will illustrate this, if needed.
One more thing: for the less intrusive nature of this method you pay a price - dynamic scope (Block) is usually harder to control than lexically-scoped constructs. So, you must know exactly the parts of the evaluation stack where you want this to apply. For example, I would hesitate to inject Block into the definition of a higher-order function which takes other functions as parameters, since those functions may come from code that assumes different definitions (like, for example, Combinatorica` functions relying on the overloaded Element). This is not a big problem, it just requires care.
The bottom line of this seems to be: try to avoid overloading built-ins if at all possible. In this case you faced the definitions clash yourself, but it would be even worse if the one who faces this problem is a user of your package (maybe yourself a few months later) who wants to combine your package with another one (which happens to overload the same system functions as yours). Of course, it also depends on who the users of your package will be - only yourself, or potentially others as well. But in terms of design, and in the long term, you may be better off assuming the latter scenario from the start.
To remove Combinatorica's definition, use Unset (or its operator form =.). You can grab the pattern to unset from the DownValues output you show in the question:
Unprotect[Element];
Element[a_List, {index___}] =.
Protect[Element];
The worry would be, of course, that Combinatorica depends internally on this ill-conceived redefinition, but you have reason to believe this not to be the case, as the Information output from the redefined Element says:
"The use of the function Element in Combinatorica is now obsolete, though the function call Element[a, p] still gives the pth element of nested list a, where p is a list of indices."
HTH
I propose an entirely different approach than removing Element from DownValues. Simply use the full name of the Element function.
So, if the original is
System`Element[]
the default is now
Combinatorica`Element[]
because of loading the Combinatorica Package.
Just explicitly use
System`Element[]
wherever you need it. Of course check that System is the correct Context using the Context function:
Context[Element]
This approach ensures several things:
The Combinatorica Package will still work in your notebook, even if the Combinatorica Package is updated in the future
You won't have to redefine the Element function, as some have suggested
You can use the Combinatorica`Element function when needed
The only downside is having to explicitly write it every time.

Using function arguments as local variables

Something like this (yes, this doesn't deal with some edge cases - that's not the point):
int CountDigits(int num) {
    int count = 1;
    while (num >= 10) {
        count++;
        num /= 10;
    }
    return count;
}
What's your opinion about this? That is, using function arguments as local variables.
Both are placed on the stack, and performance-wise they are pretty much identical; I'm wondering about the best-practices aspect of this.
I feel like an idiot when I add an additional and quite redundant line to that function consisting of int numCopy = num, however it does bug me.
What do you think? Should this be avoided?
As a general rule, I wouldn't use a function parameter as a local processing variable, i.e. I treat function parameters as read-only.
In my mind, intuitively understandable code is paramount for maintainability, and modifying a function parameter to use it as a local processing variable tends to run counter to that goal. I have come to expect that a parameter will have the same value in the middle and at the bottom of a method as it does at the top. Plus, an aptly-named local processing variable may improve understandability.
Still, as @Stewart says, this rule is more or less important depending on the length and complexity of the function. For short, simple functions like the one you show, simply using the parameter itself may be easier to understand than introducing a new local variable (very subjective).
Nevertheless, if I were to write something as simple as countDigits(), I'd tend to use a remainingBalance local processing variable in lieu of modifying the num parameter as part of local processing; it just seems clearer to me.
Sometimes, I will modify a local parameter at the beginning of a method to normalize the parameter:
void saveName(String name) {
    name = (name != null ? name.trim() : "");
    ...
}
I rationalize that this is okay because:
a. it is easy to see at the top of the method,
b. the parameter maintains its original conceptual intent, and
c. the parameter is stable for the rest of the method.
Then again, half the time, I'm just as apt to use a local variable anyway, just to get a couple of extra finals in there (okay, that's a bad reason, but I like final):
void saveName(final String name) {
    final String normalizedName = (name != null ? name.trim() : "");
    ...
}
If, 99% of the time, the code leaves function parameters unmodified (i.e. mutating parameters are unintuitive or unexpected for this code base), then, during that other 1% of the time, dropping a quick comment about a mutating parameter at the top of a long or complex function could be a big boon to understandability:
int CountDigits(int num) {
    // num is consumed
    int count = 1;
    while (num >= 10) {
        count++;
        num /= 10;
    }
    return count;
}
P.S. :-)
parameters vs arguments
http://en.wikipedia.org/wiki/Parameter_(computer_science)#Parameters_and_arguments
These two terms are sometimes loosely used interchangeably; in particular, "argument" is sometimes used in place of "parameter". Nevertheless, there is a difference. Properly, parameters appear in procedure definitions; arguments appear in procedure calls.
So,
int foo(int bar)
bar is a parameter.
int x = 5;
int y = foo(x);
The value of x is the argument for the bar parameter.
It always feels a little funny to me when I do this, but that's not really a good reason to avoid it.
One reason you might potentially want to avoid it is for debugging purposes. Being able to tell the difference between "scratchpad" variables and the input to the function can be very useful when you're halfway through debugging.
I can't say it's something that comes up very often in my experience - and often you can find that it's worth introducing another variable just for the sake of having a different name, but if the code which is otherwise cleanest ends up changing the value of the variable, then so be it.
One situation where this can come up and be entirely reasonable is where you've got some value meaning "use the default" (typically a null reference in a language like Java or C#). In that case I think it's entirely reasonable to modify the value of the parameter to the "real" default value. This is particularly useful in C# 4 where you can have optional parameters, but the default value has to be a constant:
For example:
public static void WriteText(string file, string text, Encoding encoding = null)
{
    // Null means "use the default", which we would document to be UTF-8
    encoding = encoding ?? Encoding.UTF8;
    // Rest of code here
}
About C and C++:
My opinion is that using the parameter as a local variable of the function is fine because it is a local variable already. Why then not use it as such?
I feel silly too when copying the parameter into a new local variable just to have a modifiable variable to work with.
But I think this is pretty much a personal opinion. Do it as you like. If you feel silly copying the parameter just because of this, it indicates that your personality doesn't like it, and then you shouldn't do it.
If I don't need a copy of the original value, I don't declare a new variable.
IMO, mutating parameter values is not a bad practice in general; it depends on how you're going to use it in your code.
My team coding standard recommends against this because it can get out of hand. To my mind for a function like the one you show, it doesn't hurt because everyone can see what is going on. The problem is that with time functions get longer, and they get bug fixes in them. As soon as a function is more than one screen full of code, this starts to get confusing which is why our coding standard bans it.
The compiler ought to be able to get rid of the redundant variable quite easily, so it has no efficiency impact. It is probably just between you and your code reviewer whether this is OK or not.
I would generally not change the parameter value within the function. If at some point later in the function you need to refer to the original value, you still have it. In your simple case there is no problem, but if you add more code later, you may refer to num without realizing it has been changed.
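A minimal sketch of that advice applied to the question's function, shown in Python for brevity: do the work in a scratch variable so num keeps its original value throughout.
def count_digits(num):
    remaining = num        # scratch copy; num keeps its original value
    count = 1
    while remaining >= 10:
        count += 1
        remaining //= 10
    return count           # num is still intact here if later code needs it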
The code needs to be as self-sufficient as possible. What I mean by that is that you now have a dependency on what is being passed in as part of your algorithm. If another member of your team decides to change this to pass-by-reference, then you might have big problems.
The best practice is definitely to copy the inbound parameters if you expect them to be immutable.
I typically don't modify function parameters, unless they're pointers, in which case I might alter the value that's pointed to.
I think the best-practices of this varies by language. For example, in Perl you can localize any variable or even part of a variable to a local scope, so that changing it in that scope will not have any affect outside of it:
sub my_function
{
    my ($arg1, $arg2) = @_;        # get the arguments off the stack
    local $arg1;                   # changing $arg1 here will not be visible outside this scope
    $arg1++;
    local $arg2->{key1};           # only the key1 portion of the hashref referenced by $arg2 is localized
    $arg2->{key1}->{key2} = 'foo'; # this change is not visible outside the function
}
Occasionally I have been bitten by forgetting to localize a data structure that was passed by reference to a function, that I changed inside the function. Conversely, I have also returned a data structure as a function result that was shared among multiple systems and the caller then proceeded to change the data by mistake, affecting these other systems in a difficult-to-trace problem usually called action at a distance. The best thing to do here would be to make a clone of the data before returning it*, or make it read-only**.
* In Perl, see the function dclone() in the built-in Storable module.
** In Perl, see lock_hash() or lock_hash_ref() in the built-in Hash::Util module.
