Is there a built-in dictionary like globals() but only for user-made variables? - python-3.9

When making my tkinter app project, sometimes I need to quickly know the current values of the variables I made, for debugging purposes, so I make a line like this:
root.bind("< F1>", lambda e: print(f"var1: {var1} \n var2: {var2} \n [etc]"))
which basically prints out the values of the variables I specifically list in, when I press F1.
The problem with this is that manually typing each variable is a pain when you have a lot, so surely there is a smarter and automatic way of doing this. I'm guessing maybe there is a dictionary like "globals()" excluding the built-ins (meaning only the variables made by the user)? If there isn't such a thing, can someone give me an alternative?
Thanks

Other than globals() there is locals(), but I don't think that's what you want. You can, however, filter out the entries in globals() whose names start with two underscores:
>>> var = 1
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, 'var': 1}
>>> dict(filter(lambda i: not i[0].startswith("__"), globals().items()))
{'var': 1}
>>>
If this isn't what you want, you should define "only the variables made by the user" more clearly.

Ekrem's answer is good enough for most cases, but it will also pick up any imported modules, etc. There is simply no way of knowing what is user-made and what is imported, but you can filter globals() to remove functions, classes, modules, and other items that are clearly not user-defined variables. However, if you have any lines like from module import some_constant, then it's nearly impossible to categorize that name as user-defined without explicitly marking it as such (by adding it to a list or dictionary).
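As a rough illustration of that filtering idea, here is a minimal sketch (the helper name user_vars and the exact set of excluded types are my own assumptions, not a standard API):

import types

def user_vars(namespace=None):
    # Heuristic only: drop dunder names and anything that is a module,
    # function, or class. It cannot tell a `from module import some_constant`
    # apart from a variable you defined yourself.
    namespace = globals() if namespace is None else namespace
    return {
        name: value
        for name, value in namespace.items()
        if not name.startswith("__")
        and not isinstance(value, (types.ModuleType, types.FunctionType,
                                   types.BuiltinFunctionType, type))
    }

# Hypothetical tkinter usage, analogous to the F1 binding in the question:
# root.bind("<F1>", lambda e: print(user_vars()))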

Related

Can FreeMarker keep the original value if variable is not found?

I am using FreeMarker 2.3.x. Not all variables are expected to be defined, and undefined ones need to be output as-is. For example, the template is
${a}
${b}
And the data model is a=name. Then the output is expected to be
name
${b}
By using TemplateExceptionHandler.IGNORE_HANDLER, the output will not contain ${b}
For now I am adding a new entry b=${b} to the data model. It works, but it is kind of an ugly workaround, and I need to know exactly how many undefined variables there will be, which is a limitation.
Is there a setting or a way to do that?
I had the same question and found another way to skip variables you know are not in your data-model.
This template should print ${b}:
${a}
${r"${b}"}
result:
name
${b}
https://stackoverflow.com/a/5207658/2618186
I would love to see what exactly your reprinting function looks like, though. Might be nicer.
Maybe the least horrible way to solve this is at the data-model level. Only, you shouldn't add "b=${b}" and such manually; instead, you should use a custom TemplateHashModelEx as the data-model (the "root"), which does that automatically. (That has the annoying side effect that Configuration-level shared variables, if you have any, will be hidden by the data-model root.)
Anyway, even in theory, it's quite impossible to solve correctly. Consider: what if you have ${a + b} where a is present and b is missing? Well, it could be rendered as ${123 + b} then, but you see how things become involved. And then, what if you have ${a(b)}, where a is present but not b, and then in a later iteration a is missing but b is present...
As for TemplateExceptionHandler-s, while you could re-print the failing expression when it is an InvalidReferenceException, as in out.write("${" + te.getBlamedExpressionString() + "}");, that won't work for non-trivial interpolations. For the ${a + b} example, it would print ${b}, silently dropping the a +.

Using the output of functions in mathematica for further computation

Mathematica has a bevy of useful functions (Solve, NDSolve, etc.). These functions output in a very strange manner, i.e. {{v -> 2.05334*10^-7}}. The major issue is that there does not appear to be any way to use the output of these functions in the program; that is to say, all of these appear to be terminal functions where the output is for human viewing only.
I have tried multiple methods (Part, /., etc.) to get the output of functions into variables so the program can use them for further steps, but nothing works. The documentation says it can be done, but nothing they list actually functions. For example, if I try to use /. to move variables, it continues to treat the variable I assigned to as empty and does symbolic math with it instead of seeing the value. If I try to access the variable, e.g. with [[1]], it says the variable is not that deep.
The only method I have found is to put the later steps in separate blocks and copy-paste the output to continue evaluation. Is there any way to get the output of these functions into variables programmatically?
Solve etc. produce a list of replacement rules. So you need to apply these rules to the pattern to be replaced. For instance
solutions = x /. Solve[x^2 == 3, x]
gives you all the solutions in a list.
Here is a quick way to get variable names for the solutions:
x1 = solutions[[1]]
x2 = solutions[[2]]

Mathematica - can I define a block of code using a single variable?

It has been a while since I've used Mathematica, and I looked all throughout the help menu. I think one problem I'm having is that I do not know what exactly to look up. I have a block of code, with things like appending lists and doing basic math, that I want to define as a single variable.
My goal is to loop through a sequence and when needed I wanted to call a block of code that I will be using several times throughout the loop. I am guessing I should just put it all in a loop anyway, but I would like to be able to define it all as one function.
It seems like this should be an easy and straightforward procedure. Am I missing something simple?
This is the basic format for a function definition in Mathematica.
myFunc[par1_, par2_] := Module[{localVar1, localVar2},
  statement1; statement2; returnStatement]
Your question is not entirely clear, but I interpret that you want something like this:
facRand[] :=
({b, x} = Last@FactorInteger[RandomInteger[1*^12]]; Print[b])
Now every time facRand[] is called a new random integer is factored, global variables b and x are assigned, and the value of b is printed. This could also be done with Function:
Clear[facRand]
facRand =
({b, x} = Last@FactorInteger[RandomInteger[1*^12]]; Print[b]) &
This is also called with facRand[]. This form is standard, and allows addressing or passing the symbol facRand without triggering evaluation.

Any reason NOT to always use keyword arguments?

Before jumping into Python, I had started with some Objective-C / Cocoa books. As I recall, most functions required keyword arguments to be explicitly stated. Until recently I forgot all about this, and just used positional arguments in Python. But lately, I've run into a few bugs which resulted from improper positions - sneaky little things they were.
Got me thinking - generally speaking, unless there is a circumstance that specifically requires non-keyword arguments - is there any good reason NOT to use keyword arguments? Is it considered bad style to always use them, even for simple functions?
I feel like, as most of my 50-line programs have been regularly scaling to 500 or more lines, if I just get accustomed to always using keyword arguments, the code will be more easily readable and maintainable as it grows. Any reason this might not be so?
UPDATE:
The general impression I am getting is that it's a style preference, with many good arguments that they should generally not be used for very simple arguments, but are otherwise consistent with good style. Before accepting I just want to clarify, though - are there any specific non-style problems that arise from this method - for instance, significant performance hits?
There isn't any reason not to use keyword arguments apart from the clarity and readability of the code. The choice of whether to use keywords should be based on whether the keyword adds additional useful information when reading the code or not.
I follow the following general rule:
If it is hard to infer the meaning of the argument from the function name – pass it by keyword (e.g. I wouldn't want to have text.splitlines(True) in my code).
If it is hard to infer the order of the arguments, for example if you have too many arguments, or when you have independent optional arguments – pass it by keyword (e.g. funkyplot(x, y, None, None, None, None, None, None, 'red') doesn't look particularly nice).
Never pass the first few arguments by keyword if the purpose of the argument is obvious. You see, sin(2*pi) is better than sin(value=2*pi); the same is true for plot(x, y, z).
In most cases, stable mandatory arguments would be positional, and optional arguments would be keyword.
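As a small sketch of that convention (draw_line and its parameters are invented for illustration, not an existing API):

def draw_line(start, end, color="black", width=1, dashed=False):
    # Stable, mandatory inputs are positional; optional knobs are keywords.
    ...

draw_line((0, 0), (10, 10), color="red", dashed=True)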
There's also a possible difference in performance, because in every implementation keyword arguments would be slightly slower, but worrying about that would generally be premature optimisation, and the savings wouldn't be significant, so I don't think it's crucial for the decision.
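If you want to check that claim yourself, here is a quick micro-benchmark sketch (the function f is made up; exact numbers depend on your interpreter and version):

import timeit

def f(a, b, c):
    return a + b + c

positional = timeit.timeit("f(1, 2, 3)", globals=globals(), number=1_000_000)
keyword = timeit.timeit("f(a=1, b=2, c=3)", globals=globals(), number=1_000_000)
print(f"positional: {positional:.3f}s  keyword: {keyword:.3f}s")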
UPDATE: Non-stylistic concerns
Keyword arguments can do everything that positional arguments can, and if you're defining a new API there are no technical disadvantages apart from possible performance issues. However, you might run into small issues if you're combining your code with existing elements.
Consider the following:
If you make your function take keyword arguments, that becomes part of your interface.
You can't replace your function with another that has a similar signature but a different keyword for the same argument.
You might want to use a decorator or another utility on your function that assumes your function takes a positional argument. Unbound methods are an example of such a utility, because they always pass the first argument positionally, so cls.method(self=cls_instance) doesn't work even if there is an argument self in the definition.
None of these would be a real issue if you design your API well and document the use of keyword arguments, especially if you're not designing something that should be interchangeable with something that already exists.
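A small sketch of the replacement pitfall from the list above (area_v1 and area_v2 are made-up names): callers that pass by keyword break when a drop-in replacement renames a parameter.

def area_v1(width, height):
    return width * height

def area_v2(w, h):  # same positional signature, different parameter names
    return w * h

print(area_v1(width=3, height=4))    # 12
# area_v2(width=3, height=4)         # raises TypeError: unexpected keyword argument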
If your consideration is to improve readability of function calls, why not simply declare functions as normal, e.g.
def test(x, y):
    print("x:", x)
    print("y:", y)
And simply call them with the argument names stated explicitly, like so:
test(y=4, x=1)
Which obviously gives you the output:
x: 1
y: 4
or this exercise would be pointless.
This avoids having arguments be optional and needing default values (unless you want them to be, in which case just go ahead with the keyword arguments! :) and gives you all the versatility and improved readability of named arguments that are not limited by order.
Well, there are a few reasons why I would not do that.
If all your arguments are keyword arguments, it increases noise in the code and it might remove clarity about which arguments are required and which ones are optional.
Also, if I have to use your code, I might want to kill you !! (Just kidding), but having to type the name of all the parameters every time... not so fun.
Just to offer a different argument, I think there are some cases in which named parameters might improve readability. For example, imagine a function that creates a user in your system:
create_user("George", "Martin", "g.m#example.com", "payments#example.com", "1", "Radius Circle")
From that call alone, it is not at all clear what these values might mean, even though they are all required; with named parameters, however, it is always obvious:
create_user(
first_name="George",
last_name="Martin",
contact_email="g.m@example.com",
billing_email="payments@example.com",
street_number="1",
street_name="Radius Circle")
I remember reading a very good explanation of "options" in UNIX programs: "Options are meant to be optional, a program should be able to run without any options at all".
The same principle could be applied to keyword arguments in Python.
This kind of argument should allow a user to "customize" the function call, but a function should be able to be called without any keyword-value argument pairs at all.
Sometimes, things should be simple because they are simple.
If you force yourself to use keyword arguments on every function call, your code will soon become unreadable.
When Python's built-in compile() and __import__() functions gained keyword argument support, the same argument was made in favor of clarity. There appears to be no significant performance hit, if any.
Now, if you make your functions only accept keyword arguments (as opposed to passing the positional parameters using keywords when calling them, which is allowed), then yes, it'd be annoying.
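To make that distinction concrete, here is a short sketch (connect and its parameters are invented for illustration): in Python 3 a bare * in the signature makes everything after it keyword-only, which is different from merely choosing to pass arguments by name at the call site.

def connect(host, port, *, timeout=10, retries=3):
    # host and port may be passed positionally; timeout and retries must be named.
    return (host, port, timeout, retries)

connect("example.com", 8080, timeout=5)    # fine
# connect("example.com", 8080, 5)          # TypeError: takes 2 positional arguments but 3 were given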
I don't see the purpose of using keyword arguments when the meaning of the arguments is obvious.
Keyword args are good when you have long parameter lists with no well-defined order (that you can't easily come up with a clear scheme to remember); however, there are many situations where using them is overkill or makes the program less clear.
First, sometimes it is much easier to remember the order of keywords than the names of keyword arguments, and specifying the names of arguments could make it less clear. Take randint from scipy.random with the following docstring:
randint(low, high=None, size=None)
Return random integers x such that low <= x < high.
If high is None, then 0 <= x < low.
When wanting to generate a random int from [0,10), it's clearer to write randint(10) than randint(low=10), in my view. If you need to generate an array with 100 numbers in [0,10), you can probably remember the argument order and write randint(0, 10, 100). However, you may not remember the variable names (e.g., is the first parameter low, lower, start, min, or minimum?), and once you have to look up the parameter names, you might as well not use them (as you just looked up the proper order).
Also consider variadic functions (ones with a variable number of parameters that are anonymous themselves). E.g., you may want to write something like:
def square_sum(*params):
    sq_sum = 0
    for p in params:
        sq_sum += p*p
    return sq_sum
that can be applied to a bunch of bare parameters (square_sum(1, 2, 3, 4, 5) # gives 55). Sure, you could have written the function to take a single named iterable, def square_sum(params):, and called it like square_sum([1,2,3,4,5]), but that may be less intuitive, especially when there's no potential confusion about the argument name or its contents.
A mistake I often make is that I forget that positional arguments have to be specified before any keyword arguments when calling a function. If testing is a function, then:
testing(arg = 20, 56)
gives a SyntaxError message; something like:
SyntaxError: non-keyword arg after keyword arg
It is easy to fix, of course; it's just annoying. So in the case of few-line programs like the ones you mention, I would probably just go with positional arguments after giving nice, descriptive names to the parameters of the function. I don't know if what I mention is that big of a problem, though.
One downside I could see is that you'd have to think of a sensible default value for everything, and in many cases there might not be any sensible default value (including None). Then you would feel obliged to write a whole lot of error handling code for the cases where a kwarg that logically should be a positional arg was left unspecified.
Imagine writing stuff like this every time...
def logarithm(x=None):
    if x is None:
        raise TypeError("You can't do log(None), sorry!")

Fixing Combinatorica redefinition of Element

My code relies on version of Element which works like MemberQ, but when I load Combinatorica, Element gets redefined to work like Part. What is the easiest way to fix this conflict? Specifically, what is the syntax to remove Combinatorica's definition from DownValues? Here's what I get for DownValues[Element]
{HoldPattern[
Combinatorica`Private`a_List \[Element] \
{Combinatorica`Private`index___}] :>
Combinatorica`Private`a[[Combinatorica`Private`index]],
HoldPattern[Private`x_ \[Element] Private`list_List] :>
MemberQ[Private`list, Private`x]}
If your goal is to prevent Combinatorica from installing the definition in the first place, you can achieve this result by loading the package for the first time thus:
Block[{Element}, Needs["Combinatorica`"]]
However, this will almost certainly make any Combinatorica features that depend upon the definition fail (which may or may not be of concern in your particular application).
You can do several things. Let us introduce a convenience function
ClearAll[redef];
SetAttributes[redef, HoldRest];
redef[f_, code_] := (Unprotect[f]; code; Protect[f])
If you are sure about the order of definitions, you can do something like
redef[Element, DownValues[Element] = Rest[DownValues[Element]]]
If you want to delete definitions based on the context, you can do something like this:
redef[Element, DownValues[Element] =
DeleteCases[DownValues[Element],
rule_ /; Cases[rule, x_Symbol /; (StringSplit[Context[x], "`"][[1]] ===
"Combinatorica"), Infinity, Heads -> True] =!= {}]]
You can also use a softer way - reorder definitions rather than delete:
redef[Element, DownValues[Element] = RotateRight[DownValues[Element]]]
There are many other ways of dealing with this problem. Another one (which I already recommended) is to use UpValues, if this is suitable. The last one I want to mention here is to make a kind of custom dynamic scoping construct based on Block, and wrap it around your code. I personally find it the safest variant in case you want strictly your definition to apply (because it does not care about the order in which various definitions could have been created - it removes all of them and adds just yours). It is also safer in that outside those places where you want your definitions to apply (by "places" I mean parts of the evaluation stack), other definitions will still apply, so this seems to be the least intrusive way. Here is how it may look:
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
ClearAll[elemExec];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
Example of use:
In[10]:= elemExec[Element[1,{1,2,3}]]
Out[10]= True
Edit:
If you need to automate the use of Block, here is an example package to show one way how this can be done:
BeginPackage["Test`"]
var;
f1;
f2;
Begin["`Private`"];
(* Implementations of your functions *)
var = 1;
f1[x_, y_List] := If[Element[x, y], x^2];
f2[x_, y_List] := If[Element[x, y], x^3];
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
(* The following part of the package is defined at the start and you don't
touch it any more, when adding new functions to the package *)
mainContext = StringReplace[Context[], x__ ~~ "Private`" :> x];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
postprocessDefs[context_String] :=
Map[
ToExpression[#, StandardForm,
Function[sym,DownValues[sym] =
DownValues[sym] /.
Verbatim[RuleDelayed][lhs_,rhs_] :> (lhs :> elemExec[rhs])]] &,
Select[Names[context <> "*"], ToExpression[#, StandardForm, DownValues] =!= {} &]];
postprocessDefs[mainContext];
End[]
EndPackage[]
You can load the package and look at the DownValues for f1 and f2, for example:
In[17]:= DownValues[f1]
Out[17]= {HoldPattern[f1[Test`Private`x_,Test`Private`y_List]]:>
Test`Private`elemExec[If[Test`Private`x\[Element]Test`Private`y,Test`Private`x^2]]}
The same scheme will also work for functions not in the same package. In fact, you could separate
the bottom part (code-processing package) to be a package on its own, import it into any other
package where you want to inject Block into your functions' definitions, and then just call something like postprocessDefs[mainContext], as above. You could make the function which makes definitions inside Block (elementDef here) to be an extra parameter to a generalized version of elemExec, which would make this approach more modular and reusable.
If you want to be more selective about the functions where you want to inject Block, this can also be done in various ways. In fact, the whole Block-injection scheme can be made cleaner then, but it will require slightly more care when implementing each function, while the above approach is completely automatic. I can post the code which will illustrate this, if needed.
One more thing: for the less intrusive nature of this method you pay a price - dynamic scope (Block) is usually harder to control than lexically-scoped constructs. So, you must know exactly the parts of the evaluation stack where you want that to apply. For example, I would hesitate to inject Block into the definition of a higher-order function which takes some functions as parameters, since those functions may come from code that assumes other definitions (like, for example, Combinatorica` functions relying on the overloaded Element). This is not a big problem, just requires care.
The bottom line of this seems to be: try to avoid overloading built-ins if at all possible. In this case you faced this definitions clash yourself, but it would be even worse if the one who faces this problem is a user of your package (maybe yourself a few months later) who wants to combine your package with another one (which happens to overload the same system functions as yours). Of course, it also depends on who the users of your package will be - only yourself or potentially others as well. But in terms of design, and in the long term, you may be better off assuming the latter scenario from the start.
To remove Combinatorica's definition, use Unset (or the equivalent form =.). You can grab the pattern to unset from the Information output you show in the question:
Unprotect[Element];
Element[a_List, {index___}] =.
Protect[Element];
The worry would be, of course, that Combinatorica depends internally on this ill-conceived redefinition, but there is reason to believe this is not the case, as the Information output for the redefined Element says:
The use of the function Element in Combinatorica is now obsolete, though the function call Element[a, p] still gives the pth element of nested list a, where p is a list of indices.
HTH
I propose an entirely different approach than removing Element from DownValues. Simply use the full name of the Element function.
So, if the original is
System`Element[]
the default is now
Combinatorica`Element[]
because of loading the Combinatorica Package.
Just explicitly use
System`Element[]
wherever you need it. Of course check that System is the correct Context using the Context function:
Context[Element]
This approach ensures several things:
The Combinatorica Package will still work in your notebook, even if the Combinatorica Package is updated in the future
You won't have to redefine the Element function, as some have suggested
You can use the Combinatorica`Element function when needed
The only downside is having to explicitly write it every time.
