Looking at a simple example of Cython code, I wonder whether the CPython interpreter needed some hard-coded hacks to understand Cython syntax (e.g. cdef int N), or whether it is implemented using concepts of standard Python syntax (e.g. functions, classes, operators, etc.).
I mean, I can roughly imagine how the Cython backend can be implemented (C code generation, compilation, etc.), but I don't understand how the frontend syntax can be integrated within the standard Python interpreter without touching the interpreter itself (i.e. without extending the Python language beyond the standard).
What is cdef ?
In other words, what is cdef, actually? Is it a function, an operator, or some new keyword? I would understand N = cdef(int), which would create an instance of some class derived from int. But written like that, I don't see how these 3 tokens (1 cdef, 2 int, 3 N) even interact.
Does the for-loop actually iterate?
If you write code like this:
cdef int i, N = 1000000
cdef double f = 0
for i in xrange(N):
    f += (i*i)
print f
The loop for i in xrange(N): is a normal Python loop. What prevents the Python interpreter from uselessly performing 1000000 iterations before Cython compiles it into C code?
Does it work something like this:
N is an instance of some class cdef. xrange calls N.__int__(), which returns 1, so the loop passes only once. The expression f += (i*i) contains only cdef objects, so Cython can redefine __add__(), __set__() and __get__() in a way that generates C code for f += (i*i).
But then I still don't see how the construct for i in xrange() is sent to Cython to generate C code from it.
Anyway, it seems quite complicated and fragile, so perhaps it must work otherwise.
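One quick way to convince yourself that CPython itself never parses cdef: it is simply a syntax error to the standard interpreter, so Cython must use its own parser and compiler on the .pyx source before CPython is ever involved. A minimal check in plain Python:

```python
# `cdef` is not a Python keyword; handing Cython syntax to CPython's
# own compiler fails immediately, which shows that Cython ships its
# own parser rather than hooking into the standard interpreter.
try:
    compile("cdef int N = 10", "<pyx>", "exec")
except SyntaxError as err:
    print("CPython rejects it:", type(err).__name__)
```

This also answers the loop question: the for loop is never executed by CPython at all; the whole file is translated to C as text before anything runs.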
Recently I started learning Julia and have studied a lot of examples. I noticed the @ sign/syntax a couple of times. Here is an example:
using DataFrames
using Statistics
df = DataFrame(x = rand(10), y = rand(10))
@df df scatter(:x, :y)
This will simply create a scatterplot. You could also use scatter(df[!, :x], df[!, :y]) without the @ and get the same result. I can't find any documentation about this syntax. So I was wondering what this syntax is and when you should use it in Julia?
When you do not know how something works, try typing ? followed by what you want to know in the Julia REPL.
For example, typing ?@ and pressing ENTER yields:
The at sign followed by a macro name marks a macro call. Macros provide the ability to include generated code in the
final body of a program. A macro maps a tuple of arguments, expressed as space-separated expressions or a
function-call-like argument list, to a returned expression. The resulting expression is compiled directly into the
surrounding code. See Metaprogramming for more details and examples.
Macros are a very advanced language concept. They generally take code as an argument and generate new code that gets compiled.
Consider this macro:
macro myshow(expr)
    es = string(expr)
    quote
        println($es, " = ", $expr)
    end
end
Which can be used as:
julia> @myshow 2+2
2 + 2 = 4
To understand what is really going on, try @macroexpand:
julia> @macroexpand @myshow 2+2
quote
    Main.println("2 + 2", " = ", 2 + 2)
end
You can see that one Julia command (2+2) has been wrapped with additional Julia code. You can try @macroexpand with other macros that you are using.
For more information see the Metaprogramming section of Julia manual.
What is @ in Julia?
Macros have a dedicated character in Julia's syntax: the @ (at-sign), followed by the unique name declared in a macro NAME ... end block.
So in the example you noted, @df is a macro, and df is its name.
Read here about macros. This concept belongs to the metaprogramming feature of Julia. I guess you used the StatsPlots.jl package, since @df is one of its prominent tools; using @macroexpand, you can investigate the functionality of the given macro:
julia> using StatsPlots
julia> @macroexpand @df df scatter(:x, :y)
:(((var"##312"->begin
((var"##x#313", var"##y#314"), var"##315") = (StatsPlots).extract_columns_and_names(var"##312", :x, :y)
(StatsPlots).add_label(["x", "y"], scatter, var"##x#313", var"##y#314")
end))(df))
I have a tricky question.
So, I know that GHC will 'cache' (for lack of a better term) top-level definitions and only compute them once, e.g.:
myList :: [Int]
myList = fmap (*10) [0..10]
Even if I use myList in several spots, GHC notices the value has no params, so it can share it and won’t ‘rebuild’ the list.
I want to do that, but with a computation which depends on a type-level context; a simplified example is:
dependentList :: forall n. (KnownNat n) => [Natural]
dependentList = [0 .. natVal (Proxy @n)]
So the interesting thing here, is that there isn’t a ‘single’ cacheable value for dependentList; but once a type is applied it reduces down to a constant, so in theory once the type-checker runs, GHC could recognize that several spots all depend on the ‘same’ dependentList; e.g. (using TypeApplications)
main = do
  print (dependentList @5)
  print (dependentList @10)
  print (dependentList @5)
My question is: will GHC recognize that it can share both of the @5 lists? Or does it compute each one separately? Technically it would even be possible to compute those values at compile time rather than run time; is it possible to get GHC to do that?
My case is a little more complicated, but should follow the same constraints as the example, however my dependentList-like value is intensive to compute.
I’m not at all opposed to doing this using a typeclass if it makes things possible; does GHC cache and re-use typeclass dictionaries? Maybe I could bake it into a constant in the typeclass dict to get caching?
Ideas anyone? Or anyone have reading for me to look at for how this works?
I'd prefer to do this in such a way that the compiler can figure it out rather than using manual memoization, but I'm open to ideas :)
Thanks for your time!
As suggested by @crockeea, I ran an experiment; here's an attempt using a top-level constant with a polymorphic ambiguous type variable, and also an actual constant just for fun; each one contains a 'trace':
dependant :: forall n. KnownNat n => Natural
dependant = trace ("eval: " ++ show (natVal (Proxy @n))) (natVal (Proxy @n))
constantVal :: Natural
constantVal = trace "constant val: 1" 1
main :: IO ()
main = do
  print (dependant @1)
  print (dependant @1)
  print constantVal
  print constantVal
The results are unfortunate:
λ> main
eval: 1
1
eval: 1
1
constant val: 1
1
1
So clearly it re-evaluates the polymorphic constant each time it's used.
But if we write the constants into a typeclass (still using ambiguous types), it appears that it will resolve the dictionary values only once per instance, which makes sense when you know that GHC passes the same dictionary for the same class instance. It does, of course, re-run the code for different instances:
class DependantClass n where
  classNat :: Natural

instance (KnownNat n) => DependantClass (n :: Nat) where
  classNat = trace ("dependant class: " ++ show (natVal (Proxy @n))) (natVal (Proxy @n))
main :: IO ()
main = do
  print (classNat @1)
  print (classNat @1)
  print (classNat @2)
Result:
λ> main
dependant class: 1
1
1
dependant class: 2
2
As far as getting GHC to do this at compile time, it looks like you'd do that with lift from TemplateHaskell, using this technique.
Unfortunately, you can't use this within the typeclass definition, since TH will complain that the @n must be imported from a different module (yay TH) and isn't known concretely at compile time. You CAN do it wherever you USE the typeclass value, but it will be evaluated once per lift, and you have to lift EVERYWHERE you use it to get the benefit; pretty impractical.
I am developing a mathematical model using the Gurobi solver in Python.
I get the following error while running it:
SyntaxError: Generator expression must be parenthesized if not sole argument
My code for the constraint is:
for s in S:
    m.addConstr(sum(x[s,s0,c,i] for s0 in S0 for c in C for i in D,s!=p) == 1, 'C_3')
First of all, the error comes from that comma: ,s!=p.
I just emulated your code with a model I'm working on, and I got the same error. If you look around (e.g. at if/else in a list comprehension), you will see that the only mistake is that the iterator within the generator wasn't specified correctly. That means you have to use an if clause to achieve what you wanted:
for s in S:
    m.addConstr(
        quicksum(x[s,s0,c,i] for s0 in S0 for c in C for i in D if s != p) == 1,
        'C_3_' + str(s))
By the way, as included in the code, you should use quicksum instead of sum. Furthermore, I would suggest trying to change the order of the iterators; in other words, it's not the same for a computer to enumerate a list of 5 elements 1000 times as to enumerate a list of 1000 elements 5 times, and this is something quite important for performance in Python.
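The same rule holds in plain Python, independent of Gurobi: a condition inside a generator expression must be written as an if clause, never appended after a comma. A small stand-alone illustration (the index sets and the filter variable here are made up for the demo):

```python
# Hypothetical small index sets, just to show the generator syntax.
S0, C, D = [1, 2], [3], [4, 5]
p = 1

# The filter goes in an `if` clause inside the generator expression:
total = sum(s0 * c * i for s0 in S0 for c in C for i in D if s0 != p)
print(total)  # 2*3*4 + 2*3*5 = 54
```

Writing `for i in D, s0 != p` instead would be rejected by the parser, since the comma makes the argument an unparenthesized tuple containing a generator.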
As a side note, I got into this question while looking for this:
TypeError: unsupported operand type(s) for +: 'generator' and 'generator'
I am concerned about writing self-modifying code in Ruby. And by self-modifying, I mean being able to write functions that take a code block as an input value and output another code block based on it. (I am not asking about basics such as redefining methods at runtime.)
What I might want to do is, for example, having the following block,
_x_ = lambda { |a, b, c, d| b + c }
one can notice that arguments a and d are not used in the body at all, so I would like a function, e.g. #strip, to remove them,
x = _x_.strip
which should produce the same result as writing:
x = lambda { |b, c| b + c }
Now in Lisp, this would be easy, since Lisp code is easily manipulable data. But I do not know how to manipulate Ruby code. I can parse it, e.g. by
RubyVM::InstructionSequence.disassemble( x )
But how, based on this, do I write a modified block? Other examples of what I would want to do are, e.g.,
y = lambda { A + B }
y.deconstantize
# should give block same as saying
lambda { |_A, _B| _A + _B }
So far in Ruby, I have never encountered a situation where I had to concede that something is not possible. But this time, my gut feeling tells me that I might have encountered the fundamental weakness of beautifully structured code vs. code with little syntax to speak of (which would be Lisp). Please enlighten me.
Detecting whether a block variable is used or not is a complicated task, and you seem to be saying that you can do that by using RubyVM. So the question seems to be asking how to change the arity of the code.
If you have:
_x_ = ->a, b, c, d{b + c}
and suppose you were able to use RubyVM and come to know that a and d are not used, so you want to create
x = ->b, c{b + c}
out of _x_. Then, that is simple:
x = ->b, c{_x_.call(nil, b, c, nil)}
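As a side note, Ruby does let you introspect a Proc's parameter list at runtime, even though it offers no portable way to rewrite the body itself; that is why the wrapper trick above only changes the arity, not the code. A quick look at the standard Proc#parameters and Proc#arity methods:

```ruby
# Inspecting a lambda's signature with standard Proc methods.
f = ->(a, b, c, d) { b + c }
p f.parameters  # [[:req, :a], [:req, :b], [:req, :c], [:req, :d]]
p f.arity       # 4
```

Whether a given parameter is actually used in the body is a separate question, which is where the RubyVM disassembly from the question comes in.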
Boris, do you necessarily have to rely on Ruby to begin with here?
Why not just create your own situation-specific language that the chemists can use, just for the purpose of expressing their formulas in the most convenient way? Then you create a simple parser and compiler for this "chemical expression language".
What I mean is this parser and compiler will parse and compile the expressions the chemists write in their Ruby code. Then you could have:
ChemicalReaction.new(..., "[ATP] * [GDP] * NDPK_constant")
Voila: ultimate flexibility.
That's the approach I would take if usability is your main concern. Already writing out "lambda" seems like an unnecessarily cumbersome thing to me here, if all you want to do is express some domain-specific formula in the most compact way possible.
How can I use the same random number generator in my "Python with numpy" code as my C++0x code?
I am currently using
std::ranlux64_base_01
in C++ and
numpy.random.RandomState(10)
in Python.
I exposed C++'s random number generator:
typedef std::ranlux64_base_01 RNG;
RNG g_rng;
...
class_<RNG>("RNG");
scope().attr("g_rng") = g_rng;
How do I use it with Python's methods that take a numpy.random?
There are 2 ways:
The first is to use Python's random number generator from C++. It will probably look something like this:
boost::python::object randmod = boost::python::import("numpy.random");
boost::python::object randfunc = randmod.attr("RandomState");
randfunc(10);
The second is to wrap and expose the C++ function so that it can be used from Python. The code for this is left as an exercise for the student.
Edit:
Once you have exported the C++ function, you would have to make a Python object that mimics the interface of numpy.random.RandomState, using the C++ function for its random bits. This is probably more work than you want to do. I have not used numpy, but from the docs it looks like the RandomState object is non-trivial.
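To give a feel for the second approach, here is a rough Python-side sketch of such a mimicking object. Everything here (the class name, the single uniform-function hook, the use of random.random as a stand-in for the exported C++ generator) is an assumption for illustration; a real RandomState replacement would need to implement many more methods:

```python
import random

class CppRandomAdapter:
    """Minimal sketch mimicking a tiny slice of the
    numpy.random.RandomState interface, backed by any callable
    that returns a float in [0, 1). Hypothetical demo API."""

    def __init__(self, uniform_fn):
        # In the real setup this would be the boost::python-exposed
        # C++ generator; random.random stands in for it below.
        self._uniform = uniform_fn

    def random_sample(self, size=None):
        if size is None:
            return self._uniform()
        return [self._uniform() for _ in range(size)]

rng = CppRandomAdapter(random.random)
samples = rng.random_sample(3)
print(len(samples), all(0.0 <= s < 1.0 for s in samples))  # 3 True
```

The calling Python code would then accept this adapter anywhere it only uses the methods you implemented, which is exactly why the approach is practical only if the consuming code touches a small part of the RandomState surface.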