How to understand a few symbols and syntax constructs in Julia? - syntax

I am very new to the Julia language, so I started to read the documentation and all the built-in functions. Now I am studying a GitHub project for my work. Since I am more comfortable with Python, I tried to translate the Julia code into Python as best I understood it, but I ran into a few odd pieces of syntax that I didn't understand and got stuck on them. Can anyone point out the meaning of these constructs to me? Thanks in advance!
Syntax that I don't understand
These are the Julia code lines that I didn't understand, because I couldn't find them in the documentation either.
var1 = Tuple{Integer, Vector}[]
Here we declare the object var1. What's a real example of that? What's the Python version?
Also, if X::Matrix and n::Int, then what is the meaning of ? in the line below? How should I code this in Python?
K = [( i >= j ? dot(view(X,:,i), view(X,:,j)) : 0.0 )::Float64 for i=1:n, j=1:n]
How should we code this up in Python?
Also, I am not sure about the meaning of -> below:
for i=1:n
id_i = find(x -> x[1] == i, var1)
xi_i_list = map(x -> x[2], var1[id_i])
How should we translate this into Python?
Lastly, I just don't understand the meaning of .> below:
act= zeros(100)
alpha = zeros(10)
for i=1:100
idx = find(x::Tuple{Integer, Vector} -> x[1] == i, var1)
act[i] = sum(alpha[idx] .> 1e-3)
As a newbie, I am trying to understand the role of find() and map(). Ideally, I would like to be able to write the above Julia code in Python, but I have a hard time understanding the code. Can anyone give possible interpretations and the corresponding Python code, for learning purposes? Thanks in advance!

First of all, the Julia documentation offers a list of Noteworthy differences from Python. Now to each question:
var1 = Tuple{Integer, Vector}[]
Here we declare the object var1. What's a real example of that? What's the Python version?
Vector, which is sugar for Array{T,1} where T, means a 1D array with elements of any type.
Tuple{Integer, Vector} is thus a tuple with an Integer and a Vector, like (1, [1, 2]), for example.
var1 is just an empty vector of such tuples.
You can push! elements like the latter into var1 to create a "real" example:
julia> var1 = Tuple{Integer, Vector}[]
Tuple{Integer,Array{T,1} where T}[]
julia> push!(var1, (1, [1, 2]))
1-element Array{Tuple{Integer,Array{T,1} where T},1}:
(1, [1, 2])
julia> push!(var1, (2, [3.0, "foo", 4]))
2-element Array{Tuple{Integer,Array{T,1} where T},1}:
(1, [1, 2])
(2, Any[3.0, "foo", 4])
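For the Python side, since Python lists are untyped, a rough equivalent (a minimal sketch only) is simply an empty list to which you append (integer, list) tuples:
var1 = []                          # rough analogue of Tuple{Integer, Vector}[]
var1.append((1, [1, 2]))           # like push!(var1, (1, [1, 2]))
var1.append((2, [3.0, "foo", 4]))  # mixed element types are fine in a Python list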
what's the meaning of ?
You can type ? to access the "help" mode in the Julia REPL, and then ask it what ? is. From its documentation:
a ? b : c
Short form for conditionals; read "if a, evaluate b otherwise evaluate c". Also known as the ternary operator.
This syntax is equivalent to if a; b else c end, but is often used to emphasize the value b-or-c which is being used as part of a larger expression, rather than the side effects that evaluating b or c may have.
See the manual section on control flow for more details.
Examples
julia> x = 1; y = 2;
julia> println(x > y ? "x is larger" : "y is larger")
y is larger
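Applied to the K comprehension from the question, a rough NumPy sketch (assuming X is the question's matrix held as a NumPy array and n its number of columns; Julia's 1-based columns become 0-based here) would be:
import numpy as np

# K[i, j] = dot product of columns i and j of X when i >= j, else 0.0
K = np.array([[X[:, i] @ X[:, j] if i >= j else 0.0
               for j in range(n)]
              for i in range(n)])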
not sure about meaning of ->
This is just to create an anonymous function.
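So x -> x[1] == i is the same idea as Python's lambda x: x[0] == i (Julia indexes from 1). A rough Python sketch of that loop, assuming n is the question's integer and var1 the list of (integer, list) tuples built above:
for i in range(1, n + 1):
    # find(x -> x[1] == i, var1): positions of the tuples whose first element equals i
    id_i = [k for k, x in enumerate(var1) if x[0] == i]
    # map(x -> x[2], var1[id_i]): the second element (the vector) of each matching tuple
    xi_i_list = [var1[k][1] for k in id_i]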
I just don't understand the meaning of .>
This is just the element-by-element "greater than" operator >. See the documentation on dotted operators for more details.
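In the question's loop, alpha[idx] .> 1e-3 yields an array of Booleans and sum counts the true ones. With NumPy the comparison broadcasts the same way, so a rough sketch (reusing the index-finding idea from above) is:
import numpy as np

act = np.zeros(100)
alpha = np.zeros(10)
for i in range(1, 101):
    idx = [k for k, x in enumerate(var1) if x[0] == i]
    act[i - 1] = np.sum(alpha[idx] > 1e-3)   # how many entries of alpha[idx] exceed 1e-3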

Related

Prolog program to get an (integer) number as the sum of two integer squares, why does it not work?

I'm starting to learn Prolog and I want a program that, given an integer P, gives two integers A and B such that P = A² + B². If there are no values of A and B that satisfy this equation, false should be returned.
For example: if P = 5, it should give A = 1 and B = 2 (or A = 2 and B = 1) because 1² + 2² = 5.
I was thinking this should work:
giveSum(P, A, B) :- integer(A), integer(B), integer(P), P is A*A + B*B.
with the query:
giveSum(5, A, B).
However, it does not. What should I do? I'm very new to Prolog, so I'm still making a lot of mistakes.
Thanks in advance!
integer/1 is a non-monotonic predicate. It is not a relation that allows the reasoning you expect to apply in this case. To exemplify this:
?- integer(I).
false.
No integer exists, yes? Colour me surprised, to say the least!
Instead of such non-relational constructs, use your Prolog system's CLP(FD) constraints to reason about integers.
For example:
?- 5 #= A*A + B*B.
A in -2..-1\/1..2,
A^2#=_G1025,
_G1025 in 1..4,
_G1025+_G1052#=5,
_G1052 in 1..4,
B^2#=_G406,
B in -2..-1\/1..2
And for concrete solutions:
?- 5 #= A*A + B*B, label([A,B]).
A = -2,
B = -1 ;
A = -2,
B = 1 ;
A = -1,
B = -2 ;
etc.
CLP(FD) constraints are completely pure relations that can be used in the way you expect. See clpfd for more information.
Other things I noticed:
use_underscores_for_readability_as_is_the_convention_in_prolog instead of MixingTheCasesToMakePredicatesHardToRead.
use declarative names, avoid imperatives. For example, why call it give_sum? This predicate also makes perfect sense if the sum is already given. So, what about sum_of_squares/3, for example?
For efficiency's sake, Prolog implementers chose, many years ago, some compromises. Now, there is a chance your Prolog implements advanced integer arithmetic, as CLP(FD) does. If that is the case, mat's answer is perfect. But some Prologs (maybe a naive ISO-compliant Prolog processor) could complain about the missing label/1 and (#=)/2. So, here is a traditional Prolog solution; the technique is called generate and test:
giveSum(P, A, B) :-
( integer(P) -> between(1,P,A), between(1,P,B) ; integer(A),integer(B) ),
P is A*A + B*B.
between/3 is not an ISO builtin, but it's rather easier than (#=)/2 and label/1 to write :)
Anyway, please follow mat's advice and avoid 'imperative' naming. Often a description of the relation is better, because Prolog is just that: a relational language.
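For comparison only, the same generate-and-test idea can be sketched in Python (a rough illustration, not a substitute for the relational Prolog version):
def sum_of_squares(p):
    # generate candidate pairs between 1 and p, test the equation, keep those that hold
    return [(a, b) for a in range(1, p + 1)
                   for b in range(1, p + 1)
                   if a * a + b * b == p]

sum_of_squares(5)   # [(1, 2), (2, 1)]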

How do I find the maximum positive value in an array in Mathematica?

I have an array of expressions, each of which depends on a.
I want to find the minimal positive value, as it depends on a, without having to substitute for a.
For example, if the array is [a^2, 1-2a, 1], then the function, call it MinPositive, would return:
(MinPositive[a^2, 1-2a, 1]) /. a-> 0
0
(MinPositive[a^2, 1-2a, 1]) /. a-> 0.7
0.7^2
and so on.
Any ideas?
I would appreciate help to write the MinPositive function so that it can be used, for example, instead of the regular Min function.
Thanks.
Did you have something like this in mind? Return the expression that results in the min value.
minp[lst_, a_, v_] := (
  pos = Select[lst, ((# /. a -> v) > 0) &];
  Last@Sort[pos, ((#1 /. a -> v) > (#2 /. a -> v)) &])
minp[{a^2, 1 - 2 a, 1}, a, .2] -> a^2
minp[{a^2, 1 - 2 a, 1}, a, .48] -> 1-2 a
minp[{a^2, 1 - 2 a, 1}, a, 2] -> 1
The expression
[a^2, 1-2a, 1]
is not a well-formed Mathematica expression, perhaps you mean
{a^2, 1-2a, 1}
which is a valid expression for a list of 3 elements. Mathematica doesn't really do arrays as such, though lists can generally be used to model arrays.
On the other hand the expression
MinPositive[a^2, 1-2a, 1]
is a valid call to a function called MinPositive with 3 arguments.
All that to one side, I think you might be looking for a function call such as
MinPositive[{a^2, 1-2a, 1}/.a->0]
in which the value of 0 will be substituted for a inside the call to MinPositive but will not be applied outside that call.
It's not clear from your question whether you want help to write the MinPositive function; if you do, edit your question and make it clear. Further, your question title asks for the maximum positive value, while the body of your question refers to minima. You might want to sort that out too.
EDIT
I don't have Mathematica on this machine so I haven't checked this, but it should be close enough for you to finish off:
minPositive[lst_List] := Min[Select[lst,#>0&]]
which you would then call like this
minPositive[{a^2, 1-2a, 1}]
(NB: I avoid creating functions with a name with an initial capital letter.)
Or, considering your comment, perhaps you want something like
minPositive[lst_List, rl_Rule] := Min[Select[lst/.rl,#>0&]]
which you would call like this:
minPositive[{a^2, 1-2a, 1},a->2]
EDIT 2
The trouble, for you, with an expression such as
(MinPositive[a^2, 1-2a, 1]) /. a-> 0
is that the normal evaluation loop in Mathematica will cause the evaluation of the MinPositive function before the replacement rule is applied. How then can Mathematica figure out the minimum positive value in the list when a is set to a particular value?
Preventing evaluation of the arguments before the body of the function is called is achieved by setting the attributes of the function to HoldAll (prevents evaluation of all arguments), HoldFirst (prevents evaluation of the first argument only) or HoldRest (prevents evaluation of all but the first argument).
In addition, since "a" by itself is not an argument, you need to use Block to isolate it from (potential) definitions for "a",
so
SetAttributes[minPositive, HoldAll]
minPositive[lst_List] := Block[{a},Min[Select[lst /. a -> 0, # > 0 &]]]
and even if you explicitly set a to some other value, say
a=3
then
minPositive[{a^2, 1 - 2 a, 100}]
returns 9 as expected
HTH
yehuda

Is there a name for the function that returns a positionally-expanding version of its argument?

Consider splatter in this Python code:
def splatter(fn):
    return lambda (args): fn(*args)

def add(a, b):
    return a + b

list1 = [1, 2, 3]
list2 = [4, 5, 6]
print map(splatter(add), zip(list1, list2))
Mapping an n-ary function over n zipped sequences seems like a common enough operation that there might be a name for this already, but I have no idea where I'd find that. It vaguely evokes currying, and it seems like there are probably other related argument-centric HOFs that I've never heard of. Does anyone know if this is a "well-known" function? When discussing it I am currently stuck with the type of awkward language used in the question title.
Edit
Wow, Python's map does this automatically. You can write:
map(add, list1, list2)
And it will do the right thing, saving you the trouble of splattering your function. The only difference is that zip returns a list whose length is the length of its shortest argument, whereas map extends shorter lists with None.
I think zipWith is the function that you are searching for (this name is at least used in Haskell). It is even a bit more general. In Haskell, zipWith is defined as follows (where the first line is just the type):
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith f (a:as) (b:bs) = f a b : zipWith f as bs
zipWith _ _ _ = []
And your example would be something like
zipWith (+) [1, 2, 3] [4, 5, 6]
Since I do not know Python very well, I can only point to "zipWith analogue in Python?".
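In Python, a rough zipWith analogue (a minimal sketch, using a hypothetical zip_with helper) might look like:
def zip_with(f, xs, ys):
    # apply the two-argument function f pairwise, stopping at the shorter list
    return [f(x, y) for x, y in zip(xs, ys)]

zip_with(lambda a, b: a + b, [1, 2, 3], [4, 5, 6])   # [5, 7, 9]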
I randomly saw this in my list of "Questions asked," and was surprised that I now know the answer.
There are two interpretations of the function that I asked.
The first was my intent: to take a function that takes a fixed number of arguments and convert it into a function that takes those arguments as a fixed-size list or tuple. In Haskell, the function that does this operation is called uncurry.
uncurry :: (a -> b -> c) -> ((a, b) -> c)
(Extra parens for clarity.)
It's easy to imagine extending this to functions of more than two arguments, though it can't be expressed in Haskell. But uncurry3, uncurry4, etc. would not be out of place.
So I was right that it "vaguely evokes currying," as it is really the opposite.
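In Python terms, a minimal sketch of this first interpretation (a hypothetical uncurry helper, essentially the splatter from the question) is:
def uncurry(f):
    # turn f(a, b) into a function that takes a single (a, b) tuple
    return lambda pair: f(*pair)

add_pair = uncurry(lambda a, b: a + b)
add_pair((1, 2))   # 3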
The second interpretation is to take a function that takes an intentionally variable number of arguments and return a function that takes a single list.
Because splat is so weird as a syntactic construct in Python, this is hard to reason about.
But if we imagine, say, JavaScript, which has a first-class named function for "splatting:"
varFn.apply(null, args)
var splatter = function(f) {
    return function(arg) {
        return f.apply(null, arg);
    };
};
Then we could rephrase that as merely a partial application of the "apply" function:
var splatter = function(f) {
    return Function.prototype.apply.bind(f, null);
};
Or, using Underscore's partial, we can come up with the point-free definition:
var splatter = _.partial(Function.prototype.bind.bind(Function.prototype.apply), _, null)
Yes, that is a nightmare.
(The alternative to _.partial requires defining some sort of swap helper and would come out even less readable, I think.)
So I think that the name of this operation is just "a partial application of apply", or in the Python case it's almost like a section of the splat operator -- if splat were an "actual" operator.
But the particular combination of uncurry, zip, and map in the original question is exactly zipWith, as chris pointed out. In fact, HLint by default includes a rule to replace this complex construct with a single call to zipWith.
I hope that clears things up, past Ian.

Specifics of usage and internal work of *Set* functions

I just noticed one undocumented feature of the internal workings of *Set* functions in Mathematica.
Consider:
In[1]:= a := (Print["!"]; a =.; 5);
a[b] = 2;
DownValues[a]
During evaluation of In[1]:= !
Out[3]= {HoldPattern[a[b]] :> 2}
but
In[4]:= a := (Print["!"]; a =.; 5);
a[1] = 2;
DownValues[a]
During evaluation of In[4]:= !
During evaluation of In[4]:= Set::write: Tag Integer in 5[1] is Protected. >>
Out[6]= {HoldPattern[a[b]] :> 2}
What is the reason for this difference? Why is a evaluated even though Set has the HoldFirst attribute? For which purposes is such behavior useful?
And note also this case:
In[7]:= a := (Print["!"]; a =.; 5)
a[b] ^= 2
UpValues[b]
a[b]
During evaluation of In[7]:= !
Out[8]= 2
Out[9]= {HoldPattern[5[b]] :> 2}
Out[10]= 2
As you see, we get a working definition for 5[b], avoiding the Protected attribute of the tag Integer, which causes an error in usual cases:
In[13]:= 5[b] = 1
During evaluation of In[13]:= Set::write: Tag Integer in 5[b] is Protected. >>
Out[13]= 1
The other way to avoid this error is to use TagSet*:
In[15]:= b /: 5[b] = 1
UpValues[b]
Out[15]= 1
Out[16]= {HoldPattern[5[b]] :> 1}
Why do these features exist?
Regarding my question of why we can write a := (a =.; 5); a[b] = 2 while we cannot write a := (a =.; 5); a[1] = 2: in reality, in Mathematica 5 we could not write a := (a =.; 5); a[b] = 2 either:
In[1]:=
a:=(a=.;5);a[b]=2
From In[1]:= Set::write: Tag Integer in 5[b] is Protected. More...
Out[1]=
2
(The above is copied from Mathematica 5.2)
We can see what happens internally in new versions of Mathematica when we evaluate a := (a =.; 5); a[b] = 2:
In[1]:= a:=(a=.;5);
Trace[a[b]=2,TraceOriginal->True]
Out[2]= {a[b]=2,{Set},{2},a[b]=2,{With[{JLink`Private`obj$=a},RuleCondition[$ConditionHold[$ConditionHold[JLink`CallJava`Private`setField[JLink`Private`obj$[b],2]]],Head[JLink`Private`obj$]===Symbol&&StringMatchQ[Context[JLink`Private`obj$],JLink`Objects`*]]],{With},With[{JLink`Private`obj$=a},RuleCondition[$ConditionHold[$ConditionHold[JLink`CallJava`Private`setField[JLink`Private`obj$[b],2]]],Head[JLink`Private`obj$]===Symbol&&StringMatchQ[Context[JLink`Private`obj$],JLink`Objects`*]]],{a,a=.;5,{CompoundExpression},a=.;5,{a=.,{Unset},a=.,Null},{5},5},RuleCondition[$ConditionHold[$ConditionHold[JLink`CallJava`Private`setField[5[b],2]]],Head[5]===Symbol&&StringMatchQ[Context[5],JLink`Objects`*]],{RuleCondition},{Head[5]===Symbol&&StringMatchQ[Context[5],JLink`Objects`*],{And},Head[5]===Symbol&&StringMatchQ[Context[5],JLink`Objects`*],{Head[5]===Symbol,{SameQ},{Head[5],{Head},{5},Head[5],Integer},{Symbol},Integer===Symbol,False},False},RuleCondition[$ConditionHold[$ConditionHold[JLink`CallJava`Private`setField[5[b],2]]],False],Fail},a[b]=2,{a[b],{a},{b},a[b]},2}
I was very surprised to see calls to Java in such a pure language-related operation as assigning a value to a variable. Is it reasonable to use Java for such operations at all?
Todd Gayley (Wolfram Research) has explained this behavior:
At the start, let me point out that in Mathematica 8, J/Link no longer overloads Set. An internal kernel mechanism was created that, among other things, allows J/Link to avoid the need for special, er, "tricks" with Set.
J/Link has overloaded Set from the very beginning, almost twelve years ago. This allows it to support this syntax for assigning a value to a Java field:
javaObject@field = value
The overloaded definition of Set causes a slowdown in assignments of the form
_Symbol[_Symbol] = value
Of course, assignment is a fast operation, so the slowdown is small in real terms. Only highly specialized types of programs are likely to be significantly affected.
The Set overload does not cause a call to Java on assignments that do not involve Java objects (this would be very costly). This can be verified with a simple use of TracePrint on your a[b]=c.
It does, as you note, make a slight change in the behavior of assignments that match _Symbol[_Symbol] = value. Specifically, in f[_Symbol] = value, f gets evaluated twice. This can cause problems for code with the following (highly unusual) form:
f := SomeProgramWithSideEffects[]
f[x] = 42
I cannot recall ever seeing "real" code like this, or seeing a problem reported by a user.
This is all moot now in 8.0.
Taking the case of UpSet first, this is expected behavior. One can write:
5[b] ^= 1
The assignment is made to b not the Integer 5.
Regarding Set and SetDelayed, while these have Hold attributes, they still internally evaluate expressions. This allows things such as:
p = n : (_List | _Integer | All);
f[p] := g[n]
Test:
f[25]
f[{0.1, 0.2, 0.3}]
f[All]
g[25]
g[{0.1, 0.2, 0.3}]
g[All]
One can see that heads are also evaluated. This is useful at least for UpSet:
p2 = head : (ff | gg);
p2[x] ^:= Print["Echo ", head];
ff[x]
gg[x]
Echo ff
Echo gg
It is easy to see that this also happens with Set, but it is less clear to me how this would be useful:
j = k;
j[5] = 3;
DownValues[k]
(* Out= {HoldPattern[k[5]] :> 3} *)
My analysis of the first part of your question was wrong. I cannot at the moment see why a[b] = 2 is accepted and a[1] = 2 is not. Perhaps at some stage of assignment the second one appears as 5[1] = 2 and a pattern check sets off an error because there are no Symbols on the LHS.
The behaviour you show appears to be a bug in 7.0.1 (and possibly earlier) that was fixed in Mathematica 8. In Mathematica 8, both of your original a[b] = 2 and a[1] = 2 examples give the Set::write ... is protected error.
The problem appears to stem from the JLink-related down-value of Set that you identified. That rule implements the JLink syntax used to assign a value to the field of a Java object, e.g. object@field = value.
Set in Mathematica 8 does not have that definition. We can forcibly re-add a similar definition, thus:
Unprotect[Set]
HoldPattern[sym_Symbol[arg_Symbol]=val_] :=
With[{obj=sym}
, setField[obj[arg], val] /; Head[obj] === Symbol && StringMatchQ[Context[obj],"Something`*"]
]
After installing this definition in Mathematica 8, it now exhibits the same inconsistent behaviour as in Mathematica 7.
I presume that JLink object field assignment is now accomplished through some other means. The problematic rule looks like it potentially adds costly Head and StringMatchQ tests to every evaluation of the form a[b] = .... Good riddance?

How to store a bunch of equations / constants to solve for any element's equation or numerical value

Let's say that the problems are fairly simple - something that a pre-degree theoretical physics student would solve. And the student does the hardest part of the task - functional reading: parsing linguistically free-form text to get the input and output variables and the input variable values.
For example: a problem about kinematic equations, where there are variables {a,d,t,va,vf} and a few functions that describe how they depend on each other. So, using skills acquired from playing with fitting blocks where they fit, you play with the equations to get the output variable you were looking for.
In any case, there are exactly 2 possible outputs you might want, and they are (with a working example):
1) Equation for that variable
Physics[have_, find_] := Solve[Flatten[{
d == vf * t - (a * t^2) /2, (* etc. *)
have }], find]
Physics[True, {d}]
{{d -> (1/2)*(2*t*vf - a*t^2)}}
2) Exact or general numerical value for that variable
Physics[have_, find_] := Solve[Flatten[{
d == vf * t - (a * t^2) /2, (* etc. *)
have }], find]
Physics[{t == 9.7, vf == -104.98, a == -9.8}, {d}]
{{d->-557.265}}
I am not sure that I am approaching the problem correctly.
I think that I would probably prefer an approach like
In[1]:= Physics[find_, have_:{}] := Solve[
{d == vf*t - (a*t^2)/2 (* , etc *)} /. have, find]
In[2]:= Physics[d]
Out[2]= {{d -> 1/2 (-a t^2 + 2 t vf)}}
In[2]:= Physics[d, {t -> 9.7, vf -> -104.98, a -> -9.8}]
Out[2]= {{d -> -557.265}}
Where the have variables are given as a list of replacement rules.
As an aside, in these types of physics problems, a nice thing to do is define your physical constants like
N[g] = -9.8;
which produces a NValues for g. Then
N[tf] = 9.7;N[vf] = -104.98;
Physics[d, {t -> tf, vf -> vf, a -> g}]
%//N
produces
{{d->1/2 (-g tf^2+2 tf vf)}}
{{d->-557.265}}
Let me show some advantages of Simon's approach:
You are at least approaching this problem reasonably. I see a fine general-purpose function and I see you're getting results, which is what matters primarily. There is no 'correct' solution, since there might be a large range of acceptable solutions. In some scenarios some solutions may be preferred over others, for instance because of performance, while that might be the other way around in other scenarios.
The only slight problem I have with your example is the dubious parameter name 'have'.
Why do you think this would be a wrong approach?
