Hooking into rand used to be easier in the old days... I think I followed the description in the docs, but it doesn't seem to like a sampler that returns an array:
using Random

struct Shell{N}
    r0::Float64
    r1::Float64
end

Base.eltype(::Type{Shell{N}}) where {N} = Array{Float64, N}

function Random.rand(rng::Random.AbstractRNG, d::Random.SamplerTrivial{Shell{N}}) where {N}
    # ignore the correctness of the sampling algorithm for now :)
    shell = d[]
    Δ = shell.r1 - shell.r0
    θ = Δ .* randn(N)
    r = shell.r0 .+ θ .* .√rand(N)
end
Test:
julia> rand(Shell{2}(0, 1))
2-element Array{Float64,1}:
0.5165139561555491
0.035180151872393726
julia> rand(Shell{2}(0, 1), 2)
ERROR: MethodError: no method matching Array{Float64,2}(::Array{Float64,1})
Closest candidates are:
Array{Float64,2}(::AbstractArray{S,N}) where {T, N, S} at array.jl:498
Array{Float64,2}(::UndefInitializer, ::Int64, ::Int64) where T at boot.jl:406
Array{Float64,2}(::UndefInitializer, ::Int64...) where {T, N} at boot.jl:410
...
Stacktrace:
[1] convert(::Type{Array{Float64,2}}, ::Array{Float64,1}) at ./array.jl:490
[2] setindex!(::Array{Array{Float64,2},1}, ::Array{Float64,1}, ::Int64) at ./array.jl:782
[3] rand! at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/Random/src/Random.jl:271 [inlined]
[4] rand! at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/Random/src/Random.jl:266 [inlined]
[5] rand at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/Random/src/Random.jl:279 [inlined]
[6] rand at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/Random/src/Random.jl:280 [inlined]
[7] rand(::Shell{2}, ::Int64) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/Random/src/Random.jl:283
[8] top-level scope at REPL[14]:1
What am I missing?
Your definition of rand always returns an Array{Float64,1}, yet you declared that it returns Array{Float64,N} (via your definition of eltype). This is what the error tells you: to put a freshly produced Vector{Float64} into the result array with eltype Array{Float64,2}, a conversion has to be made, but no such conversion is defined, hence the error.
In this case, it seems that changing the eltype definition to Base.eltype(::Type{<:Shell}) = Array{Float64, 1} would solve your problem.
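For completeness, a minimal sketch of that fix (return values elided):
Base.eltype(::Type{<:Shell}) = Array{Float64, 1}   # each draw is a length-N vector

rand(Shell{2}(0, 1))      # one Array{Float64,1} of length 2
rand(Shell{2}(0, 1), 2)   # now works: a vector of two such vectors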
Related
What is the difference between the following two functions?
julia> one(s::String) = return 1
one (generic function with 1 method)
julia> one(::String) = return 1
one (generic function with 1 method)
Both seem to be allowed and there doesn't seem to be a difference between them. I suppose not including the argument name could signal to the compiler that the value of the argument is not used, but then again this is something the compiler can figure out, right? (Disclaimer: I have no idea how compilers work)
There is no difference if you don't use the argument, and yes, for the compiler this should be trivial.
You can use _ as an actual "discard argument" though, and the parser will prevent you from reading its value:
julia> f(_) = _ + 1
ERROR: syntax: all-underscore identifier used as rvalue around REPL[9]:1
That's useful in some situations:
julia> _, _, z = (1,2,3)
(1, 2, 3)
julia> z
3
julia> _
ERROR: all-underscore identifier used as rvalue
while
julia> x, x, z = (1,2,3)
(1, 2, 3)
julia> x
2
is slightly confusing.
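A small sketch of _ as a throwaway parameter in a method signature (the name g is just for illustration):
julia> g(_, y) = y + 1   # first argument is accepted but its value cannot be read
g (generic function with 1 method)

julia> g(42, 1)
2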
More technicalities
An unused argument is preserved in the lowered IR, though:
julia> one(::String) = return 1
one (generic function with 1 method)
julia> one2(s::String) = return 1
one2 (generic function with 1 method)
julia> ir = @code_lowered one("sdf")
CodeInfo(
1 ─ return 1
)
julia> ir.slotnames
2-element Array{Symbol,1}:
Symbol("#self#")
Symbol("#unused#")
julia> ir2 = @code_lowered one2("sdf")
CodeInfo(
1 ─ return 1
)
julia> ir2.slotnames
2-element Array{Symbol,1}:
Symbol("#self#")
:s
This only matters if you ever care about slot names; I can't imagine how it would change further compilation, but it could be a corner case in metaprogramming.
I want to run a simple function on process 2. So I defined a function like this:
julia> f(x,y) = x+y
f (generic function with 1 method)
and then I wanted to call it on process 2, but I got an error:
julia> remotecall_fetch(f,2,1,1)
ERROR: On worker 2:
UndefVarError: #f not defined
deserialize_datatype at ./serialize.jl:969
handle_deserialize at ./serialize.jl:674
deserialize at ./serialize.jl:634
handle_deserialize at ./serialize.jl:681
deserialize_msg at ./distributed/messages.jl:98
message_handler_loop at ./distributed/process_messages.jl:161
process_tcp_streams at ./distributed/process_messages.jl:118
#99 at ./event.jl:73
Stacktrace:
[1] #remotecall_fetch#141(::Array{Any,1}, ::Function, ::Function, ::Base.Distributed.Worker, ::Int64, ::Vararg{Int64,N} where N) at ./distributed/remotecall.jl:354
[2] remotecall_fetch(::Function, ::Base.Distributed.Worker, ::Int64, ::Vararg{Int64,N} where N) at ./distributed/remotecall.jl:346
[3] #remotecall_fetch#144(::Array{Any,1}, ::Function, ::Function, ::Int64, ::Int64, ::Vararg{Int64,N} where N) at ./distributed/remotecall.jl:367
[4] remotecall_fetch(::Function, ::Int64, ::Int64, ::Vararg{Int64,N} where N) at ./distributed/remotecall.jl:367
I know we can define the function like this:
julia> @everywhere f(x,y)=x+y
and then we can get the result:
julia> remotecall_fetch(f,2,3,4)
7
What I actually don't know is how to define my functions on all processes, or only some of them, via include or using.
@everywhere is the correct macro to use. For modules, just do @everywhere using MyModule and all the exported functions in module MyModule will be available to all worker processes.
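A minimal sketch of both approaches (my_defs.jl and MyModule are placeholders; on Julia 0.6 the using Distributed line is unnecessary because Distributed lives in Base):
using Distributed                   # Julia >= 0.7
addprocs(2)                         # start workers before defining things everywhere

@everywhere include("my_defs.jl")   # evaluate a file of definitions on every process
@everywhere using MyModule          # or load a module (and its exports) everywhere

remotecall_fetch(f, 2, 1, 1)        # f defined in my_defs.jl is now callable on worker 2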
In evaluating math expressions with SWI-Prolog I need to evaluate -1 raised to an exponent. When the exponent is an integer the result is as expected, but when the exponent is a non-integer the result is undefined.
Is it possible to raise -1 to a non-integer power with SWI-Prolog?
e.g. (-1)^0.5
A preferred answer should not take more than several lines to accomplish. The use of a package is acceptable. Calling into another language would be acceptable but less preferred.
Supplement
With SWI-Prolog, using either ^/2 or **/2 with a base of -1 and a fraction as the exponent results in undefined:
?- V is **(-1.0,-0.5).
ERROR: Arithmetic: evaluation error: `undefined'
ERROR: In:
ERROR: [8] _3688 is -1.0** -0.5
ERROR: [7] <user>
?- V is **(-1.0,0.5).
ERROR: Arithmetic: evaluation error: `undefined'
ERROR: In:
ERROR: [8] _43410 is -1.0**0.5
ERROR: [7] <user>
?- V is ^(-1.0,-0.5).
ERROR: Arithmetic: evaluation error: `undefined'
ERROR: In:
ERROR: [8] _6100 is -1.0^ -0.5
ERROR: [7] <user>
?- V is ^(-1.0,0.5).
ERROR: Arithmetic: evaluation error: `undefined'
ERROR: In:
ERROR: [8] _7294 is -1.0^0.5
ERROR: [7] <user>
However, using either ^/2 or **/2 with a base of -1 and an integer-valued exponent results in a valid value.
?- V is ^(-1.0,-3.0).
V = -1.0.
?- V is ^(-1.0,-2.0).
V = 1.0.
?- V is ^(-1.0,-1.0).
V = -1.0.
?- V is ^(-1.0,0.0).
V = 1.0.
?- V is ^(-1.0,1.0).
V = -1.0.
?- V is ^(-1.0,2.0).
V = 1.0.
?- V is ^(-1.0,3.0).
V = -1.0.
?- V is **(-1.0,-3.0).
V = -1.0.
?- V is **(-1.0,-2.0).
V = 1.0.
?- V is **(-1.0,-1.0).
V = -1.0.
?- V is **(-1.0,0.0).
V = 1.0.
?- V is **(-1.0,1.0).
V = -1.0.
?- V is **(-1.0,2.0).
V = 1.0.
?- V is **(-1.0,3.0).
V = -1.0.
I am aware that SWI-Prolog math is based on the GNU Multiple Precision Arithmetic Library (GMP), which as noted on Wikipedia does not support complex numbers.
I am also aware that a plot of (-1)^X is continuous in both its real and imaginary parts. Currently I am only interested in the real part.
Noticing that the plot of (-1)^x is a periodic function similar to the cos function with a frequency shift, start with
cos((10 * x) / pi)
To adjust the frequency of the function, plot it translated along the X-axis by 10, then adjust the cos function to match, e.g.
cos((9.9 * x) / pi)
Then keep translating further out and adjusting. After a few iterations of adjusting, this function is close to what is needed even for x = 10,000.
cos((9.8696 * x) / pi)
?- V is cos((9.8696*(10000.0))/pi).
V = 0.9999018741279994.
Supplement
To make the adjustments easier to do, the functions were plotted using Wolfram Alpha. Note that due to differences between SWI-Prolog and Wolfram Alpha, the adjustment factors are slightly different: for Wolfram Alpha the factor is 9.86955, while with SWI-Prolog it is 9.8696.
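As an aside (not part of the original tuning procedure): by Euler's formula, (-1)^x = e^(i*pi*x) = cos(pi*x) + i*sin(pi*x), so the real part is exactly cos(pi*x). That also explains why the tuned factor converges on pi^2 ≈ 9.8696, since cos((pi^2*x)/pi) = cos(pi*x). A minimal SWI-Prolog sketch (the predicate name is just for illustration):
% real part of (-1)^X, exact up to floating-point error
real_pow_neg1(X, V) :- V is cos(pi*X).
For example, ?- real_pow_neg1(0.5, V). gives a value within floating-point error of 0, and ?- real_pow_neg1(3.0, V). gives a value within floating-point error of -1.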
I have a function f defined as follows.
f(x, y) = 3x^2 + x*y - 2y + 1
How can I retrieve the following quote block for this method, which includes the function contents?
quote # REPL[0], line 2:
((3 * x ^ 2 + x * y) - 2y) + 1
end
As folks have mentioned in the comments, digging through the fields of the methods like this isn't a stable or officially supported API. Further, your simple example is deceiving. This isn't, in general, representative of the original code you wrote for the method. It's a simplified intermediate AST representation with single-assignment variables and drastically simplified control flow. In general, the AST it returns isn't valid top-level Julia code. It just so happens that for your simple example, it is.
That said, there is a documented way to do this. You can use code_lowered() to get access to this intermediate representation without digging through undocumented fields. This will work across Julia versions, but I don't think there are official guarantees on the stability of the intermediate representation yet. Here's a slightly more complicated example:
julia> f(X) = for elt in X; println(elt); end
f (generic function with 1 method)
julia> code_lowered(f)[1]
LambdaInfo template for f(X) at REPL[17]:1
:(begin
nothing
SSAValue(0) = X
#temp# = (Base.start)(SSAValue(0))
4:
unless !((Base.done)(SSAValue(0),#temp#)) goto 13
SSAValue(1) = (Base.next)(SSAValue(0),#temp#)
elt = (Core.getfield)(SSAValue(1),1)
#temp# = (Core.getfield)(SSAValue(1),2) # line 1:
(Main.println)(elt)
11:
goto 4
13:
return
end)
julia> code_lowered(f)[1] == methods(f).ms[1].lambda_template
true
If you really want to see the code exactly as it was written, the best way is to use the embedded file and line information and refer to the original source. Note that this is precisely the manner in which Gallium.jl (Julia's debugger) finds the source to display as it steps through functions. It's undocumented, but you can even access the REPL history for functions defined interactively. See how Gallium does it here.
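For example, the definition location recorded for a method can be read without touching its other internals; a minimal sketch (functionloc is documented, the .file/.line fields are not):
m = first(methods(f))
functionloc(m)    # (file, line) where the method was defined
m.file, m.line    # the same information via undocumented Method fields
For interactively defined functions the recorded file is the REPL entry, e.g. REPL[1].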
First, retrieve the method using methods(f).
julia> methods(f)
# 1 method for generic function "f":
f(x, y) at REPL[1]:1
julia> methods(f).ms
1-element Array{Method,1}:
f(x, y) at REPL[1]:1
julia> method = methods(f).ms[1]
f(x, y) at REPL[1]:1
From here, retrieving the Expression is straightforward; simply use the lambda_template attribute of the method.
julia> method.lambda_template
LambdaInfo template for f(x, y) at REPL[1]:1
:(begin
nothing
return ((3 * x ^ 2 + x * y) - 2 * y) + 1
end)
Edit: This does not work in Julia v0.6+!
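On 0.6 and later, the documented route from the answer above still applies; a minimal sketch:
code_lowered(f, (Int, Int))[1]   # lowered code for the method of f matching two Int arguments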
In another thread, The best way to construct a function with memory, it was described how to back a memoizing function with a file:
$runningLogFile = "/some/directory/runningLog.txt";
flog[x_, y_] := flog[x, y] = f[x, y] /.
v_ :> (PutAppend[Unevaluated[flog[x, y] = v;], $runningLogFile]; v)
I feel like I understand most of the ingredients here without understanding exactly how this works. Any chance someone could walk me through exactly how this is evaluated?
Let's walk through the evaluation of flog[1, 2], step-by-step...
flog[1, 2]
When this expression is evaluated, Mathematica will substitute 1 for x and 2 for y in the definition of flog given in the question. This yields the next step in our tour:
flog[1, 2] =
f[1, 2] /. v_ :> (PutAppend[Unevaluated[flog[1, 2] = v;], $runningLogFile];
v)
Note carefully that the assignment here, flog[1, 2] = ..., is part of the definition of flog itself.
/. is an infix operator that is an alternate representation of the ReplaceAll function. ReplaceAll applies a replacement rule to the value of its first argument. Note that /. binds more tightly than =, so the right-hand side of the assignment groups as f[1, 2] /. v_ :> (...): the entire ReplaceAll expression is what will eventually be assigned to flog[1, 2]. Since Set evaluates its right-hand side before making the assignment, the first thing to be evaluated is f[1, 2]. For the sake of discussion, let's assume that f[1, 2] returns 345. That value, 345, is the first argument handed to ReplaceAll.
The second argument to ReplaceAll is an invocation of the :> operator, an infix expression of the RuleDelayed function. In an effort to keep this post to a manageable size, we'll simply note that the rule evaluates to itself in this context.
So, now we have an expression that involves /. to evaluate...
345 /. v_ :> (PutAppend[Unevaluated[flog[1, 2] = v;], $runningLogFile]; v)
A replacement expression matches its first argument (345) with the pattern component of the replacement rule (v_). v_ matches 345 (or anything else for that matter) and gives 345 the name v for purposes of replacement. ReplaceAll then substitutes 345 for every occurrence of v in the right hand side of the rule. The result is the next expression to be evaluated...
(PutAppend[Unevaluated[flog[1, 2] = 345;], $runningLogFile]; 345)
Here we have two expressions separated by a semicolon. Incidentally, ; is an infix operator that expands to CompoundExpression. The first expression involves PutAppend which writes the value of its first argument to the file named as the value of the second argument. Note, however, that the first argument is wrapped in Unevaluated. This suppresses the evaluation of the first argument so that it will be written exactly as-is to the file: flog[1, 2] = 345;. Should the current Mathematica session end, the written expression can be read into a future Mathematica session to re-establish the memoized result for flog[1, 2].
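For instance, one way to replay such a log in a later session (a sketch, assuming $runningLogFile is bound to the same path):
Get[$runningLogFile]  (* evaluates each stored flog[...] = ...; line, restoring the cached definitions *)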
CompoundExpression discards the value of all arguments except the last. Here, the last argument is 345, so the entire ReplaceAll expression evaluates to 345. Only now does the outer assignment fire: Set adds a new definition, flog[1, 2] = 345, to flog. Where flog initially had a single definition, it now has two -- the newly added flog[1, 2] definition caching the result of this call. This is frequently called "memoization". Like every expression in Mathematica, the assignment also yields a value, namely 345, and since we have come to the end of the expression, this is the final return value of the original call. That is, flog[1, 2] returns 345 -- although as we saw there were side-effects that saved this result to memory and disk for future reference.
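A rough sketch of what inspecting the definitions now shows (assuming, as above, that f[1, 2] returned 345):
?flog

flog[1, 2] = 345

flog[x_, y_] := flog[x, y] = f[x, y] /. v_ :> (PutAppend[Unevaluated[flog[x, y] = v;], $runningLogFile]; v)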
Future calls to flog[1, 2]
Now if flog[1, 2] is called again, Mathematica will find the new definition flog[1, 2] = 345. 345 will be returned directly, without any of the complications that we discussed above. In particular, it won't even call f[1, 2] again. This, of course, was the whole motivation for this example. The assumption was that f was very expensive to calculate, justifying all of these gymnastics to minimize the number of times that calculation occurs.