Function for Reshape View?

In Julia v0.5, how do you make a function which is like reshape but instead returns a view? ArrayViews.jl has a reshape_view function but it doesn't seem directly compatible with the new view function. I just want to reshape u to some tuple sizeu where I don't know the dimensions.

If you reshape a view, the output is a reshaped view.
If your initial variable is a normal array, you can convert it to a view on the fly during your function call.
There are no reallocations during this operation; you can confirm this with the pointer function. The objects aren't the same, in the sense that they are wrappers of different types, but the address of the underlying memory is the same.
julia> A = ones(5,5,5); B = view(A, 2:4, 2:4, 2:4); C = reshape(B, 1, 27);
julia> is(B,C)
false
julia> pointer(B)
Ptr{Float64} @0x00007ff51e8b1ac8
julia> pointer(C)
Ptr{Float64} @0x00007ff51e8b1ac8
julia> C[1:5] = zeros(1,5);
julia> A[:,:,2]
5×5 Array{Float64,2}:
 1.0  1.0  1.0  1.0  1.0
 1.0  0.0  0.0  1.0  1.0
 1.0  0.0  0.0  1.0  1.0
 1.0  0.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0  1.0
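Putting that together, here is a minimal helper sketch; the name reshape_view and its signature are just illustrative, not the ArrayViews.jl function:
function reshape_view(u::AbstractArray, sizeu::Tuple)
    # Wrap u in a view spanning all of it, then reshape; the result shares
    # memory with u, so nothing is copied.
    reshape(view(u, ntuple(i -> Colon(), ndims(u))...), sizeu)
end
Since reshaping a view yields another view, writing into the result mutates u.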

Related

ODE in Julia - matrix form possible?

I am new to the promising language of Julia, hoping that it will speed up solving my stiff ordinary differential equations. Here is the thing:
1) The equation must be defined in matrix form, using mass, damping, and stiffness matrices out of Matlab, each of dimension 400x400. The common state-space representation for 2nd-order ODEs is implemented.
2) Apart from the linear dynamics, there are nonlinear forces acting, which depend on certain states. These forces must be defined inside the ODE function.
However, the state variables do not change at all, although they should, given the initial conditions. Here is example code, with smaller matrices, for prototyping:
#Load packages
using LinearAlgebra
using OrdinaryDiffEq
using DifferentialEquations
using Plots
# Define constant matrices (here generated for the example)
const M=Matrix{Float64}(I, 4, 4) # Mass matrix
const C=zeros(Float64, 4, 4) # Damping matrix
const K=[10.0 0.0 0.0 0.0; 0.0 7.0 0.0 0.0; 0.0 0.0 6.0 0.0;0.0 0.0 5.0 0.0] # Stiffness matrix
x0 = [0.0;0.0;0.0;0.0; 1.0; 1.0; 1.0; 1.0] # Initial conditions
tspan = (0.,1.0) # Simulation time span
#Define the underlying equation
function FourDOFoscillator(xdot,x,p,t)
    xdot = [-inv(M)*C -inv(M)*K; Matrix{Float64}(I, 4, 4) zeros(Float64, 4, 4)]*x
end
#Pass to Solvers
prob = ODEProblem(FourDOFoscillator,x0,tspan)
sol = solve(prob,alg_hints=[:stiff],reltol=1e-8,abstol=1e-8)
plot(sol)
What am I missing?
Thanks
Betelgeuse
You're not mutating the output; instead you're creating a new array and binding it to the local name xdot, so the solver never sees your result. If you use xdot .= it works:
#Load packages
using LinearAlgebra
using OrdinaryDiffEq
using DifferentialEquations
using Plots
# Define constant matrices (here generated for the example)
const M=Matrix{Float64}(I, 4, 4) # Mass matrix
const C=zeros(Float64, 4, 4) # Damping matrix
const K=[10.0 0.0 0.0 0.0; 0.0 7.0 0.0 0.0; 0.0 0.0 6.0 0.0;0.0 0.0 5.0 0.0] # Stiffness matrix
x0 = [0.0;0.0;0.0;0.0; 1.0; 1.0; 1.0; 1.0] # Initial conditions
tspan = (0.,1.0) # Simulation time span
#Define the underlying equation
function FourDOFoscillator(xdot,x,p,t)
    xdot .= [-inv(M)*C -inv(M)*K; Matrix{Float64}(I, 4, 4) zeros(Float64, 4, 4)]*x
end
#Pass to Solvers
prob = ODEProblem(FourDOFoscillator,x0,tspan)
sol = solve(prob,alg_hints=[:stiff],reltol=1e-8,abstol=1e-8)
plot(sol)
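As an aside (an optional refinement, not part of the fix): since M, C, and K are constant, you could build the system matrix once and use LinearAlgebra's in-place mul! so nothing is allocated per call. A sketch, with the names Asys and FourDOFoscillator! being illustrative:
# Precompute the constant 8x8 system matrix once
const Asys = [-inv(M)*C -inv(M)*K; Matrix{Float64}(I, 4, 4) zeros(Float64, 4, 4)]
# In-place right-hand side: writes Asys*x into xdot without allocating
FourDOFoscillator!(xdot, x, p, t) = mul!(xdot, Asys, x)
prob = ODEProblem(FourDOFoscillator!, x0, tspan)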

Fitting linear model in Scalanlp/Breeze

I am trying to fit a linear model (and get the R^2) to the following test data:
0.0 0.0
1.0 1.0
2.0 2.0
3.0 3.1
I wrote the following code using scalanlp/breeze 0.12:
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.stats.regression.leastSquares
val indep = DenseMatrix((1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (1.0, 3.0))
val dep = DenseVector(0.0, 1.0, 2.0, 3.1)
val result = leastSquares(indep, dep)
println("intercept=" + result.coefficients.data(0))
println("slope=" + result.coefficients.data(1))
println("r^2=" + result.rSquared)
the output is:
intercept=-0.020000000000000018
slope=1.03
r^2=0.0014623322596666252
Intercept and slope are reasonable, but I don't understand the R-squared; it should be close to 1!
Your vector of ones needs to come last, not first. With that change you should get the r^2 close to 1 that you expected.
EDIT: While what I said above about switching around your vector of 1s is correct, I'm still getting a horribly incorrect r-squared even after doing that. My slope and intercept are spot on though, much like yours. So... to the source code!
EDIT2: It's a known issue that hasn't been fixed. shrug
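For reference, a sketch of the setup with the columns switched as described (names reused from the question); the coefficient order flips accordingly, though rSquared itself is still affected by the bug above:
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.stats.regression.leastSquares
// Intercept column of ones moved to the last position
val indep = DenseMatrix((0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (3.0, 1.0))
val dep = DenseVector(0.0, 1.0, 2.0, 3.1)
val result = leastSquares(indep, dep)
println("slope=" + result.coefficients.data(0))     // slope now comes first
println("intercept=" + result.coefficients.data(1)) // intercept now last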

Ruby step function giving unexpected result

When I run the following commands,
(0..20).step(0.1) do |n|
  puts n
end
I get the following output:
0.0
0.1
0.2
0.30000000000000004
0.4
0.5
0.6000000000000001
0.7000000000000001
0.8
0.9
1.0
1.1
1.2000000000000002
1.3
1.4000000000000001
1.5
1.6
1.7000000000000002
...
What is the best way to avoid this roundoff error?
Update:
My question about why this occurs has been previously answered here in another question, Is floating point math broken?, but I did not immediately find that.
You could cheat and avoid the stepping by 0.1 business:
(0..200).map { |n| n.to_f / 10 }
=> [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7,...]
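If you want to keep the loop form, another sketch (assuming Ruby 2.1+ for the 1r rational literal) is to step by an exact Rational instead of a binary float:
0.step(20, 1r/10) do |n|
  puts n.to_f  # each n is an exact Rational; to_f only converts for printing
end
Because 1/10 is represented exactly as a Rational, no rounding error accumulates across iterations.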

Matlab impyramid issue

I have a problem with impyramid in MATLAB. I am trying to save a once-downscaled version of a binary image and also a twice-downscaled version of it. That is simple to do in MATLAB, as the following code shows:
scale1_2 = impyramid(compressed_image, 'reduce');
scale1_4 = impyramid(scale1_2, 'reduce');
So, an image with size 810x1080 is saved with 405x540 and 203x270 pixels. The problem I am facing is when I try to expand these two images back to have the same dimensions as before.
scaled_result1_2 = impyramid(scale1_2, 'expand');
scaled_result1_4 = impyramid(impyramid(scale1_4, 'expand'), 'expand');
So, it is expected that scaled_result1_2 and scaled_result1_4 are 810x1080 images again, but they are not:
>> size(scaled_result1_2)
809 1079
>> size(scaled_result1_4)
809 1077
I need these two images to be 810x1080 pixels again, but impyramid is not able to do this. If I resize them with imresize instead, will that perform the image-pyramid expansion by upscaling and blurring the image? Which interpolation method should I use to get a similar result?
If you actually open up impyramid and see the source code, it boils down to an imresize call. Specifically, this is what happens when you use expand when calling impyramid when A is defined as the image:
M = size(A,1);
N = size(A,2);
scaleFactor = 2;
outputSize = 2*[M N] - 1;
kernel = makePiecewiseConstantFunction( ...
    [1.25 0.75 0.25 -0.25 -0.75 -1.25 -Inf], ...
    [0.0 0.125 0.5 0.75 0.5 0.125 0.0]);
kernelWidth = 3;
B = imresize(A, scaleFactor, {kernel, kernelWidth}, ...
    'OutputSize', outputSize, 'Antialiasing', false);
As you can see, outputSize is defined as twice the image dimensions subtract 1, which is why you are off by 1 pixel per dimension. The function makePiecewiseConstantFunction is a local function that is defined in impyramid. I'll let you open it up and see that for yourself. Make sure this is defined before calling the above code.
Therefore, simply remove the subtraction of 1 to achieve what you want.
As such, call the above code, but change outputSize to:
outputSize = 2*[M N];
However, if you want to be adventurous, you can modify this source code yourself to take in a flag: when it is set to true, the subtraction of 1 is skipped, and when false, the subtraction is performed. To do this, you can modify the header of impyramid like so:
function B = impyramid(A, direction, padding)
Then, at the beginning before any computation is done, you can do this:
if nargin == 2
    padding = false;
end
This allows you to call impyramid without a third argument, which will default to no padding.
Once you're done, in the expand section of the if statement, you can do:
else
    scaleFactor = 2;
    outputSize = 2*[M N];
    if ~padding %// Change
        outputSize = outputSize - 1;
    end
    kernel = makePiecewiseConstantFunction( ...
        [1.25 0.75 0.25 -0.25 -0.75 -1.25 -Inf], ...
        [0.0 0.125 0.5 0.75 0.5 0.125 0.0]);
    kernelWidth = 3;
end
The nested if statement then checks to see whether or not you want to allow the output image to be of size 2M x 2N or 2M - 1 x 2N - 1. As such, when you're done modifying the code, you can do:
scaled_result1_2 = impyramid(scale1_2, 'expand', true);
scaled_result1_4 = impyramid(impyramid(scale1_4,'expand', true), 'expand', true);
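If you'd rather not modify a toolbox function at all, a simpler sketch is to fix up the size after the standard expand call, either by replicating the missing last row/column or by resizing to the exact target size; note that the patched border is an approximation rather than the exact pyramid-kernel result:
% Pad the one missing row and column by replication
scaled_result1_2 = padarray(impyramid(scale1_2, 'expand'), [1 1], 'replicate', 'post');
% Or force the exact target size directly
scaled_result1_4 = imresize(impyramid(impyramid(scale1_4, 'expand'), 'expand'), [810 1080]);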

Adding measurement errors to pymc model

I have the following model in pymc2:
import math
import numpy as np
import pymc
from scipy.stats import gamma

# lmin (truncation limit) and sample (observed data) are defined elsewhere
alpha = pymc.Uniform('alpha', 0.01, 2.0)
scale = pymc.Uniform('scale', 1.0, 4.0)

@pymc.deterministic(plot=False)
def beta(scale=scale):
    return 1.0 / scale

@pymc.potential
def p_factor(alpha=alpha, scale=scale, lmin=lmin, n=len(sample)):
    dist = gamma(alpha, loc=0., scale=scale)
    fp = 1.0 - dist.cdf(lmin)
    return -(n+1) * np.log(fp)

obs = pymc.Gamma("obs", alpha=alpha, beta=beta, value=sample, observed=True)
The physical background of this model is the luminosity function of galaxies (LF), i.e., the probability of a galaxy having luminosity L. For some types of galaxies, the LF is just a gamma function. The potential accounts for data truncation, as galaxy surveys usually miss a substantial fraction of the targets, particularly those of low luminosity. In this model I miss everything below lmin.
Details of this method can be found in this paper by Kelly et al.
This model works: I run MAP and MCMC on the model and I can recover the parameters alpha and scale from my simulated data sample, with increased uncertainty as lmin grows.
Now I would like to include Gaussian measurement errors. For simplicity, all the data has the same precision. For now I am not modifying the potential to include the errors as well:
alpha = pymc.Uniform('alpha', 0.01, 2.0)
scale = pymc.Uniform('scale', 1.0, 4.0)
sig = 0.1
tau = math.pow(sig, -2.0)

@pymc.deterministic(plot=False)
def beta(scale=scale):
    return 1.0 / scale

@pymc.potential
def p_factor(alpha=alpha, scale=scale, lmin=lmin, n=len(sample)):
    dist = gamma(alpha, loc=0., scale=scale)
    fp = 1.0 - dist.cdf(lmin)
    return -(n+1) * np.log(fp)

dist = pymc.Gamma("dist", alpha=alpha, beta=beta)
obs = pymc.Normal("obs", mu=dist, tau=tau, value=sample, observed=True)
But surely I'm doing something wrong here because this model does not work.
When I run pymc.MAP on this model I recover the initial values of alpha and scale:
vals = {'alpha': alpha, 'scale': scale, 'beta': beta,
        'p_factor': p_factor, 'obs': obs, 'dist': dist}
M2 = pymc.MAP(vals)
M2.fit()
print M2.alpha.value, M2.scale.value
>>> (array(0.010000000006018368), array(1.000000000833973))
When I run pymc.MCMC, alpha and beta are not traced at all:
M = pymc.MCMC(vals)
M.sample(10000, burn=5000)
...
M.stats()['alpha']
>>> {'95% HPD interval': array([ 0.01000001, 0.01000502]),
'mc error': 2.1442678276712383e-07,
'mean': 0.010001588137798096,
'n': 5000,
'quantiles': {2.5: 0.0100000088679046,
25: 0.010000382359859467,
50: 0.010001100377476166,
75: 0.010001668672799679,
97.5: 0.0100050194240779},
'standard deviation': 2.189828287191421e-06}
Again, these are the initial values. In fact, if I change alpha to start at, say, 0.02, the recovered value of alpha is 0.02.
This is a notebook with the working model plus simulated data.
This is a notebook with the error model plus simulated data.
Any guidance on making this work would be really appreciated.
It seems that it is enough to change
dist = pymc.Gamma("dist", alpha=alpha, beta=beta)
to
dist = pymc.Gamma("dist", alpha=alpha, beta=beta, value=sample)
The sampled data is a reasonable initial value for dist. Anyway, I do not quite get the logic, as other initial values (such as an array of zeros) bring back the problem of alpha and beta not being sampled.
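As a quick sanity check (a sketch reusing the names from the model above), you can verify that alpha is actually being sampled by looking at the spread of its trace rather than just the summary statistics:
M = pymc.MCMC(vals)
M.sample(10000, burn=5000)
# If sampling works, the trace shows real variation, not ~1e-6 jitter around the start
print M.trace('alpha')[:].std()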
