I have a big matrix with float entries of the form
[ a b c d
e f g h
i j k l
m n o p ]
Some of the values are outliers, so I want to smooth each entry by averaging it with the k most recent entries in its column (the entry itself and the ones just above it, truncated near the top), while preserving the shape. In other words, for k = 3 I would like something like this:
[ a b c d
(e + a)/2 (f + b)/2 (g + c)/2 (h + d)/2
(e + a + i)/3 (f + b + j)/3 (g + c + k)/3 (h + d + l)/3
(e + i + m)/3 (f + j + n)/3 (g + k + o)/3 (h + l + p)/3 ]
etc.
You can do this with RollingFunctions and mapslices:
julia> a = reshape(1:16, 4, 4)
4×4 reshape(::UnitRange{Int64}, 4, 4) with eltype Int64:
1 5 9 13
2 6 10 14
3 7 11 15
4 8 12 16
julia> using RollingFunctions
julia> mapslices(x -> runmean(x, 3), a, dims = 1)
4×4 Matrix{Float64}:
1.0 5.0 9.0 13.0
1.5 5.5 9.5 13.5
2.0 6.0 10.0 14.0
3.0 7.0 11.0 15.0
I didn't know about RollingFunctions, but a plain loop is about 4x faster. I'm not sure whether that is some kind of type instability caused by mapslices?
using Statistics: mean   # `mean` lives in the Statistics standard library

function runmean(a, W)
    A = similar(a)
    for j in axes(A, 2), i in axes(A, 1)
        l = max(1, i - W + 1)                  # start of the (at most) W-row window
        A[i, j] = mean(a[k, j] for k = l:i)    # mean of column j over rows l:i
    end
    A
end
Testing with BenchmarkTools yields:
using RollingFunctions, BenchmarkTools
@btime mapslices(x -> runmean(x, 3), A, dims = 1) setup=(A = rand(0.0:9, 1000, 1000))
@btime runmean(A, 3) setup=(A = rand(0.0:9, 1000, 1000))
15.326 ms (10498 allocations: 23.45 MiB)
4.410 ms (2 allocations: 7.63 MiB)
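For comparison, the same column-wise running mean is easy to express outside Julia too. Here is a rough Python/NumPy counterpart (a sketch of my own, not from the original answers; it uses a cumulative sum per column):

import numpy as np

def runmean_cols(a, w):
    """Mean of each entry with the up-to-w most recent rows of its column."""
    out = np.empty(a.shape, dtype=float)
    csum = np.cumsum(a, axis=0, dtype=float)
    for i in range(a.shape[0]):
        lo = max(0, i - w + 1)                              # top of the window
        total = csum[i] - (csum[lo - 1] if lo > 0 else 0.0)
        out[i] = total / (i - lo + 1)
    return out

a = np.arange(1, 17).reshape(4, 4, order="F")  # same matrix as the Julia example
print(runmean_cols(a, 3))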
Let's define a 2-adic integer in Haskell as its infinite binary expansion, least significant bit first:
data Adic = O Adic | I Adic deriving Show
We can represent finite numbers as follows:
zero = O zero
one = I zero
neg1 = I neg1
And we can easily define inc and add for Adic:
inc :: Adic -> Adic
inc (O a) = I a
inc (I a) = O (inc a)
add :: Adic -> Adic -> Adic
add (O a) (O b) = O (add a b)
add (O a) (I b) = I (add a b)
add (I a) (O b) = I (add a b)
add (I a) (I b) = O (inc (add a b))
We can convert adics to ints as follows:
a2i :: Int -> Adic -> Int
a2i 0 (O _) = 0
a2i 0 (I _) = -1
a2i s (O x) = 2 * a2i (s - 1) x
a2i s (I x) = 2 * a2i (s - 1) x + 1
This allows us to do basic arithmetic. For example:
print $ a2i 64 (add one neg1)
This would print 0, after computing 1 + (-1) on 2-adics.
Now, I'm interested in the hyper-operation sequence function, i.e.:
hyp :: Int -> Adic -> Adic -> Adic
hyp 0 a b = add a b
hyp 1 a b = mul a b
hyp 2 a b = exp a b
hyp 3 a b = tet a b
...
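To make that indexing concrete, here is a naive reference version on ordinary machine integers (a sketch of my own, in Python rather than on Adic, for non-negative b and with no attempt at efficiency); level 0 is addition, matching the listing above:

def hyp(s, a, b):
    """Naive hyper-operations: s=0 add, s=1 mul, s=2 exp, s=3 tet, ..."""
    if s == 0:
        return a + b
    if b == 0:
        return 0 if s == 1 else 1            # identity element of the level below
    return hyp(s - 1, a, hyp(s, a, b - 1))   # peel one unit off b

print(hyp(2, 4, 4))  # 256, i.e. 4^4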
An efficient way to implement it is by generalizing the exponentiation by squaring algorithm, as follows:
hyp :: Int -> Adic -> Adic -> Adic
hyp 0 a b = add a b
hyp s a 1 = a
hyp s a (O b) = let r = hyp s a b in hyp (s - 1) r r
hyp s a (I b) = let r = hyp s a (inc b) in hyp (s - 1) r r
To visualize the function above, below is the evaluation of hyp 2 4 4, i.e., 4^4:
4^4 =
4^2 * 4^2 =
4^1 * 4^1 * 4^1 * 4^1 =
4 * 4 * 4 * 4 =
4*4 * 4*4 =
(4*2 + 4*2) * (4*2 + 4*2) =
(4*1 + 4*1 + 4*1 + 4*1) * (4*1 + 4*1 + 4*1 + 4*1) =
(4 + 4 + 4 + 4) * (4 + 4 + 4 + 4) =
16*16 =
16*8 + 16*8 =
16*4 + 16*4 + 16*4 + 16*4 =
16*2 + 16*2 + 16*2 + 16*2 + 16*2 + 16*2 + 16*2 + 16*2 =
16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 =
16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 =
256
Sadly, the function above doesn't actually work, because the hyp s a 1 = a line attempts to match an Adic against the Int 1, which makes no sense. What it should do instead is match against the 2-adic number 1. The problem is that 2-adics are infinite, so we can't pattern-match on them; we can't even write an isOne function. As such, while the function above seems elegant and efficient, as far as I can tell it can't be constructed. My question is: am I right in concluding that hyp can't be implemented that way, or am I missing something? Is there an alternative approach that would let this work?
Can I ask about the logical flow of the Mathematica code below? What are the variables arg and abs doing? I have been searching for answers online and used ToMatlab, but I still cannot work it out. Thank you.
Code:
PositiveCubicRoot[p_, q_, r_] :=
Module[{po3 = p/3, a, b, det, abs, arg},
b = ( po3^3 - po3 q/2 + r/2);
a = (-po3^2 + q/3);
det = a^3 + b^2;
If[det >= 0,
det = Power[Sqrt[det] - b, 1/3];
-po3 - a/det + det
,
(* evaluate real part, imaginary parts cancel anyway *)
abs = Sqrt[-a^3];
arg = ArcCos[-b/abs];
abs = Power[abs, 1/3];
abs = (abs - a/abs);
arg = -po3 + abs*Cos[arg/3]
]
]
abs and arg are being reused multiple times in the algorithm.
In the case where det < 0, the steps are
po3 = p/3;
b = (po3^3 - po3 q/2 + r/2);
a = (-po3^2 + q/3);
abs1 = Sqrt[-a^3];
arg1 = ArcCos[-b/abs1];
abs2 = Power[abs1, 1/3];
abs3 = (abs2 - a/abs2);
arg2 = -po3 + abs3*Cos[arg1/3]
abs3 can be identified as A in this answer: Using a trig identity to solve a cubic equation
That is the most salient point of this answer.
Evaluating symbolically and numerically may provide some other insights.
Using demo inputs
{p, q, r} = {-2.52111798, -71.424692, -129.51520};
Copyable version of trig identity notes - NB a, b, p & q are used differently in this post
Plot[x^3 - 2.52111798 x^2 - 71.424692 x - 129.51520, {x, 0, 15}]
a = 1;
b = -2.52111798;
c = -71.424692;
d = -129.51520;
p = (3 a c - b^2)/(3 a^2);
q = (2 b^3 - 9 a b c + 27 a^2 d)/(27 a^3);
A = 2 Sqrt[-p/3]
A == abs3
-(b/3) + A Cos[1/3 ArcCos[
-((b/3)^3 - (b/3) c/2 + d/2)/Sqrt[-(-(b^2/9) + c/3)^3]]]
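Since the question mentions porting the routine (via ToMatlab), the same sequence of steps also translates directly into other languages. Here is a Python sketch of PositiveCubicRoot following the code above step by step; the real cube-root helper is my own addition (Mathematica's Power would return a complex principal root for a negative argument):

import math

def _cbrt(x):
    return math.copysign(abs(x) ** (1.0 / 3.0), x)   # real cube root

def positive_cubic_root(p, q, r):
    """A real root of x^3 + p x^2 + q x + r, mirroring the Mathematica steps."""
    po3 = p / 3.0
    b = po3**3 - po3 * q / 2.0 + r / 2.0
    a = -po3**2 + q / 3.0
    det = a**3 + b**2
    if det >= 0:
        t = _cbrt(math.sqrt(det) - b)
        return -po3 - a / t + t
    # det < 0: evaluate the real part; the imaginary parts cancel anyway
    ab = math.sqrt(-a**3)
    arg = math.acos(-b / ab)
    ab = ab ** (1.0 / 3.0)
    ab = ab - a / ab
    return -po3 + ab * math.cos(arg / 3.0)

print(positive_cubic_root(-2.52111798, -71.424692, -129.51520))  # ~ 10.499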
Edit
There is also a solution shown here
TRIGONOMETRIC SOLUTION TO THE CUBIC EQUATION, by Alvaro H. Salas
Clear[a, b, c]
1/3 (-a + 2 Sqrt[a^2 - 3 b] Cos[1/3 ArcCos[
(-2 a^3 + 9 a b - 27 c)/(2 (a^2 - 3 b)^(3/2))]]) /.
{a -> -2.52111798, b -> -71.424692, c -> -129.51520}
10.499
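That closed form is also easy to check numerically outside Mathematica; for example, a quick Python sketch with the same demo coefficients:

import math

# Salas closed form for a real root of x^3 + a x^2 + b x + c, demo coefficients.
a, b, c = -2.52111798, -71.424692, -129.51520
root = (-a + 2 * math.sqrt(a**2 - 3*b)
        * math.cos(math.acos((-2*a**3 + 9*a*b - 27*c)
                             / (2 * (a**2 - 3*b) ** 1.5)) / 3)) / 3
print(root)  # ~ 10.499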
I saw this in an algorithms textbook. I am confused about the recursive call in the middle. If you can explain it with an example, such as 4/2, that would be great!
function divide(x, y)
Input: Two n-bit integers x and y, where y ≥ 1
Output: The quotient and remainder of x divided by y
if x = 0: return (q, r) = (0, 0)
(q, r) = divide(floor(x/2), y)
q = 2 · q, r = 2 · r
if x is odd: r = r + 1
if r ≥ y: r = r − y, q = q + 1
return (q, r)
The recursion keeps halving x, which is essentially a right bit shift: it strips off binary digits one at a time and then rebuilds the quotient and remainder on the way back up. A more interesting case than 4/2 would be 13/3 (13 is 1101 in binary).
divide(13, 3) // initial binary value - 1101
divide(6, 3) // shift right - 110
divide(3, 3) // shift right - 11
divide(1, 3) // shift right - 1 (this is the most significant bit)
divide(0, 3) // shift right - 0 (no more significant bits)
return(0, 0) // base case; now roll it back up
return(0, 1) // q = 0, r = 0 doubled; x (1) is odd, so r = 1; r < y
return(1, 0) // q = 0, r = 2; x (3) is odd, so r = 3; r >= y, so r = 0 and q = 1
return(2, 0) // q = 2, r = 0; x (6) is even and r < y, so nothing else changes
return(4, 1) // q = 4, r = 0; x (13) is odd, so r = 1; 13 = 4·3 + 1
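Here is the same algorithm as runnable code, a direct transcription of the textbook pseudocode (Python used just for illustration), which reproduces the 13/3 trace above:

def divide(x, y):
    """Quotient and remainder of x divided by y, for integers x >= 0, y >= 1."""
    if x == 0:
        return 0, 0
    q, r = divide(x // 2, y)   # recurse on x with its last bit dropped
    q, r = 2 * q, 2 * r        # shift the partial results back left
    if x % 2 == 1:             # restore the dropped bit into the remainder
        r += 1
    if r >= y:                 # r is at most 2y - 1 here, so one subtraction
        r -= y                 # is enough to bring it back below y
        q += 1
    return q, r

print(divide(13, 3))  # (4, 1)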
Is there a way to round the result of integer division up to the nearest integer, rather than down?
For example, I would like to change the default behavior:
irb(main):001:0> 5 / 2
=> 2
To the following behavior:
irb(main):001:0> 5 / 2
=> 3
The function you are looking for is ceil.
ceil rounds a floating-point number up to the nearest integer.
4/3 = 1
4.0/3.0 = 1.3333...3
(4.0/3.0).ceil = 2
Also, note that this rounds in the positive direction, so
(-4.0/3.0).ceil = -1, NOT -2
Also, there is the corresponding floor function which rounds downwards.
This is rather an algorithm question than a ruby specific question.
Try (a + b - 1) / b. For example
(5 + 2 - 1) / 2 #=> 3
(10 + 3 - 1) / 3 #=> 4
(6 + 3 - 1) / 3 #=> 2
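Since this is language-agnostic, the same identity works in any language with floor division. For instance, a quick Python sketch (the function name is mine), assuming positive integers:

def ceil_div(a, b):
    """Ceiling of a / b for positive integers, using only integer arithmetic."""
    return (a + b - 1) // b

print(ceil_div(5, 2), ceil_div(10, 3), ceil_div(6, 3))  # 3 4 2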
You can define an instance method, say divide_by, in the Integer class (monkey patch):
class Integer
def divide_by(divisor)
(self + divisor - 1) / divisor
end
end
According to my benchmark, it's about 50% faster than the to_f-then-ceil solution.
CORRECTION
The method shown above gives the wrong result when both the dividend and the divisor are negative.
Here's the method that gives the correct result in all cases: (a * 2 + b) / (b * 2)
a = 5
b = 2
(a * 2 + b) / (b * 2) #=> 3
a = 6
b = 2
(a * 2 + b) / (b * 2) #=> 3
a = 5
b = 1
(a * 2 + b) / (b * 2) #=> 5
a = -5
b = 2
(a * 2 + b) / (b * 2) #=> -2 (-2.5 rounded up to -2)
a = 5
b = -2
(a * 2 + b) / (b * 2) #=> -2 (-2.5 rounded up to -2)
a = -5
b = -2
(a * 2 + b) / (b * 2) #=> 3
a = 10
b = 0
(a * 2 + b) / (b * 2) #=> raises ZeroDivisionError
The monkey patch should be
class Integer
def divide_by(divisor)
(self * 2 + divisor) / (divisor * 2)
end
end
Mathematical Proof:
The dividend a and the divisor b satisfy the equation a = kb + m, where a, b, k, m are all integers, b is not zero, and m lies between 0 and b (it can be 0).
For example, when a is 5 and b is 2, then a = 2b + 1, thus in this case k = 2 and m = 1.
Another example, with a negative divisor: a is 5, b is -2; then a = -3b + (-1), thus k = -3 and m = -1.
(2a + b) / 2b
= (2(kb + m) + b) / 2b
= (2kb + b + 2m) / 2b
When m = 0
(2kb + b + 2m) / 2b
= (2k + 1)b / 2b
= k + (1 / 2)
= k + 0 # in Ruby
= k # in Ruby
and since k = a / b, we got the correct answer.
When m is not 0,
(2kb + b + 2m) / 2b
= ((2k + 2)b - b + 2m) / 2b
= (k + 1) + (2m - b) / 2b
If b > 0, then 2m - b < b so (2m - b) / 2b < 1 / 2. So the second term is always 0 in integer division.
If b < 0, then 2m - b > b and still (2m - b) / 2b < 1 / 2 so the second term is still 0.
In either case, (2a + b) / 2b is rounded to k + 1 when m is not 0.
If what you actually want is integer division that rounds up whenever there is anything left over, just do it the straightforward way, as the logic would dictate on paper, using a second line with the modulo operator (%) to check the remainder of the division:
a = 5
b = 2
result = a / b #=> 2
result += 1 if (a % b).positive?
#=> 3
a = 6
b = 3
result = a / b #=> 2
result += 1 if (a % b).positive?
#=> 2
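The same remainder check ports directly to other languages whose integer division floors, for example this Python sketch (assuming a positive divisor, as in the examples above):

def div_round_up(a, b):
    """Integer division that rounds up when there is a remainder (b > 0)."""
    q, r = divmod(a, b)
    return q + 1 if r > 0 else q

print(div_round_up(5, 2))  # 3
print(div_round_up(6, 3))  # 2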
I'm trying to minimize a non-linear function of four variables with some linear constraints. Mathematica 8 is unable to find a good solution, giving complex values of the function at some point in the iteration. This implies that one or more constraints are not being enforced in the process. Is this a bug or a limitation of the optimization function?
Function to minimize is
ff[lxw_, lwz_, c_, d_] := - J1 (lxw + lwz) - 2 J2 c +
T (-Log[2] - 1/2 (1 - lxw) Log[(1 - lxw)/4] -
1/2 (1 + lxw) Log[(1 + lxw)/4] -
1/2 (1 - lwz) Log[(1 - lwz)/4] -
1/2 (1 + lwz) Log[(1 + lwz)/4] + 1/2 (1 - d) Log[(1 - d)/16] +
1/8 (1 + 2 c + d - 2 lwz - 2 lxw) Log[
1/16 (1 + 2 c + d - 2 lwz - 2 lxw)])
where
T = 10;
J1 = 1;
J2 = -0.2;
are constant parameters. Then I try
NMinimize[{ff[lxw, lwz, c, d],
2 c + d - 2 lwz - 2 lxw >= -0.999 &&
-0.999 <= lxw <= 0.999 &&
-0.999 <= lwz <= 0.999 &&
-0.999 <= c <= 0.999 &&
d <= 0.9999}, {lxw, lwz, c, d}]
with the result
NMinimize::nrnum: "The function value 5.87777[VeryThinSpace]-4.87764\ I\n
is not a real number at {c,d,lwz,lxw} = {-0.718817,-1.28595,0.69171,-0.932461}.
I would appreciate if someone can give a hint at what is happening here.
Try this:
Clear[ff];
ff[lxw_, lwz_, c_, d_] /; 2 c + d - 2 lwz - 2 lxw >= -0.999 :=
< your function def >
This will cause the function to be left unevaluated if NMinimize takes an excursion out of bounds. Sorry, I can't test this from here. If that doesn't do it, try asking on mathematica.stackexchange.com.
As an aside, why use <= .999 instead of simply < 1?
It just might help if you fix that too (use the integer 1, not 1.).
The warning appears because, at the values given in the warning, the last term in ff is complex due to taking the log of a negative number, i.e.
{c, d, lwz, lxw} = {
-0.7188174745559741`,
-1.2859482844800894`,
0.6917100913968041`,
-0.9324611085040573`};
Log[1/16 (1 + 2 c + d - 2 lwz - 2 lxw)]
-2.5558 + 3.14159 i
1/16 (1 + 2 c + d - 2 lwz - 2 lxw)
-0.0776301
In Mathematica 9, a result is produced in addition to the warning:
{-4.90045, {c -> 0.94425, d -> -0.315633, lwz -> 0.900231, lxw -> -0.191476}}
I.e.
{c, d, lwz, lxw} = {
0.9442497691706085`,
-0.31563295950647885`,
0.900230825707721`,
-0.1914760216875171`};
ff[lxw, lwz, c, d]
-4.90045