I'm new to Haskell, and trying to learn it by thinking in terms of image processing.
So far, I have been stuck thinking about how you would implement a neighbourhood-filtering algorithm in Haskell (or any functional programming language, really).
How would a spatial averaging filter (say 3x3 kernel, 5x5 image) be written functionally? Coming from an entirely imperative background, I can't seem to come up with a way to either structure the data so the solution is elegant, or not do it by iterating through the image matrix, which doesn't seem very declarative.
Working with neighborhoods is easy to do elegantly in a functional language. Operations like convolution with a kernel are higher order functions that can be written in terms of one of the usual tools of functional programming languages - lists.
To write some real, useful code, we'll first play pretend to explain a library.
Pretend
You can think of each image as a function from a coordinate in the image to the value of the data held at that coordinate. This would be defined over all possible coordinates, so it would be useful to pair it with some bounds which tell us where the function is defined. This would suggest a data type like
data Image coordinate value = Image {
    lowerBound :: coordinate,
    upperBound :: coordinate,
    value :: coordinate -> value
}
Haskell has a very similar data type called Array in Data.Array. This data type comes with an additional feature that the value function in Image wouldn't have - it remembers the value for each coordinate so that it never needs to be recomputed. We'll work with Arrays using three functions, which I'll describe in terms of how they'd be defined for Image above. This will help us see that even though we are using the very useful Array type, everything could be written in terms of functions and algebraic data types.
type Array i e = Image i e
bounds gets the bounds of the Array
bounds :: Array i e -> (i, i)
bounds img = (lowerBound img, upperBound img)
The ! looks up a value in the Array
(!) :: Array i e -> i -> e
img ! coordinate = value img coordinate
Finally, makeArray builds an Array
makeArray :: Ix i => (i, i) -> (i -> e) -> Array i e
makeArray (lower, upper) f = Image lower upper f
Ix is a typeclass for things that behave like image coordinates: they have a range. There are instances for most of the base types like Int, Integer, Bool, Char, etc. For example, the range of (1, 5) is [1, 2, 3, 4, 5]. There's also an instance for products or tuples of things that themselves have Ix instances; the instance for tuples ranges over all combinations of the ranges of each component. For example, range (('a',1),('c',2)) is
[('a',1),('a',2),
('b',1),('b',2),
('c',1),('c',2)]
We are only interested in two functions from the Ix typeclass, range :: Ix a => (a, a) -> [a] and inRange :: Ix a => (a, a) -> a -> Bool. inRange quickly checks if a value would be in the result of range.
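For instance, both functions in action (a quick sketch; the binding names are just for illustration):

import Data.Ix (range, inRange)

-- range enumerates every coordinate between the bounds;
-- inRange tests membership without building that list.
rangeExample :: [Int]
rangeExample = range (1, 5)                             -- [1,2,3,4,5]

inRangeExamples :: (Bool, Bool)
inRangeExamples = (inRange (1, 5) 3, inRange (1, 5) 7)  -- (True, False)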
Reality
In reality, makeArray isn't provided by Data.Array, but we can define it in terms of listArray, which constructs an Array from a list of items in the same order as the range of its bounds:
import Data.Array
makeArray :: (Ix i) => (i, i) -> (i -> e) -> Array i e
makeArray bounds f = listArray bounds . map f . range $ bounds
When we convolve an array with a kernel, we will compute the neighborhood by adding the coordinates from the kernel to the coordinate we are calculating. The Ix typeclass doesn't require that we can combine two indexes together. There's one candidate typeclass for "things that combine" in base, Monoid, but there aren't instances for Int or Integer or other numbers because there's more than one sensible way to combine them: + and *. To address this, we'll make our own typeclass Offset for things that combine with a new operator called .+.. Usually we don't make typeclasses except for things that have laws. We'll just say that Offset should "work sensibly" with Ix.
class Offset a where
    (.+.) :: a -> a -> a
Integers, the default type Haskell uses when you write an integer literal like 9, can be used as offsets.
instance Offset Integer where
    (.+.) = (+)
Additionally, pairs or tuples of things that have Offset instances can be combined pairwise.
instance (Offset a, Offset b) => Offset (a, b) where
    (x1, y1) .+. (x2, y2) = (x1 .+. x2, y1 .+. y2)
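A quick sanity check of the pairwise instance (the binding name is just for illustration):

offsetExample :: (Integer, Integer)
offsetExample = (1, 2) .+. (0, -1)    -- (1, 1)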
We have one more wrinkle before we write convolve - how will we deal with the edges of the image? I intend to pad them with 0 for simplicity. pad background makes a version of ! that's defined everywhere: outside the bounds of the Array it returns the background.
pad :: Ix i => e -> Array i e -> i -> e
pad background array i =
    if inRange (bounds array) i
        then array ! i
        else background
We're now prepared to write a higher order function for convolve. convolve a b convolves the image b with the kernel a. convolve is higher order because each of its arguments and its result is an Array, which is really a combination of a function ! and its bounds.
convolve :: (Num n, Ix i, Offset i) => Array i n -> Array i n -> Array i n
convolve a b = makeArray (bounds b) f
  where
    f i = sum . map (g i) . range . bounds $ a
    g i o = a ! o * pad 0 b (i .+. o)
To convolve an image b with a kernel a, we make a new image defined over the same bounds as b. Each point in the image can be computed by the function f, which sums the product (*) of the value in the kernel a and the value in the padded image b for each offset o in the range of the bounds of the kernel a.
Example
With the six declarations from the previous section, we can write the example you requested, a spatial averaging filter with a 3x3 kernel applied to a 5x5 image. The kernel a defined below is a 3x3 image that uses one ninth of the value from each of the 9 sampled neighbors. The 5x5 image b is a gradient increasing from 2 in the top left corner to 10 in the bottom right corner.
main = do
    let
        a = makeArray ((-1, -1), (1, 1)) (const (1.0/9))
        b = makeArray ((1,1),(5,5)) (\(x,y) -> fromInteger (x + y))
        c = convolve a b
    print b
    print c
The printed input b is
array ((1,1),(5,5))
[((1,1),2.0),((1,2),3.0),((1,3),4.0),((1,4),5.0),((1,5),6.0)
,((2,1),3.0),((2,2),4.0),((2,3),5.0),((2,4),6.0),((2,5),7.0)
,((3,1),4.0),((3,2),5.0),((3,3),6.0),((3,4),7.0),((3,5),8.0)
,((4,1),5.0),((4,2),6.0),((4,3),7.0),((4,4),8.0),((4,5),9.0)
,((5,1),6.0),((5,2),7.0),((5,3),8.0),((5,4),9.0),((5,5),10.0)]
The convolved output c is
array ((1,1),(5,5))
[((1,1),1.3333333333333333),((1,2),2.333333333333333),((1,3),2.9999999999999996),((1,4),3.6666666666666665),((1,5),2.6666666666666665)
,((2,1),2.333333333333333),((2,2),3.9999999999999996),((2,3),5.0),((2,4),6.0),((2,5),4.333333333333333)
,((3,1),2.9999999999999996),((3,2),5.0),((3,3),6.0),((3,4),7.0),((3,5),5.0)
,((4,1),3.6666666666666665),((4,2),6.0),((4,3),7.0),((4,4),8.0),((4,5),5.666666666666666)
,((5,1),2.6666666666666665),((5,2),4.333333333333333),((5,3),5.0),((5,4),5.666666666666666),((5,5),4.0)]
Depending on the complexity of what you want to do, you might consider using more established libraries, like the oft recommended repa, rather than implementing an image processing kit for yourself.
There is a basic monad question in here, unrelated to Repa, plus several Repa-specific questions.
I am working on a library using Repa3. I am having trouble getting efficient parallel code. If I make my functions return delayed arrays, I get excruciatingly slow code that scales very well up to 8 cores. This code takes over 20GB of memory per the GHC profiler, and runs several orders of magnitude slower than the basic Haskell unboxed vectors.
Alternatively, if I make all of my functions return Unboxed manifest arrays (still attempting to use fusion within the functions, for example when I do a 'map'), I get MUCH faster code (still slower than using Haskell unboxed vectors) that doesn't scale at all, and in fact tends to get slightly slower with more cores.
Based on the FFT example code in Repa-Algorithms, it seems the correct approach is to always return manifest arrays. Is there ever a case where I should be returning delayed arrays?
The FFT code also makes plentiful use of the 'now' function. However, I get a type error when I try to use it in my code:
type Arr t r = Array t DIM1 r

data CycRingRepa m r = CRTBasis (Arr U r)
                     | PowBasis (Arr U r)

fromArray :: forall m r t . (BaseRing m r, Unbox r, Repr t r) => Arr t r -> CycRingRepa m r
fromArray =
    let mval = reflectNum (Proxy::Proxy m)
    in \x ->
        let sh:.n = extent x
        in assert (mval == 2*n) PowBasis $ now $ computeUnboxedP $ bitrev x
The code compiles fine without the 'now'. With the 'now', I get the following error:
Couldn't match type `r' with `Array U (Z :. Int) r'
  `r' is a rigid type variable bound by
    the type signature for
      fromArray :: (BaseRing m r, Unbox r, Repr t r) =>
                   Arr t r -> CycRingRepa m r
    at C:\Users\crockeea\Documents\Code\LatticeLib\CycRingRepa.hs:50:1
Expected type: CycRingRepa m r
  Actual type: CycRingRepa m (Array U DIM1 r)
I don't think this is my problem. It would be helpful if someone could explain how the Monad works in 'now'. By my best estimation, the monad seems to be creating an 'Arr U (Arr U r)'. I'm expecting an 'Arr U r', which would then match the data constructor pattern. What is going on and how do I fix this?
The type signatures are:
computeUnboxedP :: Fill r1 U sh e => Array r1 sh e -> Array U sh e
now :: (Shape sh, Repr r e, Monad m) => Array r sh e -> m (Array r sh e)
It would be helpful to have a better idea of when it is appropriate to use 'now'.
A couple other Repa questions:
Should I explicitly call computeUnboxedP (as in the FFT example code), or should I use the more general computeP (because the unbox part is inferred by my data type)?
Should I store delayed or manifest arrays in the data type CycRingRepa?
Eventually I would also like this code to work with Haskell Integers. Will this require me to write new code that uses something other than U arrays, or could I write polymorphic code that creates U arrays for unbox types and some other array for Integers/boxed types?
I realize there are a lot of questions in here, and I appreciate any/all answers!
Here's the source code for now:
now arr = do
    arr `deepSeqArray` return ()
    return arr
So it's really just a monadic version of deepSeqArray. You can use either of these to force evaluation, rather than hanging on to a thunk. This "evaluation" is different from the "computation" forced when computeP is called.
In your code, now doesn't apply, since you're not in a monad. But in this context deepSeqArray wouldn't help either. Consider this situation:
x :: Array U DIM1 Double
x = ...

y :: Array U DIM1 Double
y = computeUnboxedP $ map f x
Since y refers to x, we'd like to be sure x is computed before starting to compute y. If not, the available work won't be distributed correctly among the gang of threads. To get this to work out, it's better to write y as
y = deepSeqArray x . computeUnboxedP $ map f x
Now, for a delayed array, we have
deepSeqArray (ADelayed sh f) y = sh `deepSeq` f `seq` y
Rather than computing all the elements, this just makes sure the shape is computed, and reduces f to weak-head normal form.
As for manifest vs delayed arrays, there are certainly times when delayed arrays are preferable.
multiplyMM arr brr
  = [arr, brr] `deepSeqArrays`
    A.sumP (A.zipWith (*) arrRepl brrRepl)
  where
    trr     = computeUnboxedP $ transpose2D brr
    arrRepl = trr `deepSeqArray` A.extend (Z :. All :. colsB :. All) arr
    brrRepl = trr `deepSeqArray` A.extend (Z :. rowsA :. All :. All) trr
    (Z :. _ :. rowsA) = extent arr
    (Z :. colsB :. _ ) = extent brr
Here "extend" generates a new array by copying the values across some set of new dimensions. In particular, this means that
arrRepl ! (Z :. i :. j :. k) == arrRepl ! (Z :. i :. j' :. k)
Thankfully, extend produces a delayed array, since it would be a waste to go through the trouble of all this copying.
Delayed arrays also allow the possibility of fusion, which is impossible if the array is manifest.
Finally, computeUnboxedP is just computeP with a specialized type. Using computeUnboxedP explicitly might allow GHC to optimize better, and it makes the code a little clearer.
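As a sketch of that relationship (assuming the Repa 3.0-era Fill-based signatures quoted in the question; the primed name is just for illustration):

-- computeUnboxedP pins computeP's result representation to U (unboxed),
-- which can help GHC specialize and documents intent at the call site.
computeUnboxedP' :: Fill r1 U sh e => Array r1 sh e -> Array U sh e
computeUnboxedP' = computeP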
Repa 3.1 no longer requires the explicit use of now. The parallel computation functions are all monadic, and automatically apply deepSeqArray to their results. The repa-examples package also contains a new implementation of matrix multiply that demonstrates their use.
I hope this hasn't been asked before; if so, I apologize.
EDIT: For clarity, the following notation will be used: boldface uppercase for matrices, boldface lowercase for vectors, and italics for scalars.
Suppose x0 is a vector, A and B are matrix functions, and f is a vector function.
I'm looking for the best way to do the following iteration scheme in Mathematica:
A0 = A(x0), B0=B(x0), f0 = f(x0)
x1 = Inverse(A0)(B0.x0 + f0)
A1 = A(x1), B1=B(x1), f1 = f(x1)
x2 = Inverse(A1)(B1.x1 + f1)
...
I know that a for-loop can do the trick, but I'm not quite familiar with Mathematica, and I'm not convinced a for-loop is the most efficient way to do it. This is a justified concern, as I would like to define a function u(N) := xN and use it in further calculations.
I guess my questions are:
What's the most efficient way to program the scheme?
Is RecurrenceTable a way to go?
EDIT
It was a bit more complicated than I thought. I'm providing more details in order to obtain a more thorough response.
Before doing the recurrence, I'm having problems understanding how to program the functions A, B and f.
Matrices A and B are functions of the time step dt = 1/T and the space step dx = 1/M, where T and M are the number of points in the {0 < x < 1, 0 < t} region. This is also true for the vector function f.
The dependence of A, B and f on x is rather tricky:
A and B are upper and lower triangular matrices (like a tridiagonal matrix; I suppose we can call them multidiagonal), with defined constant values on their diagonals.
Given a point 0 < xs < 1, I need to determine its representative xn in the mesh (the closest), and then substitute the nth row of A and B with the function v(x) (transposed, of course), and the nth row of f with the function w(x).
Summarizing, A = A(dt, dx, xs, x). The same is true for B and f.
Then I need to do the loop mentioned above, to define u(x) = step[T].
Hope I've explained myself.
I'm not sure if it's the best method, but I'd just use plain old memoization. You can represent an individual step as
xstep[x_] := Inverse[A[x]].(B[x].x + f[x])
and then
u[0] = x0
u[n_] := u[n] = xstep[u[n-1]]
If you know how many values you need in advance, and it's advantageous to precompute them all for some reason (e.g. you want to open a file, use its contents to calculate xN, and then free the memory), you could use NestList. Instead of the previous two lines, you'd do
xlist = NestList[xstep, x0, 10];
u[n_] := xlist[[n]]
This will break if n > 10, of course (obviously, change 10 to suit your actual requirements).
Of course, it may be worth looking at your specific functions to see if you can make some algebraic simplifications.
I would probably write a function that accepts A0, B0, x0, and f0, and then returns A1, B1, x1, and f1 - say
step[A0_?MatrixQ, B0_?MatrixQ, x0_?VectorQ, f0_?VectorQ] := Module[...]
I would then Nest that function. It's hard to be more precise without more precise information.
Also, if your procedure is numerical, then you certainly don't want to compute Inverse[A0], as this is not a numerically stable operation. Rather, you should write
A0.x1 == B0.x0+f0
and then use a numerically stable solver to find x1. Of course, Mathematica's LinearSolve provides such an algorithm.
I am wondering if there is a way to generate a key based on the relationship between two entities in a way that the key for relationship a->b is the same as the key for relationship b->a.
Desirably this would be a hash function which takes either relationship member but generates the same output regardless of the order the members are presented in.
Obviously you could do this with numbers (e.g. add(2,3) is equivalent to add(3,2)). The problem for me is that I do not want add(1,4) to equal add(2,3). Obviously any hash function has collisions, but I mean uniqueness in a weak sense.
My naive (and performance undesirable) thought is:
function orderIndifferentHash(string val1, string val2)
{
    return stringMerge(hash(val1), hash(val2));
    /* String merge will 'add' each character (with wrapping).
       The pre-hash is to lengthen strings to at least 32 characters */
}
In your function orderIndifferentHash you could first order val1 and val2 by some criteria and then apply any hash function you like to get the result.
function orderIndifferentHash( val1, val2 ) {
    if( val1 < val2 ) {
        first  = val1
        second = val2
    }
    else {
        first  = val2
        second = val1
    }
    hashInput = concat( first, second )
    return someHash( hashInput )
    // or as an alternative:
    // return concat( someHash( first ), someHash( second ) )
}
With numbers, one way to achieve that is for two numbers x and y take the x-th prime and y-th prime and calculate the product of these primes. That way you will guarantee the uniqueness of the product for each distinct pair of x and y and independence from the argument order. Of course, in order to do that with any practically meaningful efficiency you'll need to keep a prime table for all possible values of x and y. If x and y are chosen from relatively small range, this will work. But if range is large, the table itself becomes prohibitively impractical, and you'll have no other choice but to accept some probability of collision (like keep a reasonably sized table of N primes and select the x%N-th prime for the given value of x).
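A minimal Haskell sketch of the primes idea (names are mine; the naive sieve is only practical for small x and y):

-- Unique factorization guarantees distinct unordered pairs get distinct
-- products, and commutativity of (*) gives order independence.
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

pairKey :: Int -> Int -> Integer
pairKey x y = (primes !! x) * (primes !! y)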
Alternative solution, already mentioned in the other answers is to build a perfect hash function that works on your x and y values and then simply concatenate the hashes for x and y. The order independence is achieved by pre-sorting x and y. Of course, building a perfect hash is only possible for a set of arguments from a reasonably small range.
At first I thought the primes-based approach would give you the shortest possible hash that satisfies the required conditions, but it turns out that's not true.
You are after:
Some function f(x, y) such that
f(x, y) == f(y, x)
f(x, y) == f(a, b) => (x == a and y == b) or (x == b and y == a)
There are going to be absolutely loads of these - off hand the one I can think of is "sorted concatenation":
Sort (x, y) by any ordering
Apply a hash function u(a) to x and y individually (where u(a) == u(b) implies a == b, and the length of u(a) is constant)
Concatenate u(x) and u(y).
In this case:
If x == y then the two hashes are trivially the same, so assume without loss of generality that x < y; hence:
f(y, x) = u(x) + u(y) = f(x, y)
Also, if f(x, y) == f(a, b), this means that either:
u(x) == u(a) and u(y) == u(b) => x == a and y == b, or
u(y) == u(a) and u(x) == u(b) => y == a and x == b
Short version:
Sort x and y, and then apply any hash function where the resulting hash length is constant.
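A Haskell sketch of this recipe, using hash from the hashable package as the stand-in fixed-width hash u (an assumption; any constant-length hash function works):

import Data.Hashable (hash)

-- Sort the pair, hash each side, and pair up the fixed-width results
-- (a pair of Ints plays the role of concatenation here).
orderIndifferent :: String -> String -> (Int, Int)
orderIndifferent a b = (hash lo, hash hi)
  where (lo, hi) = if a <= b then (a, b) else (b, a)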
Suppose you have any hash h(x,y). Then define f(x,y) = h(x,y) + h(y,x). Now you have a symmetric hash.
(If you do a trivial multiplicative "hash" then 1+3 and 2+2 might hash to the same value, but even something like h(x,y) = x*y*y will avoid that--just make sure there's some nonlinearity in at least one argument of the hash function.)
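As a toy Haskell illustration (h here is a deliberately simple stand-in, not a real hash):

-- h is nonlinear in its second argument, so summing both argument orders
-- is symmetric without collapsing pairs like (1,3) and (2,2).
h :: Int -> Int -> Int
h x y = x * y * y

f :: Int -> Int -> Int
f x y = h x y + h y x
-- f 1 3 == 12, f 2 2 == 16, and f x y == f y x for all x and y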
I need a simple function
is_square :: Int -> Bool
which determines whether an Int N is a perfect square (is there an integer x such that x*x = N).
Of course I can just write something like
is_square n = sq * sq == n
  where sq = floor $ sqrt $ (fromIntegral n :: Double)
but it looks terrible! Maybe there is a common simple way to implement such a predicate?
Think of it this way: if you have a positive int n, then you're basically doing a binary search over the numbers from 1 .. n to find the first number n' where n' * n' = n.
I don't know Haskell, but this F# should be easy to convert:
let is_perfect_square n =
    let rec binary_search low high =
        let mid = (high + low) / 2
        let midSquare = mid * mid
        if low > high then false
        elif n = midSquare then true
        elif n < midSquare then binary_search low (mid - 1)
        else binary_search (mid + 1) high
    binary_search 1 n
Guaranteed to be O(log n). Easy to modify for perfect cubes and higher powers.
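For reference, a direct Haskell translation of the F# above might look like this (a sketch; like the original, it reports False for zero and negative inputs, and mid * mid can overflow Int for very large n):

isPerfectSquare :: Int -> Bool
isPerfectSquare n = go 1 n
  where
    go low high
        | low > high     = False
        | midSquare == n = True
        | midSquare > n  = go low (mid - 1)
        | otherwise      = go (mid + 1) high
      where
        mid       = (low + high) `div` 2
        midSquare = mid * mid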
There is a wonderful library for most number theory related problems in Haskell included in the arithmoi package.
Use the Math.NumberTheory.Powers.Squares library.
Specifically the isSquare' function.
is_square :: Int -> Bool
is_square = isSquare' . fromIntegral
The library is optimized and well vetted by people much more dedicated to efficiency than you or I. While it currently doesn't have this kind of shenanigans going on under the hood, it could in the future as the library evolves and gets more optimized. View the source code to understand how it works!
Don't reinvent the wheel, always use a library when available.
I think the code you provided is the fastest that you are going to get:
is_square n = sq * sq == n
  where sq = floor $ sqrt $ (fromIntegral n :: Double)
The complexity of this code is: one sqrt, one double multiplication, one cast (dbl->int), and one comparison. You could try to use other computation methods to replace the sqrt and the multiplication with just integer arithmetic and shifts, but chances are it is not going to be faster than one sqrt and one multiplication.
The only place where it might be worth using another method is if the CPU on which you are running does not support floating point arithmetic. In this case the compiler will probably have to generate sqrt and double multiplication in software, and you could get advantage in optimizing for your specific application.
As pointed out by another answer, there is still a limitation for big integers, but unless you are going to run into those numbers, it is probably better to take advantage of the floating point hardware support than to write your own algorithm.
In a comment on another answer to this question, you discussed memoization. Keep in mind that this technique helps when your probe patterns exhibit good density. In this case, that would mean testing the same integers over and over. How likely is your code to repeat the same work and thus benefit from caching answers?
You didn't give us an idea of the distribution of your inputs, so consider a quick benchmark that uses the excellent criterion package:
module Main where

import Criterion.Main
import System.Random

is_square n = sq * sq == n
  where sq = floor $ sqrt $ (fromIntegral n :: Double)

is_square_mem =
  let check n = sq * sq == n
        where sq = floor $ sqrt $ (fromIntegral n :: Double)
  in (map check [0..] !!)

main = do
  g <- newStdGen
  let rs = take 10000 $ randomRs (0, 1000 :: Int) g
      direct = map is_square
      memo   = map is_square_mem
  defaultMain [ bench "direct" $ whnf direct rs
              , bench "memo"   $ whnf memo rs
              ]
This workload may or may not be a fair representative of what you're doing, but as written, the cache miss rate appears too high.
Wikipedia's article on Integer Square Roots has algorithms that can be adapted to suit your needs. Newton's method is nice because it converges quadratically, i.e., you get twice as many correct digits each step.
I would advise you to stay away from Double if the input might be bigger than 2^53, after which not all integers can be exactly represented as Double.
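As an illustration of that advice, here's a sketch of Newton's method on plain Integers, following the integer square root recurrence from the Wikipedia article (names are mine):

-- Iterate x' = (x + n `div` x) `div` 2 until it stops decreasing.
isqrt :: Integer -> Integer
isqrt n
    | n < 0     = error "isqrt: negative input"
    | n < 2     = n
    | otherwise = go (n `div` 2)
  where
    go x = let x' = (x + n `div` x) `div` 2
           in if x' >= x then x else go x'

isSquareInteger :: Integer -> Bool
isSquareInteger n = n >= 0 && r * r == n
  where r = isqrt n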
Oh, today I needed to determine if a number is a perfect cube, and a similar solution was VERY slow.
So, I came up with a pretty clever alternative
cubes = map (\x -> x*x*x) [1..]
is_cube n = n == (head $ dropWhile (<n) cubes)
Very simple. I think I need to use a tree for faster lookups, but for now I'll try this solution; maybe it will be fast enough for my task. If not, I'll edit the answer with a proper data structure.
Sometimes you shouldn't divide problems into parts that are too small (like checks such as is_square):
intersectSorted [] _ = []
intersectSorted _ [] = []
intersectSorted (x:xs) (y:ys)
    | x > y     = intersectSorted (x:xs) ys
    | y > x     = intersectSorted xs (y:ys)
    | otherwise = x : intersectSorted xs ys
squares = [x*x | x <- [ 1..]]
weird = [2*x+1 | x <- [ 1..]]
perfectSquareWeird = intersectSorted squares weird
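For example, a quick check of the first few elements:

take 5 perfectSquareWeird    -- [9,25,49,81,121]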
There's a very simple way to test for a perfect square - quite literally, you check if the square root of the number has anything other than zero in the fractional part of it.
I'm assuming a square root function that returns a floating point, in which case you can do (pseudocode):
func IsSquare(N)
    sq = sqrt(N)
    return (sq modulus 1.0) equals 0.0
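In Haskell the same idea might look like this (a sketch; it inherits the usual Double-precision caveats for large inputs, and NaN makes negative inputs come out False):

isSquareFrac :: Int -> Bool
isSquareFrac n = r == fromIntegral (truncate r :: Int)    -- fractional part is zero
  where r = sqrt (fromIntegral n :: Double)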
It's not particularly pretty or fast, but here's a cast-free, FPA-free version based on Newton's method that works (slowly) for arbitrarily large integers:
import Control.Applicative ((<*>))
import Control.Monad (join)
import Data.Ratio ((%))

isSquare = (==) =<< (^2) . floor . (join g <*> join f) . (%1)
  where
    f n x = (x + n / x) / 2
    g n x y | abs (x - y) > 1 = g n y $ f n y
            | otherwise       = y
It could probably be sped up with some additional number theory trickery.