I am looking for an algorithm that will describe the transient behaviour of a fluid as it spreads across the surface of a height map. My starting conditions at t=0 are:
A 2D matrix of height values (H) of size [x, y]
A 2D matrix of fluid height values (F) of size [x, y]
A metric of the area of each point in the matrix (a), i.e. each location is 1 cm^2
A viscosity value for the fluid (u)
What I want is an algorithm that can calculate a new value for the fluid height matrix F at t'=t+1. At any point I could calculate the volume of fluid at a given point by v = a * (F(x,y) - H(x, y)). Desirable properties of this algorithm would be:
It does not need to consider the "slope" or "shape" of the top or bottom of the fluid column at each point, i.e. it can treat each value in the heightmap as describing a flat square of a certain height, and each value of the fluid height map as a rectangular column of fluid with a flat top.
If a "drain" (i.e. a very low point in the height map) is encountered, fluid from all parts of the map may be affected as it is pulled towards it.
A simple example of what I'm looking for would be this:
A 5x5 height map matrix where all values are 0
A 5x5 fluid height map matrix where all values are 0 except [2, 2], which is 10.
An area per point of 1 m^2
A viscosity of u
The algorithm would describe the "column" of fluid spreading out over the 5x5 matrix over several time steps. Eventually the algorithm would settle at a uniform height of 10/25 in all locations, but I'm really interested in what happens in between.
I have attempted to search for this kind of algorithm, but all I can find are equations that describe the behaviour of particles inside a fluid, which is too granular for my purposes. Does anyone know of any good sources I could reference for this problem, or an existing algorithm that might serve my needs?
O is your starting fluid-column
o are diffusing columns
************************
X X X X X
X X X X X
X X O X X
X X X X X
X X X X X
************************
--Get the Laplacian of the heights of each cell's neighbours and accumulate the
results in a separate matrix
--Then apply that second matrix to the first one to do a synchronous diffusion step
--Go back to the Laplacian step, again and again
************************
X X X X X
X X o X X
X o O o X
X X o X X
X X X X X
************************
************************
X X . X X
X . o . X
. o O o .
X . o . X
X X . X X
************************
************************
X X . X X
X o o o X
. o o o .
X o o o X
X X . X X
************************
************************
X X . X X
X o o o X
. o o o .
X o o o X
X X . X X
************************
************************
X . o . X
. o o o .
o o o o o
. o o o .
X . o . X
************************
************************
. . . . .
. o o o .
. o o o .
. o o o .
. . . . .
************************
************************
. . . . .
. . . . .
. . o . .
. . . . .
. . . . .
************************
************************
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
************************
Sorry for the very low height resolution.
Laplacian
Laplacian's place in diffusion
Diffusion's place in Navier-Stokes equations
Discrete Laplace Operator
Simple algorithm (in pseudocode):
get a cell's value into a
get the sum of the neighbour cells' values into b
put b/4.0 into c (the average of the 4 neighbour cells)
add a to c
build a new matrix with this rule for every cell
apply the new matrix onto the old one
go to step 1
Harder algorithm (in pseudocode):
apply the discrete Laplacian operator (the finite-differences version) to every cell, using its neighbours
put the result into a height map c
add or subtract c to/from the starting height map
go to step 1
Jos Stam's fluid-solver has a similar thing for the diffusion part.
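To make the diffusion step concrete, here is a minimal Haskell sketch of the explicit update described above. It is my own sketch, not the answerer's code: the diffusion rate k (which you would have to derive from the viscosity and the time step), the clamped borders, and the fact that it ignores the terrain height map are all simplifying assumptions.

type Grid = [[Double]]

-- One explicit diffusion step: each cell moves toward its four neighbours at a
-- rate k. Keeping k <= 0.25 keeps this explicit scheme stable, and the clamped
-- borders make the step conserve the total amount of fluid.
diffuseStep :: Double -> Grid -> Grid
diffuseStep k f = [ [ cell x y | x <- [0 .. w - 1] ] | y <- [0 .. h - 1] ]
  where
    h = length f
    w = if null f then 0 else length (head f)
    clamp lo hi v = max lo (min hi v)
    -- clamp indices so border cells use themselves as their "missing" neighbours
    at x y = f !! clamp 0 (h - 1) y !! clamp 0 (w - 1) x
    -- discrete Laplacian: sum of the four neighbours minus four times the cell
    laplacian x y = at (x - 1) y + at (x + 1) y + at x (y - 1) + at x (y + 1) - 4 * at x y
    cell x y = at x y + k * laplacian x y

Iterating it, e.g. iterate (diffuseStep 0.2) start !! n, mimics the spreading column in the pictures above and settles toward the uniform average.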
We are currently studying Lambda Calculus and have begun Beta Reduction.
Our lecturer is using some notation that was not properly explained to us.
Below is what we have been given.
𝛽-reduction:
(λ v . e₁) e₂
⤳𝛽
[e₂/v]e₁ where [e₂/v] is an operator that replaces each v with an e₂
Definition of operator [e/v]
[e/v]v = e
[e/v₂]v₁ = v₁ (when v₁≠v₂, i.e., different identifiers)
[e/v](e₁ e₂) = [e/v]e₁ [e/v]e₂
[e₂/v](λ v . e₁) = (λ v . e₁)
[e₂/v₂](λ v₁ . e₁) = (λ v₁ . [e₂/v₂]e₁) (when v₁∉FV(e₂))
[x/v] (λ x . f v)
⤳ λ x . f x (WRONG, x∈FV(x)={x})
I have tried to find resources online, but none seem to use the same notation as his with the '/' sign.
Any explanation of the above notation would be very much appreciated.
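The '/' notation is just substitution: [e/v]e₁ means "e₁ with e substituted for the free occurrences of v", and the side conditions are there to avoid capturing variables. As a rough illustration of the rules above (my own sketch with my own names, not the lecturer's code), in Haskell:

import Data.List ((\\))

-- A tiny term type: variables, applications and lambda abstractions.
data Term = Var String        -- v
          | App Term Term     -- e1 e2
          | Lam String Term   -- λ v . e
  deriving Show

-- FV(e): the free variables of a term
fv :: Term -> [String]
fv (Var v)   = [v]
fv (App a b) = fv a ++ fv b
fv (Lam v b) = fv b \\ [v]

-- subst e v e1 implements [e/v]e1, rule by rule
subst :: Term -> String -> Term -> Term
subst e v (Var v')
  | v' == v   = e                              -- [e/v]v  = e
  | otherwise = Var v'                         -- [e/v]v' = v'   (v' /= v)
subst e v (App a b) = App (subst e v a) (subst e v b)
subst e v (Lam v' b)
  | v' == v           = Lam v' b               -- bound v shadows the outer v
  | v' `notElem` fv e = Lam v' (subst e v b)   -- side condition v' not in FV(e)
  | otherwise         = error "would capture a variable: alpha-rename first"

The last guard is the case your example flags as WRONG: substituting under λ x when x is free in the term being substituted in would capture it, so the binder has to be renamed (α-converted) first.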
Printing the value of an expression is a common practice in debugging. For example, if I have a piece of code like this
my . super . cool . fUnCtIoN . chain $ value
and I am trying to see the output of fUnCtIoN . chain, I would add
my . super . cool . (\ x -> traceShow x x ) . fUnCtIoN . chain $ value
which is a mouthful for a simple task like this, not to mention if I want to print many intermediate results:
(\ x -> traceShow x x )
. my
. (\ x -> traceShow x x )
. super
. (\ x -> traceShow x x )
. cool
. (\ x -> traceShow x x )
. fUnCtIoN
. (\ x -> traceShow x x )
. chain
$ value
It would just look awful. Is there a better way to do this?
Just use traceShowId! It does exactly what you're asking for.
my . super . cool . traceShowId . fUnCtIoN . chain $ value
Yes. join traceShow.
λ> import Control.Monad
λ> :t join
join :: Monad m => m (m a) -> m a
λ> :t join (+)
join (+) :: Num a => a -> a
In the case of the function monad, join f x = f x x, so join traceShow is equivalent to \x -> traceShow x x.
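For example, with Control.Monad and Debug.Trace imported, the pipeline from the question becomes:

my . super . cool . join traceShow . fUnCtIoN . chain $ value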
Or make a where clause that provides a new definition of (.):
--...your code without the nasty bits...
  where
    (.) f g a = f (join traceShow (g a))
This may just help, though there will be one more traceShow call than before.
How about a helper function for adding a trace call to a function:
dbg :: Show a => String -> a -> a
dbg name x = trace (name ++ ": " ++ show x) x
main = do
    let x = dbg "my" . my
          . dbg "super" . super
          . dbg "cool" . cool
          . dbg "func" . fUnCtIoN
          . dbg "chain" . chain
          $ value
    print x
my = (+1)
super = (+2)
cool = (+3)
fUnCtIoN = (+4)
chain = (+5)
value = 3
Output:
chain: 8
func: 12
cool: 15
super: 17
my: 18
18
You could write a higher-order function which takes a function of two arguments and uses the same value for both arguments.
applyBoth :: (a -> a -> b) -> a -> b
applyBoth f x = f x x
(Aside: this is join for the "reader" monad (->) a.)
Then you can use that combinator in curried form:
applyBoth traceShow
. my
. applyBoth traceShow
. super
. applyBoth traceShow
. cool
. applyBoth traceShow
. fUnCtIoN
. applyBoth traceShow
. chain
$ value
Or define an alias for applyBoth traceShow.
traceS = applyBoth traceShow
traceS
. my
. traceS
. super
. traceS
. cool
. traceS
. fUnCtIoN
. traceS
. chain
$ value
For maximum terseness points, you can automatically interleave traceS into a list of functions by folding it up:
showSteps :: Show a => [a -> a] -> a -> a
showSteps = foldr (\f g -> f . traceS . g) id
showSteps [my, super, cool, fUnCtIoN, chain] value
Edit: Eh, what the hell... It's not entirely relevant, but here's how to make showSteps work when you want to pipeline your data through a number of types. It's an example of a program we wouldn't be able to write without GHC's advanced type system features (GADTs and RankNTypes in this instance).
Path is a GADT which explains how to walk through a directed graph of types, starting at the source type x and ending at the destination type y. It's parameterised by a category c :: * -> * -> *.
infixr 6 :->
data Path c x y where
  End   :: Path c z z
  (:->) :: c x y -> Path c y z -> Path c x z
:-> reminds us that a journey of a thousand miles begins with a single step: if the category you're working in lets you go from x to y, and you can take a path from y to z, you can go from x to z.
End is for when you have reached your destination - it's pretty easy to walk from z to z by not walking at all.
So Path has the same recursive structure as a linked list, but with a more flexible approach to the things inside it. Rather than requiring all of its elements to have the same type, it gives you a way to join up arrows like dominos, as long as the return type of one arrow matches the input type of the next. (To use the mathematical jargon: if you view the underlying category c as a logical relation, then End augments c with reflexivity and :-> augments c with transitivity. Path c thus constructs the reflexive transitive closure of c. Another way of looking at this is that Path is the free category, much like [] is the free monoid; you can define instance Category (Path c) without any constraint on c.)
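To make that last remark concrete, here is a minimal sketch of the Category instance (my own sketch, not part of the original answer; it repeats the Path GADT so the snippet stands alone):

{-# LANGUAGE GADTs #-}
import Prelude hiding (id, (.))
import Control.Category

infixr 6 :->
data Path c x y where
  End   :: Path c z z
  (:->) :: c x y -> Path c y z -> Path c x z

-- End is the identity path and composition concatenates paths;
-- note that no constraint on c is needed.
instance Category (Path c) where
  id = End
  g . End        = g
  g . (f :-> fs) = f :-> (g . fs)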
You can fold up a Path with exactly the same code as you use to fold up a list, but the type is more precise: the folding function can't know anything a priori about the types of the arrows inside the path.
foldr :: (forall x y. c x y -> r y z -> r x z) -> r z z -> Path c x z -> r x z
foldr f z End = z
foldr f z (x :-> xs) = f x $ foldr f z xs
At this point, I could define type-aligned sequences of functions (type TAS = Path (->)) and show you how f :-> g :-> h :-> End can be folded up into h . g . f, but since our goal is to print out all the intermediate values, we have to use a category with a tiny bit more structure than plain old ->. (Thanks to #dfeuer in the comments for the suggestion - I've adjusted the name he gave to better reflect the attention-seeking nature of my behaviour.)
data Showoff x y where
  Showoff :: Show y => (x -> y) -> Showoff x y
Showoff is just like a regular function, except it assures you that the return value y will be Showable. We can use this extra bit of knowledge to write showSteps for paths in which each step is a Showoff.
type ShowTAS = Path Showoff
showSteps :: ShowTAS a b -> a -> b
showSteps path = foldr combine id path . traceS
  where combine (Showoff f) g = g . traceS . f
It strikes me as a bit of a shame to use the impure traceS right in the midst of all this strongly typed fun. In real life I'd probably return a String along with the answer.
To prove that it does actually work, here is a chain of functions with varying types. We take in a String, read it into an Int, add one to it, convert it to a Float, then divide it by 2.
chain :: ShowTAS String Float
chain = Showoff read :-> plusOne :-> toFloat :-> divideTwo :-> End
  where plusOne :: Showoff Int Int
        plusOne = Showoff (+1)
        toFloat :: Showoff Int Float
        toFloat = Showoff fromIntegral
        divideTwo :: Showoff Float Float
        divideTwo = Showoff (/2)
ghci> showSteps chain "4"
"4"
4
5
5.0
2.5
2.5 -- this last one is not from a traceShow call, it's just ghci printing the result
Fun!
The "Surrounded Region" problem states:
"Given a 2D board containing 'X' and 'O', capture all regions surrounded by 'X'.
A region is captured by flipping all 'O's into 'X's in that surrounded region."
I'm confused as to what the task is for this problem. I'm not clear on what dictates when a region is 'surrounded', based on all the examples found online (which all happen to be the same example).
The example given:
input      output
X X X X    X X X X
X O O X    X X X X
X X O X    X X X X
X O X X    X O X X
Both groups of O's look surrounded by X's to me. Is the rule that all four sides need to be surrounded by X's? And since the bottom O doesn't have an X below it, it's not 'captured'?
What happens if this is the input? Is nothing captured at all?
X X X X
X O O O
X X O X
X O X X
According to the definition, an 'O' cell is surrounded if its up/down/left/right cells are all 'X'.
A first thought could be: for each 'O' cell, add it to an array and check its up/down/left/right neighbours; if a neighbour is another 'O' cell, continue from there until you either hit only 'X' cells or you hit the boundary. In the former case, all cells in the array can be flipped to 'X'; in the latter case, the cells in the array cannot be flipped.
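To illustrate, here is a minimal Haskell sketch of an equivalent formulation (my own sketch, not part of the answer): instead of flooding each region and checking whether it reaches the boundary, flood from the 'O's on the border; every 'O' not reached that way is surrounded and gets flipped.

import qualified Data.Set as Set

type Board = [[Char]]
type Pos   = (Int, Int)

capture :: Board -> Board
capture board =
    [ [ newCell (r, c) ch | (c, ch) <- zip [0 ..] row ] | (r, row) <- zip [0 ..] board ]
  where
    rows = length board
    cols = if null board then 0 else length (head board)
    -- flip an 'O' unless it was reached from the border
    newCell p 'O' | not (p `Set.member` safe) = 'X'
    newCell _ ch  = ch
    -- every 'O' reachable (up/down/left/right) from an 'O' on the border
    safe = foldl flood Set.empty borderOs
    borderOs = [ (r, c) | r <- [0 .. rows - 1], c <- [0 .. cols - 1]
                        , r == 0 || c == 0 || r == rows - 1 || c == cols - 1
                        , board !! r !! c == 'O' ]
    flood seen p
      | p `Set.member` seen = seen
      | otherwise = foldl flood (Set.insert p seen)
                      [ q | q <- neighbours p, inside q, board !! fst q !! snd q == 'O' ]
    neighbours (r, c) = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    inside (r, c) = r >= 0 && r < rows && c >= 0 && c < cols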
Yes, exactly.
This surrounding and capturing is, in fact, like the game of Go. The edges cannot be captured; that's it. If you put your pieces on the edges, they will be yours until the end of the game. Also, surrounding means that if O's are surrounded by X's, then the X's form a cycle around the O's. And whenever such a cycle is completed, all the O's inside it are flipped to X's, and vice versa.
Definition of a cycle:
A cycle of X's (or O's) is any connected path where you start from a cell and return to it without repeating (revisiting) a cell, where each step is a chess king's move.
So, in your input example, the path:
(1,0)->(0,1)->(0,2)->(1,3)->(2,3)->(3,2)->(2,1)->(1,0) forms a cycle.
This has been bugging me for a while.
Let's say you have a function f x y, where x and y are integers, and you know that f is strictly non-decreasing in its arguments,
i.e. f (x+1) y >= f x y and f x (y+1) >= f x y.
What would be the fastest way to find the largest f x y satisfying a property, given that x and y are bounded?
I was thinking that this might be a variation of saddleback search and I was wondering if there was a name for this type of problem.
Also, more specifically I was wondering if there was a faster way to solve this problem if you knew that f was the multiplication operator.
Thanks!
Edit: Seeing the comments below, the property can be anything.
Given a property g (where g takes a value and returns a boolean), I am simply looking for the largest value of f such that g(f) == True.
For example, a naive implementation (in haskell) would be:
import Data.List (sort)

maximise :: (Int -> Int -> Int) -> (Int -> Bool) -> Int -> Int -> Int
maximise f g xLim yLim = head . filter g . reverse . sort $ results
  where results = [f x y | x <- [1..xLim], y <- [1..yLim]]
Let's draw an example grid for your problem to help think about it. Here's an example plot of f for each x and y. It is monotone in each argument, which is an interesting constraint we might be able to do something clever with.
+------- x --------->
| 0 0 1 1 1 2
| 0 1 1 2 2 4
y 1 1 3 4 6 6
| 1 2 3 6 6 7
| 7 7 7 7 7 7
v
Since we don't know anything about the property, we can't really do better than to list the values in the range of f in decreasing order. The question is how to do that efficiently.
The first thing that comes to mind is to traverse it like a graph starting at the lower-right corner. Here is my attempt:
import Data.Maybe (listToMaybe)
maximise :: (Ord b, Num b) => (Int -> Int -> b) -> (b -> Bool) -> Int -> Int -> Maybe b
maximise f p xLim yLim =
    listToMaybe . filter p . map (negate . snd) $
        enumIncreasing measure successors (xLim, yLim)
  where
    measure (x, y) = negate $ f x y
    successors (x, y) = [ (x-1, y) | x > 0 ] ++ [ (x, y-1) | y > 0 ]
The signature is not as general as it could be (Num should not be necessary, but I needed it to negate the measure function because enumIncreasing returns an increasing rather than a decreasing list -- I could have also done it with a newtype wrapper).
Using this function, we can find the largest odd number which can be written as a product of two numbers <= 100:
ghci> maximise (*) odd 100 100
Just 9801
I wrote enumIncreasing using meldable-heap on hackage to solve this problem, but it is pretty general. You could tweak the above to add additional constraints on the domain, etc.
The answer depends on what's expensive. The case that might be interesting is when f is expensive.
What you might want to do is look at Pareto optimality. Suppose you have two points
(1, 2) and (3, 4)
Then you know that the latter point is going to be a better solution, so long as f is a nondecreasing function. However, of course, if you have points,
(1, 2) and (2, 1)
then you can't know. So, one solution would be to establish a Pareto-optimal frontier of points that the predicate g permits, and then evaluate these through f.
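For illustration only (my own sketch, nothing the answer commits to), a naive way to extract the Pareto-optimal frontier from a set of candidate points looks like this; since f is nondecreasing in both arguments, its maximum over the set is always attained at a frontier point, so only those need to be pushed through an expensive f:

-- (x2, y2) dominates (x1, y1) when it is at least as large in both coordinates
-- (and is a different point); then f x2 y2 >= f x1 y1 for nondecreasing f.
dominates :: (Int, Int) -> (Int, Int) -> Bool
(x2, y2) `dominates` (x1, y1) = x2 >= x1 && y2 >= y1 && (x2, y2) /= (x1, y1)

-- Naive O(n^2) frontier: keep the points that no other point dominates.
paretoFrontier :: [(Int, Int)] -> [(Int, Int)]
paretoFrontier ps = [ p | p <- ps, not (any (`dominates` p) ps) ]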
I am working in a grid-universe - objects only exist at integer locations in a 2 dimensional matrix.
Some terms:
Square - a discrete location. Each square has an int x and int y coordinate, and no two squares have the same x and y pair.
Adjacent: A square X is adjacent to another square Y if the magnitudes of the differences in their x and y coordinates are both no greater than 1 (and the squares are distinct). Put more simply, all squares immediately in the N, NE, E, SE, S, SW, W, and NW directions are adjacent.
Legend:
'?' - Unknown Traversability
'X' - Non Traversable Square
'O' - Building (Non Traversable)
' ' - Traversable Square
The problem:
Given the following generic situation:
? ? ? ? ? ? ? ?
? ? ? ? ? ? ? ?
? ? ? ? ? ? ? ?
? ? ? O O ? ? ?
? ? ? O O ? ? ?
? ? ? ? ? ? ? ?
? ? ? ? ? ? ? ?
? ? ? ? ? ? ? ?
where the builder is adjacent to one of the four buildings, I want to build two buildings such that they both share a common adjacent square that is also adjacent to at least one of the four existing buildings, and this common adjacent square is not blocked in.
Basic Valid solutions:
X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X
X X X O O X X X X X X O O X X X X X X O O X X X
X X X O O X X X X X X O O X X X X X O O O X X X
X X X O X X X O X X X X
O O X X X O X X X X X X X X
X X X X X X X X X X X
Currently, I iterate through all traversable squares adjacent to the four buildings and look for squares that have 3 adjacent traversable squares, but this sometimes produces situations such as:
X X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X O X X X X X O X
X X X O O X X X X X O O O X X X O O O X X
X X X O O X X X X X O O X X X X O O X X
X X X X X X X X X X X X X X
X X X O O X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X
Any thoughts on how I can refine my algorithm?
EDIT: Added another failing case.
EDIT: I'd also like to be able to know if there isn't a possible configuration in which these conditions can be met. I'm not guaranteed a viable solution, and would like to avoid trying if there is no way to do this successfully.
Checking to ensure your new buildings aren't orthogonally adjacent will eliminate cases such as your problem case 1, and checking to ensure not more than one of your new buildings is adjacent to any of the originals will clear up problem case 2.
This should work if you can safely assume you are no more constricted than in problem case 2. If there is only one square of exit, then the only solutions will need to violate the "not more than one" condition proposed above.
Your invalid cases are due to the free space being split into two parts, right? In that case, a crude method would be to flood-fill the free space after building placement and see if the connected space has the correct size (2 squares less than prior to building placement). That seems excessive, though. What you really want to know is whether the graph of the free-space squares is still connected; more specifically, whether all the free-space squares around the new buildings are still connected. Do they have to be locally connected, or can the path be arbitrarily long? I.e. is this valid:
X X X X X X X X
X X X
X X X X X X
X X X X X X
X O X X
X X O O O X X
X X O O X X
X X X X
X X X X X X
X X X X X X
If that is OK, this is a hard problem because that path could be very long.
The only solution I can think of is to do pathfinding from the common adjacent square out to the edge of the map. It looks to me like all the problem cases boil down to "the adjacent square is blocked in" so the way to ensure it isn't blocked in is to find a path from that square to an open edge of the map.
I don't know if that's the most efficient approach but it would be fairly simple to implement, since A* pathfinding routines are pretty widely implemented. And actually since you don't need the shortest path, just a path, you could simply do a flood-fill of free spaces starting from the adjacent square until you hit the edge of the map.
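As a rough sketch of that flood-fill idea (my own code; the map bounds and the traversable predicate are assumed inputs, and adjacency is 8-way to match the question's definition):

import qualified Data.Set as Set

type Pos = (Int, Int)

-- Flood-fill from the shared adjacent square through traversable squares and
-- report whether the edge of the map is reached, i.e. the square is not blocked in.
reachesEdge :: Int -> Int -> (Pos -> Bool) -> Pos -> Bool
reachesEdge width height traversable start = go Set.empty [start]
  where
    go _ [] = False
    go seen (p : rest)
      | p `Set.member` seen || not (inside p) || not (traversable p) = go seen rest
      | onEdge p  = True
      | otherwise = go (Set.insert p seen) (neighbours p ++ rest)
    inside (x, y) = x >= 0 && x < width && y >= 0 && y < height
    onEdge (x, y) = x == 0 || y == 0 || x == width - 1 || y == height - 1
    neighbours (x, y) = [ (x + dx, y + dy) | dx <- [-1, 0, 1], dy <- [-1, 0, 1]
                                           , (dx, dy) /= (0, 0) ]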