The issue:
I want to be able to sort with a custom function that depends on a variable defined in an outer scope, for example:
k = 2
sort([1,2,3], lt=(x,y) -> x + k > y)
This works all dandy and such because k is defined in the global scope.
That's where my issue lies, as I want to do something akin to:
function foo()
    k = 2
    comp = (x, y) -> x + k > y
    sort([1, 3, 3], lt=comp)
end
This works, but feels like a hack, because my comparison function is way bigger, and it feels really off to have to define it there in the body of the function.
For instance, this wouldn't work:
comp = (x, y) -> x + k > y # or function comp(x, y) ... end
function foo()
    k = 2
    sort([1, 3, 3], lt=comp)
end
So I'm just wondering if there's any way to capture variables such as k like you'd be able to do with lambda functions in C++.
Is this what you want?
julia> comp(k) = (x,y) -> x + k > y
comp (generic function with 1 method)
julia> sort([1,3,2,3,2,2,2,3], lt=comp(2))
8-element Vector{Int64}:
3
2
2
2
3
2
3
1
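Note that comp(2) is itself a closure: the returned anonymous function captures the argument k, so the comparison can be defined anywhere (even at top level) and specialized only when it is passed to sort.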
I read that implications are functions. But I have a hard time trying to understand the example given on the above-mentioned page:
The proof term for an implication P → Q is a function that takes
evidence for P as input and produces evidence for Q as its output.
Lemma silly_implication : (1 + 1) = 2 -> 0 * 3 = 0.
Proof. intros H. reflexivity. Qed.
We can see that the proof term for the above lemma is indeed a
function:
Print silly_implication.
(* ===> silly_implication = fun _ : 1 + 1 = 2 => eq_refl
        : 1 + 1 = 2 -> 0 * 3 = 0 *)
Indeed, it's a function. But its type does not look right to me. From my reading, the proof term for P -> Q should be a function with evidence for Q as output. Then, the output of (1+1) = 2 -> 0*3 = 0 should be evidence for 0*3 = 0, alone, right?
But the Coq printout above shows that the function image is eq_refl : 1 + 1 = 2 -> 0 * 3 = 0, instead of eq_refl : 0 * 3 = 0. I don't understand why the hypothesis 1 + 1 = 2 should appear in the output. Can anyone help explain what is going on here?
Thanks.
Your understanding is correct until:
But the Coq print out above shows that the function image is ...
I think you misunderstand the Print command. Print shows you the term associated with a definition, along with the type of the definition. It does not show the image/output of a function.
For example, the following prints the definition and type of the value x:
Definition x := 5.
Print x.
> x = 5
> : nat
Similarly, the following prints the definition and type of the function f:
Definition f := fun n => n + 2.
Print f.
> f = fun n : nat => n + 2
> : nat -> nat
If you want to see the function's codomain, you have to apply the function to a value, like so:
Definition fx := f x.
Print fx.
> fx = f x
> : nat
If you want to see the image/output of a function, Print won't help you. What you need is Compute. Compute takes a term (e.g. a function application) and reduces it as far as possible:
Compute (f x).
> = 7
> : nat
Let's say you have to return the sum of all the multiples of 2 and 3 in a set of integers from 1-100. In Haskell, the code I would write would look something like this:
sum ([x*2 | x<-[1..100], x*2 < 100] `union` [x*3 | x<-[1..100], x*3 < 100])
This uses 2 list comprehensions with a union. Another solution would be to step through each item in the list and evaluate it (using a modulus), then add it to a separate list, which you would later add together.
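That second approach might look something like this in Haskell (a sketch with made-up names):

-- Walk the list once, keep the numbers divisible by 2 or 3 (via `mod`),
-- and sum the survivors.
sumMultiples :: Int -> Int
sumMultiples n = sum [ x | x <- [1..n], x `mod` 2 == 0 || x `mod` 3 == 0 ]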
Both of these solutions come out with the same answer, but which one is more optimized if you had to do the same for, say, a list from 1..1000000?
The answer to the original question is 3317 (given the strict x*2 < 100 and x*3 < 100 bounds in your code), if you want to check your own algorithm against it.
If you are looking for performance, you can simplify this problem to the point where you don't even need a computer....
Numbers divisible by 2 or 3 fall into a pattern
0 (1) 2 3 4 (5).... 6 (7) 8 9 10 (11).... etc
or
TFTTTF.... TFTTTF....
Assume that the max bound is divisible by 6 (if not, you can just choose the highest multiple of 6 below the real bound and add the remaining few values by hand). Let maxBound = 6*N.
For each block n (with n running from 0 to N-1), you add the following values (0 standing for the skipped positions):
6*n, 0, 6*n+2, 6*n+3, 6*n+4, 0
which sums to
24*n+9
so all you need to do is sum up
sum from n=0 to N-1 of (24*n + 9)
= 24*(sum from n=0 to N-1 of n) + 9*N
= 24*N*(N-1)/2 + 9*N
= 12*N^2 - 3*N
so a very fast Haskell program that would solve this problem would look something like this
f maxBound = 12*n^2 - 3*n + remainingStuff
  where
    n = maxBound `quot` 6
    remainingStuff = sum $ filter (<= maxBound) [6*n, 6*n+2, 6*n+3, 6*n+4]
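As a quick sanity check (assuming the f above is in scope), the closed form can be compared against a naive filter-and-sum over the same inclusive bound:

-- Both sides sum the multiples of 2 or 3 up to and including m.
checkF :: Int -> Bool
checkF m = f m == sum [ x | x <- [1..m], x `mod` 2 == 0 || x `mod` 3 == 0 ]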
The union function is a "quadratic" algorithm, so using one list comprehension will be faster.
A better way which is useful for generating these kinds of sequences is to take advantage of the fact that they are ordered and merge them together with a function like:
merge :: [Int] -> [Int] -> [Int]
merge as [] = as
merge [] bs = bs
merge as@(a:at) bs@(b:bt) =
case compare a b of
LT -> a : merge at bs
EQ -> a : merge at bt
GT -> b : merge as bt
and then generate your sequence with:
[ x | x <- merge [2,4..100] [3,6..100] ]
One last tip for writing combinatorial loops: replace expressions like x <- [1..100], 2*x < 100 with x <- [1..49], or if you can't compute the upper bound explicitly, use x <- takeWhile (\x -> 2*x < 100) [1..100]. The latter form only generates as many items as needed.
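For instance, these two generators (names are mine) both produce exactly the 49 items the comprehension actually uses:

-- Two equivalent ways to bound the generator from the tip above.
xsExplicit, xsComputed :: [Int]
xsExplicit = [1..49]                               -- bound worked out by hand
xsComputed = takeWhile (\x -> 2*x < 100) [1..100]  -- bound computed on the fly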
In chapter 1 on fixed points, the book says we can find fixed points of certain functions by applying the function repeatedly:
f(x), f(f(x)), f(f(f(x))), ...
What are those functions?
It doesn't work for y = 2y; when I rewrite it as y = y/2, it works.
Does y need to get smaller every time? Or are there any general attributes a function has to have for fixed points to be found by that method?
What conditions must it satisfy for this to work?
According to the Banach fixed-point theorem, this iteration is guaranteed to converge to a (unique) fixed point when the mapping (function) is a contraction. That is why, for example, iterating y = 2x diverges while y = 0.999*x converges. In general, if f maps [a,b] to [a,b], then |f(x) - f(y)| <= c * |x - y| must hold for some 0 <= c < 1 (for all x, y from [a, b]).
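For illustration, here is a minimal sketch of the iteration itself (names are mine, not from the book), stopping once two successive values are close enough:

-- Repeatedly apply f until successive iterates agree within a tolerance.
fixedPoint :: (Double -> Double) -> Double -> Double
fixedPoint f guess
  | abs (next - guess) < 1.0e-9 = next
  | otherwise                   = fixedPoint f next
  where
    next = f guess

With this, fixedPoint (/ 2) 1.0 converges to (essentially) 0, while fixedPoint (* 2) 1.0 never terminates, matching the contraction condition above.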
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin, only 0 is.
Different functions have different fixed points, if any. You can find more on fixed points of functions at Wikipedia.
I'm trying to obtain the real part of the result of an operation which involves an undefined variable (let's say x).
How can I have Mathematica return x when I execute Re[x] if I know that x will never be a complex number? I think this involves telling Mathematica that x is a real, but I don't know how.
In my case the expression for which I want the real part is more complicated than a simple variable, but the concept will remain the same.
Some examples:
INPUT          OUTPUT        DESIRED RESULT
-----          ------        --------------
Re[x]          Re[x]         x
Re[1]          1             1
Re[Sin[x]]     Re[Sin[x]]    Sin[x]
Re[1+x+I]      1 + Re[x]     1+x
Re[1 + x*I]    1 - Im[x]     1
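(For the last row: writing x = a + b*I with a and b real, Re[1 + x*I] = Re[(1 - b) + a*I] = 1 - b = 1 - Im[x], which collapses to 1 only when b = 0.)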
You can use, for example, Simplify[Re[x], x \[Element] Reals], which will give x as output.
Use ComplexExpand. It assumes that the variables are real unless you indicate otherwise. For example:
In[76]:= ComplexExpand[Re[x]]
Out[76]= x
In[77]:= ComplexExpand[Re[Sin[x]]]
Out[77]= Sin[x]
In[78]:= ComplexExpand[Re[1+x+I]]
Out[78]= 1+x
Two more possibilities:
Assuming[x \[Element] Reals, Refine[Re[x]]]
Refine[Re[x], x \[Element] Reals]
Both return x.
It can at times be useful to define UpValues for a symbol. This is far from robust, but it nevertheless can handle a number of cases.
Re[x] ^= x;
Im[x] ^= 0;
Re[x]
Re[1]
Re[1 + x + I]
Re[1 + x*I]
x
1
1 + x
1
Re[Sin[x]] does not evaluate as you desire, but one of the transformations used by FullSimplify does put it into a form that triggers the Re[x] rule:
Re[Sin[x]] // FullSimplify
Sin[x]
This has been bugging me for a while.
Let's say you have a function f x y, where x and y are integers, and you know that f is strictly non-decreasing in its arguments,
i.e. f (x+1) y >= f x y and f x (y+1) >= f x y.
What would be the fastest way to find the largest f x y satisfying a property, given that x and y are bounded?
I was thinking that this might be a variation of saddleback search and I was wondering if there was a name for this type of problem.
Also, more specifically I was wondering if there was a faster way to solve this problem if you knew that f was the multiplication operator.
Thanks!
Edit: Seeing the comments below, the property can be anything.
Given a property g (where g takes a value and returns a boolean), I am simply looking for the largest value of f such that g(f) == True.
For example, a naive implementation (in Haskell) would be:
import Data.List (sort)

maximise :: (Int -> Int -> Int) -> (Int -> Bool) -> Int -> Int -> Int
maximise f g xLim yLim = head . filter g . reverse . sort $ results
  where results = [ f x y | x <- [1..xLim], y <- [1..yLim] ]
Let's draw an example grid for your problem to help think about it. Here's an example plot of f for each x and y. It is monotone in each argument, which is an interesting constraint we might be able to do something clever with.
+------- x --------->
| 0 0 1 1 1 2
| 0 1 1 2 2 4
y 1 1 3 4 6 6
| 1 2 3 6 6 7
| 7 7 7 7 7 7
v
Since we don't know anything about the property, we can't really do better than to list the values in the range of f in decreasing order. The question is how to do that efficiently.
The first thing that comes to mind is to traverse it like a graph starting at the lower-right corner. Here is my attempt:
import Data.Maybe (listToMaybe)

maximise :: (Ord b, Num b) => (Int -> Int -> b) -> (b -> Bool) -> Int -> Int -> Maybe b
maximise f p xLim yLim =
    listToMaybe . filter p . map (negate . snd) $
        enumIncreasing measure successors (xLim, yLim)
  where
    measure (x, y) = negate $ f x y
    successors (x, y) = [ (x-1, y) | x > 0 ] ++ [ (x, y-1) | y > 0 ]
The signature is not as general as it could be (Num should not be necessary, but I needed it to negate the measure function because enumIncreasing returns an increasing rather than a decreasing list -- I could have also done it with a newtype wrapper).
Using this function, we can find the largest odd number which can be written as a product of two numbers <= 100:
ghci> maximise (*) odd 100 100
Just 9801
I wrote enumIncreasing using the meldable-heap package on Hackage to solve this problem, but it is pretty general. You could tweak the above to add additional constraints on the domain, etc.
The answer depends on what's expensive. The case that might be interesting is when f is expensive.
What you might want to do is look at pareto-optimality. Suppose you have two points
(1, 2) and (3, 4)
Then you know that the latter point is going to be a better solution, so long as f is a nondecreasing function. However, of course, if you have points,
(1, 2) and (2, 1)
then you can't know. So, one solution would be to establish a Pareto-optimal frontier of the points that the predicate g permits, and then evaluate these through f.
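A rough sketch of that frontier computation (a quadratic, purely illustrative construction):

-- Keep only the Pareto-optimal points: those not dominated in both
-- coordinates by some other point in the list.
paretoFrontier :: [(Int, Int)] -> [(Int, Int)]
paretoFrontier pts = [ p | p <- pts, not (any (`dominates` p) pts) ]
  where
    (a, b) `dominates` (c, d) = a >= c && b >= d && (a, b) /= (c, d)

Since f is nondecreasing in both arguments, the largest permitted value of f must occur at one of these frontier points, so only they need to be evaluated.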