Is {true} x := y { x = y } a valid Hoare triple?

I am not sure that
{ true } x := y { x = y }
is a valid Hoare triple.
I am not sure one is allowed to reference a variable (in this case, y) without explicitly defining it first, either in the triple's program body or in the precondition.
{ y=1 } x := y { x = y } //valid
{true} y := 1; x := y { x = y } //valid
Which is it?

The triple should be read as follows:
"Regardless of starting state, after executing x:=y x equals y."
and it does hold. The formal argument for why it holds is that
the weakest precondition of x := y given postcondition { x = y } is { y = y }, and
{ true } implies { y = y }.
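Spelled out with the assignment axiom (a sketch in LaTeX notation; wp is the weakest precondition):

\text{assignment axiom: } \{\,Q[x := E]\,\}\; x := E \;\{\,Q\,\}

wp(x := y,\; x = y) \;=\; (x = y)[x := y] \;=\; (y = y) \;\equiv\; \mathsf{true}

and the rule of consequence with true ⇒ true closes the proof.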
However, I completely understand why you feel uneasy about this triple, and you're worried for a good reason!
The triple is badly formulated because the pre- and postcondition do not provide a useful specification. Why? Because (as you may have noticed) x := 0; y := 0 also satisfies the spec, since x = y holds after execution.
Clearly, x := 0; y := 0 is not a very useful implementation, and the reason it still satisfies the specification is (in my view) a specification bug.
How to fix this:
The "correct" way of expressing the specification is to make sure the specification is self contained by using some meta variables that the program can't possible access (x₀ and y₀ in this case):
{ x=x₀ ∧ y=y₀ } x := y { x=y₀ ∧ y=y₀ }
Here x := 0; y := 0 no longer satisfies the postcondition.

{ true } x := y { x = y } is a valid Hoare triple. The reason is as follows:
x := y is an assignment, so substitute y for x in the postcondition.
This gives the precondition { y = y }, which is implied by { true }.
In other words, {true} => {y=y}.



How is a reference counter implemented at compile time?

Here is a made-up set of function calls (I tried to make it complicated, but perhaps it is easy).
function main(arg1, arg2) {
    do_foo(arg1, arg2)
}
function do_foo(a, b) {
    let x = a + b
    let y = x * a
    let z = x * b
    let p = y + z
    let q = x + z
    let r = do_bar(&p, &q)
    let s = do_bar(&q, &p)
}
function do_bar(&p, &q) {
    *p += 1
    *q += 3
    let r = *p * *q
    let s = *p + *q
    let v = do_baz(&r, &s)
    return &v
}
function do_baz(&a, &b) {
    return *a + *b
}
How do you generally go about figuring out the liveness of variables and where you can insert instructions for reference counting?
Here is my attempt...
Start at the top function, main. It starts with 2 arguments. Assume no copying occurs. It passes the actual mutable values to do_foo.
Then we have x. x owns a and b. Then we see y. y is computed from x, so link the previous x to this use. By r, we don't see x any more, so perhaps it can be freed... Looking at do_bar by itself, we know basically that p and q can't be garbage-collected within this scope.
Basically, I have no idea how to start implementing ARC (ideally compile-time reference counting, but runtime would be okay for now, to get started).
function main(arg1, arg2) {
    let x = do_foo(arg1, arg2)
    free(arg1)
    free(arg2)
    free(x)
}
function do_foo(a, b) {
    let x = a + b
    let y = x * a
    let z = x * b
    let p = y + z
    free(y)
    let q = x + z
    free(x)
    free(z)
    let r = do_bar(&p, &q)
    let s = do_bar(&q, &p)
    return r + s
}
function do_bar(&p, &q) {
    *p += 1
    *q += 3
    let r = *p * *q
    let s = *p + *q
    let v = do_baz(&r, &s)
    free(r)
    free(s)
    return &v
}
function do_baz(&a, &b) {
    return *a + *b
}
How do I start implementing such an algorithm? I have searched for papers on the topic but found no algorithms.
The following rules should do the job for your language.
When a variable is declared, increment its refcount
When a variable goes out of scope, decrement its refcount
When a reference-to-variable is assigned to a variable, adjust the reference counts for the variable(s):
increment the refcount for the variable whose reference is being assigned
decrement the refcount for the variable whose reference was previously in the variable being assigned to (if it was not null)
When a variable containing a non-null reference-to-variable goes out of scope, decrement the refcount for the variable it referred to.
Note:
If your language allows reference-to-variable types to be used in data structures, "static" variables, etcetera, the rules above need to be extended ... in the obvious fashion.
An optimizing compiler may be able to eliminate some refcount increments and decrements.
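To make the rules above concrete, here is a minimal runtime sketch in Go; the names Cell, retain and release are made up for this illustration and do not come from any library:

package main

import "fmt"

// Cell is a hypothetical refcounted allocation.
type Cell struct {
    refs  int
    value int
}

// retain implements "increment the refcount" when a reference is assigned.
func retain(c *Cell) {
    c.refs++
}

// release implements "decrement the refcount" when a reference goes out
// of scope; at zero, the cell is freed.
func release(c *Cell) {
    c.refs--
    if c.refs == 0 {
        fmt.Println("freeing cell holding", c.value) // stand-in for deallocation
    }
}

func main() {
    p := &Cell{refs: 1, value: 42} // declaration: refcount starts at 1
    q := p                         // a second reference is assigned...
    retain(q)                      // ...so increment
    release(q)                     // q goes out of scope
    release(p)                     // p goes out of scope; count hits 0
}

A compile-time scheme would emit exactly these retain/release calls at the points the rules identify, then try to optimize matching pairs away.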
Compile time reference counting:
There isn't really any such thing. Reference counting is done at runtime. It doesn't make sense to do it at compile time.
You are probably talking about analyzing the code to determine if runtime reference counting can be optimized or entirely eliminated.
I alluded to the former above. It is really a kind of peephole optimization.
The latter entails checking whether a reference-to-variable can ever escape; i.e. whether it could be used after the variable goes out of scope. (Try Googling for "escape analysis". This is kind of analogous to the "escape analysis" that a compiler could do to decide if an object could be allocated on the stack rather than in the heap.)
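As a small illustration of the latter, here is a Go sketch (assuming the standard Go toolchain; go build -gcflags=-m prints the compiler's escape decisions, e.g. "moved to heap: v"):

package main

// v never outlives the call, so it can live on the stack.
func stackOnly() int {
    v := 1
    return v
}

// &v is returned, so v outlives the call and is heap-allocated.
func escapes() *int {
    v := 1
    return &v
}

func main() {
    _ = stackOnly()
    _ = escapes()
}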

Solving linear equations

I have to find an integer solution of an equation ax+by=c such that x>=0, y>=0, and the value of (x+y) is minimum.
I know that if c % gcd(a,b) == 0 then it's always possible. How do I find the values of x and y?
My approach
for i in 0 to 2*c:
    x = i
    y = (c - a*i) / b
    if y is an integer:
        ans = min(ans, x + y)
Is there any better way to do this, with better time complexity?
Using the Extended Euclidean Algorithm and the theory of linear Diophantine equations there is no need to search. Here is a Python 3 implementation:
def egcd(a,b):
    s,t = 1,0  # coefficients to express current a in terms of original a,b
    x,y = 0,1  # coefficients to express current b in terms of original a,b
    q,r = divmod(a,b)
    while r > 0:
        a,b = b,r
        old_x, old_y = x,y
        x,y = s - q*x, t - q*y
        s,t = old_x, old_y
        q,r = divmod(a,b)
    return b, x, y
def smallestSolution(a,b,c):
    d,x,y = egcd(a,b)
    if c % d != 0:
        return "No integer solutions"
    else:
        u = a//d  # integer division
        v = b//d
        w = c//d
        x = w*x
        y = w*y
        k1 = -x//v if -x % v == 0 else 1 + -x//v  # k1 = ceiling(-x/v)
        x1 = x + k1*v  # x + k1*v is the solution with smallest x >= 0
        y1 = y - k1*u
        if y1 < 0:
            return "No nonnegative integer solutions"
        else:
            k2 = y//u  # floor division
            x2 = x + k2*v  # y - k2*u is the solution with smallest y >= 0
            y2 = y - k2*u
            if x2 < 0 or x1+y1 < x2+y2:
                return (x1,y1)
            else:
                return (x2,y2)
Typical run:
>>> smallestSolution(1001,2743,160485)
(111, 18)
The way it works: first use the extended Euclidean algorithm to find d = gcd(a,b) and one solution, (x,y). All other solutions are of the form (x+k*v, y-k*u) where u = a/d and v = b/d, and k is an arbitrary integer parameter. Since x+y is linear in k, it has no critical points, hence is minimized in the first quadrant either when x is as small as possible or when y is as small as possible. By appropriate use of floor and ceiling you can locate the integer points with x as small as possible and with y as small as possible. Just take the one with the smaller sum.
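The same argument in symbols (a sketch, with (x, y) the particular solution found above):

(x_k,\; y_k) \;=\; (x + k\,v,\; y - k\,u), \qquad k \in \mathbb{Z}

x_k + y_k \;=\; (x + y) + k\,(v - u)

Since the sum is linear in k, its minimum over the k for which x_k \ge 0 and y_k \ge 0 is attained at an endpoint of that range, which is exactly what k1 and k2 locate.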
On Edit: My original code used the Python function math.ceil applied to -x/v. This is problematic for very large integers. I tweaked it so that the ceiling is computed with just int operations. It can now handle arbitrarily large numbers:
>>> a = 236317407839490590865554550063
>>> b = 127372335361192567404918884983
>>> c = 475864993503739844164597027155993229496457605245403456517677648564321
>>> smallestSolution(a,b,c)
(2013668810262278187384582192404963131387, 120334243940259443613787580180)
>>> x,y = _
>>> a*x+b*y
475864993503739844164597027155993229496457605245403456517677648564321
Most of the computation takes place in running the extended Euclidean algorithm, which is known to take O(log min(a,b)) division steps.
First let's assume a,b,c > 0, so:
a.x+b.y = c
x+y = min(xi+yi)
x,y >= 0
a,b,c > 0
------------------------
x = ( c - b.y )/a
y = ( c - a.x )/b
c - a.x >= 0
c - b.y >= 0
c >= b.y
c >= a.x
x <= c/a
y <= c/b
So a naive O(n) solution in C++ looks like this:
void compute0(int &x,int &y,int a,int b,int c) // naive
{
    int xx,yy;
    xx=-1; yy=-1;
    for (y=0;;y++)
    {
        x = c - b*y;
        if (x<0) break;    // y out of range, stop
        if (x%a) continue; // non-integer solution
        x/=a;              // remember minimal solution
        if ((xx<0)||(x+y<=xx+yy)) { xx=x; yy=y; }
    }
    x=xx; y=yy;
}
If no solution is found, it returns -1,-1. If you think about the equation a bit, you should realize that the minimal solution occurs when x or y is minimal (which one depends on the condition a<b), so by adding such a heuristic we can increment only the minimal coordinate until the first solution is found. This speeds up the whole thing considerably:
void compute1(int &x,int &y,int a,int b,int c)
{
    if (a<=b) { for (x=0,y=c;y>=0;x++,y-=a) if (y%b==0) { y/=b; return; } }
    else      { for (y=0,x=c;x>=0;y++,x-=b) if (x%a==0) { x/=a; return; } }
    x=-1; y=-1;
}
I measured this on my setup:
x y ax+by x+y a=50 b=105 c=500000000
[ 55.910 ms] 10 4761900 500000000 4761910 naive
[ 0.000 ms] 10 4761900 500000000 4761910 opt
x y ax+by x+y a=105 b=50 c=500000000
[ 99.214 ms] 4761900 10 500000000 4761910 naive
[ 0.000 ms] 4761900 10 500000000 4761910 opt
The ~2.0x difference between the naive method's times is due to a/b ≈ 2.0 and the worse coordinate being chosen for iteration in the second run.
Now just handle the special cases where a, b, or c is zero (to avoid division by zero)...

Does golang have the equivalent of this Python short circuit idiom `z = x or y`

In Python, z = x or y can be understood as: assign y to z if x is falsey, otherwise x. Is there a similar idiom in Go?
Specifically, the two variables on the right are strings; I'd like to assign the first if it's non-empty, otherwise the second.
No, you have to use if/else:
s1, s2 := "", "hello"
var x string
if s1 == "" {
    x = s2
} else {
    x = s1
}
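If you want the if/else behind a name, a small helper works; firstNonEmpty is a made-up name for this sketch, not a standard library function. Note that unlike Python's or, both arguments are evaluated before the call, since Go has no lazy operands:

package main

import "fmt"

// firstNonEmpty returns a if it is non-empty, otherwise b.
func firstNonEmpty(a, b string) string {
    if a != "" {
        return a
    }
    return b
}

func main() {
    fmt.Println(firstNonEmpty("", "hello"))   // hello
    fmt.Println(firstNonEmpty("hi", "hello")) // hi
}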

Is there a more elegant Go implementation of Newton's method?

I'm doing the Go tutorials and am wondering whether there is a more elegant way to compute a square root using Newton's method on Exercise: Loops and Functions than this:
func Sqrt(x float64) float64 {
    count := 0
    var old_z, z float64 = 0, 1
    for ; math.Abs(z-old_z) > .001; count++ {
        old_z, z = z, z - (z*z - x) / 2*z
    }
    fmt.Printf("Ran %v iterations\n", count)
    return z
}
(Part of the specification is to provide the number of iterations.) Here is the full program, including package statement, imports, and main.
First, your algorithm is not correct. The formula is:
z_next = z - (z*z - x) / (2*z)
You modelled this with:
z - (z*z - x) / 2*z
But it should be:
z - (z*z - x)/2/z
Or
z - (z*z - x)/(2*z)
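For reference, that expression is just Newton's method applied to f(z) = z*z - x (a one-line derivation in LaTeX):

z_{n+1} \;=\; z_n - \frac{f(z_n)}{f'(z_n)} \;=\; z_n - \frac{z_n^2 - x}{2\,z_n}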
(Your incorrect formula had to run about half a million iterations just to get within 0.001! The correct formula takes about 4 iterations to get within 1e-6 in the case of x = 2.)
Next, the initial value of z=1 is not the best for an arbitrary number (it might work well for a small number like 2). You can kick off with z = x / 2, which is a very simple initial value and takes you closer to the result in fewer steps.
Further options which do not necessarily make it more readable or elegant, it's subjective:
You can name the result z so the return statement can be "bare". Also, you can create a loop variable to count the iterations if you move the current "exit" condition into the loop; when it is met, you print the iteration count and simply return. You can also move the calculation into the initialization part of the if:
func Sqrt(x float64) (z float64) {
    z = x / 2
    for i, old := 1, 0.0; ; i++ {
        if old, z = z, z-(z*z-x)/2/z; math.Abs(old-z) < 1e-5 {
            fmt.Printf("Ran %v iterations\n", i)
            return
        }
    }
}
You can also move z = x / 2 into the initialization part of the for, but then you can't have a named result (else a local variant of z would be created which would shadow the named return value):
func Sqrt(x float64) float64 {
    for i, z, old := 1, x/2, 0.0; ; i++ {
        if old, z = z, z-(z*z-x)/2/z; math.Abs(old-z) < 1e-5 {
            fmt.Printf("Ran %v iterations\n", i)
            return z
        }
    }
}
Note: I started my iteration counter with 1 because the "exit" condition in my case is inside the for and is not the condition of for.
package main

import (
    "fmt"
    "math"
)

func Sqrt(x float64) float64 {
    z := 1.0
    // First guess
    z -= (z*z - x) / (2 * z)
    // Iterate until change is very small
    for zNew, delta := z, z; delta > 0.00000001; z = zNew {
        zNew -= (zNew*zNew - x) / (2 * zNew)
        delta = z - zNew
    }
    return z
}

func main() {
    fmt.Println(Sqrt(2))
    fmt.Println(math.Sqrt(2))
}

Can I avoid "rightward drift" in Haskell?

When I use an imperative language I often write code like
foo (x) {
    if (x < 0) return True;
    y = getForX(x);
    if (y < 0) return True;
    return x < y;
}
That is, I check conditions off one by one, breaking out of the block as soon
as possible.
I like this because it keeps the code "flat" and obeys the principle of "end
weight". I consider it to be more readable.
But in Haskell I would have written that as
foo x = do
    if x < 0
        then return True
        else do
            y <- getForX x
            if y < 0
                then return True
                else return $ x < y
Which I don't like as much. I could use a monad that allows breaking out, but
since I'm already using a monad I'd have to lift everything, which adds words
I'd like to avoid if I can.
I suppose there's not really a perfect solution to this but does anyone have
any advice?
For your specific question: how about a dangling do block and the use of logic?
foo x = do
    if x < 0 then return True else do
        y <- getForX x
        return $ y < 0 || x < y
Edit
Combined with what hammar said, you can get even more beautiful code:
foo x | x < 0     = return True
      | otherwise = do y <- getForX x
                       return $ y < 0 || x < y
Using patterns and guards can help a lot:
foo x | x < 0 = return True
foo x = do
    y <- getForX x
    if y < 0
        then return True
        else return $ x < y
You can also introduce small helper functions in a where clause. That tends to help readability as well.
foo x | x < 0 = return True
foo x = do
    y <- getForX x
    return $ bar y
  where
    bar y | y < 0     = True
          | otherwise = x < y
(Or if the code really is as simple as this example, use logic as FUZxxl suggested).
The best way to do this is using guards, but then you need to have the y value first in order to use it in the guard. It has to be obtained from getForX, which might be tucked away in some monad that you cannot get the value out of except through getForX (for example the IO monad), and then you have to lift the pure function that uses guards into that monad. One way of doing this is by using liftM.
foo x = liftM go (getForX x)
  where
    go y | x < 0     = True
         | y < 0     = True
         | otherwise = x < y
Isn't it just
foo x = x < y || y < 0 where y = getForX x
EDIT: As Owen pointed out, getForX is monadic, so my code above would not work. The version below should:
foo x = do
    y <- getForX x
    return (x < y || y < 0)
