How to check that only one element in a set has changed in a Dafny ensures clause?

How can I check that only one element in a set has changed in a Dafny ensures clause?
Example:
method myMethod(myParameter: int)
  requires myParameter >= 0
  modifies mySet
  ensures ONLY_ONE_ELEMENT_IN_THE_SET_HAS_CHANGED
{
  ...
}

If you mean that one element was removed and another was added, you need to provide them explicitly as (ghost) return variables and use the old keyword to reference the previous value of mySet:
method myMethod(myParameter: int) returns (ghost removed: int, ghost added: int)
  requires myParameter >= 0
  modifies mySet
  ensures old(mySet) - {removed} + {added} == mySet
{
  ...
}
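For illustration, here is a minimal self-contained sketch; the Container class, the ghost field, and the method body are hypothetical additions (not part of the original question), just enough to make the specification verifiable:
class Container {
  ghost var mySet: set<int>

  // Hypothetical body: removes 0 and adds myParameter, reporting both
  // through the ghost out-parameters so the postcondition can mention them.
  method myMethod(myParameter: int) returns (ghost removed: int, ghost added: int)
    requires myParameter >= 0
    modifies this
    ensures old(mySet) - {removed} + {added} == mySet
  {
    removed, added := 0, myParameter;
    mySet := mySet - {0} + {myParameter};
  }
}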

F# record: ref vs mutable field

While refactoring my F# code, I found a record with a field of type bool ref:
type MyType =
    {
        Enabled : bool ref
        // other, irrelevant fields here
    }
I decided to try changing it to a mutable field instead
// Refactored version
type MyType =
    {
        mutable Enabled : bool
        // other fields unchanged
    }
Also, I applied all the changes required to make the code compile (i.e. changing := to <-, removing incr and decr functions, etc.).
I noticed that after the changes some of the unit tests started to fail.
As the code is pretty large, I can't really see what exactly changed.
Is there a significant difference in implementation of the two that could change the behavior of my program?
Yes, there is a difference. Refs are first-class values, while mutable variables are a language construct.
Or, from a different perspective, you might say that ref cells are passed by reference, while mutable variables are passed by value.
Consider this:
type T = { mutable x : int }
type U = { y : int ref }
let t = { x = 5 }
let u = { y = ref 5 }
let mutable xx = t.x
xx <- 10
printfn "%d" t.x // Prints 5
let mutable yy = u.y
yy := 10
printfn "%d" !u.y // Prints 10
This happens because xx is a completely new mutable variable, unrelated to t.x, so that mutating xx has no effect on t.x.
But yy is a reference to the exact same ref cell as u.y, so that pushing a new value into that cell while referring to it via yy has the same effect as if referring to it via u.y.
If you "copy" a ref, the copy ends up pointing to the same ref, but if you copy a mutable variable, only its value gets copied.
The difference is not because one is a first-class value, or passed by reference versus by value, or anything like that. It's because a ref is just a container (a class) of its own.
The difference is more obvious when you implement a ref by yourself. You could do it like this:
type Reference<'a> = {
    mutable Value: 'a
}
Now look at both definitions.
type MyTypeA = {
    mutable Enabled: bool
}
type MyTypeB = {
    Enabled: Reference<bool>
}
MyTypeA has an Enabled field that can be changed directly; in other words, it is mutable.
On the other side you have MyTypeB, which is itself immutable but has an Enabled field that references a mutable class.
The Enabled of MyTypeB just references an object that is mutable, like millions of other classes in .NET. From the above type definitions, you can create objects like these:
let t = { MyTypeA.Enabled = true }
let u = { MyTypeB.Enabled = { Value = true }}
Creating the values makes it more obvious that the first one is a mutable field, while the second one contains an object with a mutable field.
You can find the implementation of ref in FSharp.Core/prim-types.fs; it looks like this:
[<DebuggerDisplay("{contents}")>]
[<StructuralEquality; StructuralComparison>]
[<CompiledName("FSharpRef`1")>]
type Ref<'T> =
    {
        [<DebuggerBrowsable(DebuggerBrowsableState.Never)>]
        mutable contents: 'T
    }
    member x.Value
        with get() = x.contents
        and set v = x.contents <- v
and 'T ref = Ref<'T>
The ref keyword in F# is just the built-in way to create such a pre-defined mutable reference object, instead of you creating your own type for it. It also has the benefit of working well whenever you need to pass byref, in, or out values in .NET, so in those cases you should use ref. But you can also use a mutable for this. For example, both of the following code examples do the same thing.
With a reference
let parsed =
    let result = ref 0
    match System.Int32.TryParse("1234", result) with
    | true -> result.Value
    | false -> result.Value
With a mutable
let parsed =
    let mutable result = 0
    match System.Int32.TryParse("1234", &result) with
    | true -> result
    | false -> result
In both examples you get 1234 parsed as an int. But the first example creates an FSharpRef and passes it to Int32.TryParse, while the second example creates a local variable and passes it as an out argument to Int32.TryParse.

Referencing / dereferencing a vector element in a for loop

In the code below, I want to retain number_list after iterating over it, since the .into_iter() that for uses by default would consume it. Thus, I am assuming that n: &i32 and that I can get the value of n by dereferencing.
fn main() {
    let number_list = vec![24, 34, 100, 65];
    let mut largest = number_list[0];
    for n in &number_list {
        if *n > largest {
            largest = *n;
        }
    }
    println!("{}", largest);
}
It was revealed to me that instead of this, we can use &n as a 'pattern':
fn main() {
    let number_list = vec![24, 34, 100, 65];
    let mut largest = number_list[0];
    for &n in &number_list {
        if n > largest {
            largest = n;
        }
    }
    println!("{}", largest);
    number_list;
}
My confusion (and bear in mind I haven't covered patterns) is that I would expect that since n: &i32, then &n: &&i32 rather than it resolving to the value (if a double ref is even possible). Why does this happen, and does the meaning of & differ depending on context?
It can help to think of a reference as a kind of container. For comparison, consider Option, where we can "unwrap" the value using pattern-matching, for example in an if let statement:
let n = 100;
let opt = Some(n);
if let Some(p) = opt {
    // do something with p
}
We call Some and None constructors for Option, because they each produce a value of type Option. In the same way, you can think of & as a constructor for a reference. And the syntax is symmetric:
let n = 100;
let reference = &n;
if let &p = reference {
    // do something with p
}
You can use this feature in any place where you are binding a value to a variable, which happens all over the place. For example:
if let, as above
match expressions:
match opt {
    Some(1) => { ... },
    Some(p) => { ... },
    None => { ... },
}
match reference {
    &1 => { ... },
    &p => { ... },
}
In function arguments:
fn foo(&p: &i32) { ... }
Loops:
for &p in iter_of_i32_refs {
    ...
}
And probably more.
Note that the last two won't work for Option, because the pattern would be refutable (a None could show up instead of a Some), but that can't happen with references because they only have one constructor, &.
does the meaning of & differ depending on context?
Hopefully, if you can interpret & as a constructor instead of an operator, then you'll see that its meaning doesn't change. It's a pretty cool feature of Rust that you can use constructors on the right hand side of an expression for creating values and on the left hand side for taking them apart (destructuring).
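As a tiny sketch of that symmetry (the variable names here are made up for illustration), & constructs a reference on the right-hand side and takes one apart in a pattern:
fn main() {
    let n = 5;
    let reference = &n; // `&` on the right-hand side constructs a reference
    let &m = reference; // `&` in the pattern destructures it again, so m: i32
    assert_eq!(n, m);
}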
Unlike in other languages (for example C++), &n in this case isn't taking a reference; it is a pattern, which means that it is expecting a reference.
The opposite of this would be ref n which would give you &&i32 as a type.
This is also the case for closures, e.g.
(0..).filter(|&idx| idx < 10)...
Please note that this copies the value out of the reference, i.e. you cannot do this with types that don't implement the Copy trait.
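To make the contrast concrete, here is a small sketch (reusing the question's number_list) of the two bindings side by side:
fn main() {
    let number_list = vec![24, 34, 100, 65];
    // `&n` destructures the `&i32` item, so `n: i32` (the value is copied out).
    for &n in &number_list {
        let _value: i32 = n;
    }
    // `ref n` adds another level of borrowing, so `n: &&i32`.
    for ref n in &number_list {
        let _double_ref: &&i32 = n;
    }
}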
My confusion (and bear in mind I haven't covered patterns) is that I would expect that since n: &i32, then &n: &&i32 rather than it resolving to the value (if a double ref is even possible). Why does this happen, and does the meaning of & differ depending on context?
When you do pattern matching (for example when you write for &n in &number_list), you're not saying that n is an &i32, instead you are saying that &n (the pattern) is an &i32 (the expression) from which the compiler infers that n is an i32.
Similar things happen for all kinds of patterns; for example, when pattern-matching in if let Some(x) = Some(42) { /* … */ } we are saying that Some(x) is Some(42), therefore x is 42.

How to make reversed for loop with array index as a start/end point in Kotlin?

Now I'm trying to make a reversed for loop. The simple way of a reversed for loop is
for(i in start downTo end)
but what if I use an array as a start/end point?
You can loop from the last index calculated by taking size - 1 to 0 like so:
for (i in array.size - 1 downTo 0) {
    println(array[i])
}
Even simpler, using the lastIndex extension property:
for (i in array.lastIndex downTo 0) {
    println(array[i])
}
Or you could take the indices range and reverse it:
for (i in array.indices.reversed()) {
    println(array[i])
}
In addition to the first answer from zsmb13, here are some other variants.
Using IntProgression.reversed:
for (i in (0..array.lastIndex).reversed())
    println("On index $i the value is ${array[i]}")
or using withIndex() together with reversed()
array.withIndex()
    .reversed()
    .forEach { println("On index ${it.index} the value is ${it.value}") }
or the same using a for loop:
for (elem in array.withIndex().reversed())
    println("On index ${elem.index} the value is ${elem.value}")
or if the index is not needed
for (value in array.reversed())
    println(value)
Just leaving it here in case someone needs it
I created an extension function:
public inline fun <T> Collection<T>.forEachIndexedReversed(action: (index: Int, T) -> Unit): Unit {
    var index = this.size - 1
    for (item in this.reversed()) action(index--, item)
}
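A quick usage sketch (the letters list is made up for illustration), assuming the extension above is in scope:
fun main() {
    val letters = listOf("a", "b", "c")
    // Prints the elements from last to first with their original indices:
    // 2 -> c
    // 1 -> b
    // 0 -> a
    letters.forEachIndexedReversed { index, value -> println("$index -> $value") }
}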

Where is BigDecimal "/" defined?

I thought '3.0'.to_d.div(2) was the same as '3.0'.to_d / 2, but the former returns 1 while the latter returns 1.5.
I searched for def / in BigDecimal's GitHub repository, but I couldn't find it.
https://github.com/ruby/bigdecimal/search?utf8=%E2%9C%93&q=def+%2F&type=Code
Where can I find the definition? And which method is equivalent to / in BigDecimal?
In Float there is an fdiv method. Is there a similar one in BigDecimal?
You can find it in the source code of the bigdecimal library, in the repository you linked to. On line 3403 of ext/bigdecimal/bigdecimal.c, BigDecimal#/ is bound to the function BigDecimal_div:
rb_define_method(rb_cBigDecimal, "/", BigDecimal_div, 1);
This function looks like this:
static VALUE
BigDecimal_div(VALUE self, VALUE r)
/* For c = self/r: with round operation */
{
    ENTER(5);
    Real *c=NULL, *res=NULL, *div = NULL;
    r = BigDecimal_divide(&c, &res, &div, self, r);
    if (!NIL_P(r)) return r; /* coerced by other */
    SAVE(c); SAVE(res); SAVE(div);
    /* a/b = c + r/b */
    /* c xxxxx
       r 00000yyyyy ==> (y/b)*BASE >= HALF_BASE
    */
    /* Round */
    if (VpHasVal(div)) { /* frac[0] must be zero for NaN,INF,Zero */
        VpInternalRound(c, 0, c->frac[c->Prec-1], (BDIGIT)(VpBaseVal() * (BDIGIT_DBL)res->frac[0] / div->frac[0]));
    }
    return ToValue(c);
}
This is because BigDecimal#div takes a second argument, precision, which defaults to 1.
irb(main):017:0> '3.0'.to_d.div(2, 2)
=> 0.15e1
However, when / is defined on BigDecimal,
rb_define_method(rb_cBigDecimal, "/", BigDecimal_div, 1);
They used 1 for the number of arguments, rather than -1, which means "variable number of arguments". So BigDecimal#div thinks it takes one required argument and one optional argument, whereas BigDecimal#/ takes one required argument and the optional argument is ignored. Because the optional argument is ignored, it's not initialized correctly; it ends up as an empty int or 0.
This may be considered a bug. You should consider opening an issue with the ruby devs.
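Putting the reported results side by side, a short irb-style sketch (the values shown are taken from the question and the answer above):
require 'bigdecimal'
require 'bigdecimal/util'

'3.0'.to_d / 2        #=> 1.5 (the question's result for /)
'3.0'.to_d.div(2)     #=> 1 (the question's result for div with no precision)
'3.0'.to_d.div(2, 2)  #=> 0.15e1, i.e. 1.5, once an explicit precision is passed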

How to check for a Not a Number (NaN) in Swift 2

The following method calculates the percentage using two variables.
func casePercentage() {
    let percentage = Int(Double(cases) / Double(calls) * 100)
    percentageLabel.stringValue = String(percentage) + "%"
}
The above method is functioning well except when cases = 1 and calls = 0.
This gives a fatal error: floating point value can not be converted to Int because it is either infinite or NaN
So I created this workaround:
func casePercentage() {
    if calls != 0 {
        let percentage = Int(Double(cases) / Double(calls) * 100)
        percentageLabel.stringValue = String(percentage) + "%"
    } else {
        percentageLabel.stringValue = "0%"
    }
}
This gives no errors, but in other languages you can check a variable with an .isNaN() method. How does this work in Swift 2?
You can "force unwrap" the optional type using the ! operator:
calls! //asserts that calls is NOT nil and gives a non-optional type
However, this will result in a runtime error if it is nil.
One option to prevent using nil or 0 is to do what you have done and check if it's 0.
The second option is to nil-check:
if calls != nil
The third (and most Swift-y) option is to use the if let structure:
if let nonNilCalls = calls {
    //...
}
The inside of the if block won't run if calls is nil.
Note that nil-checking and if let will NOT protect you from dividing by 0. You will have to check for that separately.
Combining the second option and your method:
//calls can neither be nil nor <= 0
if calls != nil && calls > 0
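To address the isNaN part of the question directly, here is a minimal sketch (same cases, calls and percentageLabel as in the question; guarding on isNaN/isInfinite is an alternative to the approach above, not part of it):
func casePercentage() {
    let ratio = Double(cases) / Double(calls) * 100
    // Double exposes isNaN and isInfinite, so the Int conversion can be guarded
    // instead of special-casing calls == 0.
    if ratio.isNaN || ratio.isInfinite {
        percentageLabel.stringValue = "0%"
    } else {
        percentageLabel.stringValue = String(Int(ratio)) + "%"
    }
}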
