Set comprehension in VDM++

I've defined 2 types:
public string = seq1 of char;
public config = map string to bool;
I've also defined a test set: dcl subFeatures : set of string := {"test1", "test2", "test3"}.
And I'm trying to generate a set of valid configs by:
{ elem | elem : config & dom elem = subFeatures and {true} subset rng elem }
A config is called "valid" when it has at least one true range value.
Overture is raising an error: "Error 4: Cannot get bind values for type config". After some investigation I found that Overture by default cannot handle type binds over infinite types, but I don't think that is the case here, since I'm restricting the map domain.
Can anyone with more experience tell me what I am doing wrong?

Although what you've written is valid VDM++, the interpreter can only enumerate a type bind (i.e. "elem : config") if it is a finite type, as you say. However, the interpreter cannot determine that you have reduced the infinite type to a finite number of elements, so this fails at runtime.
For the interpreter to work, you would need to use a set bind over the subFeatures and create "elem |-> true" for each.
EDIT:
After some deliberation and help, I think we can conclude that this either isn't possible with a non-type-bind comprehension, or it would be horribly complicated. The functions below will implement what you require though, I think:
PossibleMappings: set of seq1 of char * map seq1 of char to bool -> set of map seq1 of char to bool
PossibleMappings(s, m) ==
    if s = dom m
    then {m}
    else let e in set s be st e not in set dom m in
         dunion {PossibleMappings(s, m munion {e |-> true}),
                 PossibleMappings(s, m munion {e |-> false})};

ValidMappings: set of seq1 of char -> set of map seq1 of char to bool
ValidMappings(s) ==
    { m | m in set PossibleMappings(s, {|->}) & true in set rng m };
For example:
> p ValidMappings({"a", "b"})
= {
{"a" |-> false, "b" |-> true},
{"a" |-> true, "b" |-> false},
{"a" |-> true, "b" |-> true}
}
Executed in 0.027 secs.

My suggestion is that you would write something like:
{ elem |-> true | elem in set {"test1", "test2", "test3"} }
This uses a set bind, which the interpreter can enumerate, to build a config directly.

F# record: ref vs mutable field

While refactoring my F# code, I found a record with a field of type bool ref:
type MyType =
    {
        Enabled : bool ref
        // other, irrelevant fields here
    }
I decided to try changing it to a mutable field instead
// Refactored version
type MyType =
    {
        mutable Enabled : bool
        // other fields unchanged
    }
I also applied all the changes required to make the code compile (i.e. changing := to <-, removing the incr and decr functions, etc.).
I noticed that after the changes some of the unit tests started to fail.
As the code is pretty large, I can't really see what exactly changed.
Is there a significant difference in implementation of the two that could change the behavior of my program?
Yes, there is a difference. Refs are first-class values, while mutable variables are a language construct.
Or, from a different perspective, you might say that ref cells are passed by reference, while mutable variables are passed by value.
Consider this:
type T = { mutable x : int }
type U = { y : int ref }
let t = { x = 5 }
let u = { y = ref 5 }
let mutable xx = t.x
xx <- 10
printfn "%d" t.x // Prints 5
let mutable yy = u.y
yy := 10
printfn "%d" !u.y // Prints 10
This happens because xx is a completely new mutable variable, unrelated to t.x, so mutating xx has no effect on t.x.
But yy is a reference to the exact same ref cell as u.y, so that pushing a new value into that cell while referring to it via yy has the same effect as if referring to it via u.y.
If you "copy" a ref, the copy ends up pointing to the same ref, but if you copy a mutable variable, only its value gets copied.
The difference doesn't really arise because one is a first-class value, or passed by reference rather than by value, or anything like that. It's because a ref is just a container (a class) of its own.
The difference is more obvious when you implement a ref by yourself. You could do it like this:
type Reference<'a> = {
    mutable Value: 'a
}
Now look at both definitions.
type MyTypeA = {
    mutable Enabled: bool
}
type MyTypeB = {
    Enabled: Reference<bool>
}
MyTypeA has an Enabled field that can be changed directly; in other words, it is mutable.
MyTypeB, on the other hand, is in theory immutable, but its Enabled field refers to a mutable object.
The Enabled of MyTypeB just references an object that is mutable, like millions of other classes in .NET. From the above type definitions, you can create objects like these:
let t = { MyTypeA.Enabled = true }
let u = { MyTypeB.Enabled = { Value = true }}
Creating the values makes it more obvious that the first has a mutable field, while the second contains an object with a mutable field.
You can find the implementation of ref in FSharp.Core/prim-types.fs; it looks like this:
[<DebuggerDisplay("{contents}")>]
[<StructuralEquality; StructuralComparison>]
[<CompiledName("FSharpRef`1")>]
type Ref<'T> =
    {
        [<DebuggerBrowsable(DebuggerBrowsableState.Never)>]
        mutable contents: 'T }

    member x.Value
        with get() = x.contents
        and set v = x.contents <- v

and 'T ref = Ref<'T>
The ref keyword in F# is just the built-in way to create such a pre-defined mutable reference object, so you don't have to define your own type for it. One benefit is that it works well whenever you need to pass byref, in, or out values in .NET, so you should use ref in those situations. But you can also use a mutable variable for this. For example, both of the following snippets do the same thing.
With a reference
let parsed =
    let result = ref 0
    match System.Int32.TryParse("1234", result) with
    | true -> result.Value
    | false -> result.Value
With a mutable
let parsed =
    let mutable result = 0
    match System.Int32.TryParse("1234", &result) with
    | true -> result
    | false -> result
In both examples you get 1234 parsed as an int. But the first example creates an FSharpRef and passes it to Int32.TryParse, while the second creates a local variable and passes it as an out parameter to Int32.TryParse.

In Dafny can I define a method that maps over sets?

Hi, I can define a function:
class Node {
    var next: set<Node>;
    var name: string;

    constructor(n: string) {
        next := {};
        name := n;
    }
}

function stringfunSet(nds: set<Node>): set<string>
    decreases nds
    reads nds
{
    if nds == {} then {}
    else (var x :| x in nds; {x.name} + stringfunSet(nds - {x}))
}
but this is not executable. I have read "Compiling Hilbert's epsilon operator", but I know that mapping from set to set is computable, so I just need to check that there is no way in Dafny to define a method that maps over sets: (set<A>, A->B) -> set<B>.
If this is the case, does this mean that sets are largely used for verification and not for implementation?
Here is a definition of mapping over a set using a comprehension.
function method SetMap<A, B>(f: A -> B, s: set<A>): set<B>
{
    set x | x in s :: f(x)
}
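A quick usage sketch (hypothetical Main, using ints for brevity):

method Main() {
    var squares := SetMap((x: int) => x * x, {1, 2, 3});
    print squares, "\n";  // the set {1, 4, 9}; element order may vary
}

Because SetMap is declared as a function method, it is compiled and can be called from executable code, not just from specifications, so finite sets are perfectly usable in implementations.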

What's the closest I can get to discriminating an enum by a char?

I've written this question out many times, and have finally realized that my biggest problem is that I don't know how I want to represent this data, and that's making it really hard to reason about the rest of the code.
The way the data is represented in Python:
class LSP():
    C_MASK_MAP = {
        "A": "Ch A",
        "B": "Ch B",
        "C": "Ch C",
        "D": "Ch D",
        "T": "Tmpr",
        "Y": "Batt",
        "L": "Acc"
    }
    ADC_CHANS = (
        "Ch A",
        "Ch B",
        "Ch C",
        "Ch D",
        "Tmpr",
        "Batt"
    )
    ADC_MAJORS = (
        "Ch A",
        "Ch B",
        "Ch C",
    )
My imaginary Rust code (I realize the names will need updating but are the same here for clarity):
enum C_MASK_MAP {
    Ch_A = 'A',
    Ch_B = 'B',
    Ch_C = 'C',
    Ch_D = 'D',
    Tmpr = 'T',
    Batt = 'Y',
    Acc = 'L'
}
//...
let ADC_CHANS = [
    C_MASK_MAP::Ch_A,
    C_MASK_MAP::Ch_B,
    C_MASK_MAP::Ch_C,
    C_MASK_MAP::Ch_D,
    C_MASK_MAP::Tmpr,
    C_MASK_MAP::Batt
];
let ADC_MAJORS = [
    C_MASK_MAP::Ch_A,
    C_MASK_MAP::Ch_B,
    C_MASK_MAP::Ch_C,
];
I've considered making C_MASK_MAP a HashMap<char, &'static str>, but then I ran into a huge mess: trying not to make a million copies of the strs everywhere, dealing with lifetimes while avoiding making Strings, and the syntactic mess that is a reference to a static str (&&'static str or something).
I think there'd be a real benefit to being able to use an enum (or similar), because the values wouldn't be as big and are more easily interchanged (C_MASK_MAP.get(key).expect("invalid key") vs. just casting).
Your strings are sentinel values; this is a common pattern in Python, but it is not how things should be done in Rust. This is exactly what enums are for: you're encoding the legal values in the type system.
You could end up with something like this:
#[derive(Clone, Copy)]
#[repr(u8)]
pub enum Mask {
    ChA = b'A',
    ChB = b'B',
    ChC = b'C',
    ChD = b'D',
    Tmpr = b'T',
    Batt = b'Y',
    Acc = b'L',
}

// e.g. Mask::ChA.into() == 'A'
impl Into<char> for Mask {
    fn into(self) -> char {
        self as u8 as char
    }
}

impl Mask {
    // e.g. Mask::from('A') == Ok(Mask::ChA)
    pub fn from(c: char) -> Result<Mask, ()> {
        match c {
            'A' => Ok(Mask::ChA),
            'B' => Ok(Mask::ChB),
            'C' => Ok(Mask::ChC),
            'D' => Ok(Mask::ChD),
            'T' => Ok(Mask::Tmpr),
            'Y' => Ok(Mask::Batt),
            'L' => Ok(Mask::Acc),
            _ => Err(()),
        }
    }

    // e.g. Mask::ChA.is_chan() == true
    pub fn is_chan(&self) -> bool {
        match *self {
            Mask::ChA | Mask::ChB | Mask::ChC | Mask::ChD | Mask::Tmpr | Mask::Batt => true,
            Mask::Acc => false,
        }
    }

    // e.g. Mask::ChD.is_major() == false
    pub fn is_major(&self) -> bool {
        match *self {
            Mask::ChA | Mask::ChB | Mask::ChC => true,
            Mask::ChD | Mask::Tmpr | Mask::Batt | Mask::Acc => false,
        }
    }
}
If you wanted you could implement std::str::FromStr for Mask as well, which would allow "A".parse() == Ok(Mask::ChA):
impl FromStr for Mask {
    type Err = ();

    fn from_str(s: &str) -> Result<Mask, ()> {
        match s {
            "A" => Ok(Mask::ChA),
            "B" => Ok(Mask::ChB),
            "C" => Ok(Mask::ChC),
            "D" => Ok(Mask::ChD),
            "T" => Ok(Mask::Tmpr),
            "Y" => Ok(Mask::Batt),
            "L" => Ok(Mask::Acc),
            _ => Err(()),
        }
    }
}
I suspect that is_chan et al. may be more suitable than ADC_CHANS et al., but if you do actually need the arrays, they work fine (you could use [Mask; 6] too, but if you need to add new elements later, that would change the type, which is an API compatibility break if it's public):
pub static ADC_CHANS: &'static [Mask] = &[
    Mask::ChA,
    Mask::ChB,
    Mask::ChC,
    Mask::ChD,
    Mask::Tmpr,
    Mask::Batt,
];

pub static ADC_MAJORS: &'static [Mask] = &[
    Mask::ChA,
    Mask::ChB,
    Mask::ChC,
];
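A quick sketch of how this all fits together (a hypothetical main, just to show the calls):

fn main() {
    let mask = Mask::from('T').expect("invalid key");
    let c: char = mask.into();
    println!("{} chan={} major={}", c, mask.is_chan(), mask.is_major());
    // prints: T chan=true major=false
}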
Copying a &'static str (i.e. copying the reference only) has no cost. A deep copy of the string would be a clone and would be typed as a String.
If &'static str is too verbose for you, you can always define a type alias.
type Str = &'static str;
HashMap<char, &'static str> corresponds nicely to your original map. However, if you don't need the full range of char for the key and you don't actually need to have the value typed as a char anywhere besides indexing the map, you should use an enum instead, as that will restrict the legal values that can be used as keys.
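If you do go the map route, here is a minimal sketch of building it (c_mask_map is a hypothetical helper; Rust has no map literals, so the map is assembled at runtime):

use std::collections::HashMap;

fn c_mask_map() -> HashMap<char, &'static str> {
    [('A', "Ch A"), ('B', "Ch B"), ('C', "Ch C"), ('D', "Ch D"),
     ('T', "Tmpr"), ('Y', "Batt"), ('L', "Acc")]
        .iter()
        .cloned()
        .collect()
}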

sorted function in Swift 2

I'm sorting an Array like this:
var users = ["John", "Matt", "Mary", "Dani", "Steve"]
func back(s1: String, s2: String) -> Bool {
    return s1 > s2
}

sorted(users, back)
But I'm getting this error
'sorted' is unavailable: call the 'sort()' method on the collection
What should be the correct way to use the sort() method here?
Follow what the error message is telling you, and call sort on the collection:
users.sort(back)
Note that in Swift 2, sorted is now sort and the old sort is now sortInPlace, and both are to be called on the array itself (they were previously global functions).
Be careful, this has changed again in Swift 3, where sort is the mutating method, and sorted is the one returning a new array.
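A quick sketch of the Swift 3 spellings (note the by: argument label):

var users = ["John", "Matt", "Mary", "Dani", "Steve"]
let descending = users.sorted(by: >)  // returns a new array
users.sort(by: >)                     // sorts the array in place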
Another way to use a closure:
var numbers = [2, 4, 34, 6, 33, 1, 67, 20]
var numbersSorted = numbers.sort({ (first, second) -> Bool in
    return first < second
})
Another, simpler way to write the closure:
users.sort({a, b in a > b})
In Swift 2.2 there are multiple ways to use closures with the sort function, as follows.
Consider the array
var names: [String] = ["aaa", "ddd", "rrr", "bbb"]
The different options for sorting the array with Swift closures are shown below.
Option 1
// In line with default closure format.
names = names.sort( { (s1: String, s2: String) -> Bool in return s1 < s2 })
print(names)
Option 2
// Omitted args types
names = names.sort( { s1, s2 in return s1 > s2 } )
print(names)
Option 3
// Omitted args types and return keyword as well
names = names.sort( { s1, s2 in s1 < s2 } )
print(names)
Option 4
// Shorthand argument names (with the $ symbol);
// the argument list is omitted completely.
names = names.sort( { $0 < $1 } )
print(names)
Option 5
This is the simplest way to use a closure in the sort function.
// With Operator Functions
names = names.sort(>)
print(names)
var array = [1, 5, 3, 2, 4]
Swift 2.3
let sortedArray = array.sort()
Swift 3.0
let sortedArray = array.sorted()

Extending Seq.sortBy in Scala

Say I have a list of names.
case class Name(val first: String, val last: String)
val names = Name("c", "B") :: Name("b", "a") :: Name("a", "B") :: Nil
If I now want to sort that list by last name (and if that is not enough, by first name), it is easily done.
names.sortBy(n => (n.last, n.first))
// List[Name] = List(Name(a,B), Name(c,B), Name(b,a))
But what if I'd like to sort this list based on some other collation for strings?
Unfortunately, the following does not work:
val o = new Ordering[String]{ def compare(x: String, y: String) = collator.compare(x, y) }
names.sortBy(n => (n.last, n.first))(o)
// error: type mismatch;
// found : java.lang.Object with Ordering[String]
// required: Ordering[(String, String)]
// names.sortBy(n => (n.last, n.first))(o)
Is there any way that allows me to change the ordering without having to write an explicit sortWith method with multiple if-else branches to deal with all the cases?
Well, this almost does the trick:
names.sorted(o.on((n: Name) => n.last + n.first))
On the other hand, you can do this as well:
implicit val o = new Ordering[String]{ def compare(x: String, y: String) = collator.compare(x, y) }
names.sortBy(n => (n.last, n.first))
This locally defined implicit will take precedence over the one defined on the Ordering object.
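For instance, assuming collator is a java.text.Collator (the locale here is an arbitrary choice):

import java.text.Collator
import java.util.Locale

val collator = Collator.getInstance(Locale.FRENCH)

implicit val o: Ordering[String] = new Ordering[String] {
  def compare(x: String, y: String) = collator.compare(x, y)
}

names.sortBy(n => (n.last, n.first))  // the tuple ordering is now built from o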
One solution is to extend the otherwise implicitly used Tuple2 ordering. Unfortunately, this means writing out Tuple2 in the code.
names.sortBy(n => (n.last, n.first))(Ordering.Tuple2(o, o))
I'm not 100% sure what methods you think collator should have.
But you have the most flexibility if you define the ordering on the case class:
val o = new Ordering[Name] {
  def compare(a: Name, b: Name) =
    3 * math.signum(collator.compare(a.last, b.last)) +
      math.signum(collator.compare(a.first, b.first))
}
names.sorted(o)
but you can also provide an implicit conversion from a string ordering to a name ordering:
def ostring2oname(os: Ordering[String]) = new Ordering[Name] {
  def compare(a: Name, b: Name) =
    3 * math.signum(os.compare(a.last, b.last)) +
      math.signum(os.compare(a.first, b.first))
}
and then you can use any String ordering to sort Names:
def oo = new Ordering[String] {
  def compare(x: String, y: String) = x.length compare y.length
}
val morenames = List("rat","fish","octopus")
scala> morenames.sorted(oo)
res1: List[java.lang.String] = List(rat, fish, octopus)
Edit: A handy trick, in case it wasn't apparent: if you want to order by N things and you're already using compare, you can just multiply each comparison by 3^k (with the first-to-order multiplied by the largest power of 3) and add. Since each signum is -1, 0, or 1, the lower-priority terms sum to at most (3^k - 1)/2 in absolute value, so they can never overturn a nonzero higher-priority comparison.
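A small sketch of the trick with three hypothetical Int keys (weights 9, 3, 1):

def compare3(a: (Int, Int, Int), b: (Int, Int, Int)): Int =
  9 * math.signum(a._1 compare b._1) +
  3 * math.signum(a._2 compare b._2) +
      math.signum(a._3 compare b._3)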
If your comparisons are very time-consuming, you can easily add a cascading compare:
class CascadeCompare(i: Int) {
  def tiebreak(j: => Int) = if (i != 0) i else j
}

implicit def break_ties(i: Int) = new CascadeCompare(i)
and then
def ostring2oname(os: Ordering[String]) = new Ordering[Name] {
  def compare(a: Name, b: Name) =
    os.compare(a.last, b.last) tiebreak os.compare(a.first, b.first)
}
(Just be careful to nest them as x tiebreak (y tiebreak (z tiebreak w)), so you don't do the implicit conversion a bunch of times in a row.)
(If you really need fast compares, then you should write it all out by hand, or pack the orderings in an array and use a while loop. I'll assume you're not that desperate for performance.)
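A sketch of that array-and-while-loop idea (compareBy is a hypothetical helper, not from the answer above):

def compareBy[T](orderings: Array[Ordering[T]])(a: T, b: T): Int = {
  var i = 0
  while (i < orderings.length) {
    val c = orderings(i).compare(a, b)   // try each ordering in priority order
    if (c != 0) return c                 // first nonzero comparison decides
    i += 1
  }
  0
}

// usage sketch:
// val byName = Array[Ordering[Name]](Ordering.by(_.last), Ordering.by(_.first))
// names.sortWith((a, b) => compareBy(byName)(a, b) < 0)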
