How to represent sequential operation schemas [Z-notation] - formal-languages

I have an operation schema C which consists of two sequential operation schemas A and B. A must be performed before B. I'm stuck on how to represent the sequence of schema activation.
Can I use schema conjunction, i.e. C == A ∧ B ?
or is there a way to 'call' schema B from A?
I'm new to Z-notation, any help would be greatly appreciated!

A schema is just a way of wrapping up a chunk of mathematics.
There is a fairly standard way of interpreting that mathematics as describing an ADT: one schema represents the state variables and the constraints among them, one schema represents the initialisation, and there is one operation schema for each operation in the ADT's interface.
You are probably looking for forward schema composition, C == A ⨟ B.
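For intuition, here is a sketch of the standard expansion of forward composition (assuming your state schema is named State, with State' its primed after-state):

A ⨟ B == ∃ State'' • A[State''/State'] ∧ B[State''/State]

The after-state of A is identified with the intermediate state State'', which is also B's before-state, and that state is then hidden; so C == A ⨟ B relates the state before A to the state after B. Plain conjunction C == A ∧ B would instead constrain a single state transition by both predicates at once, not run them in sequence.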

For an example of a big Z specification, I suggest this recently uploaded project: https://github.com/vinahradau/finma
The following schema conjunction works in CZT. Here, C is not called from B but rather applied after B.
A == B ∧ C


How do you read a set of atomic propositions?

I am given the above system for atomic propositions {a,b,c}.
I'm then meant to say if certain LTL formulae hold (such as ♢☐c).
I understand what the LTL formulae mean (eventually forever c holds) but I have no idea how to read the diagram and relate it to the LTL.
I assume it's like a flow chart, in that we start from the top left at /{a} and can move through the different states. But what does each label mean, with one part divided from the other by a slash?
Looks like an FSM/transducer rather than a Kripke structure. Input/output, or more generally precondition/postcondition, is a common notation for FSMs and their kin: (a and b and ...) / (x and y and ...). So a holds in state q1; in the next states, either only b holds (in q4) or b and c hold (in q3). It might of course be or instead of and in the precondition, otherwise the system might halt.
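To make ♢☐c concrete: on an ultimately periodic trace (a finite prefix followed by a cycle that repeats forever), "eventually always c" holds exactly when c is true in every state of the cycle. A minimal Haskell sketch; the state labels below are assumptions for illustration, not read off your diagram:

data AP = A | B | C deriving (Eq, Show)

type State = [AP]  -- the atomic propositions true in a state

-- On a lasso-shaped trace, FG p holds iff p holds in every cycle state.
eventuallyAlways :: AP -> [State] -> [State] -> Bool
eventuallyAlways p _prefix cyc = all (p `elem`) cyc

main :: IO ()
main = print (eventuallyAlways C [[A], [B]] [[B, C], [C]])  -- True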

Untyped set operations in Isabelle

I have the following code in Isabelle:
typedecl type1
typedecl type2
consts
A::"type1 set"
B::"type2 set"
When I want to use a set operation (here intersection) on A and B, as below:
axiomatization where
c0: "A ∩ B = {}"
Since A and B are sets of different types, I get a clash-of-types error, which makes sense!
I am wondering whether there are counterparts of the set operations that implicitly treat their operands as pure sets (i.e., ignore their types), so that something like A ∩' B becomes possible (where ∩' is the counterpart of ∩ in the above sense).
PS:
Another workaround is type casting, which I have written up as a separate question here to better organize my questions.
Thanks
Sets in Isabelle/HOL are always typed, i.e., they contain only elements of one type. If you want to have untyped sets, you have to switch to another logic such as Isabelle/ZF.
Similarly, all values in HOL are annotated with their type, and this is fundamental to the logic. Equality, for example, can only be applied to two values of the same type. Hence, there is no notion of equality between values of different types. Consequently, there is no set operation that ignores the type of values, because such an operation would necessarily have to know how to identify values of different types.
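As an illustration outside Isabelle (a Haskell sketch with made-up sets; Data.Set is typed in the same way): the only way to combine sets of different element types is to first embed both into a common sum type, which is essentially the type-casting workaround from the question.

import qualified Data.Set as Set

a :: Set.Set Int
a = Set.fromList [1, 2, 3]

b :: Set.Set String
b = Set.fromList ["x", "y"]

-- Embed both element types into Either before intersecting; by
-- construction the Left and Right images are disjoint, mirroring c0.
main :: IO ()
main = print (Set.map Left a `Set.intersection` Set.map Right b)
-- prints fromList []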

Maximum difference between columns using relational algebra

Is it possible to obtain the maximum difference between two columns (for example starting and ending weights)?
Right now I'm leaning towards no, as this would require a new column holding the difference between the two columns for each row, then taking the max of that. Doing it the way I originally intended doesn't work either, since arithmetic operations are not allowed in the conditions of select operations (e.g. σ(c1 − c2 < c3 − c4)(Table) is not allowed).
Disclosure: this is part of a homework question.
It can be done, exactly in the way you planned, but you need generalized projection for that. The generalized projection is the operator
Π(E1, E2,..., En)R
where R is a relation, and E1...En are expressions of the form a ⊕ b, where a and b are attributes of R or constants, and ⊕ is an arbitrary binary operator between them. The result is a relation with attributes E1...En.
This would allow you to project the differences into a new relation (R' := Π(x-y)R), then find the maximum on that, just as you planned.
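As a quick illustration of that plan outside the algebra (a Haskell sketch with made-up data): generalized projection corresponds to computing the expression per row, and the maximum is then an ordinary aggregate over the derived column.

-- Each row carries a (starting, ending) weight; R' := Π(x−y)R becomes a
-- per-row difference, and the maximum is taken over the result.
weights :: [(Int, Int)]
weights = [(80, 72), (95, 90), (70, 69)]

maxDiff :: Int
maxDiff = maximum [ start - end | (start, end) <- weights ]

main :: IO ()
main = print maxDiff  -- 8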
If we're not allowed to use generalized projection, then I think we have no means to actually subtract one attribute from another, or to calculate anything from them at all: the definition of projection allows only attribute names, and the definition of selection allows only expressions of the form aθb, where a and b are attributes or constants and θ is a binary relational operator. (This is logical, in its way: if we have a relation R(X,Y), we have no idea about the types of X or Y, which makes operations on them quite meaningless.)
I think generalized projection is a great extension to relational algebra. It's obviously immensely useful in real life, and it can be defended even from a more scientific point of view: if we allow binary conditional operators on the values like "X > 50", then we made assumptions on the type already, rendering that point kind of moot. Your instructor may disagree, though.
If you're looking to do this in the real world, you should be able to do this with a subquery (or a view, which amounts to much the same thing), something like:
select max (diff) from (
select high - low as diff from blah blah blah
)
Whether this applies to the abstract world of relational algebra, I couldn't say. I'm too busy fixing those damn real-world problems :-)

2D PHP Pattern Creator Algorithm

I'm looking for an algorithm to help me build 2D patterns based on rules. The idea is that I could write a script using a given set of parameters, and it would return a random, 2-dimensional sequence up to a given length.
My plan is to use this to generate image patterns based on rules. Things like image fractals or sprites for game levels could possibly use this.
For example, let's say you can use A, B, C, & D to create the pattern. The rule is that C and A can never be next to each other, and that D always follows C. Next, let's say I want a pattern of size 4x4. The result might be the following, which respects all the rules.
A B C D
B B B B
C D B B
C D C D
Are there any existing libraries that can do calculations like this? Are there any mathematical formulas I can read up on?
While pretty inefficient concerning runtime, backtracking is an often-used algorithm for such a problem.
It follows a simple pattern, and if written correctly, you can easily plug a new rule set into it.
Define your rule data structures; i.e., define the set of operations that the rules can encapsulate, and define the available cross-referencing that can be done. Once you've done this, you should have a clearer view of what type of algorithms to use to apply these rules to a potential result set.
Supposing that your rules are restricted to "type X is allowed to have type Y immediately to its left/right/top/bottom", you potentially have situations where generating possible patterns is computationally difficult. Take a look at Wang tiles (a good source is the book Tilings and Patterns by Grünbaum and Shephard) and you'll see that with rules of this kind you can define sets of Wang tiles. Appropriate sets of these are Turing complete.
For small rectangles, or for your sets of rules, this may only be of academic interest. As mentioned elsewhere, a backtracking approach might be appropriate for your ruleset - in which case you may want to consider appropriate heuristics for the order in which new components are added to your grid. Again, depending on your rulesets, other approaches might work. E.g. if your ruleset admits many solutions you might get a long way by randomly allocating many items to the grid before attempting to fill in the remaining gaps.
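To make the backtracking idea concrete, here is a minimal Haskell sketch for the example rules from the question; the encoding of the rules (C and A never adjacent horizontally or vertically; the cell to the right of a C must be a D) and the tile names are my assumptions, not a definitive reading.

data Tile = A | B | C | D deriving (Eq, Show, Enum, Bounded)

type Grid = [[Tile]]  -- finished rows, top to bottom

-- "C and A can never be next to each other."
clash :: Tile -> Tile -> Bool
clash x y = (x, y) == (A, C) || (x, y) == (C, A)

-- A candidate is legal w.r.t. its left and upper neighbours, must be D
-- when it sits right of a C, and a C may not occupy the last column.
legal :: Int -> Int -> Maybe Tile -> Maybe Tile -> Tile -> Bool
legal width col left up t =
     maybe True (not . clash t) left
  && maybe True (not . clash t) up
  && (left /= Just C || t == D)
  && (t /= C || col < width - 1)

-- Depth-first search; the list monad backtracks over candidate tiles.
solve :: Int -> Int -> Grid -> [Tile] -> [Grid]
solve w h done row
  | length done == h = [done]
  | length row == w  = solve w h (done ++ [row]) []
  | otherwise        =
      [ g | t <- [minBound .. maxBound]
          , let left = if null row then Nothing else Just (last row)
          , let up   = if null done then Nothing
                       else Just (last done !! length row)
          , legal w (length row) left up t
          , g <- solve w h done (row ++ [t]) ]

main :: IO ()
main = mapM_ print (take 1 (solve 4 4 [] []))  -- first 4x4 grid found

Shuffling the candidate list [minBound .. maxBound] (e.g. with System.Random) would yield random patterns instead of the first grid the fixed order finds.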

lenses, fclabels, data-accessor - which library for structure access and mutation is better

There are at least three popular libraries for accessing and manipulating fields of records. The ones I know of are: data-accessor, fclabels and lenses.
Personally I started with data-accessor and I'm using them now. However, recently on haskell-cafe there was an opinion that fclabels is superior.
Therefore I'm interested in comparison of those three (and maybe more) libraries.
There are at least 4 libraries that I am aware of providing lenses.
The notion of a lens is that it provides something isomorphic to
data Lens a b = Lens (a -> b) (b -> a -> a)
providing two functions: a getter, and a setter
get (Lens g _) = g
put (Lens _ s) = s
subject to three laws:
First, that if you put something, you can get it back out
get l (put l b a) = b
Second that getting and then setting doesn't change the answer
put l (get l a) a = a
And third, putting twice is the same as putting once, or rather, that the second put wins.
put l b1 (put l b2 a) = put l b1 a
Note that the type system isn't sufficient to check these laws for you, so you need to ensure them yourself no matter what lens implementation you use.
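For concreteness, here is the simple representation above instantiated at a lens onto the first component of a pair (a self-contained sketch); the three laws are easy to verify by hand for it:

data Lens a b = Lens (a -> b) (b -> a -> a)

get :: Lens a b -> a -> b
get (Lens g _) = g

put :: Lens a b -> b -> a -> a
put (Lens _ s) = s

-- A lens onto the first component of a pair.
fstLens :: Lens (a, b) a
fstLens = Lens fst (\x (_, y) -> (x, y))

main :: IO ()
main = do
  print (get fstLens (put fstLens 3 (1 :: Int, "hi")))          -- 3 (law 1)
  print (put fstLens (get fstLens (1 :: Int, "hi")) (1, "hi"))  -- (1,"hi") (law 2)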
Many of these libraries also provide a bunch of extra combinators on top, and usually some form of template haskell machinery to automatically generate lenses for the fields of simple record types.
With that in mind, we can turn to the different implementations:
Implementations
fclabels
fclabels is perhaps the most easily reasoned about of the lens libraries, because its a :-> b can be directly translated to the above type. It provides a Category instance for (:->) which is useful as it allows you to compose lenses. It also provides a lawless Point type which generalizes the notion of a lens used here, and some plumbing for dealing with isomorphisms.
One hindrance to the adoption of fclabels is that the main package includes the template-haskell plumbing, so the package is not Haskell 98, and it also requires the (fairly non-controversial) TypeOperators extension.
data-accessor
[Edit: data-accessor is no longer using this representation, but has moved to a form similar to that of data-lens. I'm keeping this commentary, though.]
data-accessor is somewhat more popular than fclabels, in part because it is Haskell 98. However, its choice of internal representation makes me throw up in my mouth a little bit.
The type T it uses to represent a lens is internally defined as
newtype T r a = Cons { decons :: a -> r -> (a, r) }
Consequently, in order to get the value of a lens, you must submit an undefined value for the 'a' argument! This strikes me as an incredibly ugly and ad hoc implementation.
That said, Henning has included the template-haskell plumbing to automatically generate the accessors for you in a separate 'data-accessor-template' package.
It has the benefit of a decently large set of packages that already employ it, being Haskell 98, and providing the all-important Category instance, so if you don't pay attention to how the sausage is made, this package is actually a pretty reasonable choice.
lenses
Next, there is the lenses package, which observes that a lens can provide a state monad homomorphism between two state monads, by defining lenses directly as such monad homomorphisms.
If it actually bothered to provide a type for its lenses, they would have a rank-2 type like:
newtype Lens s t = Lens (forall a. State t a -> State s a)
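For illustration, here is how a concrete lens looks in that encoding (a sketch assuming mtl's Control.Monad.State, not the actual package internals): a computation on the focused field is run and the rest of the structure is threaded through unchanged.

{-# LANGUAGE RankNTypes #-}
import Control.Monad.State

newtype LensSM s t = LensSM { focus :: forall a. State t a -> State s a }

-- A lens on the first component, as a state-monad homomorphism.
fstLensSM :: LensSM (x, y) x
fstLensSM = LensSM $ \m -> state $ \(x, y) ->
  let (a, x') = runState m x in (a, (x', y))

main :: IO ()
main = print (execState (focus fstLensSM (modify (+ 1))) (1 :: Int, "rest"))
-- prints (2,"rest")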
As a result, I rather don't like this approach, as it needlessly yanks you out of Haskell 98 (if you want a type to provide to your lenses in the abstract) and deprives you of the Category instance for lenses, which would let you compose them with (.). The implementation also requires multi-parameter type classes.
Note that all of the other lens libraries mentioned here provide some combinator, or can be used, to provide this same state focalization effect, so nothing is gained by encoding your lens directly in this fashion.
Furthermore, the side-conditions stated at the start don't really have a nice expression in this form. As with 'fclabels', this does provide a template-haskell method for automatically generating lenses for a record type directly in the main package.
Because of the lack of Category instance, the baroque encoding, and the requirement of template-haskell in the main package, this is my least favorite implementation.
data-lens
[Edit: As of 1.8.0, these have moved from the comonad-transformers package to data-lens]
My data-lens package provides lenses in terms of the Store comonad.
newtype Lens a b = Lens (a -> Store b a)
where
data Store b a = Store (b -> a) b
Expanded, this is equivalent to
newtype Lens a b = Lens (a -> (b, b -> a))
You can view this as factoring out the common argument from the getter and the setter to return a pair consisting of the result of retrieving the element, and a setter to put a new value back in. This offers the computational benefit that the 'setter' here can recycle some of the work used to get the value out, making for a more efficient 'modify' operation than in the fclabels definition, especially when accessors are chained.
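A small self-contained sketch of that factoring: modify walks the structure only once, reusing the setter returned alongside the value.

newtype Lens a b = Lens (a -> Store b a)
data Store b a = Store (b -> a) b

-- One traversal yields both the current value and a setter for
-- putting the new one back.
modify :: Lens a b -> (b -> b) -> a -> a
modify (Lens l) f a = case l a of Store setb b -> setb (f b)

fstLens :: Lens (a, b) a
fstLens = Lens (\(x, y) -> Store (\x' -> (x', y)) x)

main :: IO ()
main = print (modify fstLens (+ 1) (1 :: Int, "hi"))  -- (2,"hi")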
There is also a nice theoretical justification for this representation, because the subset of 'Lens' values that satisfy the 3 laws stated in the beginning of this response are precisely those lenses for which the wrapped function is a 'comonad coalgebra' for the store comonad. This transforms 3 hairy laws for a lens l down to 2 nicely pointfree equivalents:
extract . l = id
duplicate . l = fmap l . l
This approach was first noted and described in Russell O'Connor's Functor is to Lens as Applicative is to Biplate: Introducing Multiplate and was blogged about based on a preprint by Jeremy Gibbons.
It also includes a number of combinators for working with lenses strictly and some stock lenses for containers, such as Data.Map.
So the lenses in data-lens form a Category (unlike the lenses package), are Haskell 98 (unlike fclabels/lenses), are sane (unlike the back end of data-accessor), and provide a slightly more efficient implementation. data-lens-fd provides the functionality for working with MonadState for those willing to step outside of Haskell 98, and the template-haskell machinery is now available via data-lens-template.
Update 6/28/2012: Other Lens Implementation Strategies
Isomorphism Lenses
There are two other lens encodings worth considering. The first gives a nice theoretical way to view a lens as a way to break a structure into the value of the field, and 'everything else'.
Given a type for isomorphisms
data Iso a b = Iso { hither :: a -> b, yon :: b -> a }
such that valid members satisfy hither . yon = id, and yon . hither = id
We can represent a lens with:
data Lens a b = forall c. Lens (Iso a (b,c))
These are primarily useful as a way to think about the meaning of lenses, and we can use them as a reasoning tool to explain other lenses.
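For instance (a sketch), the first-component lens arises by taking 'everything else' to be the second component, so the isomorphism is the identity:

{-# LANGUAGE ExistentialQuantification #-}
data Iso a b = Iso { hither :: a -> b, yon :: b -> a }
data Lens a b = forall c. Lens (Iso a (b, c))

-- (x, y) is already (field, rest), so both directions are id.
fstLens :: Lens (x, y) x
fstLens = Lens (Iso id id)

get :: Lens a b -> a -> b
get (Lens i) = fst . hither i

put :: Lens a b -> b -> a -> a
put (Lens i) b a = yon i (b, snd (hither i a))

main :: IO ()
main = print (put fstLens 9 (1 :: Int, "hi"))  -- (9,"hi")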
van Laarhoven Lenses
We can model lenses such that they can be composed with (.) and id, even without a Category instance by using
type Lens a b = forall f. Functor f => (b -> f b) -> a -> f a
as the type for our lenses.
Then defining a lens is as easy as:
_2 f (a,b) = (,) a <$> f b
and you can validate for yourself that function composition is lens composition.
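To see why this works, instantiating the functor recovers the getter and the setter (a self-contained sketch using Const and Identity from base):

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens a b = forall f. Functor f => (b -> f b) -> a -> f a

_2 :: Lens (c, b) b
_2 f (a, b) = (,) a <$> f b

-- Const ignores the update and smuggles the focus out: a getter.
view :: Lens a b -> a -> b
view l = getConst . l Const

-- Identity performs the update with no extra effect: a setter.
set :: Lens a b -> b -> a -> a
set l b = runIdentity . l (Identity . const b)

main :: IO ()
main = do
  print (view _2 (1 :: Int, "x"))                     -- "x"
  print (set _2 "y" (1 :: Int, "x"))                  -- (1,"y")
  print (view (_2 . _2) (1 :: Int, (2 :: Int, "z")))  -- "z": (.) composes lenses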
I've recently written on how you can further generalize van Laarhoven lenses to get lens families that can change the types of fields, just by generalizing this signature to
type LensFamily a b c d = forall f. Functor f => (c -> f d) -> a -> f b
This does have the unfortunate consequence that the best way to talk about lenses is to use rank 2 polymorphism, but you don't need to use that signature directly when defining lenses.
The Lens I defined above for _2 is actually a LensFamily.
_2 :: Functor f => (a -> f b) -> (c,a) -> f (c, b)
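And indeed, with the family signature an update can change the type of the field (a small self-contained sketch):

import Data.Functor.Identity (Identity (..))

_2 :: Functor f => (a -> f b) -> (c, a) -> f (c, b)
_2 f (c, a) = (,) c <$> f a

-- over maps a function across the focus; the field's type may change.
over :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
over l f = runIdentity . l (Identity . f)

main :: IO ()
main = print (over _2 show (1 :: Int, 2 :: Int))  -- (1,"2"): the Int became a String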
I've written a library that includes lenses, lens families, and other generalizations including getters, setters, folds and traversals. It is available on hackage as the lens package.
Again, a big advantage of this approach is that library maintainers can actually create lenses in this style in their own libraries without incurring any lens library dependency whatsoever, by just supplying functions of type Functor f => (b -> f b) -> a -> f a for their particular types 'a' and 'b'. This greatly lowers the cost of adoption.
Since you don't need to actually use the package to define new lenses, it takes a lot of pressure off my earlier concerns about keeping the library Haskell 98.
