alloy: arity issue when checking constraint with an assertion - logic

I've recently started to experiment with Alloy for a project, and I have run into an issue with unequal arities. Here is a simplified example. I have four signatures:
Word
Definition
Document: a document has a text (a sequence of words)
Dictionary: a dictionary maps a sequence of words to a sequence of definitions (to keep it simple, let's say that a word should have exactly one definition)
Here is a minimal code example:
module dictionaries
open util/relation as relation
sig Word {}
sig Definition {}
sig Document {
text: seq Word
}
sig Dictionary {
entries: seq Word,
defseq: seq Definition,
define: Word->Definition,
}{
//dictionary maps word to def only for the word present in dictionary
dom[define] = elems[entries]
function[define, elems[entries]]
//content of the list of definitions
defseq = entries.define
}
//assert all word in a dictionary have a definition
assert all_word_defined {
all w: Word | all dict: Dictionary | some def: Definition |
//w in dict.entries implies w->def in dict.define
}
check all_word_defined
So my questions are:
How do I constrain dictionaries so that each word in the dictionary maps to exactly one definition? Is it correct to do it as in the code above?
How do I check that this constraint is respected with an assertion? Obviously the bit of code w in dict.entries implies w->def in dict.define does not work, because w in dict.entries and w->def in dict.define do not have the same arity, and I get the error message "in can be used only between 2 expressions of the same arity"...

I think you're struggling with seq and assertions more than arity.
seq is not very common in Alloy; you only use it when the ordering of elements matters. In this example, I do not see any need for ordering in the current description.
Assertions verify invariants of the whole model. What you are trying to assert is more a fact that you state in the model than something you assert. Assertions are useful if you have defined operations and want to verify that certain things can never happen regardless of how those operations are executed and in what order. I.e. the assertions verify the consequences of the model, not the facts. (Although sometimes it is useful to verify something in other words.)
Alloy is incredibly powerful at navigating through global tables. Although the syntax looks object oriented, the trick is to understand that fields are actually global tables that are joined. (This is what took me a long time to get.)
You should not have redundant information in the model. Your entries, defseq and define can be modelled with functions on the define table.
The base language is very powerful. For a problem of this size you often do not need any utilities. In particular, util/relation seems quite redundant once you get a feel for the relational model of Alloy.
Ok, step by step:
A Document is a sequence of words:
I would just make a Document a set of Word because that is more natural and a lot easier to use. In this problem, the ordering of words does not seem to play a role, so using normal sets is fine. (Counting the words would require a seq.)
sig Word {}
sig Document {
text: set Word
}
A dictionary maps a sequence of words to a sequence of definitions (to keep it simple, let's say that a word should have exactly one definition)
I think you mean that a Dictionary can map a word to one definition? Does EVERY word have an entry? Or are there some words that have an entry? You say 'a' which I take as some words having one Definition? If so:
sig Definition {}
sig Dictionary {
define : Word -> lone Definition,
}
The define table (which is Dictionary->Word->Definition) has a constraint that for a given Dictionary->Word combination there is at most one Definition. This means not all Words have to be in the table, but if a Word is in the table then there is exactly one Definition. (You can model this with other constraints as well. Best is to write out a table and look at the columns.)
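As a sketch of one such alternative (purely illustrative): drop the multiplicity from the field and state the same constraint as a separate fact.
sig Dictionary {
define : Word -> Definition
}
fact at_most_one_definition_per_word {
all d: Dictionary, w: Word | lone d.define[w]
}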
You define entries as the set of Word in the Dictionary. You can model this better as a function:
fun Dictionary.entries : set Word {
this.define.univ
}
The first join selects this Dictionary in the define table and removes the first column. The second join removes the last column.
And similar for defseq:
fun Dictionary.defseq : set Definition {
this.define[univ]
}
The box join [] just joins the inside of the square brackets with the first column of the table before it, leaving the Definition column. That is:
(univ).(this.define)
How do I check that this constraint is respected with an assertion
I think it is not clear what you are trying to assert. (Which is the power of a formal language: you discover this!) In Alloy you state as a fact that a Word in a Dictionary maps to one Definition. There is no use asserting something you have defined as a fact. Before you can assert anything, you first need more definitions.
Normally you start by writing a predicate and then look for examples of your model. For example, if we want to see one of the possible Dictionaries, then we could write:
pred show( d : Dictionary ) {
d.define.univ = Word
}
run show for 5
In this example, you will see a Dictionary where every word has a Definition.
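If you still want to exercise check, here is a minimal sketch of the original assertion phrased against this model (the assertion name is illustrative; it holds by construction because of the multiplicity on define, so Alloy reports no counterexample):
assert every_entry_has_one_definition {
all dict: Dictionary, w: dict.entries | one dict.define[w]
}
check every_entry_has_one_definition for 5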
I've written a blog that might be useful for you: http://aqute.biz/2017/07/15/Alloy.html

Related

Game Data Structure in OCaml

I am currently working on a game / simulation of a computer-like logistics system (like the Minecraft mod Applied Energistics).
The main part of the game is the 2d grid of blocks.
All the blocks have common properties like a position.
But then there are supposed to be different kinds of Blocks like:
Item-Containers,
Input and Export Busses,
etc.
In an imperative object-oriented language (like Java) I would implement this with:
a main block class
with the common properties like position
then have sub classes
that inherit from the block class
These subclasses would implement the different properties of the different block types.
In OCaml I am a little bit lost.
I could create objects which inherit, but this does not work like in Java.
For example:
I can't put objects of different subclasses in one list together.
I also wanted to approach the data structure differently by separating data from logic. I would not add methods to the objects. I tried using records instead of objects.
I don't know how to implement the different block types.
I tried using custom data types like this:
type blockType = Container | Input | Output | Air
type block = {blockType :blockType; pos :int * int}
I struggled to add the individual additional properties. I tried to add an entity field to the block record type which would hold the additional properties:
type entity = Container of inventory | OutputEntity of facing | InputEntity of facing | NoEntity
(Where inventory and facing are also custom types)
This solution doesn't really feel fitting.
One problem I have is that there are logic operations I want to perform on blocks of type Input and Output. I have to repeat code like this:
let rotateBlock block =
match block.blockType with
| Input -> {block with entity = nextDirection block.entity}
| Output -> {block with entity = nextDirection block.entity}
| _ -> block
That's not that bad with two types but I plan to add much more so it is a big negative in terms of scalability.
Another point of criticism of this kind of structure is that it is a little bit inconsistent. I use a field in the record to implement the different types on the block level and multiple constructors on the entity level. I did it to be able to access the position of every block easily with block.pos instead of using pattern matching.
I am not really happy with this solution.
Request
I hope somebody can point me in the right direction regarding the data structure.
You're trying to satisfy competing goals. You can't have both a rigid static model of blocks and a dynamic extensible block type. So you need to choose. Fortunately, OCaml provides solutions for both, and even for something in between, but as always with middle-ground solutions, they are kind of bad at both. So let's try.
Rigid static hierarchy using ADT
We can use sum types to represent a static hierarchy of objects. In this case, it would be easy for us to add new methods, but hard to add new types of objects. As the base type, we will use a polymorphic record, that is parametrized with a concrete block type (the concrete block type could be polymorphic itself, and this will allow us to build the third layer of hierarchy and so on).
type pos = {x : int; y : int}
type 'a block = {pos : pos; info : 'a}
type block_info = Container of container | Input of facing | Air | Solid
where info is an additional concrete block specific payload, i.e., a value of type block_info. This solution allows us to write polymorphic functions that accept different blocks, e.g.,
let distance b1 b2 =
sqrt ((float (b1.pos.x - b2.pos.x)) ** 2. +. (float (b1.pos.y - b2.pos.y)) ** 2.)
The distance function has type 'a block -> 'b block -> float and will compute the distance between two blocks of any type.
This solution has several drawbacks:
Hard to extend. Adding a new kind of block is hard: you basically need to decide beforehand what blocks you need and hope that you will not need to add a new block in the future. It looks like you're expecting to add new block types, so this solution might not suit you. However, I believe that you will actually need a very small number of block kinds: if you treat each block as a syntactical element of the world grammar, you will soon notice that the minimal set of block kinds is quite small. Especially if you make your blocks recursive, i.e., if you allow block composition (mixtures of different blocks in the same block).
You can't put blocks of different kinds in the same container, because to do this you need to forget the type of the block. If you do this, you will eventually end up with a container of positions. We will try to alleviate this in our middle-ground solution by using existential types.
Your type model doesn't impose the right constraints. The world constraint is that the world is composed of blocks, and each coordinate either has a block or doesn't have one (i.e., it is the void). In your case, two blocks may have the same coordinates.
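As a minimal sketch of a representation that does impose this constraint (standard library only; the module and function names are illustrative), make the coordinate the key of a finite map, so a position holds at most one block:
(* Positions are the keys, so two blocks can never share a coordinate. *)
module Pos = struct
type t = pos
let compare = compare
end
module PosMap = Map.Make (Pos)
type world = block_info PosMap.t
(* None means the coordinate is void. *)
let block_at p world = PosMap.find_opt p world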
Not so rigid hierarchy using GADT
We may relax a few restrictions of the previous solution by using an existential GADT. The idea of the existential is that you can forget the kind of a block and later recover it. This is essentially the same as a variant type (or the dynamic type in C#). With existentials, you can have an unbounded number of block kinds and even put them all in the same container. Essentially, an existential is defined as a GADT that forgets its type, e.g., as a first approximation
type block = Block : {pos : pos; info : 'a} -> block
So now we have a unified block type that is locally quantified with the type of the block payload. You may even go further and make the block_info type extensible, e.g.,
type block_info = ..
type block_info += Air
Instead of building an existential representation by yourself (it's a nice exercise in GADT), you may opt to use some existing libraries. Search for "universal values" or "universals" in the OPAM repositories, there are a couple of solutions.
This solution is more dynamic and allows us to store blocks of different kinds in the same container, since they now all have the same type. The hierarchy is extensible. This comes with a price, of course, as now we can't have one single point of definition for a particular method; in fact, method definitions will be scattered around your program (somewhat close to the Common Lisp CLOS model). However, this is an expected price for an extensible dynamic system. Also, we lose the static property of our model, so we will use lots of wildcards in pattern matching, and we can't rely on the type system to check that we covered all possible combinations. And the main problem is still there: our model is not right.
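A minimal sketch of what using the extensible block_info looks like in practice (the constructor, payload and function names are illustrative): any module can add new kinds later, and every operation needs a wildcard case, so exhaustiveness is no longer checked by the compiler.
type block_info += Solid | Container of int (* illustrative payload *)
let is_passable = function
| Air -> true
| _ -> false (* kinds added elsewhere default to not passable *)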
Not so rigid structure with OO
OCaml has an object-oriented layer (hence the name), so you can build classical OO hierarchies, e.g.,
class block (x : int) (y : int) = object
val x = x
val y = y
method x = x
method y = y
method with_x x = {< x = x >}
method with_y y = {< y = y >}
end
class input_block x y (facing : facing) = object
inherit block x y
val facing = facing
method facing = facing
method with_facing f = {< facing = f >}
end
This solution is essentially close to the first solution, except that your hierarchy is now extensible, at the price that the set of methods is now fixed. And although you can put different blocks in the same container by forgetting the concrete type of a block using upcasting, this won't make much sense, since OCaml doesn't have a down-cast operator, so you will end up with a container of coordinates. And we still have the same problem - our model is not right.
Dynamic world structure using Flyweights
This solution kills two bunnies at the same time (and I believe that this should be the way it is implemented in Minecraft). Let's start with the second problem. If you represent every item in your world with a concrete record that has all the attributes of that item, you will end up with lots of duplicates and extreme memory consumption. That's why in real-world applications a pattern called Flyweight is used. So, if you think about scalability, you will still end up using this approach. The idea of the Flyweight pattern is that your objects share attributes via finite mappings, and the objects themselves are represented as identifiers, e.g.,
type block = int
type world = {
map : pos Int.Map.t;
facing : facing Int.Map.t;
air : Int.Set.t;
}
where 'a Int.Map.t is a mapping from int to 'a, and Int.Set.t is a set of integers (I'm using the Core library here).
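A minimal sketch of how such a world might be queried (assuming Core is opened, as in the snippet above; the function names are illustrative):
(* A block is just an identifier; its attributes live in the world's maps. *)
let position_of world block = Map.find world.map block (* None if unknown *)
let is_air world block = Set.mem world.air block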
In fact, you may even decide that you don't need a closed world type, and just have a bunch of finite mappings, where each particular module adds and maintains its own set of mappings. You can use abstract types to store these mappings in a central repository.
You may also consider the following representation of the block type: instead of one integer, you may use two integers. The first integer denotes the identity of a block and the second denotes its equality.
type block = {id : int; eq : int}
The idea is that every block in the game will have a unique id that distinguishes it from other blocks even if they are equal "as two drops of water". And eq denotes the structural equality of two blocks, i.e., two blocks with exactly the same attributes will have the same eq number. This solution is hard to implement if your world structure is not closed, though (as in that case the set of attributes is not closed).
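A minimal sketch of how such blocks could be minted (a hypothetical helper; computing the eq number from the attributes is left out):
let next_id = ref 0
(* Every call yields a fresh identity; blocks built from the same
   attributes should be handed the same eq number. *)
let fresh_block ~eq =
incr next_id;
{id = !next_id; eq}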
The main drawback of this solution is that it is so dynamic that it sort of leaves the OCaml type system out of work. That's a reasonable penalty; in fact, you can't have a dynamic system that is fully verified at compile time. (Unless you have a language with dependent types, but this is a completely different story.)
To summarize, if I were devising this kind of game, I would use the last solution, mainly because it scales well to a large number of blocks, thanks to hashconsing (another name for Flyweight). If scalability is not an issue, then I would build a static structure of blocks with different composition operators, e.g.,
type block =
| Regular of regular
| ...
| Compose of compose_kind * block * block
type compose_kind = Horizontal | Vertical | Inplace
And now the world is just a block. This solution is purely mathematical, though, and doesn't really scale to larger worlds.
Sounds like fun.
I can't put objects of different subclasses in one list together.
You can actually. Suppose you had lots of different block-objects that
all had a 'decay' method. You could have a function "get me the
decayables" and it could put all those blocks in a list for you, and
you could then at timed intervals iterate over the list and apply the
decay method on each of those blocks. This is all well typed and easy
to do with OCaml's object system. What you can't do is take out a
'decayable' from that list and say, actually, this was always also
an AirBlock, and I want to treat it like a full-fledged AirBlock now,
instead of a decayable.
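A minimal sketch of that idea (the class and value names are illustrative): OCaml objects are structurally typed, so any object exposing a decay method fits in the list after an upcast.
type decayable = < decay : unit >
class leaf_block = object
method decay = print_endline "leaf crumbles"
end
class food_block = object
method decay = print_endline "food rots"
method eat = ()
end
let decayables : decayable list =
[(new leaf_block :> decayable); (new food_block :> decayable)]
(* Called at timed intervals. *)
let tick () = List.iter (fun d -> d#decay) decayables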
...
type blockType = Container | Input | Output | Air
You can only have around 240 or so variants per type. If you plan on having more
blocks than this, an easy way to gain extra space would be to
categorize your blocks and work with e.g. Solid Rock | Liquid Lava
rather than Rock | Lava.
type block = {blockType :blockType; pos :int * int}
What's the position of a block in your inventory? What's the position
of a block that's been mined out of its place in the world, and is
now sort of sitting on the ground, waiting to be picked up? Why not
keep the positions in the array indices or map key that you're using
to represent the locations of blocks in the world? Otherwise you also
have to consider what it means for blocks to have the same position,
or impossible positions.
let rotateBlock block =
match block.blockType with
| Input -> {block with entity = nextDirection block.entity}
| Output -> {block with entity = nextDirection block.entity}
| _ -> block
I don't really follow this input/output stuff, but it seems that in this
function you're interested in some kind of property like "has a next direction
to face, if rotated". Why not name that property and make it what you match on?
type block = {
id : blockType;
burnable : bool;
consumable : bool;
wearable : bodypart option; (* None - not wearable *)
hitpoints : int option; (* None - not destructible *)
oriented : direction option; (* None - doesn't have distinct faces *)
}
let rotateBlock block =
match block.oriented with
| None -> block
| Some dir -> {block with oriented = Some (nextDirection dir)}
let burn block =
match block.burnable, block.hitpoints with
| false, _ | true, None -> block
| true, Some hp when hp > 5 -> { block with hitpoints = Some (hp - 5) }
| true, Some hp -> ash
The block type is interesting because each kind will have different operations.
Item-Containers,
Input and Export Busses,
etc.
type container = { specificinfo : int ; etc ...}
type bus .....
type block =
| Item of position * container
| Bus of position * bus
type inventory = block list
My intuition tells me that you could perhaps use GADTs to create a type for the operations on blocks and implement an evaluator easily for the simulator.
UPDATE
To answer your comment:
If you have information that is common to all your variants, you need to extract it; you can imagine something like:
type block_info =
| Item of specific_item_type ....
| Bus of specific_bus_type
type block = {position:Vector.t ; information : block_info}
let get_block_position b = b.position

Describe a film (entity and attribute) using the first order logic

Good morning,
I want to understand how I can describe something using first-order logic.
For example, I want to describe what a film is (an entity) and what an attribute is (for example actor: Clooney) for the film. How can I describe that using first-order logic?
******* UPDATE ********
What I need to explain in first-order logic is:
ENTITY: an element, an abstraction or an object that can be described with a set of properties or attributes. So I think that I must say that the entity has a set of attributes with their respective values. An Entity describes an element, an abstraction or an object.
ATTRIBUTE: an attribute always has a value and is always associated with an entity. It describes a specific feature/property of the entity.
DOCUMENT: a pure text description (pure text: it does not contain any HTML tags). Every document describes only ONE entity through its attributes.
To state that an object has a certain property you would use a one-place predicate. For example, to state that x is a film you could write Film(x). If you want to attribute some value to an object you can use a two- (or more) place predicate. Using your example, you could say that Clooney starred in a film as Starred(clooney, x).
There are certain conventions that people use. For example, predicates start with capital letters (Actor, Film, FatherOf) and constants start with a lower case letter (x, clooney, batman). Constants denote objects and predicates say something about the objects. In case of predicates with more than one argument the first argument is usually the subject about which you are making the statement. That way you can naturally read the logical formula as a sentence in normal language. For example, FatherOf(x, y) would read as "x is the father of y".
Answer for the update:
I am not sure whether you can do that in first order logic. You could describe an Entity as something that has certain properties by formula such as
\forall x (Entity(x) ==> Object(x) | Element(x) | Abstraction(x))
This is a bit more difficult for the Attribute. In first order logic an attribute ascribes some quality to an object or relates it to another object. You could probably use a three place predicate as in:
\forall attribute (\exists object (\exists value (Has(object, attribute, value))))
As to the document, that would be just a conjunction of such statements. For example, the description of George Clooney could be the following:
Entity(clooney) & Has(clooney, starred, gravity) & Has(clooney, bornIn, lexington) & ...
The typical way to do this is to explain that a specific object exists and this object has certain attributes. For example:
(∃x)(property1(x) & property2(x) & ~property3(x))
aka: There exists a thing that satisfies properties 1 and 2 but does not satisfy property 3.
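For instance, applied to the film example above (the Animated predicate is just illustrative):
(∃x)(Film(x) & Starred(clooney, x) & ~Animated(x))
aka: there exists a film in which Clooney starred and which is not animated.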
Your current question formulation makes it unclear what you mean by attributes and documents. Perhaps towards your idea of attributes: it's possible to describe the domain of property1 as all the entities that satisfy it; so, for example, the domain of blue is all blue objects.
First-order logic has nothing to do with HTML -- are you trying to use HTML to represent an entity in first-order logic somehow? It remains incredibly unclear what your question is.

debugging chained relation declaration in alloy

I am using Alloy to model graph transformations.
I specify my transformation as different transformations which are applied to different parts of the graph.
So I have a signature :
sig Transformation {
nodes : some Node,
added_node : one Special_Node
}
To apply this transformation I declare 3 relations in the fact part of the signature, which apply to different parts of the graph. The left part of a relation is related to the input graph and the right side to the output graph:
some mapping_rel0_nodes : rel0In one -> one rel0Out|{
C1 && C2 && C3
}
&&
some mapping_rel1_nodes : rel1In -> some (rel1Out+special_Node) | {
C1' && C2' && C3'
}
&&
some mapping_rel2_nodes : rel2In -> some (rel2Out+special_Node) |{
C1'' && C2'' && C3''
} &&
out.nodes <: connections = ~mapping_rel2_nodes.inpCnx.mapping_rel2_nodes +
~mapping_rel1_nodes.inpCnx.mapping_rel1_nodes +
~mapping_rel0_nodes.inpCnx.mapping_rel0_nodes
Each relation applies to a disjoint part of the graph, but the parts are connected by connections between them. The CX, CX' and CX'' are constraints applied to the relations. A Node has the following signature:
sig Node{
connections : set Node
}{
this !in this.@connections
}
To obtain the new connections I take all the connections in the input graph inpCnx and use the mapping obtained for each point to get the associated connections in the new graph.
My questions are :
Are the mapping_relX_nodes still known at this step of the fact?
When I check them in the evaluator and do the operation manually on the appropriate instance it works, but expressed as a fact it returns no instances. I read this post and I was wondering if there are other tools to inspect the expressions and variables, like a debug print or similar?
The relations have the same arity, but rel0 is bijective and the others are just binary relations. Is there any constraint due to the bijectivity of rel0 that forces the union of these relations to be bijective?
In my experience in the evaluator, when there is a duplicate tuple, one of them is deleted: {A$0->B$0, A$0->B$0} becomes {A$0->B$0}. But sometimes it might be necessary to keep both; is there any way to keep both?
Thanks in advance.
You ask:
Are the mapping_relX_nodes still known at this step of the fact?
Without a full working model to test, it's hard to give an absolutely firm answer. But Alloy is purely declarative, and the uses of mapping_rel1_nodes etc. do not appear to be local variables, so the references in the fourth conjunct of your fact will be bound in the same way as the references in the other conjuncts. (Or not bound, if they are not bound.)
When I check them in the evaluator and do the operation manually on the appropriate instance it works, but expressed as a fact it returns no instances. I read this post and I was wondering if there are other tools to inspect the expressions and variables, like a debug print or similar?
Not that I know of. In my experience, when something seems to work as expected in the evaluator but I can't get it to work in a fact or predicate, it's almost invariably a failure on my part to get the semantics of the fact or predicate correct.
The relations have the same arity, but rel0 is bijective and the others are just binary relations. Is there any constraint due to the bijectivity of rel0 that forces the union of these relations to be bijective?
No (not unless I am totally misunderstanding your question).
In my experience in the evaluator, when there is a duplicate tuple, one of them is deleted: {A$0->B$0, A$0->B$0} becomes {A$0->B$0}. But sometimes it might be necessary to keep both; is there any way to keep both?
Yes; Alloy works with sets. (So the duplicate is not "deleted" -- it's just that sets don't have duplicates.) To distinguish two tuples which would otherwise be identical, you can either (a) add another value to the tuple (so pairs become triples, triples become 4-tuples, and n-tuples become tuples with arity n+1), or (b) define a signature for objects representing the tuples. Since members of signatures have object identity, rather than value identity, they can be used to distinguish different occurrences of pairs like A$0->B$0.
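A minimal sketch of option (b) (the signature and field names are illustrative): each connection becomes an atom with its own identity, so two connections between the same pair of nodes can coexist.
sig Edge {
from : one Node,
to : one Node
}
// Two distinct Edge atoms may relate the same from/to pair of Nodes.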

How to have nested conditions for PMD Xpath rules

My rule requires me to apply it only to methods without 'get' as part of their name. In other words, my rule needs to apply only to non-getter methods in the class. I know that to get hold of all the non-getter methods, I can use
//MethodDeclarator[not(contains(@Image,'get'))]
However, I don't know the syntax for where I insert my logic for the rules. Is it like
//MethodDeclarator[
not(contains(@Image,'get'))
'Some Rule Statements'
]
I saw the use of . at the beginning of a statement inside [] in some example code. What is it used for?
In my particular case, I need to combine the following pieces together, but so far I have been unable to accomplish it.
Piece 1:
//PrimaryExpression[not(PrimarySuffix/Arguments)]
Piece 2:
//MethodDeclarator[not(contains(@Image,'get'))]
Piece 3:
//PrimaryExpression[PrimaryPrefix/@Label='this']
You need to have at least some basic knowledge/understanding of XPath.
I saw the use of . in the beginning of statement inside [] in some
example code. what are they used for?
[] is called a predicate. It must contain a boolean expression and it must immediately follow a node-test. It specifies an additional condition that a node satisfying the node-test must meet in order to be selected.
For example:
/*/num
selects all elements named num that are children of the top element of the XML document.
However, if we want to select only such num elements, whose value is an odd integer, we add this additional condition inside a predicate:
/*/num[. mod 2 = 1]
Now this last expression selects all elements named num that are children of the top element of the XML document and whose string value represents an odd integer.
. denotes the context node -- this is the node that has been selected so far (or the starting node from which the complete XPath expression is evaluated).
In my particular case, I need to combine following pieces together ...
You forgot to say in what way / how the three expressions should be combined. In XPath, some of the frequently used "combinators" are the operators and, or, and the function not().
For example, if you want to select elements that are selected by all three provided XPath expressions, you can use the and operator:
//PrimaryExpression
[not(PrimarySuffix/Arguments)
and
PrimaryPrefix/@Label='this'
]
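If the intent is to flag such expressions only inside non-getter methods, one sketch is to fold the method-name test into the same predicate via the ancestor axis (assuming PMD's Java AST, where a MethodDeclarator is a child of the enclosing MethodDeclaration):
//PrimaryExpression
[not(PrimarySuffix/Arguments)
and
PrimaryPrefix/@Label='this'
and
not(contains(ancestor::MethodDeclaration/MethodDeclarator/@Image, 'get'))
]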

Two DrRacket/scheme questions

Programming language: DrRacket/scheme
Hey guys,
I am preparing for my first comp sci midterm, and have two quick questions that I'd love to get some input on:
(1) What exactly is the difference between a data definition and a
structure definition?
I know that for a data definition, I can have something like:
;; a student is a
;; - (make-student ln id height gradyear), where
;; - ln is last name, and
;; - id is ID number, and
;; - height is height in inches, and
;; - gradyear is graduation year
but what is a structure definition?
(2) What exactly are the alphas and betas in contracts that come before functions, i.e.
take : num α-list -> α-list
Thank you in advance!
Quote from How to Design Programs (HtDP):
A DATA DEFINITION states, in a mixture of English and Scheme, how we
intend to use a class of structures and how we construct elements of
this class of data.
Given a problem to solve you as a programmer must decide how your input data is represented as values. In order for others to understand your program it is important to document how this is done in detail.
Some input data are simple and can be represented by a single number (e.g. temperature, pressure, etc).
Other types of data can be represented as a fixed number of numbers/strings (e.g. a cd can be represented as an author name (string), a title (string) and a price (number)). To pack a fixed number of values into one value, one can represent this as a structure.
If one needs to represent an unknown number of something, say cds, then one must use a list.
The data definition is simply your description of how the data are represented in your program.
To explain what a structure definition is, I'll quote from HtDP:
A STRUCTURE DEFINITION is, as the term says, a new form of definition.
Here is DrScheme's definition of posn:
(define-struct posn (x y))
Let us look at the cd example again. Since there are no built-in "cd values" in Racket, one must define what a cd value is. This is done with a structure definition:
(define-struct cd (author title price))
After the definition is made one can use make-cd to construct cd values.
In order to explain that author and title are expected to be strings, and price is expected to be a number, you must write down a data definition that explains how make-cd is supposed to be used.
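A minimal sketch of such a data definition, in the same style as the student example above (the example value is illustrative):
;; a cd is a
;; - (make-cd author title price), where
;; - author is a string, and
;; - title is a string, and
;; - price is a number
(define-struct cd (author title price))
;; for example:
(make-cd "Miles Davis" "Kind of Blue" 9)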
I forgot to answer your second question:
(2) What exactly are the alphas and betas in contracts that come
before functions, i.e.
take : num α-list -> α-list
The alpha is supposed to be replaced with a type.
If take gets an integer-list (a list of integers) as input, then the output is an integer-list.
If take gets a string-list (a list of strings) as input, then the output is a string-list.
In short, if take gets a list of values of some type (alpha) as input, then the output is a list of values of the same type (also alpha).
Jens Axel Soegaard's answer is correct, but does not elaborate on the relationship between the two, which I would state as follows.
A DATA DEFINITION describes to the reader how a value is going to be represented using Racket values.
Sometimes, the "built-in" values aren't enough, and we need to define a new kind of data, like the "CD" that Jens refers to. In order to define a new kind of data, we often use a STRUCTURE DEFINITION.
Put differently: some data definitions require structure definitions. Some do not.
If I were to elaborate any more, I would just be badly recapitulating HtDP; if what I've said thus far does not make sense, go read HtDP. :)

Resources