At first, I thought that set! assignment in Scheme was more like assignment (=) in Python than in C/C++.
In Python:
>>> x = 1
>>> id(x)
15855960
>>> x = 2
>>> id(x)
15855936 # different from above!
The second assignment rebinds the variable/name to another value, instead of overwriting the original value at x's memory location.
But C/C++ does the latter:
int x = 1;
cout << &x << endl; // 0x7ffe3e6adba4
x = 2;
cout << &x << endl; // 0x7ffe3e6adba4 (the same memory location!)
But section 1.3 of R6RS says:
Scheme allows identifiers to stand for locations containing
values. These identifiers are called variables. In many
cases, specifically when the location’s value is never modified
after its creation, it is useful to think of the variable
as standing for the value directly.
And in section 1.8, Assignment:
Scheme variables bound by definitions or let or lambda
expressions are not actually bound directly to the objects
specified in the respective bindings, but to locations
containing these objects. The contents of these locations
can subsequently be modified destructively via assignment.
So it seems that in Scheme the set! assignment does just what the C/C++ assignment = does?
But then how do we explain the following:
=> (define x 10)
=> x
10
=> (set! x "lonnnnnnnnnnnnng")
=> x
"lonnnnnnnnnnnnng"
At first, x is a location storing a small integer. But later, set! puts a LONG string into that SMALL memory location? I would think the location doesn't have enough memory space for that.
Both citations from the manual say the same thing: an identifier (e.g. something like x or y) is associated (“bound”) to a location (a memory cell, e.g. the cell at address 0x7ffe3e6adba4) that can hold different values (for instance an integer, a string, etc.), and the content of the location can be modified with a new value. That is:
Identifier -> Location -> Value
The value, depending on the language/compiler combination, can be either a direct value (an “unboxed” value, as it is sometimes called, typically for values like integers or floats), or a “boxed” value, that is, a pointer to some memory area that contains the value (typically for complex values like cons cells, arrays, strings, etc.).
As I said, the key point is that the implementation is free to choose how to store values in locations, for instance whether it uses a uniform method of storing values according to their type, or optimizes and allocates values with different strategies, etc. The only important thing is to strictly follow the specification of the language, which requires the double map above. The crucial point here is that in languages like Lisp, pointers are (and should remain) “invisible” to the user (the programmer).
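To make the boxed-value point concrete, here is a rough Python 3 analogy (Python only because the question already uses it; the printed sizes are CPython implementation details, not guarantees):

import sys

x = 10
print(sys.getsizeof(x))     # size of the boxed integer object (e.g. 28 bytes on 64-bit CPython)

x = "lonnnnnnnnnnnnng"
print(sys.getsizeof(x))     # size of the boxed string object, considerably larger

# The "location" bound to x never holds the value's bytes directly: it holds a
# fixed-size reference to an object living elsewhere in memory. That is why a
# location that previously held a small integer can later "hold" a long string.

Scheme implementations typically do something similar: the location stores one machine word that is either an immediate (unboxed) value or a pointer to a boxed object.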
I have read almost every definition of immutable/mutable variables on the internet but as a beginner I just do not grasp it fully so I was wondering if someone could really explain it in layman terms.
An immutable variable (or object) in any programming language, as I understand it, is one whose value you cannot change after it has been assigned a value. For example, I am using the Haskell programming language, and I write:
let x = 5
Since Haskell has immutable variables, x can never have any other value than 5. So if, after that line of code, I write:
x = 2
I have in fact not changed the value of x but made a new variable with the same name, which will now be the one that is referenced when I call x, so after both lines of code I can only reach an x with the value of 2.
But what is a mutable variable then, and what programming languages have it? This is where it gets foggy for me. Because when people say mutable variable, they are obviously referring to a variable or object whose value you can indeed change after it has been assigned an initial value.
Does this mean that with a mutable variable you actually manipulate the place in the computer's memory for that variable, whereas with an immutable variable you cannot manipulate that place in the computer's memory, or what?
I don't know how to explain my question any further. As I said, I understand that mutable = can change the value of the variable after the initial assignment, immutable = cannot. I get the definition. But I don't understand what it actually means in terms of what is going on "behind the scenes". I guess I am looking for easy examples of actual mutable variables.
This has nothing to do with immutability
let x = 5
x = 2
This is reassignment and definitely not allowed in Haskell
First, let's look at a regular let binding:
Prelude> let x = 5 in x
5
it :: Num a => a
You can bind x using let, and rebind a new x in a nested let – this effectively shadows the outer x
Prelude> let x = 5 in let x = 2 in x
2
it :: Num a => a
Remember, a let is basically a lambda:
Prelude> (\x -> x) 5
5
it :: Num a => a
And of course a lambda can return a lambda, which illustrates shadowing again:
Prelude> (\x -> (\x -> x)) 5 2
2
it :: Num a => a
I believe a shorthand answer to your question would be: a mutable variable is one that holds a value you may later want to adjust.
How you do that depends on the language you're using.
In Kotlin val and var are used to declare variables.
val is a constant while var is adjustable.
Mutable, respectively immutable, does not concern the variables but the values. Note: one also says the type is (im)mutable.
For instance, if a value is of an immutable type StudentCard, with fields ID and Name, then after creation the fields can no longer be changed. On a name change, the student card must be reissued: a new StudentCard must be created.
A more elementary immutable type is String in Java. Assigning the same value to two variables is no problem, since neither variable can change the shared value. Sharing a mutable value, on the other hand, could be dangerous.
A common mutable type is the array, in Java and other languages. Sharing an array, say by storing an array parameter in a field, risks that somewhere else the array's content is changed, inadvertently changing your field.
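The same hazard can be sketched quickly in Python 3 (chosen only for brevity; the behaviour is the same for Java arrays and Strings): a list is mutable and shared by reference, while a string can only be replaced, never changed in place.

shared = [1, 2, 3]
alias = shared              # both names refer to the same mutable list
alias.append(4)
print(shared)               # [1, 2, 3, 4]  the shared value changed "behind our back"

s = "abc"
t = s                       # both names refer to the same immutable string
t = t + "d"                 # builds a new string; s is untouched
print(s, t)                 # abc abcd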
I am looking for a general-purpose way of defining textual expressions which allow a value to be validated.
For example, I have a value which should only be set to 1, 2, 3, 10, 11, or 12.
Its constraint might be defined as: (value >= 1 && value <= 3) || (value >= 10 && value <= 12)
Or another value which can be 1, 3, 5, 7, 9 etc... would have a constraint like value % 2 == 1 or IsOdd(value).
(To help the user correct invalid values, I'd like to show the constraint - so something descriptive like IsOdd is preferable.)
These constraints would be evaluated both on client-side (after user input) and server-side.
Therefore a multi-platform solution would be ideal (specifically Win C#/Linux C++).
Is there an existing language/project which allows evaluation or parsing of similar simple expressions?
If not, where might I start creating my own?
I realise this question is somewhat vague as I am not entirely sure what I am after. Searching turned up no results, so even some terms as a starting point would be helpful. I can then update/tag the question accordingly.
You may want to investigate dependently typed languages like Idris or Agda.
The type system of such languages allows encoding of value constraints in types. Programs that cannot guarantee the constraints will simply not compile. The usual example is matrix multiplication, where the dimensions must match. But that is, so to speak, the "hello world" of dependently typed languages; the type system can do much more for you.
If you end up creating your own language, I'd try to stay implementation-independent as long as possible. Look at the formal expression grammar of a suitable programming language (e.g. C) and add special keywords/functions as required. Once you have a formal definition of your language, implement a parser using your favourite parser generator.
That way, even if your parser is not portable to a certain platform, you at least have a formal standard from which to start a separate parser implementation.
You may also want to look at creating a Domain Specific Language (DSL) in Ruby. (Here's a good article on what that means and what it would look like: http://jroller.com/rolsen/entry/building_a_dsl_in_ruby)
This would definitely give you the portability you're looking for, including maybe using IronRuby in your C# environment, and you'd be able to leverage the existing logic and mathematical operations of Ruby. You could then have constraint definition files that looked like this:
constrain 'wakeup_time' do
  6 <= value && value <= 10
end

constrain 'something_else' do
  check (value % 2 == 1), MustBeOdd
end
# constrain is a method that takes one argument and a code block
# check is a function you've defined that takes two arguments
# MustBeOdd is the name of an exception type you've created in your standard set
But really, the great thing about a DSL is that you have a lot of control over what the constraint files look like.
There are a number of ways to verify a list of values across multiple languages. My preferred method is to make a list of the permitted values and load them into a dictionary/hashmap/list/vector (depending on the language and your preference), then write a simple isIn() or isValid() function that checks whether the supplied value is present in that data structure. The beauty of this is that the code is trivial and can be implemented in just about any language very easily. For odd-only or even-only numeric validity, a small library of isOdd() functions in the different languages will likewise suffice: if a number isn't odd it must by definition be even (apart from 0, but a simple exception can handle that, or you can simply specify in your code documentation that your code evaluates 0 as odd or even, your choice).
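A minimal Python 3 sketch of that idea (the names allowed_values, is_valid and is_odd are just illustrative):

allowed_values = {1, 2, 3, 10, 11, 12}   # permitted values loaded into a set

def is_valid(value):
    # True when the supplied value is present in the data structure
    return value in allowed_values

def is_odd(value):
    # True for odd integers; decide separately how you want to treat 0
    return value % 2 == 1

print(is_valid(11), is_valid(7))   # True False
print(is_odd(7), is_odd(42))       # True False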
I normally cart around a set of C++ and C# functions to evaluate isOdd() for similar reasons to what you have alluded to, and the code is as follows:
C++
bool isOdd( int integer ){ return (integer%2==0)?false:true; }
You can also add inline and/or fastcall to the function depending on need or preference; I tend to use both inline and fastcall unless there is a reason to do otherwise (huge performance boost on Xeon processors).
C#
Beautifully, the same line works in C#: just add static to the front if it is not going to be an instance method of a class:
static bool isOdd( int integer ){ return (integer%2==0)?false:true; }
Hope this helps; in any event, let me know if you need any further info. :)
Not sure if it's what you're looking for, but judging from your starting conditions (Win C#/Linux C++) you may not need it to be totally language-agnostic. You can implement such a parser yourself in C++ with all the desired features and then just use it in both C++ and C# projects, thus also bypassing the need to add external libraries.
On the application design level, it would be (relatively) simple: you create a library which is buildable cross-platform and use it in both projects. The interface may be something simple like:
bool VerifyConstraint_int(int value, const char* constraint);
bool VerifyConstraint_double(double value, const char* constraint);
// etc
Such an interface will be usable both in Linux C++ (by static or dynamic linking) and in Windows C# (using P/Invoke). You can have the same codebase compiling on both platforms.
The parser (again, judging from what you've described in the question) may be pretty simple - a tree holding elements of types Variable and Expression which can be Evaluated with a given Variable value.
Example class definitions:
#include <string>
#include <boost/variant.hpp>

// boost::variant may be used, typedef'd as VARIANT
typedef boost::variant<int, double, bool, std::string> VARIANT;

class Entity {
public:
    virtual VARIANT Evaluate() = 0;
    virtual ~Entity() = default;
};

class BinaryOperation : public Entity {
private:
    Entity& left;
    Entity& right;
    enum Operation {PLUS, MINUS, EQUALS, AND, OR, GREATER_OR_EQUALS, LESS_OR_EQUALS};
    Operation op;
public:
    VARIANT Evaluate() override; // evaluates left and right operands and combines them according to op
};

class Variable : public Entity {
private:
    VARIANT value;
public:
    VARIANT Evaluate() override { return value; }
};
Or, you can just write validation code in C++ and use it both in C# and C++ applications :)
My personal choice would be Lua. The downside to any DSL is the learning curve of a new language and how to glue the code with the scripts, but I've found Lua has lots of support from its user base and several good books to help you learn.
If you are after somewhat generic code into which a non-programmer can inject rules for allowable input, it's going to take some upfront work regardless of the route you take. I highly suggest not rolling your own, because you'll likely find people wanting more features that an already-made DSL will have.
If you are using Java, then you can use the Object Graph Navigation Library (OGNL).
It enables you to write Java applications that can parse, compile and evaluate OGNL expressions.
OGNL expressions include basic Java, C, C++ and C# expressions.
You can compile an expression that uses some variables, and then evaluate that expression for given values of those variables.
An easy way to achieve validation of expressions is to use Python's eval function. It can be used to evaluate expressions just like the one you wrote. Python's syntax is easy enough to learn for simple expressions and English-like. Your expression example translates to:
(value >= 1 and value <= 3) or (value >= 10 and value <= 12)
Evaluating code provided by users might pose a security risk, though, as certain functions could be used to execute things on the host machine (such as the open function, to open a file). But eval takes extra arguments to restrict the allowed functions, hence you can create a restricted evaluation environment.
# Import math functions, and we'll use a few of them to create
# a list of safe functions from the math module to be used by eval.
from math import *
# A user-defined method won't be reachable in the evaluation, as long
# as we provide the list of allowed functions and vars to eval.
def dangerous_function(filename):
    print open(filename).read()
# We're building the list of safe functions to use by eval:
safe_list = ['math','acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'cosh', 'degrees', 'e', 'exp', 'fabs', 'floor', 'fmod', 'frexp', 'hypot', 'ldexp', 'log', 'log10', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh']
safe_dict = dict([ (k, locals().get(k, None)) for k in safe_list ])
# Let's test the eval method with your example:
exp = "(value >= 1 and value <= 3) or (value >= 10 and value <= 12)"
safe_dict['value'] = 2
print "expression evaluation: ", eval(exp, {"__builtins__":None},safe_dict)
-> expression evaluation: True
# Test with a forbidden method, such as 'abs'
exp = raw_input("type an expression: ")
-> type an expression: (abs(-2) >= 1 and abs(-2) <= 3) or (abs(-2) >= 10 and abs(-2) <= 12)
print "expression evaluation: ", eval(exp, {"__builtins__":None},safe_dict)
-> expression evaluation:
-> Traceback (most recent call last):
-> File "<stdin>", line 1, in <module>
-> File "<string>", line 1, in <module>
-> NameError: name 'abs' is not defined
# Let's test it again, without any extra parameters to the eval method
# that would prevent its execution
print "expression evaluation: ", eval(exp)
-> expression evaluation: True
# Works fine without the safe dict! So the restrictions were active
# in the previous example..
# is odd?
def isodd(x): return bool(x & 1)
safe_dict['isodd'] = isodd
print "expression evaluation: ", eval("isodd(7)", {"__builtins__":None},safe_dict)
-> expression evaluation: True
print "expression evaluation: ", eval("isodd(42)", {"__builtins__":None},safe_dict)
-> expression evaluation: False
# A bit more complex this time, let's ask the user a function:
user_func = raw_input("type a function: y = ")
-> type a function: y = exp(x)
# Let's test it:
for x in range(1,10):
    # add x in the safe dict
    safe_dict['x']=x
    print "x = ", x , ", y = ", eval(user_func,{"__builtins__":None},safe_dict)
-> x = 1 , y = 2.71828182846
-> x = 2 , y = 7.38905609893
-> x = 3 , y = 20.0855369232
-> x = 4 , y = 54.5981500331
-> x = 5 , y = 148.413159103
-> x = 6 , y = 403.428793493
-> x = 7 , y = 1096.63315843
-> x = 8 , y = 2980.95798704
-> x = 9 , y = 8103.08392758
So you can control the allowed functions that should be used by the eval method, and have a sandbox environment that can evaluate expressions.
This is what we used in a previous project I worked on. We used Python expressions in custom Eclipse IDE plug-ins, using Jython to run in the JVM. You could do the same with IronPython to run in the CLR.
The examples I used are in part inspired by / copied from the Lybniz project's explanation of how to run a safe Python eval environment. Read it for more details!
You might want to look at regular expressions (regex). They're proven and have been around for a long time. There's a regex library in all the major programming/scripting languages out there.
Libraries:
C++: what regex library should I use?
C# Regex Class
Usage
Regex Email validation
Regex to validate date format dd/mm/yyyy
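For example, a minimal Python 3 sketch of the dd/mm/yyyy check mentioned above (the pattern only validates the shape of the text, not whether the date actually exists; looks_like_date is just an illustrative name):

import re

def looks_like_date(text):
    # two digits, a slash, two digits, a slash, four digits
    return re.fullmatch(r"\d{2}/\d{2}/\d{4}", text) is not None

print(looks_like_date("31/12/2023"))   # True
print(looks_like_date("2023-12-31"))   # False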
I want to make a list whose elements represent the logistic map given by
x_{n+1} = a*x_n*(1 - x_n)
I tried the following code (which appends elements manually instead of using a For loop):
x0 = Input["Enter x0"]
a = Input["a"]
M = {x0}
L[n_] := If[n < 1, x0, a*M[[n]]*(1 - M[[n]])]
Print[L[1]]
Append[M, L[1]]
Print[M]
Append[M, L[2]]
Print[M]
The output is as follows:
0.3
2
{0.3}
0.42
{0.3,0.42}
{0.3}
Part::partw: Part 2 of {0.3`} does not exist. >>
Part::partw: Part 2 of {0.3`} does not exist. >>
{0.3, 2 (1 - {0.3}[[2]]) {0.3}[[2]]}
{0.3}
It seems that, when the function definition is being called in Append[M,L[2]], L[2] is calling M[[2]] in the older definition of M, which clearly does not exist.
How can I make L use the newer, bigger version of M?
After doing this I could use a For loop to generate the entire list up to a certain index.
P.S. I apologise for the poor formatting, but I could not find out how to make LaTeX code work here.
Other minor question: What are the allowed names for functions and lists? Are underscores allowed in names?
It looks to me as if you are trying to compute the result of
FixedPointList[a*#*(1-#)&, x0]
Note:
Building lists element-by-element, whether you use a loop or some other construct, is almost always a bad idea in Mathematica. To use the system productively you need to learn some of the basic functional constructs, of which FixedPointList is one.
I'm not providing any explanation of the function I've used, nor of the interpretation of symbols such as # and &. This is all covered in the documentation which explains matters better than I can and with which you ought to become familiar.
Mathematica allows alphanumeric (only) names, and they must start with a letter. Of course, Mathematica recognises many Unicode characters other than the 26 letters of the English alphabet as alphabetic. By convention (only), intrinsic names start with an upper-case letter and your own with a lower-case one.
The underscore is most definitely not allowed in Mathematica names; it has a specific and widely-used interpretation as a short form of the Blank symbol.
Oh, LaTeX formatting doesn't work hereabouts, but Mathematica code is plenty readable enough.
It seems that, when the function definition is being called in
Append[M, L[2]], L[2] is calling M[[2]] in the older definition of M,
which clearly does not exist.
How can I make L use the newer, bigger version of M?
M is never getting updated here. Append does not modify the arguments you pass to it; it returns a new list with the element appended.
So, the following code:
A={1,2,3}
B=Append[A,5]
will end up with B = {1, 2, 3, 5} and A = {1, 2, 3}; A is not modified.
To analyse your output,
0.3 // Output of x0 = Input["Enter x0"]. Note that the assignment operator returns the assigned value.
2 // Output of a= Input["a"]
{0.3} // Output of M = {x0}
0.42 // Output of Print[L[1]]
{0.3,0.42} // Output of Append[M, L[1]]. This is the *return value*, not the new value of M
{0.3} // Output of Print[M]
Part::partw: Part 2 of {0.3`} does not exist. >> // M has only one element, so M[[2]] doesn't make sense
Part::partw: Part 2 of {0.3`} does not exist. >> // ditto
{0.3, 2 (1 - {0.3}[[2]]) {0.3}[[2]]} (* Output of Append[M, L[2]]. Again, *not* the new value of M *)
{0.3} // Output of Print[M]
The simple fix here is to use M=Append[M, L[1]].
To do it in a single for loop:
xn = x0;
For[i = 0, i < n, i++,
  M = Append[M, xn];
  xn = a*xn*(1 - xn)
];
A faster method would be to use NestList[a*#*(1 - #)&, x0, n], a variation of the method mentioned by Mark above.
Here, the expression a*#*(1-#)& is basically an anonymous function (# is its parameter; the & is shorthand for enclosing it in Function[]). NestList takes a function as its first argument and repeatedly applies it starting from x0, for n iterations.
Other minor question: What are the allowed names for functions and lists? Are underscores allowed in names?
No underscores; they're used for pattern matching. Otherwise a variable can contain letters and special characters (like theta and so on), but no characters that have a meaning in Mathematica (parentheses/braces/brackets, the at symbol, the hash symbol, an ampersand, a period, arithmetic symbols, underscores, etc.). They may contain a dollar sign but preferably should not start with one (those are usually reserved for system variables, though you can define a variable starting with a dollar sign without breaking anything).
I have an Ada enum with 2 values type Polarity is (Normal, Reversed), and I would like to convert them to 0, 1 (or True, False--as Boolean seems to implicitly play nice as binary) respectively, so I can store their values as specific bits in a byte. How can I accomplish this?
An easy way is a lookup table:
Bool_Polarity : constant array (Polarity) of Boolean
  := (Normal => False, Reversed => True);
then use it as
B : Boolean := Bool_Polarity (P);
Of course there is nothing wrong with using the 'Pos attribute, but the LUT makes the mapping readable and very obvious.
As it is constant, you'd like to hope it optimises away during the constant folding stage, and it seems to: I have used similar tricks compiling for AVR with very acceptable executable sizes (down to 0.6k to independently drive 2 stepper motors)
Section 3.5.5, Operations of Discrete Types, includes the function S'Pos(Arg : S'Base), which "returns the position number of the value of Arg, as a value of type universal integer." Hence,
Polarity'Pos(Normal) = 0
Polarity'Pos(Reversed) = 1
You can change the numbering using 13.4 Enumeration Representation Clauses.
...and, of course:
Boolean'Val(Polarity'Pos(Normal)) = False
Boolean'Val(Polarity'Pos(Reversed)) = True
I think what you are looking for is a record type with a representation clause:
with Ada.Unchecked_Conversion;

procedure Main is

   type Byte_T is mod 2**8;
   for Byte_T'Size use 8;

   type Filler7_T is mod 2**7;
   for Filler7_T'Size use 7;

   type Polarity_T is (Normal, Reversed);
   for Polarity_T use (Normal => 0, Reversed => 1);
   for Polarity_T'Size use 1;

   type Byte_As_Record_T is record
      Filler   : Filler7_T;
      Polarity : Polarity_T;
   end record;

   for Byte_As_Record_T use record
      Filler   at 0 range 0 .. 6;
      Polarity at 0 range 7 .. 7;
   end record;
   for Byte_As_Record_T'Size use 8;

   function Convert is new Ada.Unchecked_Conversion
     (Source => Byte_As_Record_T,
      Target => Byte_T);

   function Convert is new Ada.Unchecked_Conversion
     (Source => Byte_T,
      Target => Byte_As_Record_T);

begin
   -- TBC
   null;
end Main;
As Byte_As_Record_T & Byte_T are the same size, you can use unchecked conversion to convert between the types safely.
The representation clause for Byte_As_Record_T allows you to specify which bits/bytes to place your Polarity_T in (I chose the 8th bit).
My definition of Byte_T might not be what you want, but as long as it is 8 bits long the principle should still be workable. From Byte_T you can also safely upcast to Integer or Natural or Positive. You can also use the same technique to go directly to/from a 32 bit Integer to/from a 32 bit record type.
Two points here:
1) Enumerations are already stored as binary. Everything is. In particular, your enumeration, as defined above, will be stored as a 0 for Normal and a 1 for Reversed, unless you go out of your way to tell the compiler to use other values.
If you want to get that value out of the enumeration as an Integer rather than an enumeration value, you have two options. The 'Pos attribute will return the 0-based position of that value in the enumeration, and Unchecked_Conversion will return the actual value the computer stores for it. (There is no difference between the two unless an enumeration representation clause was used.)
2) Enumerations are nice, but don't reinvent Boolean. If your enumeration can only ever have two values, you don't gain anything useful by making a custom enumeration, and you lose a lot of useful properties that Boolean has. Booleans can be used directly in if checks and loop conditions. Booleans have and, or, xor, etc. defined for them. Booleans can be put into packed arrays, and then those same operators are defined bitwise across the whole array.
A particular pet peeve of mine is when people end up defining themselves a custom boolean with the logic reversed (so its true condition is 0). If you do this, the ghost of Ada Lovelace will come back from the grave and force you to listen to an exhaustive explanation of how to calculate Bernoulli sequences with a Difference Engine. Don't let this happen to you!
So if it would never make sense to have a third enumeration value, you just name objects something appropriate describing the True condition (eg: Reversed_Polarity : Boolean;), and go on your merry way.
It seems all I needed to do was pragma Pack([type name]); (in which 'type name' is the composite type containing Polarity) to compress the value down to a single bit.
I'm trying to make a function that defines a vector that varies based on the function's input, and set! works great for this in Scheme. Is there a functional equivalent for this in OCaml?
I agree with sepp2k that you should expand your question, and give more detailed examples.
Maybe what you need are references.
As a rough approximation, you can see them as variables to which you can assign:
let a = ref 5;;
!a;; (* This evaluates to 5 *)
a := 42;;
!a;; (* This evaluates to 42 *)
Here is a more detailed explanation from http://caml.inria.fr/pub/docs/u3-ocaml/ocaml-core.html:
The language we have described so far is purely functional. That is, several evaluations of the same expression will always produce the same answer. This prevents, for instance, the implementation of a counter whose interface is a single function next : unit -> int that increments the counter and returns its new value. Repeated invocation of this function should return a sequence of consecutive integers — a different answer each time.
Indeed, the counter needs to memorize its state in some particular location, with read/write accesses, but before all, some information must be shared between two calls to next. The solution is to use mutable storage and interact with the store by so-called side effects.
In OCaml, the counter could be defined as follows:
let new_count =
let r = ref 0 in
let next () = r := !r+1; !r in
next;;
Another, maybe more concrete, example of mutable storage is a bank account. In OCaml, record fields can be declared mutable, so that new values can be assigned to them later. Hence, a bank account could be a two-field record, its number, and its balance, where the balance is mutable.
type account = { number : int; mutable balance : float }
let retrieve account requested =
  let s = min account.balance requested in
  account.balance <- account.balance -. s; s;;
In fact, in OCaml, references are not primitive: they are special cases of mutable records. For instance, one could define:
type 'a ref = { mutable content : 'a }
let ref x = { content = x }
let deref r = r.content
let assign r x = r.content <- x; x
set! in Scheme assigns to a variable. You cannot assign to a variable in OCaml, at all. (So "variables" are not really "variable".) So there is no equivalent.
But OCaml is not a pure functional language. It has mutable data structures. The following things can be assigned to:
Array elements
String elements (note that in recent OCaml versions strings are immutable by default; use Bytes for a mutable byte sequence)
Mutable fields of records
Mutable fields of objects
In these situations, the <- syntax is used for assignment.
The ref type mentioned by #jrouquie is a simple, built-in mutable record type that acts as a mutable container of one thing. OCaml also provides ! and := operators for working with refs.