Check if array element empty in Pascal

I'm very sorry to bother you about this outdated language, but is there any way to check whether a particular array element is empty in Pascal?
It's an integer array, so checking an element against an empty string causes a type mismatch (I love this language!).
Thanks for your time.

An integer value cannot be empty. It always holds a value. It's not like a nullable type in certain modern languages.
Sometimes, by convention, certain values are used as sentinels, but you obviously need to apply this convention consistently across all uses of the variable. What's more, a sentinel is only viable if you have some spare values that do not have a meaning in whatever calculation you are performing.
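To illustrate the sentinel convention, here is a minimal sketch in Python (the idea carries over directly to a Pascal array of integer); the choice of -1 as the "empty" marker is an assumption and only works if -1 can never occur as real data:

EMPTY = -1                      # sentinel: assumes -1 is never a legitimate value

values = [EMPTY] * 10           # an "empty" array of 10 integers
values[3] = 42

def is_empty(arr, i):
    # Empty only by convention; the type system knows nothing about it.
    return arr[i] == EMPTY

print(is_empty(values, 3))      # False
print(is_empty(values, 4))      # True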

Related

Does using enums instead of booleans really affect cache usage?

I saw a comment thread where it was suggested that enums should be used instead of booleans in general since it's clearer what the parameters do at the call site, and it's easier to refactor if you need to add a case.
Then someone else claimed that this was a terrible idea, since it would often create an unnecessary variable for each call, wasting compute resources and cache space.
Does this second claim hold up? My understanding is that booleans are usually not stored as single bits, but as sets of bits with more than enough room for the number of options in a typical enum. So the same amount of data would need to be moved around.
If I understand correctly, an extra variable would be required if the enum has 3 or more options (one to store the enum and one for a derived boolean on each check), but in that case you actually needed the three options, so what can you do? And if you have exactly two enum options, couldn't a compiler just transform it into a boolean in the same register (assuming the enum values weren't specified as anything other than one zero value and one non-zero value), and therefore not use any extra space?
One extra compare instruction I suppose, but cache usage seems to be the much bigger deal performance-wise these days. And if you have an enum that's isomorphic to a boolean, you'll often make the values automatic anyway, so the compiler should be free to fully optimize it.
Do I understand this correctly, or am I missing something?

What is the difference between length of a list and number of a list in AppleScript

Is there any difference between the two?
I've been using number of list for quite a long time now, but I noticed that length is also reserved in AppleScript, and that it seems to have the same function as number of...
But it's highlighted purple instead of blue.
Are they exactly the same, or are they different? And which one would you suggest using?
Although both expressions have the same result, there is a difference.
number of — which is a synonym for count — evaluates the number of items when it's called.
length is a property of the class list which implies that the class maintains the value constantly and there is no further evaluation when it's called.
I'd prefer the latter.

Why is it useful to have an atom type (like in Elixir, Erlang)?

According to http://elixir-lang.org/getting-started/basic-types.html#atoms:
Atoms are constants where their name is their own value. Some other
languages call these symbols
I wonder what the point of having an atom type is. Probably to help build a parser, or for macros? But how does it help the programmer in everyday use?
BTW: I've never used Elixir or Erlang; I just noticed that the type exists (also in kdb).
They're basically strings that can easily be tested for equality.
Consider a string. Conceptually, we generally want to think of strings as being equal if they have the same contents. For example, "dog" == "dog" but "dog" != "cat". However, to check the equality of strings, we have to check to see if each letter in one string is equal to the letter in the same position in another string, which means that we have to walk through each element of the string and check each character for equality. This becomes a bit more cumbersome if dealing with Unicode strings and having to consider different ways of composing identical characters (for example, the character é has two representations in UTF-8).
It would be much simpler if we stored identical strings at the same location in memory. Then, checking equality would be a simple pointer or index comparison.
As a consequence of storing identical strings in the same location in memory, we can also store one copy of each unique kind of string regardless of how many times it is used in the program, thus saving some memory for commonly-used strings as well.
At a higher level, using atoms also lets us think of strings the same way we think of other primitive data types like integers.
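A rough way to see the same effect outside Erlang is string interning; the Python sketch below uses sys.intern purely to illustrate how comparing atoms avoids walking the characters:

import sys

a = sys.intern("connection_error")
b = sys.intern("connection_error")

# Both names refer to the same object in memory, so equality can be decided
# with a single identity (pointer) comparison instead of a character-by-character walk.
print(a is b)   # True: interning maps equal contents to one object
print(a == b)   # True, and cheap because the identity check settles it first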
I think one of the most common usages in Erlang is to tag variables and messages, with the benefit of fast comparison (pattern matching), as mipadi says.
For example, say you write a function that may fail depending on the parameters provided, the status of a connection to a server, or any other reason. A very frequent pattern is to return a tuple {ok,Value} in case of success and {error,Reason} in case of error. The calling function can then choose to handle only the success case by writing {ok,Value} = yourModule:yourFunction(Param...). Doing this makes it clear that you consider only the success case, you extract the Value directly from the return value, it is fast, and you don't have to share any header with yourModule to decode the ok atom.
In messages you will often see things like {add,Key,Value}, {delete,Key}, {delete_all}, {replace,Key,Value}, {append,Key,Value}... These are explicit messages, with the same advantages as mentioned before: fast, readable, and no header to share.
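If it helps to see the shape of this convention outside Erlang, here is a hypothetical Python analogue using plain tuples and structural pattern matching (Python 3.10+); the tag strings "ok" and "error" stand in for atoms:

def lookup(store, key):
    # Return a tagged tuple instead of raising: ("ok", value) or ("error", reason).
    if key in store:
        return ("ok", store[key])
    return ("error", "not_found")

store = {"name": "Ada"}

match lookup(store, "name"):
    case ("ok", value):
        print("got", value)
    case ("error", reason):
        print("failed:", reason)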
Atoms are constants whose value is their own name.
This concept is very useful in distributed systems, where ordinary constants could be defined differently on each system, while atoms are self-contained and need no definition.

Fastest data structure with default values for undefined indexes?

I'm trying to create a 2D array that, when I access an index, returns the value. However, if an undefined index is accessed, it calls a callback, fills the index with that value, and then returns the value.
The array will have negative indexes, too, but I can overcome that by using 4 arrays (one for each quadrant around 0,0).
You can create a matrix that relies on named tuples and a dictionary, with the following behavior:
from collections import namedtuple

# a key is just an (x, y) coordinate pair
MatrixEntry = namedtuple("MatrixEntry", ["x", "y"])

matrix = dict()
defaultValue = 0

# add entry at 0;1
matrix[MatrixEntry(0, 1)] = 10.0

# get value at 0;1, falling back to the default if the key is absent
key = MatrixEntry(0, 1)
value = matrix.get(key, defaultValue)
Cheers
This question is probably too broad for Stack Overflow. There is no generic "one size fits all" solution for this, and the results depend a lot on the language used (and its standard library).
There are several parts to this question. First of all, let us consider a 2D array; say it is simply already part of the language and that such an array grows dynamically on access. If this isn't the case, the question becomes very language dependent.
Now, often when allocating memory the language automatically initializes the slots (again, how this happens and what the best method is are language dependent; look into RAII). However, I can foresee that the actual calculation of a specific cell might be costly (compared to allocation). In that case an interesting option is so-called "two-phase construction". The array is filled with tuples/objects. The default construction of an object sets a bit/boolean to false, indicating that the value is not ready. Then on access (i.e. a get() method or an operator(), language dependent), if this bit is false it constructs the value, else it just reads it.
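A minimal sketch of that two-phase idea in Python, assuming the caller supplies a compute(x, y) callback (both names are made up for illustration):

class LazyCell:
    def __init__(self):
        self.ready = False          # phase 1: allocated, value not computed yet
        self.value = None

    def get(self, compute, x, y):
        if not self.ready:          # phase 2: construct on first access
            self.value = compute(x, y)
            self.ready = True
        return self.value

grid = [[LazyCell() for _ in range(4)] for _ in range(4)]
print(grid[1][2].get(lambda x, y: x * y, 1, 2))   # computed on first access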
Another method is to use a dictionary/key-value map, where the key would be the coordinates and the value the value. This has the advantage that the problem of construct-on-access is inherent to the data structure (though again, this is language dependent). The drawback of using maps, however, is that the lookup speed of a value changes from O(1) to O(log n). (The actual time differs widely depending on the language, though.)
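As a sketch of the map-based variant in Python: collections.defaultdict almost does this, but its factory takes no arguments, so overriding __missing__ on a dict subclass is one way to hand the coordinates to the callback (the fill function here is hypothetical):

class CallbackGrid(dict):
    def __init__(self, fill):
        super().__init__()
        self.fill = fill                # called with (x, y) for keys not seen before

    def __missing__(self, key):
        value = self.fill(*key)         # compute, cache, then return
        self[key] = value
        return value

grid = CallbackGrid(lambda x, y: x * y)
print(grid[(3, -2)])   # -6, computed via the callback (negative indexes are fine)
print(grid[(3, -2)])   # -6, now served from the cache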
Lastly, I hope you understand that how to do this depends on your more specific requirements, the language you use, and other libraries. In the end there is only a single data structure that exists in every language: a long sequence of unallocated values. Anything more advanced than that depends on the language.

Mapping Untyped Lisp data into a typed binary format for use in compiled functions

Background: I'm writing a toy Lisp (Scheme) interpreter in Haskell. I'm at the point where I would like to be able to compile code using LLVM. I've spent a couple days dreaming up various ways of feeding untyped Lisp values into compiled functions that expect to know the format of the data coming at them. It occurs to me that I am not the first person to need to solve this problem.
Question: What are some historically successful ways of mapping untyped data into an efficient binary format?
Addendum: In point of fact, I do know which of about a dozen different types the data is, I just don't know which one might be sent to the function at compile time. The function itself needs a way to determine what it got.
Do you mean, "I just don't know which [type] might be sent to the function at runtime"? It's not that the data isn't typed; certainly 1 and '() have different types. Rather, the data is not statically typed, i.e., it's not known at compile time what the type of a given variable will be. This is called dynamic typing.
You're right that you're not the first person to need to solve this problem. The canonical solution is to tag each runtime value with its type. For example, if you have a dozen types, number them like so:
0 = integer
1 = cons pair
2 = vector
etc.
Once you've done this, reserve the first four bits of each word for the tag. Then, every time two objects get passed in to +, first you perform a simple bit mask to verify that both objects' first four bits are 0b0000, i.e., that they are both integers. If they are not, you jump to an error message; otherwise, you proceed with the addition, and make sure that the result is also tagged accordingly.
This technique essentially makes each runtime value a manually-tagged union, which should be familiar to you if you've used C. In fact, it's also just like a Haskell data type, except that in Haskell the taggedness is much more abstract.
I'm guessing that you're familiar with pointers if you're trying to write a Scheme compiler. To avoid limiting your usable memory space, it may make more sense to use the bottom (least significant) four bits rather than the top ones. Better yet, because aligned pointers already have a few meaningless zero bits at the bottom (three of them with 8-byte alignment), you can simply co-opt those bits for your tag, as long as you dereference the actual address rather than the tagged one.
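Here is a rough Python simulation of the low-bit tagging scheme, treating ordinary integers as machine words; the three-bit tag width and the helper names are made up for illustration:

TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1      # 0b111
TAG_INT  = 0b000                    # integers get tag 0, as suggested above

def box_int(n):
    return (n << TAG_BITS) | TAG_INT    # shift the payload up, tag the low bits

def checked_add(a, b):
    # Verify both operands carry the integer tag before adding.
    if (a & TAG_MASK) != TAG_INT or (b & TAG_MASK) != TAG_INT:
        raise TypeError("+ expects two integers")
    # Because the integer tag is all zeros, the tagged words can be added
    # directly and the result keeps the integer tag.
    return a + b

x = box_int(20)
y = box_int(22)
print(checked_add(x, y) >> TAG_BITS)    # 42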
Does that help?
Your default solution should be a simple tagged union. If you want to narrow your typing down to more specific types, you can do it - but it won't be that "toy" any more. A thing to look at is called abstract interpretation.
There are a few successful implementations of such an optimisation, with V8 probably being the most widespread. In the Scheme world, the most aggressively optimising implementation is Stalin.
