throw without catch in ISO Prolog

I am struggling with the precise semantics of throw/1 without a suitable catch/3 in ISO Prolog. I am reading the ISO Prolog specification, and it seems to me that execution will end up in an infinite recursion.
(Note that you must have access to the ISO Prolog standard in order to interpret this question).
Step 1. Let's say we call throw(unknown), and there is no catch/3 on the stack whatsoever.
Step 2. It will end up in 7.8.10.1 c), which says that it shall be a system error (7.12.2 j).
Step 3. This is the same kind of formulation used in other places for other errors, so I assume it should be interpreted in the same way. Therefore 7.12.1 applies, and the current goal (throw/1) will be replaced by throw(error(system_error, Imp_def)).
Step 4. Executing this goal will find no active "catch" for it on the stack. It should therefore attempt the same steps, continue at Step 2, and so forth => an infinite recursion.
You may say that the transformation of an uncaught "throw" to a system_error is "final" and not subject to further handling as other errors are, and it probably has to be so, in order to avoid the problem I described, but my question is, where in the standard is this covered?
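To see why Steps 1-4 would indeed loop if the transformation were not final, here is a toy model (Python, purely illustrative; the function, the string encodings, and the equality test standing in for unification are all my own inventions, not anything from the standard):

```python
# Toy model of the error-handling loop in Steps 1-4 above. If an
# uncaught throw were fed back into the ordinary machinery as
# throw(error(system_error, Imp_def)), the search would recurse
# forever; treating the system error as final breaks the loop.

def throw(ball, catchers, *, system_error_is_final=True):
    """Search the catcher stack top-down for a matching catcher."""
    for catcher in reversed(catchers):
        if catcher == ball:          # stands in for unification of B with C
            return f"caught({ball})"
    # 7.8.10.1 c: the stack is exhausted -> system error
    if system_error_is_final:
        return "halt(system_error)"  # some implementation-dependent action
    # Hypothetical non-final reading: re-throw as per 7.12.1 ...
    return throw("error(system_error, imp_def)", catchers,
                 system_error_is_final=False)  # ... and recurse forever

print(throw("unknown", []))           # -> halt(system_error)
print(throw("unknown", ["unknown"]))  # -> caught(unknown)
```

With system_error_is_final=False the call never returns (in Python it eventually raises RecursionError), which is exactly the infinite regress described above.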
Some other notes, for completeness:
Note 4 in 7.12.2 also mentions the possibility of a System Error under these circumstances. I think the formulation used there ("...there is no active goal catch/3") introduces confusion in yet another respect, because it should qualify that with the condition that the Catcher must unify with the error term (B).
What is the idea behind transforming uncaught throws into a system error? It looks like it may exist to make life "easier" for the top level of the Prolog processor, so that it only receives one predictable kind of error. To me, that brings more problems than benefits - the true cause of the error disappears. Any opinion or comment?
The formal semantics (Annex A) seems to struggle with this somehow as well, although I have not studied it in detail. In A.2.5, it mentions that "... However in the formal specification there is a catcher at the root...", and relates it, for example, to the execution of findall/3. So does the formal spec differ here from the body text?

(We are talking here about ISO/IEC 13211-1:1995)
The definition of the control construct throw/1 (7.8.10) states in two places that in this situation there shall be a system error. First, as you have observed, there is 7.8.10.1 c:
c) It shall be a system error (7.12.2 j) if S is now empty,
and then, there is the error clause:
7.8.10.3 Errors
a) B is a variable
— instantiation_error.
b) B does not unify with the C argument of any call of catch/3
— system_error.
To see what a system error is, we need to look at subclause 7.12.2 j:
j) There may be a System Error at any stage of execution. The
conditions in which there shall be a system error, and the
action taken by a processor after a system error are
implementation dependent. It has the form system_error.
So, the action taken by a processor after a system error is implementation dependent. An infinite loop would be fine, too.
In a standard conforming program you cannot rely on any particular action in that situation.
Ad Note 1: A Note is not normative. See 1.1 Notes. It is essentially a summary.
Ad Note 2: This is to avoid any overspecification. Parts like these are kept as vague as possible in the standard because a system error might have corrupted the Prolog system. It is quite similar with resource errors. Ideally a system would catch them and continue execution, but many implementations have difficulty guaranteeing this in the general case.
Ad Note 3: The formal semantics is quintessentially another implementation. And in some parts, an implementation has to make certain decisions, whereas a specification can leave open all possibilities. Apart from that, note that the formal semantics is not normative. It did help to debug the standard, though.
Edit: In the comments you say:
(1mo) So you are saying that this allows the processor to immediately do its implementation dependent action when a system error occurs - not only after it is uncaught? (2do) Thus relieving it from the necessity to apply the goal transformation in 7.12.1 ? (3tio) It would then also be OK if system errors are not catchable (by catch/3) at all?
1mo
The sentence in 7.12.2 j
... the action taken by a processor after a system error are implementation dependent.
effectively overrules 7.12.1. Similarly with 7.12.2 h, which may also occur "at any stage of execution".
Just to be sure that we are reading the codex correctly, assume the opposite for a moment. Imagine that a system error occurs and 7.12.1 now produces an error that is not caught anywhere; then we would again have a system error, and so on. Thus the sentence above would never apply, which alone suggests that we have read something incorrectly.
On the other hand, imagine a situation where a system error occurs because the system is completely corrupted. How should 7.12.1 be executed then, anyway? The Prolog system would be unable to execute this loop. Does this mean that a Prolog processor can only be conforming if we can prove that there will never be a system error? That is practically impossible, in particular because
7.12.2 Error classification
NOTES
...
4 A System Error may happen for example (a) in interactions with the
operating system (for example, a disc crash or interrupt), ...
So effectively this would mean that there cannot be any conforming Prolog processor.
2do
7.12.1 describes how an error is handled within Prolog. That is, if you are able to handle the error within Prolog, then a Prolog system should use this method. However, there might be situations where it is very difficult or even impossible (see the case above) to handle an error in Prolog at all. In such situations a system might bail out.
3tio
Short answer: yes. It is a quite extreme, but valid, reading that system errors are not catchable and execution is then terminated. But maybe first take a step back to understand (a) what a technical standard is for and what it is not, and (b) the scope of a standard.
Scope
Or, rather start with b: In 1 Scope, we have:
NOTE - This part of ISO/IEC 13211 does not specify:
...
f) the user environment (top level loop, debugger, library
system, editor, compiler etc.) of a Prolog processor.
(Strictly speaking this is only a note, but if you leaf through the standard, you will realize that these aspects are not specified.) I somewhat suspect that what you actually want is to understand what a top-level loop shall do with an uncaught error. However, this question is out of the scope of 13211-1. It probably makes a lot of sense to report such an error and continue execution.
Purpose
The other point here is what a technical standard is actually for. Technical standards are frequently misunderstood as a complete guarantee that a system will work "properly". However, a system conforming to a technical standard does not imply that it is fit for any purpose or use. To show you the extreme, consider the shell command exit 1, which might be considered a processor conforming to 13211-1 (provided it comes accompanied with documentation that defines all implementation-defined features). Why? Well, when the system starts up, it might realize that the minimal requirements (1 Scope, Note b) are not met and thus produce a system error, which is handled by producing error code 1.
What you actually want is rather a system that both conforms and is fit for certain purposes (which are out of the scope of 13211-1). So in addition to asking whether a certain behavior is conforming, you will also consider how fit the system is for those purposes.
A good example to this end is resource errors. Many Prolog systems are able to handle certain resource errors. Consider:
p(-X) :- p(X).
and the query catch(p(X),error(E,_),true). A system now has several options: loop infinitely (which requires a very, very smart GC), succeed with E = resource_error(Resource), succeed with E = system_error, stop execution with some error code, or eat up all resources.
While there is no statement that a system has to catch such an error, all the machinery to do so is provided in the standard.
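As a loose analogy outside Prolog (a Python sketch of my own, not anything prescribed by 13211-1): a runaway recursion exhausts a resource, and the caller catches the resulting resource error and continues, much like catch/3 catching a resource_error:

```python
# Rough analog of the query catch(p(X), error(E,_), true): the goal
# exhausts a resource (here the interpreter's call stack), the caller
# catches the resulting resource error, and execution continues.
import sys

def p(x):
    return p((x,))   # like p(-X) :- p(X): each call builds a larger term

sys.setrecursionlimit(1000)   # keep the demonstration cheap
try:
    p(0)
    outcome = "looped forever"         # would need a very, very smart GC
except RecursionError:
    outcome = "resource_error(stack)"  # caught; the program carries on

print(outcome)  # -> resource_error(stack)
```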
Similarly in the case of a system_error: if it makes sense, it might be a good idea to report system errors properly within Prolog; however, if things go too far, bailing out safely is still not the worst that can happen. In fact, the worst (yet still conforming) behavior would be to continue execution "as if" everything were fine, when it is not.

Related

Space / time requirements for ISO-Prolog processor compliance

All implementations of the functional programming language Scheme are required to perform tail-call optimization whenever it is applicable.
Does ISO Prolog have this or similar requirements?
It's clear to me that Prolog processor features like first-argument principal-functor indexing and atom garbage collection are widely adopted, but are not prescribed by the ISO standard.
But what about cut?
Make believe that some Prolog system gets the semantics right, but does not guarantee that ...
rep :- !, rep.
rep.
?- rep, false.
... can run forever with constant stack space?
Could that system still be ISO-Prolog compliant?
Whenever you are reading a standard, first look at its scope (domaine d'application, область применения, Anwendungsbereich). Thus whether or not the standard applies to what you want to know. And in 13211-1:1995, 1 Scope there is a note:
NOTE - This part of ISO/IEC 13211 does not specify:
a) the size or complexity of Prolog text that will exceed the
capacity of any specific data processing system or language
processor, or the actions to be taken when the corresponding
limits are exceeded;
b) the minimal requirements of a data processing system
that is capable of supporting an implementation of a Prolog
processor;
...
Strictly speaking, this is only a note. But if you leaf through the standard, you will realize that there are no such requirements. For a similar situation see also this answer.
Further, resource errors (7.12.2 h) and system errors may occur at "any stage of execution".
Historically, the early DEC10 implementations did not contain last-call optimization, and a lot of effort was invested by programmers to either use failure-driven loops or achieve logarithmic stack usage.
In your example rep, a conforming system may run out of space. That overflow may be signaled with a resource error, but even that is not required, since the system might bail out with a system error. What is more irritating to me is the following program:
rep2 :- rep2.
rep2.
Even this program may run forever without ever running out of space! And this even though nobody cuts away the extra choice point.
In summary, recall that conformance with a standard is just a precondition for a working system.

Does a rule without arguments go against the philosophy of declarative programming or Prolog?

cancer():-
    pain(strong),
    mood(depressed),
    fever(mild),
    bowel(bloody),
    miscellaneous(giddy).
diagnose():-
    nl,
    cancer()->write("has cancer").
For example, dog(X) says that X is a dog, but my cancer rule just checks whether the above conditions are met. Is there a better way to do that?
In pure Prolog, a predicate without any arguments can only succeed or fail (or not terminate at all).
Thus, it can encode only very little information. A predicate that always succeeds is already available: true/0, having zero arguments. A predicate that always fails is also already available: false/0, also having zero arguments. A predicate that never terminates can be easily constructed.
So, in this sense, you do not need more predicates with zero arguments, and I think you are perfectly justified in being suspicious about such predicates.
Predicates with zero arguments are of limited use since they are so specific. They may however be used, for example, to describe a fixed set of tests, or be useful only for their side effects. This is also what you are doing, by emitting output on the terminal in case the predicate succeeds.
This means that you are leaving the pure subset of Prolog, and now relying on features that are beyond pure logic.
This is typically a very bad idea, because it:
prevents or at least complicates many forms of reasoning about your program
makes it much harder to test your predicates
is not thread safe in general
etc.
Therefore, suppose you write your program as follows:
cancer(Patient) :-
    patient_pain(Patient, strong),
    patient_mood(Patient, depressed),
    patient_fever(Patient, mild),
    patient_bowel(Patient, bloody),
    patient_miscellaneous(Patient, giddy).
This predicate is now parametrized by a patient, and thus significantly more general than what you have posted.
It can now be used to reason about several patients, it can be used to reason in parallel about different patients, you can use a Prolog query to test the predicate etc.
You can further generalize the predicate by defining for example patient_diagnosis/2, keeping everything completely pure and benefiting from the above advantages. Note that a patient may have several illnesses, which can be emitted on backtracking.
Thus: Yes, a rule without arguments is at least suspicious and atypical if it arises in your actual code. Leaving aside scenarios such as "test case" and "consistency check", it can only be useful for its side-effects, and I recommend you avoid side-effects if you can.
For more information about this topic, see logical-purity.
cancer() isn't legal syntax, but the idea's perfectly fine.
Just do the call as
cancer
and define it as a fact or rule.
cancer. % fact
cancer :- blah blah %rule
In fact, you use a system predicate with no args in your program -
nl is a predicate that always succeeds, and prints a newline.
There are many reasons to have a predicate with no arguments. Suppose you have a server that runs in a slightly different configuration in production than in development. Developer access API is off in production.
my_handler(Request) :-
    development,
    blah blah

development only succeeds if we're in the development environment.
Or you might have a side effect to trigger, or be using state.

Understanding parallel usage of Fortran 90

y(1:n-1) = a*y(2:n) + x(1:n-1)
y(n) = c
In the above Fortran 90 code I want to know how it is executed in terms of synchronization, communication and arithmetic.
What I understand is:
Communication is the need for different tasks to communicate with each other, e.g. if some variable has dependencies on some other variable. But the above code doesn't show any communication; as there seem to be no dependencies, am I right?
Synchronization is somewhat related to communication, but it also involves whether barriers have been used. In the above code there is no barrier, so the only synchronization involved is whatever data dependencies require.
Arithmetic: I have no clue regarding this point, and would be glad if someone could explain it to me.
The rule in Fortran is fairly simple: the right hand side is completely evaluated before the result is assigned to the left.
Thus you could claim there is communication upon assignment (sending the result to y), which is at the same time a synchronization point.
The actual evaluation of the right side could be vectorized/parallelized by the compiler, resulting in arbitrary orders of the evaluations for all entries in the array, except for the last one, which is only set after the first assignment.
However, except for pipelining, there is no real parallelism introduced here by common compilers.
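The evaluate-then-assign rule can be mimicked in plain Python (an illustration of my own, not Fortran semantics verbatim): building the right-hand side into a separate list first guarantees it is computed entirely from the old values of y before anything is stored:

```python
# Mimics y(1:n-1) = a*y(2:n) + x(1:n-1) followed by y(n) = c.
# The RHS is evaluated completely into `rhs` before y is touched,
# which is what Fortran guarantees for the array expression.
n = 5
a = 2.0
c = -1.0
y = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [0.0] * n

rhs = [a * yi + xi for yi, xi in zip(y[1:n], x[0:n-1])]  # full evaluation
y[0:n-1] = rhs                                           # then assignment
y[n-1] = c

print(y)  # -> [4.0, 6.0, 8.0, 10.0, -1.0]
```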
Without dwelling too much on the given snippet, it looks like you could perhaps be interested (tell me if I'm wrong) in, for example, the Using OpenMP book (presentation here). It is a nice, gentle introduction to the world of shared-memory parallel computing. For larger systems you would do well to google "MPI" and related subjects. There is really a plethora of material on the matter (a lot of it dealing with Fortran+MPI / Fortran+OpenMP), so I'll skip giving examples here.
Is this what you were aiming for?

chain of events analysis and reasoning

My boss said the logs in their current state are not acceptable for the customer. If there is a fault, a dozen different modules of the device report their own errors, and they all land in the logs. The original cause of the fault may be buried somewhere in the middle of the list, may not appear on the list at all (the module in question being too damaged to report), or may appear late, after everything else has finished reporting problems that result from the original fault. Anyway, there are few people outside the system developers who can properly interpret the logs and work out what actually happened.
My current task is writing a module that does customer-friendly fault reporting. That is: gather all the events reported over the last ~3 seconds (about the maximum interval between the original fault occurring and the last resulting after-effects), do some magic processing on this data, and come up with one clear, friendly line saying what is broken and needs to be fixed.
The problem is the magic part: how, given a number of fault reports, to come up with the original source of the fault. There is no simple cause-effect list. There are just commonly occurring chains of events displaying certain regularities.
Examples:
short circuit detected, resulting in limited operation mode, the limited operation does not remove the fault, so emergency state is escalated, total output power disconnected.
safety line got engaged. No module reported engaging it within 3s since it was engaged, so an "unknown-source or interference" is attributed as the reason of system halt.
most output modules report no output voltage. About 1s later the power supply monitoring module reports power is out, which is the original reason.
an output module reports no output voltage in all of its output lines. No report from power supply module. The reason is a power line disconnected from the module.
an output module reports no output voltage in one of its output lines. No other faults reported. The reason is a burnt fuse.
an output module did not report back with applying a received state. Shortly after, the control module reports an illegal state of output lines (resulting from the output module really not updating the state in a timely manner). The cause is the output module (which introduced the fault), not the control module (which halted the system due to the fault detected).
a fault of an input module switches the device to backup-failsafe mode. An output module not used so far, which was faulty, gets engaged in this mode, and the fault mode is escalated to critical. The original reason is not the input module, which is allowed to report false positives concerning faults, but the broken backup output which aborted the operation.
there is no activity of any kind from an output module, for the last 2 seconds. This means it's broken and a fault mode must be entered.
There is no comprehensive list of rules as to what causes what. Rules will be added as new kinds of faults occur "in the wild" and are diagnosed and fixed. Some of them are heuristics - if this error is accompanied by these errors, then the fault is most likely this. Some faults will not be solved - a plain list of module reports will have to suffice. Some answers will be ambiguous; one set of symptoms may suggest two different faults. This is more of a "best effort" than a "guaranteed solution" problem.
Now for the (overly general and vague) question: how to solve this? Are there specific algorithms, methods or generalized solutions to this kind of problem? How to write the generalized rulesets and match against them? How to do the soft-matching? (say, an input module broke right in the middle of an emergency halt, it's a completely unrelated event to be ignored.) Help please?
In all honesty, I would just write a series of simple rules and be done with it. It will be a pain maintenance-wise, but getting anything fancier right may be time-consuming and brittle.
If you insist, I would approach this by having each error drop some sort of symbol/token for each error code - you'll make this much harder if you try to do bag-of-words/keyword matching. You would then feed the emitted tokens into some sort of classifier.
At heart, you need some sort of rules engine - be it fuzzy or exact. The first thing that comes to mind is a hand-built Bayesian network. This would allow for fuzzy matching as you would calculate the most probable 'report' as a function of the tokens you receive. It also allows you to set a threshold for token groups that aren't really indicative of anything by specifying the minimum probability to return an answer.
You could also train a Bayes net or another type of classifier, but you'll need quite a bit of manually labeled data (token1,token2,token3 -> faultxyz), and it might be more accurate to build the network by hand yourself.
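The token-to-diagnosis matching idea above can be sketched minimally (all rule names, tokens, scoring and the threshold here are invented for illustration): each rule lists the tokens it expects, incoming reports are scored by set overlap, and anything below a confidence threshold falls back to the raw report list:

```python
# Minimal fuzzy rule-matching sketch: map sets of fault tokens to a
# diagnosis by Jaccard similarity, with a threshold below which the
# system refuses to guess (and would just dump the raw module reports).

RULES = {
    "power supply failure":    {"no_output_voltage_many", "psu_power_out"},
    "burnt fuse":              {"no_output_voltage_one"},
    "disconnected power line": {"no_output_voltage_all_lines"},
}

def diagnose(tokens, threshold=0.6):
    """Return (diagnosis, score) for the best-matching rule, or None."""
    best, best_score = None, 0.0
    for diagnosis, expected in RULES.items():
        overlap = len(expected & tokens)
        score = overlap / len(expected | tokens)  # Jaccard similarity
        if score > best_score:
            best, best_score = diagnosis, score
    if best_score >= threshold:
        return best, best_score
    return None   # not indicative of anything known

print(diagnose({"no_output_voltage_many", "psu_power_out"}))
# -> ('power supply failure', 1.0)
print(diagnose({"completely_unrelated"}))  # -> None
```

Soft matching here is just set similarity; a hand-built Bayesian network would replace the Jaccard score with conditional probabilities but keep the same token-in, diagnosis-out shape.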

Pseudocode interpreter?

Like lots of you guys on SO, I often write in several languages. And when it comes to planning stuff, (or even answering some SO questions), I actually think and write in some unspecified hybrid language. Although I used to be taught to do this using flow diagrams or UML-like diagrams, in retrospect, I find "my" pseudocode language has components of C, Python, Java, bash, Matlab, perl, Basic. I seem to unconsciously select the idiom best suited to expressing the concept/algorithm.
Common idioms might include Java-like braces for scope, pythonic list comprehensions or indentation, C++like inheritance, C#-style lambdas, matlab-like slices and matrix operations.
I noticed that it's actually quite easy for people to recognise exactly what I'm trying to do, and quite easy for people to intelligently translate it into other languages. Of course, that step involves considering the corner cases, and the moments where each language behaves idiosyncratically.
But in reality, most of these languages share a subset of keywords and library functions which generally behave identically - maths functions, type names, while/for/if etc. Clearly I'd have to exclude many 'odd' languages like lisp, APL derivatives, but...
So my questions are,
Does code already exist that recognises the programming language of a text file? (Surely this must be a less complicated task than eclipse's syntax trees or than google translate's language guessing feature, right?) In fact, does the SO syntax highlighter do anything like this?
Is it theoretically possible to create a single interpreter or compiler that recognises what language idiom you're using at any moment and (maybe "intelligently") executes or translates to a runnable form. And flags the corner cases where my syntax is ambiguous with regards to behaviour. Immediate difficulties I see include: knowing when to switch between indentation-dependent and brace-dependent modes, recognising funny operators (like *pointer vs *kwargs) and knowing when to use list vs array-like representations.
Is there any language or interpreter in existence, that can manage this kind of flexible interpreting?
Have I missed an obvious obstacle to this being possible?
edit
Thanks all for your answers and ideas. I am planning to write a constraint-based heuristic translator that could, potentially, "solve" code for the intended meaning and translate it into real Python code. It will notice keywords from many common languages, and will use syntactic clues to disambiguate the human's intentions - like spacing, brackets, optional helper words like let or then, the context of how variables were previously used, etc., plus knowledge of common conventions (like capitalized names, i for iteration, and some simplistic, limited understanding of the naming of variables/methods, e.g. containing the word get, asynchronous, count, last, previous, my, etc.). In real pseudocode, variable naming is as informative as the operations themselves!
Using these clues it will create assumptions as to the implementation of each operation (like 0/1 based indexing, when should exceptions be caught or ignored, what variables ought to be const/global/local, where to start and end execution, and what bits should be in separate threads, notice when numerical units match / need converting). Each assumption will have a given certainty - and the program will list the assumptions on each statement, as it coaxes what you write into something executable!
For each assumption, you can 'clarify' your code if you don't like the initial interpretation. The libraries issue is very interesting. My translator, like some IDE's, will read all definitions available from all modules, use some statistics about which classes/methods are used most frequently and in what contexts, and just guess! (adding a note to the program to say why it guessed as such...) I guess it should attempt to execute everything, and warn you about what it doesn't like. It should allow anything, but let you know what the several alternative interpretations are, if you're being ambiguous.
It will certainly be some time before it can manage unusual examples like @Albin Sunnanbo's ImportantCustomer example. But I'll let you know how I get on!
I think that is quite useless for everything but toy examples and strictly mathematical algorithms. For everything else, the language is not just the language: there are lots of standard libraries and whole environments around each language. I think I write almost as many lines of library calls as I write "actual code".
In C# you have .NET Framework, in C++ you have STL, in Java you have some Java libraries, etc.
The difference between those libraries are too big to be just syntactic nuances.
<subjective>
There have been attempts at unifying the language constructs of different languages into a "unified syntax". That was called 4GL and never really took off.
</subjective>
As a side note, I have seen a code example about a page long that was valid as C#, Java and JavaScript code. That can serve as an example of where it is impossible to determine the actual language used.
Edit:
Besides, the whole purpose of pseudocode is that it does not need to compile in any way. The reason you write pseudocode is to create a "sketch", however sloppy you like.
foreach c in ImportantCustomers{== OrderValue >=$1M}
SendMailInviteToSpecialEvent(c)
Now tell me what language it is and write an interpreter for that.
To detect what programming language is used: Detecting programming language from a snippet
I think it should be possible. The approach in 1. could be leveraged to do this, I think. I would try to do it iteratively: detect the syntax used in the first line/clause of code, "compile" it to intermediate form based on that detection, along with any important syntax (e.g. begin/end wrappers). Then the next line/clause etc. Basically write a parser that attempts to recognize each "chunk". Ambiguity could be flagged by the same algorithm.
I doubt that this has been done ... it seems like the cognitive load of learning to write e.g. Python-compatible pseudocode would be much lower than that of debugging the cases where your interpreter fails.
a. I think the biggest problem is that most pseudocode is invalid in any language. For example, I might completely skip object initialization in a block of pseudocode because for a human reader it is almost always straightforward to infer. But for your case it might be completely invalid in the language syntax of choice, and it might be impossible to automatically determine e.g. the class of the object (it might not even exist). Etc.
b. I think the best you can hope for is an interpreter that "works" (subject to 4a) for your pseudocode only, no-one else's.
Note that I don't think that 4a,4b are necessarily obstacles to it being possible. I just think it won't be useful for any practical purpose.
Recognizing what language a program is in is really not that big a deal. Recognizing the language of a snippet is more difficult, and recognizing snippets that aren't clearly delimited (what do you do if four lines are Python and the next one is C or Java?) is going to be really difficult.
Assuming you got the lines assigned to the right language, doing any sort of compilation would require specialized compilers for all languages that would cooperate. This is a tremendous job in itself.
Moreover, when you write pseudo-code you aren't worrying about the syntax. (If you are, you're doing it wrong.) You'll wind up with code that simply can't be compiled because it's incomplete or even contradictory.
And, assuming you overcame all these obstacles, how certain would you be that the pseudo-code was being interpreted the way you were thinking?
What you would have would be a new computer language, that you would have to write correct programs in. It would be a sprawling and ambiguous language, very difficult to work with properly. It would require great care in its use. It would be almost exactly what you don't want in pseudo-code. The value of pseudo-code is that you can quickly sketch out your algorithms, without worrying about the details. That would be completely lost.
If you want an easy-to-write language, learn one. Python is a good choice. Use pseudo-code for sketching out how processing is supposed to occur, not as a compilable language.
An interesting approach would be a "type-as-you-go" pseudocode interpreter. That is, you would set the language to be used up front, and then it would attempt to convert the pseudo code to real code, in real time, as you typed. An interactive facility could be used to clarify ambiguous stuff and allow corrections. Part of the mechanism could be a library of code which the converter tried to match. Over time, it could learn and adapt its translation based on the habits of a particular user.
People who program all the time will probably prefer to just use the language in most cases. However, I could see the above being a great boon to learners, "non-programmer programmers" such as scientists, and for use in brainstorming sessions with programmers of various languages and skill levels.
-Neil
Programs interpreting human input need to be given the option of saying "I don't know." The language PL/I is a famous example of a system designed to find a reasonable interpretation of anything resembling a computer program that could cause havoc when it guessed wrong: see http://horningtales.blogspot.com/2006/10/my-first-pli-program.html
Note that in the later language C++, when it resolves possible ambiguities it limits the scope of the type coercions it tries, and that it will flag an error if there is not a unique best interpretation.
I have a feeling that the answer to 2. is NO. All I need to prove it false is a code snippet that can be interpreted in more than one way by a competent programmer.
Does code already exist that
recognises the programming language
of a text file?
Yes, the Unix file command.
(Surely this must be a less
complicated task than eclipse's syntax
trees or than google translate's
language guessing feature, right?) In
fact, does the SO syntax highlighter
do anything like this?
As far as I can tell, SO has a one-size-fits-all syntax highlighter that tries to combine the keywords and comment syntax of every major language. Sometimes it gets it wrong:
def median(seq):
    """Returns the median of a list."""
    seq_sorted = sorted(seq)
    if len(seq) & 1:
        # For an odd-length list, return the middle item
        return seq_sorted[len(seq) // 2]
    else:
        # For an even-length list, return the mean of the 2 middle items
        return (seq_sorted[len(seq) // 2 - 1] + seq_sorted[len(seq) // 2]) / 2
Note that SO's highlighter assumes that // starts a C++-style comment, but in Python it's the integer division operator.
This is going to be a major problem if you try to combine multiple languages into one. What do you do if the same token has different meanings in different languages? Similar situations are:
Is ^ exponentiation like in BASIC, or bitwise XOR like in C?
Is || logical OR like in C, or string concatenation like in SQL?
What is 1 + "2"? Is the number converted to a string (giving "12"), or is the string converted to a number (giving 3)?
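A toy keyword-scoring guesser (keyword sets invented here for illustration; real detectors such as the Unix file command or GitHub's linguist are far more sophisticated) shows the basic snippet-detection approach, and why shared tokens make it fragile:

```python
# Naive language guesser: score a snippet against per-language keyword
# sets and pick the best match. Tokens shared between languages (like
# the ambiguous operators above) get no vote at all here.
import re

KEYWORDS = {
    "python": {"def", "elif", "import", "None", "self"},
    "c":      {"int", "char", "printf", "include", "void"},
    "sql":    {"SELECT", "FROM", "WHERE", "JOIN", "INSERT"},
}

def guess_language(snippet):
    tokens = set(re.findall(r"[A-Za-z_]+", snippet))
    scores = {lang: len(kw & tokens) for lang, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(guess_language("def median(seq): import statistics"))  # -> python
print(guess_language("SELECT name FROM users WHERE id = 1")) # -> sql
```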
Is there any language or interpreter
in existence, that can manage this
kind of flexible interpreting?
On another forum, I heard a story of a compiler (IIRC, for FORTRAN) that would compile any program regardless of syntax errors. If you had the line
= Y + Z
The compiler would recognize that a variable was missing and automatically convert the statement to X = Y + Z, regardless of whether you had an X in your program or not.
This programmer had a convention of starting comment blocks with a line of hyphens, like this:
C ----------------------------------------
But one day, they forgot the leading C, and the compiler choked trying to add dozens of variables separated by what it thought were subtraction operators.
"Flexible parsing" is not always a good thing.
To create a "pseudocode interpreter," it might be necessary to design a programming language that allows user-defined extensions to its syntax. There already are several programming languages with this feature, such as Coq, Seed7, Agda, and Lever. A particularly interesting example is the Inform programming language, since its syntax is essentially "structured English."
The Coq programming language allows "syntax extensions", so the language can be extended to parse new operators:
Notation "A /\ B" := (and A B).
Similarly, the Seed7 programming language can be extended to parse "pseudocode" using "structured syntax definitions." The while loop in Seed7 is defined in this way:
syntax expr: .while.().do.().end.while is -> 25;
Alternatively, it might be possible to "train" a statistical machine translation system to translate pseudocode into a real programming language, though this would require a large corpus of parallel texts.
