Reading an array from a text file - Pascal

I have a problem. When the program reads a text file, it always runs past the end of the line and crashes.
var f:text;
i,j,cs:byte;
a:array[0..10,0..10] of int64
begin
assign(f,'anything.txt');
reset(f);
cs:=0;
while eoln(f)=false do
begin
read(f,a[0,cs]);
inc(cs);
end;
close(f);
end.
Here is the content of anything.txt:
2 4 8 16
exitcode=201

You have not told us which compiler you are using.
In Delphi, and in the Turbo Pascal which preceded it, run-time error 201 means "range check error". I no longer have Turbo Pascal installed, but your program compiles and runs correctly as a "console application" in Delphi with only one minor change, namely inserting a semi-colon (';') after int64. It runs correctly whether the compiler has range-checking turned on or not.
It also runs correctly in FreePascal + Lazarus.
So, unless you are using a different compiler which also happens to have a run-time error code 201, your problem seems to be caused by something you have not included in your question. In any case you should learn to debug this kind of problem yourself. So:
Look up how to use the debugger in your Pascal compiler. Place a breakpoint on the line inc(cs), e.g. by pressing F5, and run the program. When it stops at the breakpoint, place debug watches (using Ctrl-F5 in Delphi/TP) on the values of cs and a and observe them carefully. Press F8 repeatedly to single-step the program, and see if you can spot where and why it goes wrong.
One possibility for what is causing your problem is that you are not reading the copy of anything.txt you think you are: because you don't include a path to the file in assign(f, ...), Windows will use the copy of anything.txt, if any, in whatever it thinks the current directory is. To avoid this, include the path to where the file is, as in
assign(f, 'C:\PascalData\Anything.Txt');
Also, btw, you don't need to compare a boolean function (or expression) against true or false, as in
while eoln(f)=false do
Instead you can simply do
while not eoln(f) do
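Putting those points together, a minimal corrected sketch would be (assuming Free Pascal or Delphi, since classic Turbo Pascal has no int64; the path and the cs <= 10 guard are illustrative additions):
var
  f: text;
  cs: byte;
  a: array[0..10, 0..10] of int64;   { note the semi-colon after int64 }
begin
  assign(f, 'C:\PascalData\anything.txt');   { use the real path to your file }
  reset(f);
  cs := 0;
  { guard cs so the index can never run past the array bound }
  while (not eoln(f)) and (cs <= 10) do
  begin
    read(f, a[0, cs]);
    inc(cs);
  end;
  close(f);
end.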

Related

C-u M-x recompile error: "Wrong type argument: consp, nil"

If I do C-u M-x recompile inside a buffer that's not the *compilation* buffer (i.e. the source file, for instance), I get this error - "Wrong type argument: consp, nil" - after it prompts for the compilation command. Why is this? I want to run recompile interactively, as comint works, sometimes outside the compilation buffer. How do I do this?
Try using emacs -Q, just to be sure (yes, I know you said you commented out all of your init file, but just to be sure -- and it's a lot easier to do than comment-out everything).
Next, set debug-on-error to t -- You can do M-x toggle-debug-on-error to do that, if you prefer.
Next, provoke the error and look at the debugger *Backtrace*. It will show you not only which function raised the error because it expected a cons and got nil instead, but also what function called it, passing the bad argument. And so on down the stack.
If necessary, you can click mouse-2 on functions on the stack (at the left) to see their source code. Or put the cursor on them and use C-h f to see their doc -- in particular, what arguments they expect and what their return values should be.
In this way it's pretty easy to find the code that is the culprit. (Most likely, in spite of what you said, it is some non-vanilla Emacs Lisp code you loaded somehow.)
Also, state your Emacs version: M-x emacs-version. If you are using a development snapshot then the problem could come from vanilla code (i.e., emacs -Q); otherwise, that's not so likely.
Also, you say you get the error after it prompts you. Immediately after it prompts, before you type anything? After you type a command name and hit RET? Try to be more specific.
Update after your comment:
Load library compile.el (not .elc). Then do M-x debug-on-entry recompile, then step through the debugger using d when function recompile is entered. What you are interested in is when compilation-start is called (applied to its args).
It seems that the value of compilation-arguments that is passed to it is no good. The command name you enter at the prompt becomes the first of the list of compilation-arguments. The others are taken from when you last invoked compile: recompile just reuses the same arguments (except the command name): (mode name-function highlight-regexp).
However, be aware that compilation-arguments is buffer-local. So if you changed to a different buffer then its value is likely not what you need. You need the value from your last compile, so you should do the recompile in the same buffer where you did compile.
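If you just want a quick look at that value, a minimal check is the following (the buffer name and the example result are assumptions, not something taken from the question):
(require 'compile)
;; "*compilation*" is an assumption -- evaluate this against whichever
;; buffer your last compile actually ran in (see the note above about
;; compilation-arguments being buffer-local).
(with-current-buffer "*compilation*"
  compilation-arguments)
;; => something like ("make -k" nil nil nil), or nil if no compile has
;;    ever been started from that buffer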
(FWIW, I don't use (re)compile myself, as I don't develop software anymore. I just took a look at the source code.)
This kind of error usually indicates a problem with your configuration. Try investigating the *Messages* buffer output. There can be some clues there.
And of course, it is normal to call a recompile command from a buffer containing your code. It is a common convention to bind it to C-c C-c.

When I use Conditional Compilation Arguments to Exclude Code, why doesn't VB6 EXE file size change?

Basically, when declaring Windows API functions in my VB6 code, there come many constants that need to be declared or used with these functions. In fact, usually most of these constants are not used and you only end up using one of them or so when making your API calls, so I am using Conditional Compilation Arguments to exclude these (and other things) using something like this:
IncludeUnused = 0 : Testing = 1
(This is how I set two conditional compilation arguments; they are of Boolean type by default.)
So, many unused things are excluded like this:
#If IncludeUnused Then
' Some constant declarations and API declarations go here, sometimes functions
' and function calls go here as well, so it's not just declarations and constants
#End If
I also use a similar wrapper using the Testing Boolean declared in the Conditional Compilation Arguments input field on the "Make" tab of the VB6 Project Properties window. The Testing Boolean is used to display message boxes and things like that when I am in testing mode, and of course, these message boxes are removed (not displayed) if I have Testing set to 0 (and it is obviously 1 when I am testing).
The problem is, I tried setting IncludeUnused and Testing to 0 and 1 and vice versa, a total of four (4) combinations, and no matter what combination I set these values to, the output EXE file size for my VB6 EXE does not change! It is always 49,152 bytes when compiled to Native Code, whether using Fast Code or Small Code.
Additionally, if I compile to p-code under the four (4) combinations of Testing and IncludeUnused, I always end up with a file size of 32,768 bytes no matter what.
This is driving me crazy, since it is leading me to believe that no change is actually occurring, even though it is. Why is it that when segments of code are excluded from compilation, the file size is still the same? What am I missing or doing wrong, or what have I miscalculated?
I have considered the possibility that VB6 automatically leaves unused code out of the final output EXE, but I have read from a few sources that this is not true: if code is included, it is compiled (correct me if I am wrong). And if that is right, then there should be no need to use the IncludeUnused Boolean to remove unused code...?
If anyone can shed some light on these thoughts, I'd greatly appreciate it.
It could well be that the size difference is very small and that the exe size is padded to the next 512 or 1024 byte alignment. Try compressing the exe's with zip and see if the zip-file sizes differ.
You misunderstand what a compiler does. The output of the VB6 compiler is code. Constants are merely place holders for values, they are not code. The compiler adds them to its symbol table. And when it later encounters a statement in your code that uses the constant then it replaces the constant by its value. That statement produces the exact same code whether you use a constant or hard-code the value in the statement.
So this automatically implies that if you never actually use the constant anywhere then there is no difference at all in the generated code. All that you accomplished by using the #If is to keep the compiler's symbol table smaller, which is something that makes very little sense to do; the actual compilation-speed gain you get is not measurable. Symbol tables are implemented as hash tables, they have O(1) amortized complexity.
You use constants only to make your code more readable. And to make it easy to change a constant value if the need ever arises. By using #If, you actually made your code less readable.
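As a small illustration of that point (only a sketch; ShowWindow is the standard user32 declaration and SW_HIDE is its documented value of 0), the two calls below compile to exactly the same code:
Private Declare Function ShowWindow Lib "user32" _
    (ByVal hWnd As Long, ByVal nCmdShow As Long) As Long
Private Const SW_HIDE As Long = 0

Private Sub HideIt(ByVal hWnd As Long)
    ' The compiler substitutes the constant's value at the call site...
    ShowWindow hWnd, SW_HIDE
    ' ...so hard-coding the value generates identical code.
    ShowWindow hWnd, 0
End Sub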
You can't test runtime data in conditional compilation directives.
These directives use expressions made up of literal values, operators, and CC constants. One way to set constant values is:
#Const IncludeUnused = 0
#Const Testing = 1
You can also define them via Project Properties for IDE testing. Go to the Make tab in that dialog and click the Help button for details.
Perhaps this is where you are setting the values? If so, consider this just additional info for later readers rather than an answer.
See #If...Then...#Else Directive
VB6 executable sizes are padded to 4KB blocks, so if the code difference is small it will make no difference to the executable.

Formatting in Fortran 90/95

I am learning Fortran 90/95, and the book I am using had a discussion about the influence of line printers on the format statement. According to the book, the program uses the first character of the line to decide the line's position relative to the previous line (i.e. '1' starts a new page, '0' skips a line, '+' overwrites the previous line, and ' ' or any other character writes the new line below the previous line). I compiled and ran a simple program in the console to test this, but did not observe this behavior.
program test
integer :: i = 123
character(13) :: hello = 'Hello, World!'
100 format ('0','Count = ',I3)
write (*,*) hello
write (*,100) i
end program
The output is
Hello, World!
0Count = 123
where I would have expected
Hello, World!
Count = 123
Does anyone know why this is? Is this a legacy feature that is not used in Fortran 90/95? Is it specific behavior for a print to the console? I would like to know when (if ever) I need to declare a special first character in the format statement when writing.
My compiler is Force 2.0.9, which I believe is based on gfortran. I am running it on Windows 7, and the console is PowerShell.
Thanks for the help!
This was used in the 70s and even 80s with line printers in FORTRAN 77 and earlier ... but when did you last see a line printer? Any Fortran 90/95 book that teaches this feature should be thrown away.
This has already been answered on Stack Overflow: Are Fortran control characters (carriage control) still implemented in compilers?
This applies only to old "fixed form" Fortran (77 and earlier), in which all statements had to be indented by six spaces, not to the new "free form" Fortran (90 and later). You can still use fixed form with appropriate compiler flags. Sometimes this is even the default if the extension is .f rather than .f90.
Fortran 90/95 is much too recent for me, but I don't recall those formatting options from using FORTRAN in the seventies.
Nevertheless, I think it's a legacy feature, and the fact that you've got '+' to overwrite a line suggests the options are intended for output to a line printer, not the console screen.
That is one of the legacy features of Fortran which is mostly ignored today, since (if nothing else) one does not print to printers directly. In any case, most compilers (I really couldn't say for gfortran specifically now, since I don't have it installed) have options to disregard that behaviour, and some of them ignore the first column by default. From what you've shown it is reasonable to assume yours is one of them, so yes, you can ignore it.
Out of habit, with regard to this practice, a lot of Fortran programmers start any string with a blank or a 1x in the write statement.
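As a sketch of that habit (free-form Fortran 90; the program itself is just an illustration), the 1x edit descriptor keeps column one blank so no accidental control character ever lands there:
program blank_column
  integer :: i = 123
  ! 1x writes a single leading blank, so column one can never be
  ! mistaken for a carriage-control character such as '0' or '1'
  write (*, '(1x, a, i3)') 'Count = ', i
end program blank_column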

Does it make a difference to the debugger that it is Scala code I'm debugging?

I am wondering what difference it makes to the debugger (IntelliJ IDEA + Scala plug-in) when I debug Scala code rather than Java code. To my understanding, a debugger is tightly coupled with the language, i.e. a Java debugger cannot handle Scala code, but apparently the JVM is the center of attention here, meaning that as long as it is byte code, any debugger would do. Right?
IMPORTANT UPDATE: The problem was to give an example of how a byte-code debugger may be limiting for Scala. Assume a breakpoint is reached and I don't want to go to the next line, but I want the debugger to evaluate a Scala expression in the context of the application (e.g. I'd like to call an operator method from a singleton object). The debugger is stuck because it cannot understand Scala. I have to do the transformation myself and feed the resulting Java to the debugger.
The problem is that only "breakpoint stuff" can be handled at the byte-code level. What if you want to put an expression under watch? The debugger has to understand Scala to evaluate the watched expression, right? This time I'm sure I'm right. Vengeance is mine, saith the Lord ;-)
Short answer: your assumptions are wrong.
The reason is that the debugger does not care what language you're debugging. It stops at breakpoints, which in turn reference a line of a particular source file. Note that the source file is merely text for you to read - the debugger never scans the source files. If you point the source lookup at another directory containing a text file with the right filename for a breakpoint that has been set, the debugger will happily show that file when the breakpoint is hit. Every time you set a breakpoint, your IDE is telling the debugger: hey, scan this class for any byte code on this line and stop when you hit it. This of course does not work if your IDE is attempting to compile that same text file into a class file - however, it will work if you create fake text files as source for a jar file and do the source-file mapping thing.
If one thinks about it, writing a simple template and compiling it while supporting debugging is not that difficult. Simply use ASM to create all the print statements and tell ASM that each print statement comes from the template file on a given line. After that you can add more clever stuff while keeping things debuggable.

How do Perl file descriptors work on Windows?

Are file descriptors supported on Windows? Why do things "seem to work" with fds in Perl?
Things like "fileno", "dup" and "dup2" were working but then randomly inside some other environment, stopped working. It's hard to give details, mostly what I'm looking for is answers from experienced Windows programmers and how file descriptors work/don't work on Windows.
I would guess that it's the PerlIO layer playing games and making it seem as though file descriptors work, but that's only a guess.
Example of what is happening:
open($saveout, ">&STDOUT") or die();
...
open(STDOUT, ">&=".fileno($saveout)) or die();
The second line die()s but only in certain situations (which I have yet to nail down).
Windows uses file descriptors natively. See Low-Level I/O on MSDN. They all report errors through the C variable errno, which means they show up in Perl's $!.
Note that you can save yourself a bit of typing:
open(STDOUT, ">&=", $saveout) or ...;
This works because the documentation for open in perlfunc says:
If you use the 3-arg form then you can pass either a number, the name of a filehandle or the normal “reference to a glob.”
Finally, always include meaningful diagnostics when you call die! The program below identifies itself ($0), tells what it was trying to do (open), and why it failed ($!). Also, because the message doesn't end with a newline, die adds the name of the file and line number where it was called.
my $fakefd = 12345;
open(STDOUT, ">&=", $fakefd) or die("$0: open: $!");
This produces
prog.pl: open: Bad file descriptor at foo.pl line 2.
According to the documentation for _fdopen (because you used >&= and not >&), it has two failure modes:
If execution is allowed to continue, errno is set either to EBADF, indicating a bad file descriptor, or EINVAL, indicating that mode was a null pointer.
The second would be a bug in perl and highly unlikely because I don't see anywhere in perlio.c that involves a computed mode: they're all static strings.
Something appears to have gone wrong with $saveout. Could $saveout have been closed before you try to restore it? From your example, it's unclear whether you enabled the strict pragma. If it's not lexical (declared with my), are you calling a function that also monkeys with $saveout?
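One way to narrow that down is to check the saved handle just before restoring it. This is only a sketch of the idea; the redirection in the middle stands in for whatever your real code does:
use strict;
use warnings;

# Save the current STDOUT as a duplicate handle.
open(my $saveout, '>&', \*STDOUT) or die("$0: save STDOUT: $!");

# ... placeholder: code that redirects or otherwise fiddles with STDOUT ...

# If something closed $saveout in the meantime, fileno() returns undef
# and the restore below would fail with EBADF ("Bad file descriptor").
defined fileno($saveout)
    or die("$0: saved handle was closed before the restore");

open(STDOUT, '>&=', fileno($saveout)) or die("$0: restore STDOUT: $!");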

Resources