Error Handling in LC-3? - lc3

I have a question on my review sheet that I can't seem to get, but rather than ask for the answer I would like to learn the difference between these specific concepts.
For reference, the question is: "An LC-3 instruction ADD R1, R2, #45 produces an error. It will be caught at a. assembly time b. link time c. run time d. compile time." Rather than just finding out the answer, what I would like to know is: what is the difference between these, and how do they differ when it comes to error handling?

Using the C programming language as an example, the four steps to create an executable program are Preprocessing, Compiling, Assembling, and Linking.
Compile time
These are very common and are caused by a malformed user program that the compiler can't process; something as simple as a forgotten semicolon will produce a compiler error.
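For example (file name made up), a missing semicolon stops the build before any object file is even produced:
/* broken.c - fails at compile time */
#include <stdio.h>
int main(void)
{
    int answer = 42        /* missing semicolon: the compiler reports an error */
    printf("%d\n", answer);
    return 0;
}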
Assembly Time
Something went wrong in the assembler. This includes using an instruction incorrectly (as in the question above), or using a LABEL in an instruction without ever defining it, etc.
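You can provoke an assembly-time error from C as well; one contrived way (assuming GCC on x86, which passes inline assembly text straight through to the assembler) is to hand the assembler a mnemonic that doesn't exist:
/* asmfail.c - the compiler accepts it, the assembler rejects it */
int main(void)
{
    __asm__("bogus_instruction");   /* emitted verbatim; the assembler reports "no such instruction" */
    return 0;
}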
Link Time
As part of the C build process, the many object files generated by the assemble step are linked together to form an executable. In C you can specify that a symbol is defined externally via the extern keyword; similarly, a function prototype tells the compiler that a function is defined somewhere.
The linker resolves where those variables/functions actually live. If something references a function/variable that was never defined, you get an undefined reference error; the same happens if a symbol is defined multiple times.
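A small sketch (the function name is made up): the compiler is satisfied because the prototype promises a definition somewhere, but the linker never finds one:
/* linkfail.c - compiles and assembles fine, fails at link time */
int helper(int x);              /* prototype: "defined somewhere else" */

int main(void)
{
    return helper(7);           /* linker: undefined reference to `helper' */
}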
Run Time
An error occurred while your program was running, such as dereferencing a null pointer or dividing by zero.
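For example, this builds with no complaints from the compiler, assembler, or linker, and only fails once it actually runs:
/* runfail.c - builds cleanly, crashes at run time */
#include <stddef.h>

int main(void)
{
    int *p = NULL;
    return *p;                  /* null pointer dereference: run-time crash */
}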

Related

What's the difference between inline and block compilation in SBCL?

Several weeks ago, SBCL was updated to 2.0.2 and brought in the block compilation feature. I have read this article to understand what it is.
I have a question: what's the difference between (declaim (inline 'some-function)) and block compilation? Is block compilation done automatically by the compiler?
Thanks.
Inline compilation is a specific optimization technique. A function being called is directly integrated into the calling function - usually using its source code - and then compiled.
This means that a function might end up inlined not just into one calling function, but into many.
Advantage: the overhead of calling a function disappears.
Disadvantage: the code size increases, and the calling function(s) need to be recompiled when the inlined function changes and we want that change to become visible. Macros have the same problem.
Block compilation means that a bunch of code gets compiled together with different semantic constraints and that this enables the compiler to do a bunch of new optimizations.
The Common Lisp standard includes support for block compilation of single files: it allows the file compiler to assume that a file forms such a block of code.
Example from the Common Lisp standard:
3.2.2.3 Semantic Constraints
A call within a file to a named function that is defined in the same file refers to that function, unless that function has been declared notinline. The consequences are unspecified if functions are redefined individually at run time or multiply defined in the same file.
This allows the code to call a global function and not use the symbol's function cell for the call. Thus this disables late binding for global function calls - in this file and for functions in this file.
It's not said how this can be achieved, but the compiler might just allocate the code somewhere and the calls just jump there.
So this part of block compilation is defined in the standard and some compilers are doing that.
Block compilation for multiple files
If the file compiler can use block compilation for one file, then what about multiple files? A few implementations let you tell the file compiler that several files together make up one block for compilation. CMUCL does that. SBCL was derived (and simplified) from CMUCL and has lacked this feature until now. I think Lucid Common Lisp (which is no longer actively sold) supported something like that, too.
Might be useful to add this to SBCL, too.

Intel Fortran to GNU Fortran Conversion [closed]

I am working on a custom CFD Solver written in Fortran 90 and MPI.
The code contains 15+ modules and was initially designed to work with the Intel Fortran compiler. Since I no longer have access to the Intel compiler, I need to make it work using the GNU Fortran compiler.
I made changes in the Makefile, which initially had flags suitable for ifort.
I am using it on Ubuntu with GNU Fortran and OpenMPI.
I am sorry that I cannot include anything from the code structure or terminal output due to my university's IP restrictions. Nevertheless, I will try to describe the issues as best I can.
Now when I compile the code, I am running into some strange issues.
GNU Fortran is not able to read lines that are too long, and I get errors during compilation. As a result I have to break them into multiple lines using the '&' continuation symbol.
A module D.f90 contains all of the global variable declarations. However, during compilation I now get an error in module B.F90.
The error I get is 'Unclassified Statement Error'; I was able to fix it in some subroutines and functions by locally declaring the variables again.
I am not the most experienced person in Fortran, but I thought that changing the compiler should not be a source of newfound syntax errors.
The errors described so far could be remedied, but considering the size of the code base it is impractical to keep fixing them by hand.
I was hoping someone could share their views on this matter and provide guidance on how to tackle it.
You should start reading three pieces of documentation:
The Fortran 90 standard (alternatively, other versions), which tells you what is legal, standard Fortran and what is not. Whenever you find some error, look at your code and check if what you are doing is legal, standard Fortran. Likely, the code in question will either be completely nonstandard (e.g. REAL*8, although that extension is fairly well understood) or rely on unspecified behaviour that Intel Fortran and GFortran are interpreting in different ways.
The GFortran manual for your version, which tells you how GFortran decides such unspecified cases, what intrinsic functions are available, how to change some options/flags, etc. This would tell you that your problem with the line lengths would be solved by adding -ffree-line-length-none.
The Intel Fortran manual for your version, which, in cases of non-standard or unspecified behaviour, will let you know what the code you are reading was written to do, i.e. the behaviour that you should expect. In particular, it will allow you to decipher what the compiler flags that are currently being used mean. They may or may not need translation to GFortran, e.g. /Qsave will need to become -fno-automatic.
A concrete example of interpretative differences within the range allowed by the standard: until Fortran 2003, the units for the "record length" in random-access record files were left unspecified. Intel Fortran used "one machine word" (4 bytes on x86) while GFortran used 1 byte. Both complied with the letter of the standard, but they were incompatible with each other.
Furthermore, even when coding "to standard", you may hit a wall if the compiler does not implement part of the Fnn standard, or implements it with bugs. Case in point: Intel Fortran 12.0 (old, but it's what I work with) does not implement the ALLOCATE(y, SOURCE=x) construct for polymorphic x (the "clone allocation"). On the other hand, GFortran has not completely implemented FINAL type-bound procedures (destructors).
In both cases, you will need to find workarounds. For example, for the first issue you can use a special form of the INQUIRE statement (kudos to #haraldkl). In other cases, the workaround might even involve using some kind of feature detection (see autoconf, CMake, etc.) and storing the results as PARAMETER variables in a config.f90 file that is included by your code. Your code would then take decisions based on it, as in:
! config.f90.in (things in #x# would get substituted by automake, for example)
INTEGER, PARAMETER :: RECORD_LEN_BYTES = #RECORD_LEN_BYTES#
! Some other file which opens a file
INCLUDE "config.f90"
!...
OPEN(u, FILE='DE430.BIN', ACCESS='direct', FORM='unformatted', RECL=56 / RECORD_LEN_BYTES)
People have been complaining about following the standard since at least the '60s, but those cDEC$ features were put in for good reasons...
It is valuable to build with more than one compiler, though; problems usually get caught by one compiler or the other.
For your question #1: "GNU Fortran is not able to read lines that are too long, and I get errors during compilation. As a result I have to break them into multiple lines using the '&' continuation symbol."
In the days of old there was:
options/extended_source
SUBROUTINE...
In ifort it is -132, but I have not found an exact gfortran equivalent to -132. It may be -ffixed-line-length-n, -ffixed-line-length-none, -ffree-line-length-n, or -ffree-line-length-none, per the link: http://www.math.uni-leipzig.de/~hellmund/Vorlesung/gfortran.html#SEC8
Also, with ifort the default for .f90 and .f95 files is free form ('-free'), while fixed form ('-fixed') is the default for the older extensions. However, one can use -fixed with .f90 and use column 6 for continuation and 'D' in column #1, which is handy with '-D_lines' or '-DD'.
Per the link: https://software.intel.com/sites/default/files/m/f/8/5/8/0/6366-ifort.txt
For your question #2: "A module D.f90 contains all of the global variable declarations. However, during compilation I now get an error in module B.F90. The error I get is 'Unclassified Statement Error'; I was able to fix it in some subroutines and functions by locally declaring the variables again."
You probably need to put in the offending line, if you can get an IP waiver.
Making variables local if they are expected to be shared in a /common/ or shared in a module will not work.
If they were in /common/ or PUBLIC then they are shared.
If they are local then they are PRIVATE.
It would be easy to get that error if a PRIVATE statement was in the wrong place, or a USE statement was omitted.

Xcode error: Command /Developer/usr/bin/clang++ failed with exit code 1 due to duplicate symbol

I'm trying to write a program in C++ which runs Conway's Game of Life. I think I have everything that I need, but I'm having some trouble with compiling.
The program is composed of four files: gameoflife.h, a header file which contains my global constants and function declarations, gameoflife.cpp, which defines the functions, main.cpp, which uses the functions, and seeds.cpp, which contains a list of predefined seeds to be used.
When I go to compile the application, I seem to have a clash of duplicate symbols between main.cpp and gameoflife.cpp over an array called currGen which is declared in gameoflife.h.
Both main.cpp and gameoflife.cpp include gameoflife.h, which of course is necessary so that they have access to the global constants and function declarations.
The exact error I receive is the following:
duplicate symbol _currGen in /(same_path)/ConwaysGameOfLife.build/Objects-normal/
x86_64/gameoflife.o and
/(same_path)/ConwaysGameOfLife.build/Objects-normal/x86_64/main.o
for architecture x86_64
Command /Developer/usr/bin/clang++ failed with exit code 1
I've looked around on Stack Overflow but haven't found anything which matches my problem. Any help would be greatly appreciated!
You are probably defining the variable currGen in your header file, not just declaring it.
There needs to be exactly one definition, in one .cpp file. The .h file should just declare it, using extern.
This answer goes into much more detail.
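Concretely, the split looks something like this (the element type and dimensions of currGen are just guesses for illustration, since the question doesn't show them; the extern mechanics are the same either way):
// gameoflife.h
extern int currGen[20][20];    // declaration only: "this array exists somewhere"

// gameoflife.cpp
#include "gameoflife.h"
int currGen[20][20];           // the one and only definition

main.cpp and seeds.cpp keep including gameoflife.h as before; the linker resolves every use of currGen to the single definition in gameoflife.cpp, and the duplicate symbol error goes away.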

multiple definition error from dl-addr.c with addition of printf, in glibc compilation

In glibc-2.13/nptl/sigaction.c, I just put a simple printf("test\n"); and my glibc compilation fails. Just adding a printf gives me multiple definitions of _itoa from dl-addr.c, and I have no idea why.
Can anybody please tell me why this is happening and what a possible solution would be? The error:
test/glibc-build/libc_pic.a(_itoa.os): In function `_itoa':
test/glibc-2.13/stdio-common/_itoa.c:215: multiple definition of
`_itoa'
test/glibc-build/elf/dl-allobjs.os:test/glibc-2.13/elf/dl-minimal.c:300: first defined here
test/glibc-build/libc_pic.a(dl-addr.os): In function `_dl_addr_inside_object':
test/SOURCE/glibc-2.13/elf/dl-addr.c:156: multiple definition of
`_dl_addr_inside_object'
test/glibc-build/elf/dl-allobjs.os:glibc-2.13/elf/dl-open.c:658: first defined here
Just adding a printf gives me multiple definitions of _itoa
Don't do that.
Glibc is quite complicated, and you need to know what you are doing when you modify it.
What's happening is that the link for elf/ld.so fails (you didn't say what target is failing, but I am pretty sure it's the ld.so; in the future please show the entire error message, not just parts of it).
The ld.so is the dynamic linker that will eventually bind your program to printf in libc.so.6. For obvious reasons, ld.so itself can't dynamically link to printf -- it must execute before libc.so.6 has even been mmapped. As such, it links in minimal parts of libc.a, just enough for it to get running. printf is not part of that minimal runtime, so you can't "just add a call to it".

How does the compiler detect duplicate definition across translation units

How does a compiler detect a duplicate definition across translation units? Suppose there were an extern const variable declaration in a header file. If this header file were used in more than one translation unit - each having a separate definition - each TU's object file would be created successfully; however, when the final executable is created, the error is thrown.
Is there a reference table created to account for these duplications while linking the TUs (during the creation of the executable)?
Any link on this topic would be helpful.
Thanks in advance for your explanation.
Normally this would be detected by the linker, rather than the compiler. The linker can then either coalesce the variables (often required for sloppy C/C++ coding) or report an error.
Yes, the linker builds a list of unresolved external references and then eventually goes on to attempt to resolve them one by one.
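As a sketch (file and symbol names made up): each translation unit compiles to an object file carrying its own symbol table, and the clash only becomes visible when the linker merges them:
/* a.c */
int counter = 0;       /* a.o records a definition of the symbol `counter` */

/* b.c */
int counter = 0;       /* b.o records its own definition of `counter` */

When a.o and b.o are linked, the linker merges the symbol tables, sees `counter` defined twice, and reports an error along the lines of "multiple definition of `counter'". The fix is to define the object in exactly one source file and put only an extern declaration in the shared header.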

Resources