OCaml compilation error Out_of_memory

When I compile the source of the latest version of OCaml (ocaml-4.00.0) on my mutualized server, I get the following Out_of_memory error message:
"
Fatal error: exception Out_of_memory
Exit code 2 while executing this command:
../ocamlcomp.sh -c -g -warn-error A -w a -I camlp4/boot -I camlp4 -I stdlib -o camlp4/boot/camlp4boot.cmo camlp4/boot/camlp4boot.ml"
My initial bash command is: make world
Would anybody have an idea where this error might come from?
Thanks

I've compiled OCaml 4.00.0 many times recently, and I have some saved logs. The exact failing command you give here appears at around the halfway point in the logs, which then go on to build the compiler successfully. I would conclude from this that you're actually running out of memory. I.e., that the compilation takes more memory than your system has available. Is this possible? (I don't know what you mean by a mutualized server.)
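If it is a shared host, a quick way to check is to compare the machine's free memory with any per-process limits the host imposes. A minimal sketch, assuming a Linux-style server:

ulimit -a   # per-process limits; the -v line is the virtual memory cap
free -m     # RAM and swap actually available on the machine

If the host allows it, raising the cap before building (e.g. ulimit -v unlimited) and re-running make world may get the compilation through.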

Related

PVS-Studio: No compilation units were found

I'm using PVS-Studio in a Docker image based on ubuntu:18.04 to cross-compile a couple of files with arm-none-eabi-gcc. After running pvs-studio-analyzer trace -- .test/compile_with_gcc.sh, the strace_out file is successfully created; it's not empty and contains calls to arm-none-eabi-gcc.
However, pvs-studio-analyzer analyze complains that "No compilation units were found". I tried the --compiler arm-none-eabi-gcc flag with no success.
Any ideas?
The problem was in my approach to compilation. Instead of using a proper build system, I used a wacky shell script (surely, I thought, using a build system for 3 files is overkill; a shell script won't hurt anybody). And in that script I used grep to redefine one constant in the source - something like this:
grep -v -i "#define[[:blank:]]\+${define_name}[[:blank:]]" ${project}/src/main/main.c | ~/opt/gcc-arm-none-eabi-8-2018-q4-major/bin/arm-none-eabi-gcc -o main.o -xc -
So the compiler didn't actually compile the real file; it compiled the output of grep. Naturally, PVS-Studio wasn't able to analyze it.
TL;DR: Don't use shell scripts as a build system.
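For reference, the conventional way to override a constant at build time is to make the #define overridable and pass the value on the compiler command line, instead of filtering the source with grep. A minimal sketch (BUFFER_SIZE is an illustrative name, not from the original script):

# in main.c, guard the constant so it can be overridden:
#   #ifndef BUFFER_SIZE
#   #define BUFFER_SIZE 64
#   #endif
# then override it at compile time without touching the source:
arm-none-eabi-gcc -c -DBUFFER_SIZE=128 ${project}/src/main/main.c -o main.o

This also keeps the analyzer happy, since the compiler sees the real file on disk.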
We have reviewed the strace_out file. It can be handled correctly by the analyzer if the source files and compilers are located by absolute paths in the strace_out file. We have a suggestion that might help you. You can "wrap" the build command in calls to pvs-studio-analyzer trace -- and pvs-studio-analyzer analyze and place them inside your script (compile_with_gcc.sh). Thus, the script should start with the command:
pvs-studio-analyzer trace --
and end with the command:
pvs-studio-analyzer analyze
This way we can make sure that the build and the analysis are started within the same container run. If the proposed method does not help, please describe the process of building the project and running the analyzer in more detail, command by command. Also tell us whether the container is re-run between the build (which produces strace_out) and the analysis itself.
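A minimal sketch of compile_with_gcc.sh arranged that way (build_everything.sh is a placeholder for the actual arm-none-eabi-gcc calls):

#!/bin/sh
# trace the real build and analyze it within the same container run
pvs-studio-analyzer trace -- ./build_everything.sh
pvs-studio-analyzer analyze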
It would also help us a lot if you ran the pvs-studio-analyzer command with the optional --dump-log flag and provided the resulting log to us. An example of such a command:
pvs-studio-analyzer analyze --dump-log ex.log
Also, it seems the problem cannot be solved quickly, so it is probably more convenient to continue the conversation via the feedback form on the product website.

GNU make error code (e = -1)

I have a strange problem. I had been using GNU make for the last 4 weeks with no issues. I'm pointing my makefile at an AVR toolchain to cross-compile for an Atmel processor. A few days ago, GNU make stopped working. When I run make I get the following output:
make (e=-1): Error -1
make: *** [main.o] Error -1
To get more insight into the problem, I did a dry run using make -n, which just echoes the commands without executing them. It prints all of the statements in the makefile, including the commands. A short snippet of this output is as follows:
echo
echo "=================================="
echo "Compiling: " main.c
echo "=================================="
"/c/Users/Shane Reynolds/Documents/CDU_embeddedSystems/CDUEmbeddedToolbox/avr_tools/bin/avr-gcc" -c -std=gnu99 -g -mmcu=atmega1281 -DF_CPU=16000000UL -Wall -Wstrict-prototypes -Os main.c -o main.o
You can see that the command is printed at the end of this short snippet of the output. If I copy and paste the command into the terminal, the process works fine - but doing this every time is annoying. To understand why GNU make was failing, I ran it with the debug flag (make -d) and received a lot of output. The snippet of what I think is important is:
CreateProcess(C:\Users\Shane Reynolds\Documents\CDU_embeddedSystems\CDUEmbeddedToolbox\avr_tools\utils\bin\echo.exe,echo,...)
Putting child 0x0043fdf0 (main.o) PID 4486808 on the chain.
Live child 0x0043fdf0 (main.o) PID 4486808
Main thread handle = 0x000000a8
Reaping winning child 0x0043fdf0 PID 4486808
Live child 0x0043fdf0 (main.o) PID 4488168
Reaping losing child 0x0043fdf0 PID 4488168
make (e=-1): Error -1
make: *** [main.o] Error -1
Removing child 0x0043fdf0 PID 4488168 from chain.
Can anyone help me with this? I've spent a couple of days trying to figure this out. I really hope that it is not something glaringly obvious, or simple.
EDIT: To add a little more background, I am using this on Windows 8 - when I run make from either bash or cmd I get the same error message. The link to my makefile is:
https://pastebin.com/j7uMSLic
FURTHER EDIT: I've created a very simple makefile with a very simple source file - running it produces the same error, but I can still use the AVR toolchain to compile and link manually, just as before.
Okay, so it turns out that it was something to do with Git Bash (the terminal I was using). Somewhere, somehow, I think one of the path variables got messed up. I completely uninstalled Git and then re-installed it, and it works fine now. Not sure how this happened, but I'm glad to have it fixed.
If anyone else has an explanation, or can add more insight into the problem and how it can be avoided, feel free to post below.
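For anyone hitting something similar, a quick sanity check before reinstalling is to confirm what the shell actually resolves make and its helper utilities to, run from the same shell in which make fails:

which make sh echo   # should point at the directories you expect
echo "$PATH"         # look for stale or reordered entries

If echo or sh resolves to an unexpected copy, that would be consistent with the CreateProcess(...echo.exe...) line in the debug output above.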

Compiler outputs the errors under Wine, but not on Windows

I've got a .mqh source code file with a syntax error, created for example by the following command:
echo some_error > fail.mqh
Now, I'm using the MetaEditor compiler to check the syntax, and my goal is to print the errors to the standard output (CON) instead of logging them to a file (/log:file.log). See: Compiling.
The following syntax works fine on Linux/macOS (also under wine cmd.exe):
$ wine metaeditor.exe /s /log:CON /compile:fail.mqh
??fail.mqh : information: Checking 'fail.mqh'
fail.mqh(1,1) : error 116: 'some_error' - declaration without type
fail.mqh(1,1) : error 161: 'some_error' - unexpected end of program
: information: Result 2 error(s), 0 warning(s)
Please note that the /log parameter is required; otherwise the compiler doesn't print anything at all. When /log is specified, by default it logs the compilation result to a file, so I'm using the special CON device to display the errors instead.
The problem is that when I run the same command on Windows (cmd), I get no output:
> metaeditor.exe /s /log:CON /compile:fail.mqh
The same happens with CON: and con:, and also in PowerShell.
Although CON works for echo, e.g.: echo test > CON.
I could assume it's a bug in the compiler, but then it works fine under Wine. Why would this work only under Wine?
Is there another way of outputting the errors to the terminal screen on Windows, instead of log file?
Note: You can install the compiler from the site or download the binary (32-bit or 64-bit) to test the above.
Clarification: My main blocker against using two separate commands (compile, then print the error log) is that a CI test may fail before the errors are printed, which makes the tests useless - but that's a story for another question. So my goal is to check the syntax and print the errors in one go.
According to the Support Team, the MetaEditor application does not have a console, so it cannot output logs to the screen. It seems Wine handles the special CON device differently. I've reported the issue to the Service Desk and it's still open, so they may implement console support in the future.
Currently the only workaround is to use the type command to print the log file to the console after compiling the files (or to emulate it under Wine). Even if the compiler could display errors on the console, it still wouldn't play well with CI (in terms of handling error codes), because the exit-code logic of metaeditor.exe is completely broken: it returns the number of successfully compiled files instead of an error code (e.g. if you compile 20 files, you get exit code 20?!). So relying on the exit code of metaeditor.exe is a mistake, and the MQL team isn't planning to fix it, since they say this is how it should work in their opinion.
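For example, a CI step on Windows can look roughly like this (build.log is an arbitrary name, and the findstr pattern assumes the "Result N error(s)" summary line shown above):

metaeditor.exe /s /log:build.log /compile:fail.mqh
type build.log
rem derive pass/fail from the log text, not from metaeditor's exit code
findstr /c:"Result 0 error" build.log >nul || exit /b 1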

Output from Fortran application not showing up in Matlab

I'm having some issues with output from a Fortran application being executed from within Matlab. We use Matlab to call a number of Fortran applications and to display output and results.
I'm using gfortran on OSX to build one of these programs, which does a large amount of file output and a little output to stdout to track progress. The stdout output is accomplished mainly through print * statements, but I've tried write(*,*) as well. The program uses OpenMP, but none of the print * or write(*,*) statements are performed within OpenMP parallel sections. Everything works fine when the program is executed from a terminal. However, when the program is executed from within Matlab, there is no output from stdout. The file output works fine, though.
Additionally, the same code, when compiled with Intel's ifort, displays its output in Matlab without issue. Unfortunately I don't have regular access to the Intel compiler.
I'm positive that the output is going to stdout (not stderr), and I've tried flushing both from within the code (call flush(6) & call flush(0)), but this doesn't seem to make a difference.
I'm not sure what could be causing this. Any thoughts?
some relevant information:
OS: OSX 10.6.8 (64bit mode)
Matlab: R2012b
gfortran: 4.7.2 (obtained via fink)
compile flags: -cpp -fopenmp -ffree-line-length-0 -fno-range-check -m64 -static-libgfortran -fconvert=little-endian -fstrict-aliasing
EDIT:
I've done some more testing, creating a simple 'hello' program:
program printTest
  write (*,*) 'hello'
end program
compiled with...
gfortran test.f90 -o test
which exhibits the same behavior.
I've also tried compiling with an earlier version of gfortran (4.2.1), which produced some interesting results. It executes fine in a terminal, but in Matlab I get the following:
!./test
dyld: lazy symbol binding failed: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
dyld: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
./test: Trace/breakpoint trap
This leads me to believe it's a library issue. Using -static-libgfortran produces the same result in this case.
I believe Matlab is a single-threaded application. When you invoke a multithreaded executable, I have seen various issues with piping the output back to Matlab. Have you considered recompiling into a Fortran MEX file?
I am not sure a mex file would print to stdout any better than a standalone executable.
There are other options. One is to write (append) all your diagnostics to a file and just look at the file when you want to; see the sketch below. Emacs, for example, can automatically revert the contents of a file every second, or whatever you set the interval to. Another option might be to convert the Fortran source into Matlab source (see f2matlab) and keep it all in Matlab.
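A minimal sketch of the file-based option, assuming nothing beyond standard Fortran (the file name and message format are illustrative):

program log_progress
  implicit none
  integer :: istep
  ! append diagnostics to a log file instead of relying on stdout
  open(unit=99, file='progress.log', position='append', action='write')
  do istep = 1, 3
    write(99,'(a,i0)') 'finished step ', istep
    flush(99)   ! push each line out so a watching editor sees it immediately
  end do
  close(99)
end program log_progress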
According to the system function documentation
[status, result] = system('command') returns completion status to the status variable and returns the result of the command to the result variable.
[status,result] = system('command','-echo') also forces the output to the Command Window.
So you should use the '-echo' parameter in the system call to see the output directly in the Command Window:
system(['cd "', handles.indir, '"; chmod u+x ./qp.exe', ...
    '; ./qp.exe'], '-echo')
or you can assign the stdout to a variable:
[ret, txt] = system(['cd "', handles.indir, '"; chmod u+x ./qp.exe', ...
    '; ./qp.exe'])

LLVM command line on OSX

I'm working through http://llvm.org/docs/WritingAnLLVMPass.html, trying to write a very simple pass. I've written the pass and compiled it (thanks in part to the Stackoverflow community), but now I'm having trouble running it...
The documentation reads:
To test it, follow the example at the end of the Getting Started Guide to compile "Hello World" to LLVM. We can now run the bitcode file (hello.bc) for the program through our transformation like this (of course, any bitcode file will work):

$ opt -load ../../../Debug+Asserts/lib/Hello.so -hello < hello.bc > /dev/null
Hello: __main
Hello: puts
Hello: main

The '-load' option specifies that 'opt' should load your pass as a shared object, which makes '-hello' a valid command line argument (which is one reason you need to register your pass). Because the hello pass does not modify the program in any interesting way, we just throw away the result of opt (sending it to /dev/null).
However, when I run the command I get the following error:
mymachine$ ./opt -load ../../../Debug+Asserts/lib/Hello.so -hello < hello.bc > /dev/null
Error opening '../../../Debug+Asserts/lib/Hello.so': dlopen(../../../Debug+Asserts/lib/Hello.so, 9): image not found
  -load request ignored.
opt: Unknown command line argument '-hello'. Try: './opt -help'
opt: Did you mean '-help'?
Any ideas? I'm running OSX and I suspect that is part of the issue...
It turns out that the command I wanted (from the bin directory) was:
opt -load ../lib/LLVMHello.dylib -hello < hello.bc > /dev/null
and I understand that .dylib is the OSX equivalent of .so - but this was largely guesswork...
Try using an absolute path rather than a relative one? This seems like a relatively obvious "file not found".
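If the relative path is the culprit, something along these lines should confirm it (the build directory here is illustrative; adjust it to wherever you built LLVM):

# locate the pass library the build actually produced
find ~/llvm/build -name 'LLVMHello*'
# then load it by absolute path
opt -load ~/llvm/build/lib/LLVMHello.dylib -hello < hello.bc > /dev/null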
