I'm having some issues with output from a fortran application being executed from within Matlab. We use Matlab to call a number of fortran applications and to display output and results.
I'm using gfortran on OSX to build one of these programs, which does a large amount of file output and a little output to stdout to track progress. The stdout output is accomplished mainly through print * statements, but I've tried write(*,*) as well. The program uses OpenMP, but none of the print * or write(*,*) statements are performed within OpenMP parallel sections.

Everything works fine when the program is executed from a terminal. However, when the program is executed from within MATLAB, there is no output from stdout. The file output works fine, though.
Additionally, the same code, when compiled with Intel's ifort, displays its output in matlab without issue. Unfortunately I don't have regular access to the Intel compiler.
I'm positive that the output is going to stdout (not stderr), and I've tried flushing both from within the code (call flush(6) & call flush(0)), but this doesn't seem to make a difference.
I'm not sure what could be causing this. Any thoughts?
some relevant information:
OS: OSX 10.6.8 (64bit mode)
Matlab: R2012b
gfortran: 4.7.2 (obtained via fink)
compile flags: -cpp -fopenmp -ffree-line-length-0 -fno-range-check -m64 -static-libgfortran -fconvert=little-endian -fstrict-aliasing
EDIT:
I've done some more testing, creating a simple 'hello' program:
program printTest
write (*,*) 'hello'
end program
compiled with...
gfortran test.f90 -o test
which exhibits the same behavior.
I've also tried compiling with an earlier version of gfortran (4.2.1), which produced some interesting results. It executes fine in a terminal, but in MATLAB I get the following:
!./test
dyld: lazy symbol binding failed: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
dyld: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
./test: Trace/breakpoint trap
This leads me to believe it's a library issue. Using -static-libgfortran produces the same result in this case.
I believe Matlab is a single-threaded application. When you invoke a multithreaded executable, I have seen various issues with piping the output back to Matlab. Have you considered recompiling into a Fortran mex file?
I am not sure a mex file would print to stdout any better than a standalone executable.
There are other options. One is to write (append) all your diagnostics to a file and just look at the file when you want to. Emacs, for example, automatically reverts the contents of a file every second, or whatever you set the interval to. Another option might be to convert the Fortran source into Matlab source (see f2matlab) and keep it all in Matlab.
According to the system function documentation
[status, result] = system('command') returns completion status to the status variable and returns the result of the command to the result variable.
[status,result] = system('command','-echo') also forces the output to the Command Window.
So you should pass the '-echo' parameter to the system call to see the output directly in the Command Window:
system(['cd "', handles.indir, '"; chmod u+x ./qp.exe; ./qp.exe'], '-echo')
or you can assign the stdout to a variable:
[ret, txt] = system(['cd "', handles.indir, '"; chmod u+x ./qp.exe; ./qp.exe'])
What I want to achieve
I am trying to set up a toolchain to compile OpenCL applications for Intel FPGAs. In addition to building the C++ based host application, I need to invoke the Intel OpenCL offline compiler for the OpenCL kernels.
This step should only take place if the .cl source file was edited or the resulting binaries are missing. My approach is to add a custom command that invokes the CL compiler, plus a custom target that depends on the output generated by this command. The offline OpenCL compiler is called aoc, and because multiple SDK versions may be present on the system, I invoke it with an absolute path stored in aocExecutable. This is the relevant part of my CMakeLists.txt:
set (CLKernelName "vector_add")
set (CLKernelSourceFile "${PROJECT_SOURCE_DIR}/${CLKernelName}.cl")
set (CLKernelBinary "${PROJECT_BINARY_DIR}/${CLKernelName}.aocx")
add_executable (HostApplication main.cpp)
# ------ a lot of unneccessary details here ------
add_custom_command (OUTPUT "${CLKernelBinary}"
COMMAND "${aocExecutable} -march=emulator ${CLKernelSourceFile} -o ${CLKernelBinary}"
DEPENDS "${CLKernelSourceFile}"
)
add_custom_target (CompileCLSources DEPENDS "${CLKernelBinary}")
add_dependencies (HostApplication CompileCLSources)
What doesn't work
Running this in the CLion IDE under Linux leads to this error:
/bin/sh: 1: /home/me/SDKsAndFrameworks/intelFPGA/18.1/hld/bin/aoc -march=emulator /home/me/CLionProjects/cltest/vector_add.cl -o /home/me/CLionProjects/cltest/cmake-build-debug-openclintelfpgasimulation/vector_add.aocx: not found
The whole command expands correctly, copying it and pasting it into a terminal works without problems, so I'm not sure what the not found error means.
Further Question
Assumed the above problem will be solved, how can I achieve that the custom command is not only invoked if the output file is not present in the build directory but also if the CL source file was edited?
As you can see in the error message, the shell interprets the whole command line
/home/me/SDKsAndFrameworks/intelFPGA/18.1/hld/bin/aoc -march=emulator /home/me/CLionProjects/cltest/vector_add.cl -o /home/me/CLionProjects/cltest/cmake-build-debug-openclintelfpgasimulation/vector_add.aocx
as the name of a single executable.
This is because you wrapped the COMMAND in your script in double quotes.
Remove those double quotes and everything will work.
As in many other scripting languages, double quotes in CMake cause the quoted string to be interpreted as a single argument to a function or macro.
But in the add_custom_command/add_custom_target functions, the COMMAND keyword starts a list of arguments, the first of which names an executable and the rest of which are passed to it as separate parameters.
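Applied to the snippet from the question, the command would look like this (a sketch using the question's variable names; quoting each argument individually is fine, it is only quoting the whole line that breaks):

```cmake
add_custom_command (OUTPUT "${CLKernelBinary}"
    COMMAND "${aocExecutable}" -march=emulator "${CLKernelSourceFile}" -o "${CLKernelBinary}"
    DEPENDS "${CLKernelSourceFile}"
)
```

This also covers the further question: because the .cl file is listed under DEPENDS, the custom command is re-run whenever that file is newer than the generated output, not only when the output is missing.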
I see a string being output to my Terminal when I run an executable. I have the source code (in C) of the executable, but it was not written by me. I compiled it with the -g flag. Is there any way to know which line in which file produced the output, using dtrace, lldb, gdb, or any other means?
I am using macOS 10.13. When I run gdb and enter the following:
catch syscall write
I got this error:
The feature 'catch syscall' is not supported on this architecture yet.
Is there any other way to achieve my goal?
lldb tends to be better supported on macOS than gdb. You should be able to trace this call by using its conditional breakpoint feature.
While you can certainly trace the write() call with dtrace and get a stack trace using the ustack() action, I think you'll have a harder time pinpointing the state of the program than if you break on it in the debugger.
Your comment suggests you might be searching for a substring match. I suspect you can create a conditional breakpoint in lldb that matches a substring using something like this:
br s -n write -c 'strnstr((const char*)$rsi, "test", $rdx) != NULL'
I'm assuming lldb does not have argument names for the write function, so I'm using x86-64 calling convention register names directly. ($rdi = first argument, which would be the file descriptor; $rsi = second argument, buffer; $rdx = third argument, buffer length)
I'm trying to perform a one-step compile and run operation in emacs. I tried recording a macro using C-(, then M-!, then "gcc main.c && ./a.out", RET, then C-).
However, when I execute this macro with C-x e, the *Shell Command Output* buffer doesn't automatically open if I happen to not have it on my screen at the moment (even though it is in the buffer list, and the output does correctly appear in that buffer). I only see (Type e to repeat macro) on the bottom of my page, which has appeared after the output that I wanted to see, so is in a sense blocking it.
This is a minor annoyance; I would prefer that the Shell Command Output pop up automatically, the way it does when I manually use the shell-command command M-!, rather than requiring me to switch to that buffer manually using C-x b.
I also tried evaluating (call-process "/pathtofile/a.out"), but that has a similar issue: I need to provide a buffer name to output to, and even if I do, the output doesn't automatically get displayed; I have to manually switch to that new buffer. Additionally, call-process appends the output to that buffer rather than replacing it.
How can I easily get the shell command output to show up automatically without manually performing the M-! command?
Update: I found out that recording a macro using M-x compile, then replacing the command with gcc main.c && ./a.out does automatically display the compilation result buffer when the macro is invoked using C-x e. If anyone has any insight to why the previous examples don't automatically display their output, I would welcome any answers.
As an alternative, you can specify compile-command as a buffer-local variable. For example, to compile, run with output to the compilation buffer, and remove the executable afterward, you can add
/* -*- compile-command: "gcc -std=gnu11 file-name.c && a.out && rm a.out" -*- */
to the first line of your file (everything between the -*- markers is a list of buffer-local variables delimited by ;). This variable is initialized when the buffer is visited. Then running compile will use this command. This is a useful way to specify other buffer-local variables, like indentation, in any file/mode (where comments on the first line are acceptable).
It also often makes sense to define compile-command in mode hooks, e.g. your c-mode-hook, so your compile commands are generic across major modes.
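A sketch of the hook approach (assumes the buffer is visiting a C file; setq-local requires Emacs 24.3 or later):

```elisp
(add-hook 'c-mode-hook
          (lambda ()
            (setq-local compile-command
                        (concat "gcc -std=gnu11 "
                                (shell-quote-argument buffer-file-name)
                                " && ./a.out"))))
```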
I solved my problem using an elisp function instead of a macro. Putting the following in ~/.emacs:
(defun c-gcc-and-run ()
"Saves current buffer, runs gcc, and runs ./a.out if compile is successful."
(interactive)
(save-buffer)
(compile (concat "gcc " (buffer-file-name) " && ./a.out")))
(add-hook 'c-mode-hook (lambda () (local-set-key "\C-c\C-f" 'c-gcc-and-run)))
Whenever a buffer enters c-mode, the hook binds \C-c\C-f to c-gcc-and-run, which I defined above, and which uses the compile elisp function to perform the desired commands.
I am compiling some numerical code with gfortran using Code::Blocks. I have two versions of the executable: Debug and Release.
Debug compilation flags: -Jobj\Debug\ -Wall -g -c
Release compilation flags: -Jobj\Release\ -Wall -O2 -c
gdb invocation flags: -nx -fullname -quiet -args
When I run the code normally, both the Release and Debug executables produce the same output. However, when I run the code in gdb, the output is different. This appears to be due to numerical calculations producing different results during execution.
For example, the result of one calculation when run in gdb is 7.93941842553643E-06 and when run normally is 1.71006041855278E-03. More oddly, some of the non-zero results are identical within double precision accuracy.
How can I ensure that the output is the same when I run using gdb? Is a different type of numerical calculation or evaluation used by default when using gdb?
This appears to be due to numerical calculations producing different results during execution.
That is exceedingly unlikely: GDB doesn't participate in any numerical calculations your program executes.
Significantly more likely is that your program uses uninitialized memory, and that memory just happens to have different values when the program runs under GDB.
If you are on a platform that is supported by valgrind, your very first step should be to run your program under it, and fix all bugs it finds.
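A typical invocation for this class of bug might look like this (assuming valgrind is installed and the program was built with -g):

```shell
valgrind --track-origins=yes ./myprogram
```

The --track-origins=yes option makes valgrind report where each uninitialised value was created, which usually points straight at the missing initialization.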
I am trying to execute a binary file on a Xeon Phi coprocessor, and it is coming back with "bash: cannot execute binary file". I am trying to find a way either to view an error log, or to have it display what is happening when I tell it to execute, so I can see what is causing it not to work. I have already tried bash --verbose, but it didn't display any additional information. Any ideas?
You don't specify where you compiled your executable or where you tried to execute it from.
To compile a program on the host system to be executed directly on the coprocessor, you must either:
- if using one of the Intel compilers, add -mmic to the compiler command line
- if using gcc, use the cross-compilers provided with the MPSS (/usr/linux-k1om-4.7) - note, however, that the gcc compiler does not take advantage of vectorization on the coprocessor
If you want to compile directly on the coprocessor, you can install the necessary files from the additional rpm files provided for the coprocessor (found in mpss-/k1om) using the directions from the MPSS user's guide for installing additional rpm files.
To run a program on the coprocessor, if you have compiled it on the host, you must either:
- copy your executable file and required libraries to the coprocessor using scp, before you ssh to the coprocessor yourself to execute the code
- use the micnativeloadex command on the host - you can find a man page for it on the host
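The steps above might look like this in practice (a sketch only; it assumes the Intel compiler and MPSS are installed, and that the coprocessor is reachable as mic0):

```shell
# build on the host for the coprocessor
icc -mmic -o myprog.mic myprog.c

# option 1: copy the binary over and run it there
scp myprog.mic mic0:/tmp/
ssh mic0 /tmp/myprog.mic

# option 2: launch it from the host
micnativeloadex ./myprog.mic
```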
If you are writing a program using the offload model (part of the work is done using the host then some of the work is passed off to the coprocessor), you can compile on the host using the Intel compilers with no special options.
Note, however, that, regardless of what method you use, any libraries to be used with an executable for the coprocessor will need themselves to be built for the coprocessor. The default libraries exist but any libraries you add, you need to build a version for the coprocessor in addition to any version you make for the host system.
I highly recommend the articles you will find under https://software.intel.com/en-us/articles/programming-and-compiling-for-intel-many-integrated-core-architecture. These articles are written by people who develop and/or support the various programming tools for the coprocessor and should answer most of your questions.
Update: What's below does NOT answer the OP's question - it is one possible explanation for the cannot execute binary file error, but the fact that the error message is prefixed with bash: indicates that the binary is being invoked correctly (by bash), but is not compatible with the executing platform (compiled for a different architecture) - as #Barmar has already stated in a comment.
Thus, while the following contains some (hopefully still somewhat useful) general information, it does not address the OP's problem.
One possible reason for cannot execute binary file is to mistakenly pass a binary (executable) file -- rather than a shell script (text file containing shell code) -- as an operand (filename argument) to bash.
The following demonstrates the problem:
bash printf # fails with '/usr/bin/printf: /usr/bin/printf: cannot execute binary file'
Note how the mistakenly passed binary's path prefixes the error message twice. If the first prefix says bash: instead, the cause is most likely not incorrect invocation, but an attempt to invoke an incompatible binary (compiled for a different architecture).
If you want bash to invoke a binary, you must use the -c option to pass it, which allows you to specify an entire command line; i.e., the binary plus arguments; e.g.:
bash -c '/usr/bin/printf "%s\n" "hello"' # -> 'hello'
If you pass a mere binary filename instead of a full path - e.g., -c 'program ...' - then a binary by that name must exist in one of the directories listed in the $PATH variable that bash sees, otherwise you'll get a command not found error.
If, by contrast, the binary is located in the current directory, you must prefix the filename with ./ for bash to find it; e.g. -c './program ...'