How to run gcov on a test application with NuttX OS on an STM discovery board?

Setup:
Toolchain: gcc-arm-none-eabi-5_2-2015q4-20151219
Target: STM32F429I-DISCO board
I want to run gcov and have a real-time report generated on the target, as described in this link:
https://mcuoneclipse.com/2014/12/26/code-coverage-for-embedded-target-with-eclipse-gcc-and-gcov/
First, I successfully compiled my code (POSIX-compliant NuttX OS) with the -fprofile-arcs and -ftest-coverage flags, which generated the .gcno files for my source files.
Second, I successfully linked with -fprofile-arcs enabled, using the libgcov.a that is part of the toolchain, and the final binary was generated.
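For reference, the compile and link steps looked roughly like this (a sketch; the file names are placeholders):
arm-none-eabi-gcc -c -fprofile-arcs -ftest-coverage main.c -o main.o
arm-none-eabi-gcc -fprofile-arcs main.o -o nuttx.elf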
Now I don't know what changes are needed in my test application to invoke gcov, generate the report, and dump it.
Another problem is that the gcov functions in libgcov.a have the HIDDEN attribute, as shown below:
9: 00000000 4 FUNC GLOBAL HIDDEN 1 __gcov_flush
9: 00000000 4 FUNC GLOBAL HIDDEN 1 __gcov_init
So I could not invoke them as I need.
Any input on getting the .gcda files generated would be of great help.

Can you look for gcov_exit instead? It is similar to __gcov_flush. Typically, one of gcov_exit and __gcov_flush will be present, and you can use either.
In case it is not there, or is also hidden, you can use the approach I tried for one of my projects: I picked (and modified, for various reasons) the implementation of gcov_exit from the GCC source code of the version matching my toolchain (available at https://github.com/reeteshranjan/libgcov-embedded) and plugged it into my project. With everything else remaining the same (the compiler flags etc.), I was able to break into gcov_exit and follow the rest of the approach in the blog post you mentioned.
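Once the flush entry point is visible (either because your libgcov exports it or because you compiled the libgcov-embedded sources into your project), the test application only needs to call it at a suitable point. A minimal sketch in C; the function name is an assumption, so use whichever of __gcov_flush or gcov_exit your setup actually provides:

/* declare the flush routine provided by libgcov / libgcov-embedded */
extern void __gcov_flush(void);

void tests_done(void)
{
    /* emits the .gcda data for all instrumented files; with the
       debugger-based approach from the blog post, break here and
       capture the data as it is written out */
    __gcov_flush();
}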


V8: Isolate is incompatible with the embedded blob

I am trying to create a custom snapshot from a JavaScript file. I was able to create a snapshot using the command
mksnapshot.exe snapshot11.js --startup_blob snap.bin
but when I tried to create an Isolate with this snap.bin file, I got this message:
The Isolate is incompatible with the embedded blob. This is usually caused by incorrect usage of mksnapshot. When generating custom snapshots, embedders must ensure they pass the same flags as during the V8 build process (e.g.: --turbo-instruction-scheduling).
I am guessing that I need to recreate the snapshot with the proper flags, but I couldn't find which flags to use.
My args.gn:
is_component_build=true
v8_static_library=false
is_official_build=false
is_debug=true
use_custom_libcxx=false
use_custom_libcxx_for_host=false
target_cpu="x64"
use_goma=false
v8_use_external_startup_data=false
v8_enable_i18n_support = false
symbol_level=2
v8_enable_fast_mksnapshot=true
Any lead will be helpful. Thanks.
You can invoke ninja with -v to have it print all the commands it executes; e.g. if you compile V8 with:
ninja -v -C out/... v8_monolith
then you'll find a line for the mksnapshot invocation in the output, and can copy the flags from there. (If you have already compiled V8, ninja will say "nothing to do"; in that case you can either clean out everything, or just delete snapshot_blob.bin and libv8_monolith.so.)
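For example, if the mksnapshot line in the ninja output contains --turbo-instruction-scheduling (the flag named in the error message), you would recreate your snapshot with that same flag. A sketch; the real flag list must be copied from your own build output:
mksnapshot.exe snapshot11.js --turbo-instruction-scheduling --startup_blob snap.bin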

How can I specify a minimum compute capability to the mexcuda compiler to compile a mexfunction?

I have a CUDA project in a .cu file that I would like to compile to a .mex file using mexcuda. Because my code makes use of the 64-bit floating point atomic operation atomicAdd(double *, double), which is only supported on GPU devices of compute capability 6.0 or higher, I need to specify this as a flag when I am compiling.
In my standard IDE, this works fine, but when compiling with mexcuda, this is not working as I would like. In this post on MathWorks, it was suggested to use the following command (edited from the comment by Joss Knight):
mexcuda('-v', 'mexGPUExample.cu', 'NVCCFLAGS=-gencode=arch=compute_60,code=sm_60')
but when I use this command on my file, the verbose option spits out the following line last:
Building with 'NVIDIA CUDA Compiler'.
nvcc -c --compiler-options=/Zp8,/GR,/W3,/EHs,/nologo,/MD -gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=\"sm_70,compute_70\"
(and so on), which shows that the specified flag did not replace the default architecture flags passed to nvcc. And indeed, compilation fails with the following error:
C:/path/mexGPUExample.cu(35): error: no instance of overloaded function "atomicAdd" matches
the argument list. Argument types are: (double *, double)
The only other post I could find on this topic was this post on SO, but it is almost three years old and seemed to me more like a workaround than a true solution to the problem (and one I do not understand even after some research, otherwise I would have tried it).
Is there a setting I missed, or can this simply not be done without a workaround?
I was able to work around this problem after some messing around with the standard XML files in the MATLAB folder. The following steps allowed me to compile using mexcuda:
1) Go to the folder C:\Program Files\MATLAB\-version-\toolbox\distcomp\gpu\extern\src\mex\win64, which contains XML files for different versions of msvcpp;
2) Make a backup of the file that corresponds to the version you are using. In my case, I made a copy of the file nvcc_msvcpp2017 and named it nvcc_msvcpp2017_old, to always have the original.
3) Open nvcc_msvcppYEAR with Notepad and scroll to the following block of lines:
COMPILER="nvcc"
COMPFLAGS="--compiler-options=/Zp8,/GR,/W3,/EHs,/nologo,/MD $ARCHFLAGS"
ARCHFLAGS="-gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=\"sm_70,compute_70\" $NVCC_FLAGS"
COMPDEFINES="--compiler-options=/D_CRT_SECURE_NO_DEPRECATE,/D_SCL_SECURE_NO_DEPRECATE,/D_SECURE_SCL=0,$MATLABMEX"
MATLABMEX="/DMATLAB_MEX_FILE"
OPTIMFLAGS="--compiler-options=/O2,/Oy-,/DNDEBUG"
INCLUDE="-I"$MATLABROOT\extern\include" -I"$MATLABROOT\simulink\include""
DEBUGFLAGS="--compiler-options=/Z7"
4) Remove the architectures that will not allow your code to compile, i.e. all the architecture flags below 60 in my case:
ARCHFLAGS="-gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=\"sm_70,compute_70\" $NVCC_FLAGS"
5) After this, I was able to compile using mexcuda. You do not need to specify any architecture flags in the mexcuda call (see the example after these steps).
6) (Optional) You will probably want to revert this change once you are done with the project that required it, to ensure maximum portability of the code you compile afterwards.
Note: you will need administrator permission to make these changes.
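For example, after step 4) the mexcuda call from the question reduces to the following (using the file name from the question), since the remaining -gencode entries in the XML file already cover the required compute capability:
mexcuda('-v', 'mexGPUExample.cu')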

Error Building Pin Tool that Uses External Library

I declared a function foo() in a header file imp.h and implemented it in imp.c. Then I generated a shared library named libimp.so and in my Pin tool I called foo().
In order to link the tool with this new library I added the following definitions to makefile.rules in its directory:
TOOL_CXXFLAGS += -I/path/to/imp.h
TOOL_LPATHS += -L/path/to/libimp.so
TOOL_LIBS += -limp
I also set LD_LIBRARY_PATH to /path/to/libimp.so. But at runtime, if I use foo(), the following error is received:
dlopen failed. library "libimp.so" not found.
The library is OK when I call it from a simple test program. Any ideas?
I also set LD_LIBRARY_PATH to the /path/to/libimp.so
If the full path to libimp.so is literally /path/to/libimp.so, then the correct value for LD_LIBRARY_PATH is /path/to, and not /path/to/libimp.so.
(It isn't clear from your question whether you understand this.)
You may wish to link your pintool with -Wl,--rpath=/path/to so you don't have to set LD_LIBRARY_PATH at all.
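For example, assuming the library really is at /path/to/libimp.so, the makefile.rules additions would become the following sketch (note that -I and -L take directories, and the rpath can be appended through the same linker path variable):
TOOL_CXXFLAGS += -I/path/to
TOOL_LPATHS += -L/path/to -Wl,--rpath=/path/to
TOOL_LIBS += -limp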

GnuCOBOL entry point not found

I've installed GnuCOBOL 2.2 on my Ubuntu 17.04 system. I've written a basic hello world program to test the compiler.
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO-WORLD.
*---------------------------
DATA DIVISION.
*---------------------------
PROCEDURE DIVISION.
DISPLAY 'Hello, world!'.
STOP RUN.
This program is entitled HelloWorld.cbl. When I compile the program with the command
cobc HelloWorld.cbl
HelloWorld.so is produced. When I attempt to run the compiled program using
cobcrun HelloWorld
I receive the following error:
libcob: entry point 'HelloWorld' not found
Can anyone explain to me what an entry point is in GnuCOBOL, and perhaps suggest a way to fix the problem and successfully execute this COBOL program?
According to the official manual of GnuCOBOL, you should compile your code with:
cobc -x HelloWorld.cbl
then run it with
./HelloWorld
For further information, you can also read the GnuCOBOL wiki page, which contains some examples.
P.S. As Simon Sobisch said, if you change your file name to HELLO-WORLD.cbl to match the PROGRAM-ID, the same commands that you used will work:
cobc HELLO-WORLD.cbl
cobcrun HELLO-WORLD
Can anyone explain to me what an entry point is in GnuCOBOL, and perhaps suggest a way to fix the problem and successfully execute this COBOL program?
An entry point is a point where you may enter a shared object (this is actually more about C than COBOL).
GnuCOBOL generates entry points for each PROGRAM-ID, FUNCTION-ID and ENTRY. Therefore your entry point is HELLO-WORLD (which likely gets converted, as - is not valid in an ANSI C identifier; you don't have to think about this when CALLing a program, as the conversion is done internally).
Using cobcrun internally does the following:
search for a shared object (in your case HelloWorld); as this is found (because you've generated it), it is loaded
search for the entry point in all loaded modules, which isn't found
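Conceptually, the two steps above resemble the following C sketch (dlopen/dlsym are the real POSIX calls; the converted symbol name HELLO__WORLD is an assumption for illustration, as the exact name conversion is internal to cobc):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* step 1: load the shared object produced by cobc */
    void *handle = dlopen("./HelloWorld.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    /* step 2: look up the entry point; '-' is not valid in a C
       identifier, so PROGRAM-ID HELLO-WORLD gets a converted name */
    int (*entry)(void) = (int (*)(void))dlsym(handle, "HELLO__WORLD");
    if (entry == NULL) {
        fprintf(stderr, "libcob: entry point not found\n");
        return 1;
    }
    entry();
    dlclose(handle);
    return 0;
}

In your case the second step fails, because the loaded module contains the entry point for HELLO-WORLD while cobcrun is asked for HelloWorld.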
There are three possible options to get this working:
As mentioned in Ho1's answer: use cobc -x. This works because you don't generate a shared object at all, but a C main that is called directly (so the entry point doesn't apply at all)
preload the shared object and call the program by its PROGRAM-ID (entry point), either manually with COB_PRE_LOAD=HelloWorld cobcrun HELLO-WORLD or through cobcrun (option available since GnuCOBOL 2.x): cobcrun -M HelloWorld HELLO-WORLD
change the PROGRAM-ID to match the source name (either rename the file or change the source; I'd do the latter: PROGRAM-ID. HelloWorld.)

Class "Veins::ObstacleControl" not found

I have followed the tutorial to install Veins step by step, but when I tried running the example scenario (the final step), I ended up with the error in the title.
The whole error was:
Error in module (cModule) RSUExampleScenario (id=1) during network setup: Class "Veins::ObstacleControl" not found -- perhaps its code was not linked in, or the class wasn't registered with Register_Class(), or in the case of modules and channels, with Define_Module()/Define_Channel().
TRAPPING on the exception above, due to a debug-on-errors=true configuration option. Is your debugger ready?
Simulation terminated with exit code: -2147483645
Working directory: C:/Users/user/src/veins-4.3/examples/veins
Command line: ../../../omnetpp-4.6/bin/opp_run.exe -r 0 -n .;../../src/veins --tkenv-image-path=../../images -l ../../src/veins omnetpp.ini
I don't think I have missed a step in the tutorial, as I have tried it twice. I did not change anything; I just strictly followed the tutorial, so I cannot provide an MCVE with more details than the tutorial itself.
Here is what I'm using:
- Windows 7 Pro, 64-bit
- SUMO 0.25.0, 64-bit
All other steps of the tutorial successfully worked until the final step.
I assume this error occurs when running Veins via the OMNeT++ IDE, or if you have compiled it with GCC (the error does not happen if you use Clang).
There are two ways to bypass this error:
Use the .run file as executable from your examples directory, which calls veins/run and includes all the required libraries.
Use opp_run as executable and set dynamic libraries to the directory where libveins.so is located (usually src/veins); a sample invocation is sketched below.
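For the second way, the invocation mirrors the command line shown in the error output (a sketch, assuming libveins.so was built in ../../src/veins relative to the example directory):
../../../omnetpp-4.6/bin/opp_run.exe -r 0 -n ".;../../src/veins" -l ../../src/veins omnetpp.ini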
PS: To answer Christoph Sommer's questions: Veins::ObstacleControl appears in opp_run -l src/veins -h classes.
This could be a solution too, but I never tested it: Compiler flags in Eclipse
