I've been trying to generate an OpenMP-enabled precompiled header (PCH) with cppyy
and have been failing so far. Instead of doing so manually each time as outlined in this answer, I'm looking for an automated solution.
So far, the broad method is to use os.environ to export environment variables - but this fails inconsistently, and the resulting C++ code ends up single-threaded.
In particular, cppyy_backend.loader.ensure_precompiled_header does not always produce any output. Even when it does, it has no effect on whether the C++ code runs in parallel.
The relevant part of the code is below:
import os
import glob
import cppyy_backend.loader as l

os.environ['EXTRA_CLING_ARGS'] = '-fopenmp -O2 -g'
l.set_cling_compile_options(True)

current_folder = os.path.split(os.path.abspath(__file__))[0]
pch_folder = os.path.join(current_folder, 'cling_pch/')
if os.path.exists(pch_folder):
    pass
else:
    os.mkdir(pch_folder)

l.ensure_precompiled_header(pch_folder)

# find PCH file:
pch_path = glob.glob(pch_folder + '/allDict*')
if len(pch_path) > 0:
    os.environ['CLING_STANDARD_PCH'] = pch_path[0]
else:
    raise ValueError('Unable to find a precompiled header...')
System specs:
Ubuntu 18.04.2 LTS
python 3.9.0
cppyy 2.4.0
cppyy_backend 6.27.0
EDIT:
I made sure the custom PCH folder was empty beforehand while checking whether the custom PCH file was being created.
If you set CLING_STANDARD_PCH to the desired name, it will be used as the name for the generated PCH. Conversely, if you pass the folder name to the lower-level call from loader, it will set CLING_STANDARD_PCH for you. Setting both should not be necessary.
The default naming scheme generates a differently named PCH when OpenMP is enabled than when it is not (a .omp. part is added to the name). This allows you to go back and forth without having to rebuild the PCH each time. Likely, you have multiple editions in the cling_pch directory, and pch_path[0] may well be a non-OpenMP variant.
If you want to stay close to the above, though, then how about:
import os

os.environ['EXTRA_CLING_ARGS'] = '-fopenmp -O2 -g'

current_folder = os.path.split(os.path.abspath(__file__))[0]
pch_folder = os.path.join(current_folder, 'cling_pch')
if os.path.exists(pch_folder):
    pass
else:
    os.mkdir(pch_folder)

os.environ['CLING_STANDARD_PCH'] = os.path.join(pch_folder, 'std_with_openmp.pch')

import cppyy
cppyy.cppexec("std::cerr << _OPENMP << std::endl")
I'm not sure about this, and will only be able to confirm in a few days, but I now suspect that the inconsistency in PCH creation arose because cppyy was being imported before EXTRA_CLING_ARGS was set.
I guess this means the 'standard' precompiled header was loaded first and the 'custom' arguments were ignored. During debugging, or over the course of multiple runs, the effective order of execution was sometimes reversed.
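In other words, a minimal sketch of the suspected failure mode (the comments are just my reading of it):

import os
import cppyy                                         # the PCH is located/built at import time,
                                                     # using whatever EXTRA_CLING_ARGS holds now
os.environ['EXTRA_CLING_ARGS'] = '-fopenmp -O2 -g'   # too late: the PCH choice is already made

Setting the variable before the first import of cppyy, as in the answer's snippet above, avoids this.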
I can't figure out how to use two different compilers in the same wscript. Nothing in the Waf book shows this clearly.
I tried something along those lines:
def configure(ctx):
    ctx.setenv('compiler1')
    ctx.env.CC = '/some/compiler'
    ctx.load('compiler_c')

    ctx.setenv('compiler2')
    ctx.env.CC = '/some/other/compiler'
    ctx.load('compiler_c')
This does not appear to work: Waf does not find any compiler when I do it that way. I have only managed to compile with two different compilers by specifying the compiler on the command line:
$ CC='/some/compiler' waf configure
This is annoying because I have to change the CC variable by hand every time and rerun configure...
Thanks!
Well, you were close :) You just need to load the compiler tool after setting the CC environment variable (conf.load('compiler_c')) and use variant builds. I wrote a complete example in this answer.
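For reference, a minimal sketch of the variant pattern that answer describes; the compiler paths and variant names are placeholders:

def options(opt):
    opt.load('compiler_c')

def configure(conf):
    # first toolchain
    conf.setenv('gcc_env')
    conf.env.CC = '/usr/bin/gcc'
    conf.load('compiler_c')

    # second toolchain, in a fresh configuration environment
    conf.setenv('clang_env')
    conf.env.CC = '/usr/bin/clang'
    conf.load('compiler_c')

def build(bld):
    if not bld.variant:
        bld.fatal('call "waf build_gcc" or "waf build_clang"')
    bld.program(source='main.c', target='app')

# one build command per variant, so each compiler gets its own output directory
from waflib.Build import BuildContext

class gcc_build(BuildContext):
    cmd = 'build_gcc'
    variant = 'gcc_env'

class clang_build(BuildContext):
    cmd = 'build_clang'
    variant = 'clang_env'

With that, waf configure followed by waf build_gcc build_clang builds each variant into its own subdirectory of build/.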
Background information (you do not need to repeat these steps to answer the question, this just gives some background):
I am trying to compile a rather large set of generated modules. These files are the output of a prototype Modelica to OCaml compiler and reflect the Modelica class structure of the Modelica Standard Library.
The main feature is the use of polymorphic, open recursion: Every method takes a this argument which contains the final superclass hierarchy. So for instance the model:
model A
  type T = Real;
  type S = T;
end A;
is translated into
let m_A = object
  method m_T this = m_Modelica_Real
  method m_S this = this#m_T this
end
and has to be closed before usage:
let _ = m_A#m_T m_A
This seems to postpone a lot of typechecking until the superclass hierarchy is actually fixed, which in turn makes it impossible to compile the final linkage module (try ocamlbuild Linkage.cmo after editing the comments in the corresponding file to see what I mean).
Unfortunately, since the code base is rather large and uses a lot of objects, the type structure might not be the root cause after all; it might just as well be some optimization or a flaw in the code generation (although I strongly suspect the typechecker). So my question is: is there any way to profile the OCaml compiler so that it signals when a certain phase (typechecking, intermediate code generation, optimization) is over and how long it took? Any further insights into my particular use case are also welcome.
As of right now, there isn't.
You can do it yourself, though: the compiler sources are open, and you can get them and modify them to fit your needs.
Depending on whether you use ocamlc or ocamlopt, you'll need to modify either driver/compile.ml or driver/optcompile.ml to add timers to the compilation process.
Fortunately, this has already been done for you here. Just compile with the -dtimings option or with the environment variable OCAMLPARAM=timings=1,_.
Even more easily, you can download the opam Flambda switch:
opam switch install 4.03.0+pr132
ocamlopt -dtimings myfile.ml
Note: Flambda itself changes the compilation time (mostly what happens after typing), and its integration into the OCaml compiler is not confirmed yet.
The OCaml compiler is an ordinary OCaml program in that regard. I would use a poor man's profiler for a quick inspection, e.g. the pmp script.
I have a large source tree with a directory that has several files in it. I'd like gdb to break every time any of the functions in those files is called, but I don't want to have to specify every file. I've tried setting break /path/to/dir/:*, break /path/to/dir/*:*, rbreak /path/to/dir/.*:*, but none of them catch any of the functions in that directory. How can I get gdb to do what I want?
There seems to be no direct way to do it:
- rbreak file:. does not seem to accept directories, only files. Also note that you would want a dot (.), not an asterisk (*)
- there seems to be no way to loop over symbols in the Python API, see https://stackoverflow.com/a/30032690/895245
The best workaround I've found is to loop over the files with the Python API, and then call rbreak with those files:
import os

class RbreakDir(gdb.Command):
    def __init__(self):
        super().__init__(
            'rbreak-dir',
            gdb.COMMAND_BREAKPOINTS,
            gdb.COMPLETE_NONE,
            False
        )

    def invoke(self, arg, from_tty):
        for root, dirs, files in os.walk(arg):
            for basename in files:
                path = os.path.abspath(os.path.join(root, basename))
                gdb.execute('rbreak {}:.'.format(path), to_string=True)

RbreakDir()
Sample usage:
source a.py
rbreak-dir directory
This is ugly because of the gdb.execute call, but seems to work.
It is however too slow if you have a lot of files under the directory.
My test code is in my GitHub repo.
You could probably do this using the Python scripting that comes with modern versions of gdb. Two options: one is to list all the symbols and then, if they are in the required directory, create an instance of the Breakpoint class at the appropriate place to set the breakpoint. (Sorry, I can't recall offhand how to get a list of all the symbols, but I think you can do this.)
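For the second half of that idea, gdb's Python Breakpoint class can be instantiated from a location string; a minimal sketch, where the locations list is hypothetical and would have to come from the symbol-enumeration step:

import gdb

# Hypothetical locations gathered elsewhere (e.g. by walking the directory
# or by parsing "info functions" output).
locations = ['/path/to/dir/foo.c:some_function',
             '/path/to/dir/bar.c:other_function']

for loc in locations:
    gdb.Breakpoint(loc)   # set a breakpoint at each location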
You haven't said why exactly you need to do this, but depending on your use-case an alternative may be to use reversible debugging - i.e. let it crash, and then step backwards. You can use gdb's inbuilt reversible debugging, or for radically improved performance, see UndoDB (http://undo-software.com/)
I'm struggling with the last pieces of logic to make our Ada builder work as expected with variantdir. The problem is caused by the fact that the inflexible tools gnatbind and gnatlink don't allow the binder files to be placed in a directory other than the current one. This leaves me with two options:
1. Let gnatbind write the binder files to topdir and then let gnatlink pick them up from there. This may, however, cause race conditions if we allow simultaneous builds for different architectures and compiler versions, which we want to do.
2. Modify the calls to gnatbind and gnatlink to temporarily go down into the build directory, in our case build/$ARCH/src-path. I successfully fixed the gnatbind step, as this is explicitly called using an env.Execute from within the Ada builder. To try to fix the linking step I've modified the Program env using
env["LINKCOM"] = SCons.Action.Action(ada_linkcom)
where ada_linkcom is defined as
def ada_linkcom(source, target, env):
    ....
    return ret
where ret is a string describing what should be done in the shell. I need this to be a function because it contains somewhat complicated logic to convert paths from being relative to the top level to just containing their basenames.
This however fails with an error in scons-2.3.1/SCons/Executor.py on line 347 in function do_execute. Isn't env["LINKCOM"] allowed to be a function with ada_linkcom's signature?
No, it's not. You seem to think that 'env["LINKCOM"]' is what actually calls/executes the final build command, and that's not quite correct. Instead, environment variables like LINKCOM get expanded by the Executor/Builder for each specified Action, and are then executed.
You can have Python functions as Actions, and also use a so-called "generator" to create your Action strings on-the-fly. But you have to assign this Action to a Builder, and can't set it as an environment variable directly.
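For illustration, a minimal sketch of the generator approach in an SConstruct; the gnatlink command line, suffixes, and path handling below are placeholders, not the real Ada builder logic:

import os

def ada_link_generator(source, target, env, for_signature):
    # Called by SCons to produce the command string at build time;
    # the real version would convert top-level-relative paths to basenames here.
    srcs = ' '.join(os.path.basename(str(s)) for s in source)
    return 'gnatlink -o %s %s' % (target[0], srcs)

# The generator is attached to a Builder, not set directly as an env variable.
ada_linker = Builder(generator=ada_link_generator, src_suffix='.ali')

env = Environment(BUILDERS={'AdaProgram': ada_linker})
# env.AdaProgram('main', 'main.ali')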
Please also have a look at the UserGuide ( http://www.scons.org/doc/production/HTML/scons-user.html ), especially section 18.4 "Builders That Execute Python Functions". Our basic guide for writing Builders and Tools might also prove to be helpful: http://www.scons.org/wiki/ToolsForFools
The project I'm working on recently made a big effort to clean up the code by turning on all the strictest GCC warnings and iterating until it compiled. Now, for instance, compilation fails if I declare a variable and don't use it.
After my latest development task there, I see that a header file included somewhere is now unnecessary. Is there any good way to find other such header files (and thereby reduce dependencies) other than removing a header file and seeing if anything breaks?
I am using GCC 4.3.2 on Linux.
No, there's no way to get gcc to fail if a header isn't required. Included headers can contain pretty much anything, so it is assumed that whoever included them had good reason to do so. Imagine the following somewhat pathological case:
int some_function(int x) {
    #include "function_body.h"
    return x;
}
It's certainly not good form, but it would still compile if you removed the include. So, an automatic checker might declare it "unnecessary," even though the behavior is presumably different when the function body is actually there.