How to indicate a list of dicts in pythran? - pythran

I don't know how to indicate a list of dicts in Pythran. Say I call the function dictest() from a pure Python file, and a list of dicts (here: ll) needs to be passed as the argument:
import test_pythran as tp
dd = {
    "ux": 10.0,
    "uy": 5,
    "uz": 3.4
}
ee = {
    "ux": 11.0,
    "uy": 7,
    "uz": 2.4
}
ll = (dd,ee)
tp.dictest(ll)
The function dictest() is defined in a separate file (here: test.py), which is compiled by pythran test.py -o test_pythran.so:
#pythran export dictest(str:float dict list )
def dictest(ll):
    print(ll["ux"], ll["uy"], ll["uz"])
Compilation gives a bunch of errors:
WARNING: Compilation error, trying hard to find its origin...
WARNING: Nope, I'm going to flood you with C++ errors!
CRITICAL: Cover me Jack. Jack? Jaaaaack!!!!
E: error: Command "/usr/local/opt/gcc/bin/g++-12 -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -DENABLE_PYTHON_MODULE -D__PYTHRAN__=3 -DPYTHRAN_BLAS_OPENBLAS -I/usr/local/opt/openblas/include -I/usr/local/Cellar/pythran/0.12.0/libexec/lib/python3.10/site-packages/pythran -I/usr/local/opt/numpy/lib/python3.10/site-packages/numpy/core/include -I/usr/local/Cellar/pythran/0.12.0/libexec/include -I/usr/local/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10 -c /var/folders/41/dj_cjw2977qbd4fqxf1c79tc0000gn/T/tmpctvfroxf.cpp -o /var/folders/41/dj_cjw2977qbd4fqxf1c79tc0000gn/T/tmp3ae2neiv/var/folders/41/dj_cjw2977qbd4fqxf1c79tc0000gn/T/tmpctvfroxf.o -std=c++11 -fno-math-errno -Wno-unused-function" failed with exit status 1
I am sure this is related to the way the list of dicts is declared in the export line: str:float dict list. What is the proper way to write it?
Thanks!

There are several issues in your example.
First, dictest takes a list of dicts as input, but you're indexing its argument with a string; maybe you meant:
#pythran export dictest(str:float dict list)
def dictest(ll):
    for l in ll:
        print(l["ux"], l["uy"], l["uz"])
Second, at the call site you're passing a tuple of dicts, not a list of dicts, to dictest. You probably want to write:
ll = [dd, ee]
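Putting both fixes together, the call site could look like the sketch below (assuming the module is still built with pythran test.py -o test_pythran.so; keeping every value a float is an extra assumption here so the data matches the declared str:float dict value type, not something required by the answer above):
import test_pythran as tp

# sketch of a corrected caller
dd = {"ux": 10.0, "uy": 5.0, "uz": 3.4}
ee = {"ux": 11.0, "uy": 7.0, "uz": 2.4}

ll = [dd, ee]   # a list, not a tuple
tp.dictest(ll)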

Related

GCC Linking Error when Building Fast RCNN

I am trying to build the source code at https://github.com/craftGBD/craftGBD in order to reproduce the results of the authors' published paper for my term project. I realized that I have to install Fast RCNN by running the Makefile inside the craftGBD/evaluation/lib folder. However, I get the following output when I run the Makefile with make:
/cta/users/byaman/craftEnv/bin/python setup.py build_ext --inplace
python setup.py build_ext --inplace
running build_ext
cythoning utils/bbox.pyx to utils/bbox.c
/cta/users/byaman/craftEnv/lib/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /cta/users/byaman/craftGBD/evaluation/lib/utils/bbox.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
cythoning nms/cpu_nms.pyx to nms/cpu_nms.c
/cta/users/byaman/craftEnv/lib/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /cta/users/byaman/craftGBD/evaluation/lib/nms/cpu_nms.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
cythoning nms/gpu_nms.pyx to nms/gpu_nms.cpp
/cta/users/byaman/craftEnv/lib/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /cta/users/byaman/craftGBD/evaluation/lib/nms/gpu_nms.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
skipping 'pycocotools/_mask.c' Cython extension (up-to-date)
building 'utils.cython_bbox' extension
['-Wno-cpp', '-Wno-unused-function'] .c ['-I/cta/users/byaman/craftEnv/lib/python2.7/site-packages/numpy/core/include', '-I/cta/users/byaman/craftEnv/include/python2.7', '-c'] ['-Wno-cpp', '-Wno-unused-function'] ['-I/cta/users/byaman/craftEnv/lib/python2.7/site-packages/numpy/core/include', '-I/cta/users/byaman/craftEnv/include/python2.7']
/cta/users/byaman/craftEnv/bin/x86_64-conda-linux-gnu-cc -fno-strict-aliasing -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe -DNDEBUG -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include -fPIC -I/cta/users/byaman/craftEnv/lib/python2.7/site-packages/numpy/core/include -I/cta/users/byaman/craftEnv/include/python2.7 -c utils/bbox.c -o build/temp.linux-x86_64-2.7/utils/bbox.o -Wno-cpp -Wno-unused-function
x86_64-conda_cos6-linux-gnu-gcc -pthread -shared -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,-rpath,/cta/users/byaman/craftEnv/lib -L/cta/users/byaman/craftEnv/lib -Wl,--no-as-needed -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath,/cta/users/byaman/craftEnv/lib -Wl,-rpath-link,/cta/users/byaman/craftEnv/lib -L/cta/users/byaman/craftEnv/lib -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include build/temp.linux-x86_64-2.7/utils/bbox.o -L/cta/users/byaman/craftEnv/lib -lpython2.7 -o /cta/users/byaman/craftGBD/evaluation/lib/utils/cython_bbox.so
/cta/users/byaman/craftEnv/bin/../lib/gcc/x86_64-conda-linux-gnu/9.3.0/../../../../x86_64-conda-linux-gnu/bin/ld: /cta/users/byaman/craftEnv/lib/libc.a(__stack_chk_fail.o): relocation R_X86_64_32 against symbol `__stack_chk_guard' can not be used when making a shared object; recompile with -fPIC
collect2: error: ld returned 1 exit status
error: command 'x86_64-conda_cos6-linux-gnu-gcc' failed with exit status 1
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 1
Note that my username is byaman and I run the code inside a Conda environment named craftEnv.
The code that is run by the Makefile is:
# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------
import os
from os.path import join as pjoin
from setuptools import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import subprocess
import numpy as np
def find_in_path(name, path):
    "Find a file in a search path"
    # Adapted fom
    # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/
    for dir in path.split(os.pathsep):
        binpath = pjoin(dir, name)
        if os.path.exists(binpath):
            return os.path.abspath(binpath)
    return None
def locate_cuda():
    """Locate the CUDA environment on the system
    Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'
    and values giving the absolute path to each directory.
    Starts by looking for the CUDAHOME env variable. If not found, everything
    is based on finding 'nvcc' in the PATH.
    """
    # first check if the CUDAHOME env variable is in use
    if 'CUDAHOME' in os.environ:
        home = os.environ['CUDAHOME']
        nvcc = pjoin(home, 'bin', 'nvcc')
    else:
        # otherwise, search the PATH for NVCC
        default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin')
        nvcc = find_in_path('nvcc', os.environ['PATH'] + os.pathsep + default_path)
        if nvcc is None:
            raise EnvironmentError('The nvcc binary could not be '
                                   'located in your $PATH. Either add it to your path, or set $CUDAHOME')
        home = os.path.dirname(os.path.dirname(nvcc))
    cudaconfig = {'home':home, 'nvcc':nvcc,
                  'include': pjoin(home, 'include'),
                  'lib64': pjoin(home, 'lib64')}
    for k, v in cudaconfig.iteritems():
        if not os.path.exists(v):
            raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))
    return cudaconfig
CUDA = locate_cuda()
# Obtain the numpy include directory. This logic works across numpy versions.
try:
    numpy_include = np.get_include()
except AttributeError:
    numpy_include = np.get_numpy_include()
def customize_compiler_for_nvcc(self):
    """inject deep into distutils to customize how the dispatch
    to gcc/nvcc works.
    If you subclass UnixCCompiler, it's not trivial to get your subclass
    injected in, and still have the right customizations (i.e.
    distutils.sysconfig.customize_compiler) run on it. So instead of going
    the OO route, I have this. Note, it's kindof like a wierd functional
    subclassing going on."""
    # tell the compiler it can processes .cu
    self.src_extensions.append('.cu')
    # save references to the default compiler_so and _comple methods
    default_compiler_so = self.compiler_so
    super = self._compile

    # now redefine the _compile method. This gets executed for each
    # object but distutils doesn't have the ability to change compilers
    # based on source extension: we add it.
    def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
        if os.path.splitext(src)[1] == '.cu':
            # use the cuda for .cu files
            self.set_executable('compiler_so', CUDA['nvcc'])
            # use only a subset of the extra_postargs, which are 1-1 translated
            # from the extra_compile_args in the Extension class
            postargs = extra_postargs['nvcc']
        else:
            postargs = extra_postargs['gcc']
        super(obj, src, ext, cc_args, postargs, pp_opts)
        # reset the default compiler_so, which we might have changed for cuda
        self.compiler_so = default_compiler_so

    # inject our redefined _compile method into the class
    self._compile = _compile
# run the customize_compiler
class custom_build_ext(build_ext):
    def build_extensions(self):
        customize_compiler_for_nvcc(self.compiler)
        build_ext.build_extensions(self)
ext_modules = [
    Extension(
        "utils.cython_bbox",
        ["utils/bbox.pyx"],
        extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
        include_dirs = [numpy_include]
    ),
    Extension(
        "nms.cpu_nms",
        ["nms/cpu_nms.pyx"],
        extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
        include_dirs = [numpy_include]
    ),
    Extension('nms.gpu_nms',
        ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
        library_dirs=[CUDA['lib64']],
        libraries=['cudart'],
        language='c++',
        runtime_library_dirs=[CUDA['lib64']],
        # this syntax is specific to this build system
        # we're only going to use certain compiler args with nvcc and not with
        # gcc the implementation of this trick is in customize_compiler() below
        extra_compile_args={'gcc': ["-Wno-unused-function"],
                            'nvcc': ['-arch=sm_35',
                                     '--ptxas-options=-v',
                                     '-c',
                                     '--compiler-options',
                                     "'-fPIC'"]},
        include_dirs = [numpy_include, CUDA['include']]
    ),
    Extension(
        'pycocotools._mask',
        sources=['pycocotools/maskApi.c', 'pycocotools/_mask.pyx'],
        include_dirs = [numpy_include, 'pycocotools'],
        extra_compile_args={
            'gcc': ['-Wno-cpp', '-Wno-unused-function', '-std=c99']},
    ),
]
setup(
    name='fast_rcnn',
    ext_modules=ext_modules,
    # inject our custom trigger
    cmdclass={'build_ext': custom_build_ext},
)
I don't know how to solve this problem even though I investigated the following questions/answers:
"relocation R_X86_64_32S against " linking Error
How to recompile with -fPIC
Cython wrapping a class that uses another library

How to query GCC warnings for C++?

GCC allows querying the available warning flags specific to the C++ language with the syntax:
g++ -Q --help=warning,c++
Adding warning flags to the call includes them in the result:
g++ -Wall -Q --help=warning,c++
However, it seems the query is done from the C point of view, and I don't know how to do it from the C++ point of view. If the call includes a C++-only warning, like:
g++ -Wnon-virtual-dtor -Q --help=warning,c++
the output contains a message:
cc1: warning: command line option ‘-Wnon-virtual-dtor’ is valid for C++/ObjC++ but not for C
and still shows the warning as disabled:
-Wnon-virtual-dtor [disabled]
Note that this happens regardless of whether the call is done using g++ or gcc.
The same query with the C-only -Wbad-function-cast behaves as expected:
gcc -Wbad-function-cast -Q --help=warning,c
There is no extra message, and the reported warning status changes between [disabled] and [enabled], again regardless of whether g++ or gcc is used.
I'm using GCC version 7.3.0, although the issue seems to apply to many if not all versions; it can be observed on Compiler Explorer.
So, is there a way to do this query with respect to a given language?
Yes, your observations are correct.
Probably this is not the intended behavior, and if you care about this feature, then I suggest reporting it upstream.
Note that this works, however:
touch 1.cc
g++ -Wnon-virtual-dtor -Q --help=warning,c++ 1.cc
I.e. if there's an input file with a proper extension, then the correct compiler proper executable is invoked: cc1plus, not cc1. The latter is the default if no input files are present. I did some quick debugging, and here's how that happens:
// gcc.c:
driver::do_spec_on_infiles () const
{
  ...
  for (i = 0; (int) i < n_infiles; i++)
    {
      ...
      /* Figure out which compiler from the file's suffix. */
      input_file_compiler
        = lookup_compiler (infiles[i].name, input_filename_length,
                           infiles[i].language);
      if (input_file_compiler)
        {
          ...
          value = do_spec (input_file_compiler->spec);
And input_file_compiler at that point is the C compiler, because
p n_infiles
$9 = 1
(gdb) p infiles[0]
$10 = {name = 0x4cbfb0 "help-dummy", language = 0x4cbfae "c", incompiler = 0x58a920, compiled = false, preprocessed = false}
Here's how the dummy file got created (function process_command in the same file):
if (n_infiles == 0
    && (print_subprocess_help || print_help_list || print_version))
  {
    /* Create a dummy input file, so that we can pass
       the help option on to the various sub-processes. */
    add_infile ("help-dummy", "c");
  }
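If you want to script that workaround, here is a minimal sketch (assuming g++ is on PATH and exposes the query output on stdout; the probe.cc file name and the chosen flag are just illustrations) that creates the dummy C++ file and prints the reported status:
# sketch: pass a dummy .cc input file so the query is answered by cc1plus
# rather than cc1; assumes g++ is on PATH
import os
import subprocess
import tempfile

flag = "-Wnon-virtual-dtor"
with tempfile.TemporaryDirectory() as tmp:
    probe = os.path.join(tmp, "probe.cc")
    open(probe, "w").close()  # an empty file with a C++ suffix is enough
    result = subprocess.run(
        ["g++", flag, "-Q", "--help=warning,c++", probe],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        if flag in line:
            print(line.strip())  # should now report the C++ status, e.g. [enabled]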

NativeLstm2.so: undefined symbol: sgemm_

While trying to run 22_train.sh, I get the following output:
OpCodeCompiler call: /home/end2end/cuda-8.0/bin/nvcc -shared -O2 -std=c++11 -I /home/anaconda2/lib/python2.7/site-packages/tensorflow/include -I /anaconda2/lib/python2.7/site-packages/tensorflow/include/external/nsync/public -I /home/end2endcuda-8.0/include -L /home/end2endcuda-8.0/lib64 -x cu -DGOOGLE_CUDA=1 -Xcompiler -fPIC -arch compute_30 -D_GLIBCXX_USE_CXX11_ABI=0 -g /tmp/returnn_tf_cache/ops/NativeLstm2/357cf3b2da/NativeLstm2.cc -o /tmp/returnn_tf_cache/ops/NativeLstm2/357cf3b2da/NativeLstm2.so
Exception creating layer root/'lstm0_fw' of class RecLayer with opts:
{'direction': 1,
'loss': None,
'n_out': 1024,
'name': 'lstm0_fw',
'network': <TFNetwork 'root' train=<tf.Tensor 'globals/train_flag:0' shape=() dtype=bool>>,
'output': Data(name='lstm0_fw_output', shape=(None, 1024), dtype='float32', batch_dim_axis=1),
'sources': [<EvalLayer 'source' out_type=Data(shape=(None, 40), dtype='float32')>],
'unit': 'nativelstm2'}
Unhandled exception <class 'tensorflow.python.framework.errors_impl.NotFoundError'> in thread <_MainThread(MainThread, started 140224295745280)>, proc 18758.
Thread current, main, <_MainThread(MainThread, started 140224295745280)>:
(Excluded thread.)
That were all threads.
NotFoundError: /tmp/returnn_tf_cache/ops/NativeLstm2/357cf3b2da/NativeLstm2.so: undefined symbol: sgemm_
Cuda 8.0 and CUDNN paths are configured.
To put the comments into an answer:
This is probably due to an old version, and should be fixed by now. There have been some changes with respect to how RETURNN finds the sgemm_ symbol.
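For illustration only (this is not the actual RETURNN change): an undefined sgemm_ at load time can generally be satisfied by preloading a library that exports that BLAS symbol with RTLD_GLOBAL before the op's shared object is loaded; the library name below is an assumption.
# sketch: make sgemm_ globally visible before the custom op .so is loaded;
# "libopenblas.so" is an assumed provider of the symbol, not RETURNN's mechanism
import ctypes

ctypes.CDLL("libopenblas.so", mode=ctypes.RTLD_GLOBAL)
ctypes.CDLL("/tmp/returnn_tf_cache/ops/NativeLstm2/357cf3b2da/NativeLstm2.so")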

undefined reference to __aeabi_ldivmod when building kernel for arm32

When building a kernel image for an arm32 platform, the final link fails with this error:
arm-eabi-ld -EL -p --no-undefined -X --build-id -o .tmp_vmlinux1 -T obj/KERNEL/arch/arm/kernel/vmlinux.lds arch/arm/kernel/head.o init/built-in.o --start-group usr/built-in.o arch/arm/vfp/built-in.o arch/arm/kernel/built-in.o arch/arm/mm/built-in.o arch/arm/common/built-in.o arch/arm/net/built-in.o arch/arm/crypto/built-in.o arch/arm/mach-sc/built-in.o kernel/built-in.o mm/built-in.o fs/built-in.o ipc/built-in.o security/built-in.o crypto/built-in.o block/built-in.o arch/arm/lib/lib.a lib/lib.a arch/arm/lib/built-in.o lib/built-in.o drivers/built-in.o sound/built-in.o firmware/built-in.o arch/arm/oprofile/built-in.o net/built-in.o --end-group
drivers/built-in.o:
undefined reference to `__aeabi_ldivmod'
make[2]: *** [vmlinux] Error 1
I know the reason is that I'm using 64-bit division on arm32, where it is not supported directly, and that using do_div() can get rid of the __aeabi_ldivmod error. I also know __aeabi_ldivmod is defined in libgcc.a, so I added the following to my kernel Makefile:
--- a/Makefile
+++ b/Makefile
@@ -677,6 +677,7 @@
LDFLAGS_BUILD_ID = $(patsubst -Wl$(comma)%,%,\
$(call cc-ldoption, -Wl$(comma)--build-id,))
KBUILD_LDFLAGS_MODULE += $(LDFLAGS_BUILD_ID)
LDFLAGS_vmlinux += $(LDFLAGS_BUILD_ID)
+LDFLAGS_vmlinux += -L$(MY_LIBPATH) -lgcc
But it still doesn't work, so can anybody help with my questions:
1) Why is libgcc.a not linked by default when building the kernel?
2) How do I link libgcc.a to fix the link error?
[Update]
OK, I found the following note for -lgcc in OSDev wiki:
-lgcc
You disable the important libgcc library when you pass -nodefaultlibs (implied by -nostdlib). The compiler needs this library for many operations that it cannot do itself or that is more efficient to put into a shared function. You must pass this library near the end of the link line, after all the other object files and libraries, or the linker won't use it and you get strange linker errors.
So I hard-coded -L${MYLIB_PATH} -lgcc into scripts/link-vmlinux.sh:
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -55,7 +55,7 @@ vmlinux_link()
if [ "${SRCARCH}" != "um" ]; then
${LD} ${LDFLAGS} ${LDFLAGS_vmlinux} -o ${2} \
-T ${lds} ${KBUILD_VMLINUX_INIT} \
- --start-group ${KBUILD_VMLINUX_MAIN} --end-group ${1}
+ --start-group ${KBUILD_VMLINUX_MAIN} --end-group ${1} -L${MYLIB_PATH} -lgcc
else
${CC} ${CFLAGS_vmlinux} -o ${2} \
-Wl,-T,${lds} ${KBUILD_VMLINUX_INIT} \
Then the build completed without the __aeabi_ldivmod error.
Still, I would like to find a better solution than this hard-coding, and to understand why the kernel doesn't link libgcc by default.
Don't edit scripts/link-vmlinux.sh, edit only certain Makefiles. For example if you have this error:
drivers/power/reset/msm-poweroff.c:249: undefined reference to `lge_set_restart_reason'
run cscope -R at the top of the Linux kernel source directory, then look up the global definition of lge_set_restart_reason; it points to include/soc/qcom/lge/lge_handle_panic.h.
But lge_handle_panic.h has only the declaration of lge_set_restart_reason; you need lge_handle_panic.c, which contains the function itself, looking like this:
void lge_set_restart_reason(unsigned int reason)
{
    writel_relaxed(reason, RESTART_REASON);
    if(use_hardreset) {
        qpnp_pon_set_restart_reason(map_imem_reboot_to_pmic(reason));
        qpnp_pon_system_pwr_off(PON_POWER_OFF_HARD_RESET);
    }
}
so you need to edit drivers/power/reset/Makefile to include lge_handle_panic.o object file:
obj-$(CONFIG_POWER_RESET_MSM) += msm-poweroff.o lge_handle_panic.o
In custom kernels, a lot of functions are enabled by config options; what I mean is:
#ifdef CONFIG_LGE_HANDLE_PANIC
static void __iomem *msm_timer0_base;

void __iomem *wdt_timer_get_timer0_base(void)
{
    return msm_timer0_base;
}

static void wdt_timer_set_timer0_base(void __iomem * iomem)
{
    msm_timer0_base = iomem;
}
#endif
You need to enable CONFIG_LGE_HANDLE_PANIC (or whichever config option applies if you hit this error again), or remove the "ifdef" and "endif" (I don't recommend doing this).

MPFR 3.1.3 fails with "illegal thread local variable reference to regular symbol" on Mac + Intel 15

I downloaded MPFR 3.1.3 from http://www.mpfr.org/mpfr-current/ and attempted to compile with Intel 15 (icc --version returns icc (ICC) 15.0.3 20150408) on Mac Yosemite (10.10.4).
The build fails at this point:
jrhammon-mac01:build jrhammon$ make
Making all in doc
make[1]: Nothing to be done for `all'.
Making all in src
/Applications/Xcode.app/Contents/Developer/usr/bin/make all-am
/bin/sh ../libtool --tag=CC --mode=link icc -Wall -Wmissing-prototypes -Wpointer-arith -fp_port -mp -wd1572 -wd265 -wd186 -wd239 -g -O2 -version-info 5:3:1 -Wl,-search_paths_first -o libmpfr.la -rpath /usr/local/lib exceptions.lo extract.lo uceil_exp2.lo uceil_log2.lo ufloor_log2.lo add.lo add1.lo add_ui.lo agm.lo clear.lo cmp.lo cmp_abs.lo cmp_si.lo cmp_ui.lo comparisons.lo div_2exp.lo div_2si.lo div_2ui.lo div.lo div_ui.lo dump.lo eq.lo exp10.lo exp2.lo exp3.lo exp.lo frac.lo frexp.lo get_d.lo get_exp.lo get_str.lo init.lo inp_str.lo isinteger.lo isinf.lo isnan.lo isnum.lo const_log2.lo log.lo modf.lo mul_2exp.lo mul_2si.lo mul_2ui.lo mul.lo mul_ui.lo neg.lo next.lo out_str.lo printf.lo vasprintf.lo const_pi.lo pow.lo pow_si.lo pow_ui.lo print_raw.lo print_rnd_mode.lo reldiff.lo round_prec.lo set.lo setmax.lo setmin.lo set_d.lo set_dfl_prec.lo set_exp.lo set_rnd.lo set_f.lo set_prc_raw.lo set_prec.lo set_q.lo set_si.lo set_str.lo set_str_raw.lo set_ui.lo set_z.lo sqrt.lo sqrt_ui.lo sub.lo sub1.lo sub_ui.lo rint.lo ui_div.lo ui_sub.lo urandom.lo urandomb.lo get_z_exp.lo swap.lo factorial.lo cosh.lo sinh.lo tanh.lo sinh_cosh.lo acosh.lo asinh.lo atanh.lo atan.lo cmp2.lo exp_2.lo asin.lo const_euler.lo cos.lo sin.lo tan.lo fma.lo fms.lo hypot.lo log1p.lo expm1.lo log2.lo log10.lo ui_pow.lo ui_pow_ui.lo minmax.lo dim.lo signbit.lo copysign.lo setsign.lo gmp_op.lo init2.lo acos.lo sin_cos.lo set_nan.lo set_inf.lo set_zero.lo powerof2.lo gamma.lo set_ld.lo get_ld.lo cbrt.lo volatile.lo fits_sshort.lo fits_sint.lo fits_slong.lo fits_ushort.lo fits_uint.lo fits_ulong.lo fits_uintmax.lo fits_intmax.lo get_si.lo get_ui.lo zeta.lo cmp_d.lo erf.lo inits.lo inits2.lo clears.lo sgn.lo check.lo sub1sp.lo version.lo mpn_exp.lo mpfr-gmp.lo mp_clz_tab.lo sum.lo add1sp.lo free_cache.lo si_op.lo cmp_ld.lo set_ui_2exp.lo set_si_2exp.lo set_uj.lo set_sj.lo get_sj.lo get_uj.lo get_z.lo iszero.lo cache.lo sqr.lo int_ceil_log2.lo isqrt.lo strtofr.lo pow_z.lo logging.lo mulders.lo get_f.lo round_p.lo erfc.lo atan2.lo subnormal.lo const_catalan.lo root.lo sec.lo csc.lo cot.lo eint.lo sech.lo csch.lo coth.lo round_near_x.lo constant.lo abort_prec_max.lo stack_interface.lo lngamma.lo zeta_ui.lo set_d64.lo get_d64.lo jn.lo yn.lo rem1.lo get_patches.lo add_d.lo sub_d.lo d_sub.lo mul_d.lo div_d.lo d_div.lo li2.lo rec_sqrt.lo min_prec.lo buildopt.lo digamma.lo bernoulli.lo isregular.lo set_flt.lo get_flt.lo scale2.lo set_z_exp.lo ai.lo gammaonethird.lo grandom.lo -lgmp
libtool: link: icc -dynamiclib -Wl,-undefined -Wl,dynamic_lookup -o .libs/libmpfr.4.dylib .libs/exceptions.o .libs/extract.o .libs/uceil_exp2.o .libs/uceil_log2.o .libs/ufloor_log2.o .libs/add.o .libs/add1.o .libs/add_ui.o .libs/agm.o .libs/clear.o .libs/cmp.o .libs/cmp_abs.o .libs/cmp_si.o .libs/cmp_ui.o .libs/comparisons.o .libs/div_2exp.o .libs/div_2si.o .libs/div_2ui.o .libs/div.o .libs/div_ui.o .libs/dump.o .libs/eq.o .libs/exp10.o .libs/exp2.o .libs/exp3.o .libs/exp.o .libs/frac.o .libs/frexp.o .libs/get_d.o .libs/get_exp.o .libs/get_str.o .libs/init.o .libs/inp_str.o .libs/isinteger.o .libs/isinf.o .libs/isnan.o .libs/isnum.o .libs/const_log2.o .libs/log.o .libs/modf.o .libs/mul_2exp.o .libs/mul_2si.o .libs/mul_2ui.o .libs/mul.o .libs/mul_ui.o .libs/neg.o .libs/next.o .libs/out_str.o .libs/printf.o .libs/vasprintf.o .libs/const_pi.o .libs/pow.o .libs/pow_si.o .libs/pow_ui.o .libs/print_raw.o .libs/print_rnd_mode.o .libs/reldiff.o .libs/round_prec.o .libs/set.o .libs/setmax.o .libs/setmin.o .libs/set_d.o .libs/set_dfl_prec.o .libs/set_exp.o .libs/set_rnd.o .libs/set_f.o .libs/set_prc_raw.o .libs/set_prec.o .libs/set_q.o .libs/set_si.o .libs/set_str.o .libs/set_str_raw.o .libs/set_ui.o .libs/set_z.o .libs/sqrt.o .libs/sqrt_ui.o .libs/sub.o .libs/sub1.o .libs/sub_ui.o .libs/rint.o .libs/ui_div.o .libs/ui_sub.o .libs/urandom.o .libs/urandomb.o .libs/get_z_exp.o .libs/swap.o .libs/factorial.o .libs/cosh.o .libs/sinh.o .libs/tanh.o .libs/sinh_cosh.o .libs/acosh.o .libs/asinh.o .libs/atanh.o .libs/atan.o .libs/cmp2.o .libs/exp_2.o .libs/asin.o .libs/const_euler.o .libs/cos.o .libs/sin.o .libs/tan.o .libs/fma.o .libs/fms.o .libs/hypot.o .libs/log1p.o .libs/expm1.o .libs/log2.o .libs/log10.o .libs/ui_pow.o .libs/ui_pow_ui.o .libs/minmax.o .libs/dim.o .libs/signbit.o .libs/copysign.o .libs/setsign.o .libs/gmp_op.o .libs/init2.o .libs/acos.o .libs/sin_cos.o .libs/set_nan.o .libs/set_inf.o .libs/set_zero.o .libs/powerof2.o .libs/gamma.o .libs/set_ld.o .libs/get_ld.o .libs/cbrt.o .libs/volatile.o .libs/fits_sshort.o .libs/fits_sint.o .libs/fits_slong.o .libs/fits_ushort.o .libs/fits_uint.o .libs/fits_ulong.o .libs/fits_uintmax.o .libs/fits_intmax.o .libs/get_si.o .libs/get_ui.o .libs/zeta.o .libs/cmp_d.o .libs/erf.o .libs/inits.o .libs/inits2.o .libs/clears.o .libs/sgn.o .libs/check.o .libs/sub1sp.o .libs/version.o .libs/mpn_exp.o .libs/mpfr-gmp.o .libs/mp_clz_tab.o .libs/sum.o .libs/add1sp.o .libs/free_cache.o .libs/si_op.o .libs/cmp_ld.o .libs/set_ui_2exp.o .libs/set_si_2exp.o .libs/set_uj.o .libs/set_sj.o .libs/get_sj.o .libs/get_uj.o .libs/get_z.o .libs/iszero.o .libs/cache.o .libs/sqr.o .libs/int_ceil_log2.o .libs/isqrt.o .libs/strtofr.o .libs/pow_z.o .libs/logging.o .libs/mulders.o .libs/get_f.o .libs/round_p.o .libs/erfc.o .libs/atan2.o .libs/subnormal.o .libs/const_catalan.o .libs/root.o .libs/sec.o .libs/csc.o .libs/cot.o .libs/eint.o .libs/sech.o .libs/csch.o .libs/coth.o .libs/round_near_x.o .libs/constant.o .libs/abort_prec_max.o .libs/stack_interface.o .libs/lngamma.o .libs/zeta_ui.o .libs/set_d64.o .libs/get_d64.o .libs/jn.o .libs/yn.o .libs/rem1.o .libs/get_patches.o .libs/add_d.o .libs/sub_d.o .libs/d_sub.o .libs/mul_d.o .libs/div_d.o .libs/d_div.o .libs/li2.o .libs/rec_sqrt.o .libs/min_prec.o .libs/buildopt.o .libs/digamma.o .libs/bernoulli.o .libs/isregular.o .libs/set_flt.o .libs/get_flt.o .libs/scale2.o .libs/set_z_exp.o .libs/ai.o .libs/gammaonethird.o .libs/grandom.o -lgmp -mp -g -O2 -Wl,-search_paths_first -mp -install_name /usr/local/lib/libmpfr.4.dylib 
-compatibility_version 6 -current_version 6.3 -Wl,-single_module
icc: command line remark #10148: option '-mp' not supported
icc: command line remark #10148: option '-mp' not supported
ld: illegal thread local variable reference to regular symbol __tls___mpfr_allocate_func for architecture x86_64
make[2]: *** [libmpfr.la] Error 1
make[1]: *** [all] Error 2
make: *** [all-recursive] Error 1
I have never seen this error before. Is it a bug in MPFR, the Intel compiler or the Mac toolchain? I am using the system install of GMP (from Homebrew), which I assume is built with Clang, not Intel. Is there a C ABI incompatibility issue here with TLS?
I found https://github.com/feeley/gambit/issues/109 and https://trac.mpich.org/projects/mpich/ticket/1547, which suggest that it is a Mac issue, but I am able to build with CC=clang so I guess it has something to do with Intel.
I have got MPFR to build in the past with Intel compilers, but (apparently) had to apply this patch: https://github.com/hpcugent/easybuild-easyconfigs/blob/master/easybuild/easyconfigs/m/MPFR/MPFR_ictce_remove-deprecated-mp.patch .
Maybe the warnings you're getting aren't so harmless as they seem.
