Why is a GCC cross compile not building `crti.o`? - gcc

In an attempt to build a gcc 4.x.x cross compiler for arm, I'm stuck at a missing crti.o file in the $BUILD_DIR/gcc subdirectory.
Running strace on the top-level make shows that the compiled xgcc is calling the cross-linker ld with "crti.o" as an argument. I'm assuming that, since the cross-linking ld is being called, the native /usr/lib/crti.o is not what is needed.
I can see that there are a number of potential sources for a crti object in the gcc source tree (including $SRC_DIR/gcc/config/arm/crti.asm).
How can I configure the gcc build to ensure this file is built (or omitted from the ld command)?
Here is my configure line:
/x-tools/build/gcc-4.5.0$ ../../src/gcc-4.5.0/configure --target=arm-linux --prefix=/opt/arm-tools --disable-threads --enable-languages=c

The real answer is that it would compile crti.o if one were building an arm-elf target. When building an arm-linux target, the gcc people reasonably assume that glibc has been compiled previously and will provide the crti.o startup file. Perfectly reasonable, if you're upgrading.
Building a new root file system is another story, and a paradoxical one at that (which comes first: glibc or gcc?). One endorsed approach (which I've not yet succeeded with) is to build a stand-alone gcc (arm-elf/static, say), then glibc, then gcc again.
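For what it's worth, the usual shape of that two-pass bootstrap looks roughly like this (paths and configure options are illustrative only, not a verified recipe):
# Pass 1: a minimal, C-only gcc that does not need a libc yet
../../src/gcc-4.5.0/configure --target=arm-linux --prefix=/opt/arm-tools \
    --enable-languages=c --without-headers --with-newlib \
    --disable-shared --disable-threads
make all-gcc all-target-libgcc
make install-gcc install-target-libgcc
# Build and install glibc (or another libc) with that compiler; for an
# arm-linux target it is glibc that supplies crti.o and crtn.o.
# Pass 2: rebuild a full gcc against the now-installed C library
../../src/gcc-4.5.0/configure --target=arm-linux --prefix=/opt/arm-tools --enable-languages=c
make && make install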
It seems as though some have addressed the missing crti.o in an arm-linux target by modifying gcc/config/arm/t-linux. Rather than relying on a glibc that doesn't yet exist, the kludge is to use the arm-elf-provided version of crti.o. An example can be found here.
--- gcc-3.4.4/gcc/config/arm/t-linux 2003-09-20 17:09:07.000000000 -0400
+++ gcc-3.4.4.works/gcc/config/arm/t-linux 2005-05-25 20:44:07.000000000 -0400
@@ -18,3 +18,24 @@
# LIBGCC = stmp-multilib
# INSTALL_LIBGCC = install-multilib
+
+EXTRA_MULTILIB_PARTS = crtbegin.o crtend.o crti.o crtn.o
+
+# If EXTRA_MULTILIB_PARTS is not defined above then define EXTRA_PARTS here
+# EXTRA_PARTS = crtbegin.o crtend.o crti.o crtn.o
+
+LIBGCC = stmp-multilib
+INSTALL_LIBGCC = install-multilib
+
+# Assemble startup files.
+$(T)crti.o: $(srcdir)/config/arm/crti.asm $(GCC_PASSES)
+ $(GCC_FOR_TARGET) $(GCC_CFLAGS) $(MULTILIB_CFLAGS) $(INCLUDES) \
+ -c -o $(T)crti.o -x assembler-with-cpp $(srcdir)/config/arm/crti.asm
+
+$(T)crtn.o: $(srcdir)/config/arm/crtn.asm $(GCC_PASSES)
+ $(GCC_FOR_TARGET) $(GCC_CFLAGS) $(MULTILIB_CFLAGS) $(INCLUDES) \
+ -c -o $(T)crtn.o -x assembler-with-cpp $(srcdir)/config/arm/crtn.asm
+
+# Disable libc link
+
+SHLIB_LC =
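For reference, a diff like that gets applied from the top of the gcc source tree in the usual way (the patch file name here is just whatever you saved the hunk as, and since the hunk above was made against gcc 3.4.4 it may need adjusting for a 4.x tree):
cd $SRC_DIR                          # top of the gcc source tree
patch -p1 < ../t-linux-crti.patch    # -p1 strips the leading gcc-3.4.4/ component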

Related

Compiling library in g++ using C++11

I've been attempting to compile an open-source C++ library (QuantLib-1.7) on my Mac for several days, but I seem to be encountering some kind of C++11 compatibility issue.
When I run make && sudo make install from the terminal the compilation seems to work except for a bunch of errors of the form
Making all in BermudanSwaption
g++ -DHAVE_CONFIG_H -I. -I../../ql -I../.. -I../.. -I/opt/local/include -g -O2 -MT BermudanSwaption.o -MD -MP -MF .deps/BermudanSwaption.Tpo -c -o BermudanSwaption.o BermudanSwaption.cpp
In file included from BermudanSwaption.cpp:22:
In file included from ../../ql/quantlib.hpp:43:
In file included from ../../ql/experimental/all.hpp:25:
In file included from ../../ql/experimental/volatility/all.hpp:21:
In file included from ../../ql/experimental/volatility/zabr.hpp:31:
In file included from ../../ql/math/statistics/incrementalstatistics.hpp:35:
In file included from /opt/local/include/boost/accumulators/statistics/stats.hpp:14:
In file included from /opt/local/include/boost/accumulators/statistics_fwd.hpp:12:
/opt/local/include/boost/mpl/print.hpp:50:19: warning: in-class initialization
of non-static data member is a C++11 extension [-Wc++11-extensions]
const int m_x = 1 / (sizeof(T) - sizeof(T));
^
1 warning generated.
I'm guessing this has something to do with g++ not being correctly configured for C++11. I'm aware that C++11 support can be enabled by compiling with g++ -std=c++11. However, despite a lot of googling, I can't find a way to modify the makefile so that -std=c++11 is passed when I run make && sudo make install.
Any help would be greatly appreciated.
Here is the section of the makefile which I believe is relevant:
BOOST_INCLUDE = -I/opt/local/include
BOOST_LIB = -L/opt/local/lib
BOOST_THREAD_LIB =
BOOST_UNIT_TEST_DEFINE = -DQL_WORKING_BOOST_STREAMS
BOOST_UNIT_TEST_LIB = boost_unit_test_framework-mt
BOOST_UNIT_TEST_MAIN_CXXFLAGS = -DBOOST_TEST_DYN_LINK
CC = gcc
CCDEPMODE = depmode=gcc3
CFLAGS = -g -O2
CPP = gcc -E
CPPFLAGS = -I/opt/local/include
CXX = g++
CXXCPP = g++ -E
CXXDEPMODE = depmode=gcc3
CXXFLAGS = -g -O2
Here is the output from running "g++ -v":
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.0.0 (clang-700.1.76)
Target: x86_64-apple-darwin14.5.0
Thread model: posix
Makefile.am: https://www.dropbox.com/s/v5j7qohwfup81od/Makefile.am?dl=0
Makefile.in: https://www.dropbox.com/s/t92hft9ea2ar1zw/Makefile.in?dl=0
QuantLib-1.7 directory: https://www.dropbox.com/sh/ulj0y68m8x35zg8/AAA-w7L2_YWIP8_KnwURErzYa?dl=0
Full error log: https://www.dropbox.com/s/g09lcnma8skipv7/errors.txt?dl=0
Add something like
CXXFLAGS += -std=c++11
to your Makefile. This will work regardless of the Darwin-specific munging of the g++ executable (it's really clang++).
References:
https://gcc.gnu.org/onlinedocs/gcc-5.2.0/gcc/C_002b_002b-Dialect-Options.html#C_002b_002b-Dialect-Options
https://gcc.gnu.org/projects/cxx0x.html
http://clang.llvm.org/cxx_status.html
https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html
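Since the Makefile in question is generated by QuantLib's autotools configure script (see the Makefile.am/Makefile.in links above), an alternative sketch is to bake the flag in at configure time instead of editing the generated Makefile by hand:
./configure CXXFLAGS="-g -O2 -std=c++11"
make && sudo make install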
Since you already have Homebrew and are familiar with it, my suggestion would be to use it to install and manage QuantLib like this:
brew install quantlib
That will then build and put all the files in /usr/local/Cellar/quantlib under some version number that is not of importance. The important thing is that the tools are then linked into /usr/local/bin, so all you need to do is make sure that /usr/local/bin is in your PATH.
That gives you access to the tool quantlib-config which is always linked to the latest version and it knows which version that is. So, if you run:
quantlib-config --cflags
it will tell you what the correct path is for your includes like this:
-I/usr/local/Cellar/quantlib/1.6.1/include
Likewise, if you run:
quantlib-config --libs
it will tell you the correct linking directories and libraries for your latest version.
In short, all you need to do to compile is:
g++ $(quantlib-config --cflags --libs)
and it will always pull in the version you are using.
Note that if you use a Makefile, you will need to double the dollar signs.
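For instance, a minimal Makefile fragment (the target and file names are invented for illustration; the recipe line must start with a tab):
# The doubled $$ makes make pass a literal $(...) through to the shell,
# which then performs the command substitution.
CXXFLAGS += $$(quantlib-config --cflags)
LDLIBS   += $$(quantlib-config --libs)
example: example.cpp
	$(CXX) $(CXXFLAGS) example.cpp $(LDLIBS) -o example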
For future reference, this is how I eventually managed to compile the QuantLib library. It is probably not the most efficient or elegant method, but it appears to work.
I followed the steps given in http://quantlib.org/install/macosx.shtml and found that running make && sudo make install led to the error reported in the OP.
Create a new static library C++ project in Eclipse called 'Quantlib'
Copy the ql directory located in the .tar file to the Quantlib Eclipse workspace
Right-click Quantlib > Properties > C/C++ Build > Settings > Cross G++ Compiler: Change the Language standard to ISO C++ 11 (-std=c++0x)
Right-click Quantlib > C/C++ General > Paths and Symbols: Add the following include directories for GNU C++
/opt/local/include
/Quantlib (check "Is a workspace directory")
/opt/local/include/boost
Build the Quantlib project (around 34 min on MacBook Air 1.8 GHz Intel Core i7)
Create a new C++ executable project (e.g. BermudanSwaption) and copy the BermudanSwaption.cpp into the BermudanSwaption Eclipse workspace
Repeat steps 4. and 5. for the BermudanSwaption Eclipse project
Right-click BermudanSwaption > Properties > C/C++ General > Paths and Symbols > References: check Quantlib (the Library Paths tab should now contain the entry '/Quantlib/Debug')
Build and run the BermudanSwaption executable project
QuantLib-1.7
OSX Yosemite 10.10.5
Eclipse C/C++ Development Tools Version: 8.8.0.201509131935
Xcode Version 7.1 (7B91b)
xcode-select version 2339.

Building shared libraries for Ada

I'm having some trouble building shared libraries from Ada packages without using GPRs.
I have a package, Numerics, in files "numerics.ads" and "numerics.adb". They have no dependencies. There is a small build script which does:
gnatmake -Os numerics.ad[bs] -cargs -fPIC
gcc -shared numerics.o -o libnumerics.so -Wl,-soname,libnumerics.so
The .so and .ali files are installed at /usr/lib, and the .ads file is installed at /usr/include.
gnatls -v outputs the following relevant parts:
Source Search Path:
<Current_Directory>
/usr/include
/usr/lib/gcc/x86_64-unknown-linux-gnu/5.1.0/adainclude
Object Search Path:
<Current_Directory>
/usr/lib
/usr/lib/gcc/x86_64-unknown-linux-gnu/5.1.0/adalib
So GNAT should have no problem finding the files.
Then, trying to compile a package that depends on Numerics:
gnatmake -O2 mathematics.ad[bs] -cargs -fPIC
outputs:
gcc -c -fPIC mathematics.adb
gcc -c -I./ -fPIC -I- /usr/include/numerics.ads
cannot generate code for file numerics.ads (package spec)
gnatmake: "/usr/include/numerics.ads" compilation error
This error has me thinking GNAT doesn't recognize the shared library, and is trying to rebuild Numerics.
I'd like to be building shared libraries, and only supply the spec for reference/documentation purposes.
edit:
So, it looks like gprbuild does two things I'm not doing. The first is also passing -lnumerics to the compiler. The second, which shouldn't matter since libnumerics.so is in a standard directory anyway, is -L«ProjectDirectory». GPRbuild is obviously not doing the desired thing either, even though it builds the dependent project: it should be using the installed library /usr/lib/libnumerics.so, but instead uses «path»/Numerics/build/libnumerics.so. Furthermore, after building Numerics with GPRbuild and then renaming the body to make it as if the body didn't exist (as with the installed files), building Mathematics with GPRbuild complains about the exact same problem. It's as if the libraries aren't even shared, and GPRbuild is just making them look that way (except that readelf reports the correct dependencies inside the libraries).
Adding -lnumerics to the build script accomplishes nothing; the build error is exactly the same. I'm completely lost at this point.
edit:
Following the link from Simon, the buildscript has changed to:
gnatmake -O2 mathematics.ad[bs] \
-aI/usr/include \
-aO/usr/lib \
-cargs -fPIC \
-largs -lnumerics
The error is essentially the same:
gcc -c -O2 -I/usr/include/ -fPIC mathematics.adb
gcc -c -I./ -O2 -I/usr/include/ -fPIC -I- /usr/include/numerics.ads
cannot generate code for file numerics.ads (package spec)
gnatmake: "/usr/include/numerics.ads" compilation error
I thought to check libnumerics.so is actually a correct shared library. ldd reports:
linux-vdso.so.1 (0x00007ffd944c1000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f50d3927000)
/usr/lib64/ld-linux-x86-64.so.2 (0x00007f50d3ed4000)
So I'm thinking yes, the library is fine, and gnatmake still isn't recognizing it.
In general, you need to install the body of the packages as well (numerics.adb in your case). Also, I suspect you want to set the ALI files
(numerics.ali) as read-only, so that gnatmake does not try to recompile them.
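Concretely, the install step would look something like this (paths as in the question; this is a sketch, not a tested script):
install -m 644 numerics.ads numerics.adb /usr/include
install -m 755 libnumerics.so /usr/lib
install -m 444 numerics.ali /usr/lib   # read-only ALI: gnatmake treats the unit as pre-built
ldconfig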

using external libraries with msys - minGW

I want to write and compile C++ code that requires the FLTK 1.3.2 GUI libraries.
I would like to use minGW with MSYS.
I have installed MinGW and MSYS properly and have been able to build FLTK with ./configure && make. Everything worked up to this point.
Now I am testing the hello program, and can get the compiler to locate the header files, but it returns errors - which I believe are a result of the compiler not finding the location of the FLTK library. I have looked over the minGW site and it seems the difficulty of getting MSYS to direct the compiler to the correct location is not uncommon.
I have worked with C++ minGW for about a year but am completely new to MSYS.
Here is my command:
c++ Hello.cxx -Lc:/fltk-1.3.2/test -Ic:/fltk-1.3.2 -o Hello.exe
(I am not sure if my syntax is correct so any comments are appreciated)
Here is what I get from the compiler:
C:\Users\CROCKE~1\AppData\Local\Temp\ccbpaWGj.o:hello.cxx(.text+0x3c): undefined reference to 'Fl_Window::Fl_Window(int, int, char const*)'
... more similar comments...
collect2: ld returned exit status
It seems the compiler can't find the function definitions which I believe are in c:/fltk-1.3.2/test.
Again, I am a newbie so any help is greatly appreciated.
Thanks.
Your compile command is not right: with the -L parameter you only tell ld where to search for additional libraries, but you never specify any library you actually want to link. For that you use the -l flag.
So the command should be something like: g++ Hello.cxx -Lc:/fltk-1.3.2/lib -Ic:/fltk-1.3.2 -o Hello.exe -lfltk_images -lfltk -lwsock32 -lgdi32 -luuid -lole32
My recommendation: use the provided fltk-config script to obtain the flags.
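For example (assuming the fltk-config produced by your FLTK build is on the PATH; the exact flags it prints depend on how FLTK was configured):
c++ Hello.cxx $(fltk-config --cxxflags) $(fltk-config --ldflags) -o Hello.exe
# or let the script drive the whole compile-and-link step:
fltk-config --compile Hello.cxx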
Here is a MinGW makefile I "stole" from here: http://www.fltk.org/articles.php?L1286 .
# Makefile for building simple FLTK programs
# using MinGW on the windows platform.
# I recommend setting C:\MinGW\bin AND C:\MinGW\msys\1.0\bin
# in the environment %PATH% variable on the development machine.
MINGW=C:/MinGW
MSYS=${MINGW}/msys/1.0
FLTK_CONFIG=${MSYS}/local/bin/fltk-config
INCLUDE=-I${MSYS}/local/include
LIBS=-L${MSYS}/local/lib
CC=${MINGW}/bin/g++.exe
RM=${MSYS}/bin/rm
LS=${MSYS}/bin/ls
EXE=dynamic_buttons_scroll.exe
SRC=$(shell ${LS} *.cxx)
OBJS=$(SRC:.cxx=.o)
CFLAGS=${INCLUDE} `${FLTK_CONFIG} --cxxflags`
LINK=${LIBS} `${FLTK_CONFIG} --ldflags`
all: ${OBJS}
	${CC} ${OBJS} ${LINK} -o ${EXE}
%.o: %.cxx
	${CC} ${INCLUDE} ${CFLAGS} -c $*.cxx -o $*.o
clean:
	- ${RM} ${EXE}
	- ${RM} ${OBJS}
tidy: all
	- ${RM} ${OBJS}
rebuild: clean all
# Remember, all indentations must be tabs... not spaces.

Cross-Compiling for an embedded ARM-based Linux system

I'm trying to compile some C code for an embedded (custom) ARM-based Linux system. I set up an Ubuntu VM with a cross-compiler named arm-linux-gnueabi-gcc-4.4 because it looked like what I needed. Now when I compile my code with this gcc, it produces a binary like this:
$ file test1
test1: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked
(uses shared libs), for GNU/Linux 2.6.31,
BuildID[sha1]=0x51b8d560584735be87adbfb60008d33b11fe5f07, not stripped
When I try to run this binary on the embedded Linux, I get
$ ./test1
-sh: ./test1: not found
Permissions are sufficient. I can only imagine that something's wrong with the binary format, so I looked at some working binary as reference:
$ file referenceBinary
referenceBinary: ELF 32-bit LSB executable, ARM, version 1, dynamically linked
(uses shared libs), stripped
I see that there are some differences, but I do not have the knowledge to derive what exactly I need to fix and how I can fix that. Can someone explain which difference is critical?
Another thing I looked at are the dependencies:
$ ldd test1
libc.so.6 => not found (0x00000000)
/lib/ld-linux.so.3 => /lib/ld-linux.so.3 (0x00000000)
(Interestingly, this works on the target system even though it cannot execute the binary.) The embedded system only has a libc.so.0 available. I guess I need to tell the compiler which libc version I want to link against, but as I understand it, gcc just links against the version it comes with. Is this correct? What can I do about it?
Edit: Here's the Makefile I use:
CC=/usr/bin/arm-linux-gnueabi-gcc-4.4
STRIP=/usr/bin/arm-linux-gnueabi-strip
CFLAGS=-I/usr/arm-linux-gnueabi/include
LDFLAGS=-nostdlib
LDLIBS=../libc.so.0
SRCS=test1.c
OBJS=$(subst .c,.o,$(SRCS))
all: test1
test1: $(OBJS)
	$(CC) $(LDFLAGS) -o main $(OBJS) $(LDLIBS)
	$(STRIP) main
depend: .depend
.depend: $(SRCS)
	rm -f ./.depend
	$(CC) $(CFLAGS) -MM $^>>./.depend;
clean:
	rm -f $(OBJS)
include .depend
What you should probably do is to install libc6 on the embedded system. Read this thread about a similar problem. The solution in post #5 was to install:
libc6_2.3.6.ds1-13etch9_arm.deb
linux-kernel-headers_2.6.18-7_arm.deb
libc6-dev_2.3.6.ds1-13etch9_arm.deb
Your other option is to get the libc from the embedded system onto your VM and then pass it to the gcc linker and use the -static option.
This solution was also mentioned in the above thread. Read more about static linking here.
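A minimal sketch of the static route, using the cross compiler from the question (one common variant is simply to link against the cross toolchain's own libc; everything ends up inside the binary, so the target's libc.so.0 no longer matters):
arm-linux-gnueabi-gcc-4.4 -static -o test1 test1.c
file test1    # should now report "statically linked"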
Other things to try:
In this thread they suggest removing the -mabi=apcs-gnu flag from your makefile if you're using one.
This article suggests feeding gcc the -nostdlib flag if you're compiling from the command line.
Or you could switch to using the arm-none-eabi-gcc compiler. References on this can be found here and here.

shared library locations for matlab mex files:

I am trying to write a matlab mex function which uses libhdf5; My Linux install provides libhdf5-1.8 shared libraries and headers. However, my version of Matlab, r2007b, provides a libhdf5.so from the 1.6 release. (Matlab .mat files bootstrap hdf5, evidently). When I compile the mex, it segfaults in Matlab. If I downgrade my version of libhdf5 to 1.6 (not a long-term option), the code compiles and runs fine.
question: how do I solve this problem? how do I tell the mex compilation process to link against /usr/lib64/libhdf5.so.6 instead of /opt/matlab/bin/glnxa64/libhdf5.so.0 ? When I try to do this using -Wl,-rpath-link,/usr/lib64 in my compilation, I get errors like:
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../x86_64-pc-linux-gnu/bin/ld: warning: libhdf5.so.0, needed by /opt/matlab/matlab75/bin/glnxa64/libmat.so, may conflict with libhdf5.so.6
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: ld returned 1 exit status
mex: link of 'hdf5_read_strings.mexa64' failed.
make: *** [hdf5_read_strings.mexa64] Error 1
Ack. The last resort would be to download a local copy of the hdf5-1.6.5 headers and be done with it, but this is not future-proof (a Matlab version upgrade is in my future). Any ideas?
EDIT: per Ramashalanka's excellent suggestions, I
A) called mex -v to get the 3 gcc commands; the last is the linker command;
B) called that linker command with a -v to get the collect command;
C) called that collect2 -v -t and the rest of the flags.
The relevant parts of my output:
/usr/bin/ld: mode elf_x86_64
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/crti.o
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/crtbeginS.o
hdf5_read_strings.o
mexversion.o
-lmx (/opt/matlab/matlab75/bin/glnxa64/libmx.so)
-lmex (/opt/matlab/matlab75/bin/glnxa64/libmex.so)
-lhdf5 (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/libhdf5.so)
/lib64/libz.so
-lm (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/libm.so)
-lstdc++ (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libstdc++.so)
-lgcc_s (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libgcc_s.so)
/lib64/libpthread.so.0
/lib64/libc.so.6
/lib64/ld-linux-x86-64.so.2
-lgcc_s (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libgcc_s.so)
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/crtendS.o
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/crtn.o
So, in fact the libhdf5.so from /usr/lib64 is being referenced. However, this is being overridden, I believe, by the environment variable LD_LIBRARY_PATH, which my version of Matlab automagically sets at run-time so it can locate its own versions of e.g. libmex.so, etc.
I am thinking that the crt_file.c example works either because it does not use the functions I am using (H5Dopen, which had a signature change in the move from 1.6 to 1.8; yes, I am using -DH5_USE_16_API), or, less likely, because it does not hit the parts of Matlab internals that need hdf5. Ack.
The following worked on my system:
Install hdf5 version 1.8.4 (you've already done this: I installed the source and compiled to ensure it is compatible with my system, that I get gcc versions and that I get the static libraries - e.g. the binaries offered for my system are icc specific).
Make a target file. You already have your own file. I used the simple h5_crtfile.c from here (it's a good idea to start with this simple file first and look for warnings). I changed main to mexFunction with the usual args and included mex.h.
Specify the static 1.8.4 library you want to load explicitly (the full path, with no -L needed for it) and don't include -lhdf5 in the LDFLAGS. Include a -t option so you can ensure that no dynamic hdf5 library is being loaded. You also need -lz, with zlib installed. For darwin we also need a -bundle in LDFLAGS:
mex CFLAGS='-I/usr/local/hdf5/include' LDFLAGS='-t /usr/local/hdf5/lib/libhdf5.a -lz -bundle' h5_crtfile.c -v
For linux, you need an equivalent position-independent call, e.g. fPIC and maybe -shared, but I don't have a linux system with a matlab license, so I can't check:
mex CFLAGS='-fPIC -I/usr/local/hdf5/include' LDFLAGS='-t /usr/local/hdf5/lib/libhdf5.a -lz -shared' h5_crtfile.c -v
Run the h5_crtfile mex file. This runs without problems on my machine. It just does a H5Fcreate and H5Fclose to create "file.h5" in the current directory, and when I call file file.h5 I get file.h5: Hierarchical Data Format (version 5) data.
Note that if I include a -lhdf5 above in step 3, then matlab aborts when I try to run the executable (because it then uses matlab's dynamic libraries which for me are version 1.6.5), so this is definitely solving the problem on my system.
Thanks for the question. My solution above is definitely much easier for me than what I was doing before. Hopefully the above works for you.
I am accepting Ramashalanka's answer because it led me to the exact solution which I will post here for completeness only:
download the hdf5-1.6.5 library from the hdf5 website, and install the header files in a local directory;
tell mex to look for "hdf5.h" in this local directory, rather than in the standard location (e.g. /usr/include.)
tell mex to compile my code and link against the shared object library provided by matlab, and do not use the -lhdf5 flag in LDFLAGS.
the command I used is, essentially:
/opt/matlab/matlab_default/bin/mex -v CC#gcc CXX#g++ CFLAGS#"-Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include" CXXFLAGS#"-Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include " -O -lmwblas -largeArrayDims -L/usr/lib64 hdf5_read_strings.c /opt/matlab/matlab_default/bin/glnxa64/libhdf5.so.0
this gets translated by mex into the commands:
gcc -c -I/opt/matlab/matlab75/extern/include -DMATLAB_MEX_FILE -Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include -O -DNDEBUG hdf5_read_strings.c
gcc -c -I/opt/matlab/matlab75/extern/include -DMATLAB_MEX_FILE -Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include -O -DNDEBUG /opt/matlab/matlab75/extern/src/mexversion.c
gcc -O -pthread -shared -Wl,--version-script,/opt/matlab/matlab75/extern/lib/glnxa64/mexFunction.map -Wl,--no-undefined -o hdf5_read_strings.mexa64 hdf5_read_strings.o mexversion.o -lmwblas -L/usr/lib64 /opt/matlab/matlab_default/bin/glnxa64/libhdf5.so.0 -Wl,-rpath-link,/opt/matlab/matlab_default/bin/glnxa64 -L/opt/matlab/matlab_default/bin/glnxa64 -lmx -lmex -lmat -lm -lstdc++
this solution should work on all my various target machines, at least until I upgrade to Matlab r2009a, which I believe uses hdf5-1.8. Thanks for all the help, and sorry for being so dense with this; I think I was overly committed to using the packaged version of hdf5 rather than a local set of header files.
Note this would all have been trivial if Mathworks had provided a set of the header files with the Matlab distribution...
