FreeType manual integration

I am trying to run a program that uses FreeType manually: I am not supposed to compile FreeType into a library, but to use the source code directly. At the moment my code compiles with no errors; however, when I run the program on Ubuntu it gives a segmentation fault. I believe the problem is related to the module structure. I am using FreeType to convert TTF fonts to bitmaps, so I included the tt, sfnt, and psnames modules, but I suspect something is wrong with their initialization.

Why are you avoiding Ubuntu's provided libfreetype6 and libfreetype6-dev packages?
I could understand that your goal might be to make changes to libfreetype, and thus to have an easy way to make the changes you need without affecting the rest of the system, but you're always going to want to use FreeType as a library. (Sure, you could statically link against it, but in my experience, statically linking usually adds problems instead of removing them.)
So you could install your own local copy of FreeType into /usr/local/lib/ or ~/local/lib/ (use ./configure --prefix=/usr/local or --prefix=~/local/).
Then when compiling your program, you'd use gcc -I ~/local/include -L ~/local/lib ...
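A minimal sketch of that workflow, assuming the FreeType 2 tarball has been unpacked and your program is a single file called myprog.c (both names are placeholders); note that recent FreeType releases install their headers under include/freetype2, so the -I path needs that extra component (or let pkg-config supply the flags):
# build and install a private copy of FreeType under ~/local
cd freetype-2.x.y            # placeholder for whichever version you downloaded
./configure --prefix=$HOME/local
make
make install
# compile against that copy rather than the system libfreetype
gcc myprog.c -o myprog \
    -I $HOME/local/include/freetype2 \
    -L $HOME/local/lib -lfreetype
At run time you may also need LD_LIBRARY_PATH=$HOME/local/lib (or a -Wl,-rpath flag) so the dynamic linker finds the private copy instead of the system one.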

Related

How to dynamically link your Nim application against musl?

I've written a web server in Nim using the Prologue framework. I would like to deploy that application in an Alpine Docker container.
As far as I'm aware, compiling means you dynamically link against your system's C library, which on any normal distro is glibc.
On Alpine, however, you use musl rather than glibc, so dynamically linking against glibc means my application will expect glibc symbols that do not exist there, since only musl's are available.
The big question that arises out of this for me, as a Python developer who jumped to Nim and knows very little about compilers, is:
How do I compile, so that I link dynamically against musl?
The folks on the Nim Discord pointed me to the answer. It consists of passing flags to the Nim compiler to swap out the C compiler Nim normally uses for its generated C code, so that it uses musl-gcc instead. This can be done with the --gcc.exe:"musl-gcc" and --gcc.linkerexe:"musl-gcc" flags.
Here is an example for Linux:
1. Install musl to get access to musl-gcc
Download the tar file from the official musl page
Unpack the tar file somewhere
Run ./configure in the unpacked directory, passing --prefix. WARNING: make sure that you do NOT set --prefix to /usr/local, as that may adversely affect your system. Use somewhere it is unlikely to overwrite existing files, e.g. /usr/local/musl. This path will further be referred to as <MUSL_INSTALL_PATH>
Run make && make install in the unpacked directory
Add <MUSL_INSTALL_PATH>/bin (where musl-gcc ends up) to your PATH environment variable
Validate that you set everything up correctly by opening a new terminal and checking that the musl-gcc command is available (a condensed sketch of these steps follows below)
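A condensed, hedged sketch of step 1 (the musl version number is a placeholder for whatever release you actually downloaded, and /usr/local/musl is just one possible <MUSL_INSTALL_PATH>):
# unpack, configure with a safe prefix, build and install
tar xzf musl-1.2.x.tar.gz
cd musl-1.2.x
./configure --prefix=/usr/local/musl
make
sudo make install
# musl-gcc is installed into <MUSL_INSTALL_PATH>/bin, so put that on PATH
export PATH="/usr/local/musl/bin:$PATH"
musl-gcc --version   # should print a gcc version banner if everything is set up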
2. Compile with musl
Create a compile command with --gcc.exe:"musl-gcc" to swap out gcc with musl-gcc and --gcc.linkerexe:"musl-gcc" to swap out the default linker with musl-gcc as well. An example can look like this:
nim c \
--gcc.exe:"musl-gcc" \
--gcc.linkerexe:"musl-gcc" \
--define:release \
--threads:on \
--mm:orc \
--deepcopy:on \
--define:lto \
--outdir:"." \
<PATH_TO_YOUR_MAIN_PROJECT_FILE>.nim
Run the command.
This should generate a binary that is dynamically linked against musl and thus can run within an alpine docker container.
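If you want to sanity-check the result before writing a Dockerfile, you can run the binary directly in a plain Alpine container; your_binary below is a placeholder for whatever the nim command produced:
docker run --rm -v "$PWD:/app" -w /app alpine:latest ./your_binary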

Linking against static Lua libraries on macOS

I'm trying to compile and link a program (using CMake) that uses Lua 5.3's C interface on macOS 10.15.7. However, I have these problems:
brew install lua@5.3 only installs dynamic libraries
I cannot copy static libraries built from source to /usr/local due to System Integrity Protection (?)
I don't know how to make CMake find the libraries if I put them anywhere else (using find_package(Lua 5.3 REQUIRED))
What's the easiest way to solve this?
If I correctly understand your question, you are trying to use Lua's C API, which means that you need access to the principal header files lua.h, lualib.h, and lauxlib.h, as well as the static library liblua.a that is created when the interpreter is built.
I would recommend downloading lua-5.3.5.tar.gz from lua.org and then building from source.
This can be done easily from the Terminal:
$ wget http://www.lua.org/ftp/lua-5.3.5.tar.gz
$ tar xzf lua-5.3.5.tar.gz
$ cd lua-5.3.5
$ make macosx
After that you should be able to do make install as well, which copies the Lua interpreter to /usr/local/bin, I believe.
If you do not want the key Lua header files installed into your include path, build your program with -I and -L flags pointing at wherever you keep them. Also, don't forget the -llua -ldl -lm flags when linking your program.
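As a rough example, assuming the tree was unpacked in your home directory and your program is a single file main.c (both are placeholders): after make macosx, the headers and liblua.a are left in the src/ directory, so you can point the compiler straight at it:
# add -ldl as well on platforms that need it, as mentioned above
cc main.c -o main \
    -I $HOME/lua-5.3.5/src \
    -L $HOME/lua-5.3.5/src \
    -llua -lm
If the build really has to go through CMake's find_package(Lua), then, as far as I recall the FindLua module's cache variables, passing -DLUA_INCLUDE_DIR=$HOME/lua-5.3.5/src and -DLUA_LIBRARY=$HOME/lua-5.3.5/src/liblua.a on the cmake command line should point it at the same files.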

GCC built from source in different location is incorrectly using same shared libs as native GCC

I'm a student doing research that involves extending the transactional memory (TM) capabilities of gcc. My goal is to make changes to the gcc source, build gcc from the modified source, and use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of the one it would normally use, and this becomes a "symbol" which needs to be resolved against libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol, but then at runtime my program gives a "relocation error" indicating the symbol is not defined in the file it is searching in: ./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed like the ones closest related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps do complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu because the library is installed system-wide, but the link script is installed in a GCC-private directory. This doesn’t happen with the default GCC build, so your own GCC build doesn’t do this, and you’ll see libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.
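For instance, using the paths already mentioned in the question (myprogram.cpp and the install dir under ~/project):
# option 1: point the dynamic linker at the freshly built libs at run time
export LD_LIBRARY_PATH=$HOME/project/install-dir/lib64
./myprogram
# option 2: bake the search path into the binary when linking it
$HOME/project/install-dir/bin/g++ -fgnu-tm \
    -Wl,-rpath=$HOME/project/install-dir/lib64 \
    -o myprogram myprogram.cpp
Adding -v to either g++ invocation shows which libitm the link step itself picked up.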

Compiling Glibc

I'm attempting to compile and package Glibc for what may eventually become my own Linux distribution. --with-headers=directory allows me to tell Glibc that the kernel headers are in a different place. But how do I tell Glibc to put its own headers in a non-standard location?
If you run ./configure --help, it will show many flags that control the various install paths. For headers, use --includedir=/some/other/path.
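A rough sketch, assuming an out-of-tree build (which glibc requires) and with every /opt/mydistro path being a placeholder for wherever you stage your distribution's files:
mkdir build && cd build
../glibc-2.x/configure \
    --prefix=/opt/mydistro/usr \
    --includedir=/opt/mydistro/headers \
    --with-headers=/opt/mydistro/kernel-headers/include
make
make install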

Boost library gives errors on Ubuntu

I am trying to compile a package on Ubuntu 8.10.
When executing ./configure I get the following error:
checking for Boost headers version >= 103700... no
configure: error: cannot find Boost headers version >= 103700
I installed the needed Boost packages beforehand using this command:
$ apt-get install libboost-dev libboost-graph-dev libboost-iostreams-dev
Can anybody help please?
Thank you. Now it works, but I get another error when running ./configure:
checking boost/iostreams/device/file_descriptor.hpp usability... yes
checking boost/iostreams/device/file_descriptor.hpp presence... yes
checking for boost/iostreams/device/file_descriptor.hpp... yes
checking for the Boost iostreams library... no
configure: error: cannot not find the flags to link with Boost iostreams
Any ideas please?
It could be that the version of boost that you're getting from the Ubuntu repository is too old (it's suggested here that the highest version for 8.10 is 1.35; it looks like your configure script is asking for 1.37). You might need to build from source; there's some more info in the answers to the question I linked to which will hopefully help.
UPDATE:
From your new error, it sounds like configure now can't find the boost_iostreams library. On my system it's /usr/lib/libboost_iostreams-mt.[a|so] - do you have those files (possibly in a different directory depending on where you installed boost)?
You can also try running ldconfig in case there's a missing symlink (from, say, libboost_iostreams-mt.so.1.37.0 to libboost_iostreams-mt.so).
Is this configure one generated by GNU autoconf? If it is, there should be a file called config.log in the same directory which contains a list of all the commands configure tried to run when looking for things. If there's anything in there about boost_iostreams could you post it?
One totally random guess: some examples I've found on the web link to boost_iostreams without the multi-threading suffix -mt - but I don't have those on my machine at all. Maybe your configure script is running into the same problem?
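A few commands that help answer those questions (the paths assume the stock Ubuntu locations; adjust them if you installed boost somewhere else):
ls /usr/lib/libboost_iostreams*        # which variants (.a/.so, with or without -mt) exist?
sudo ldconfig                          # refresh the shared-library cache and symlinks
grep -n boost_iostreams config.log     # show exactly which link commands configure tried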
UPDATE 2
The configure script seems to be looking for a single-threaded debug build of the boost iostreams library, which won't be produced by default when building from source on Linux. Also, the default on Linux is not to name the libraries based on the build configuration (so the libs you found in /usr/lib might not be the ones you installed from source unless you overrode this). This stuff isn't really explained on the boost website; I only found out by looking in the Jamroot file (bjam --help works too)! Anyway, to get a library with the right build configuration, and named correctly, I need to go into the root of the boost source tree and run:
sudo bjam --with-iostreams --layout=tagged variant=debug threading=single install
For me this puts the libraries (libboost_iostreams-d.a and the shared versions) into /usr/local/lib, where ld will find them by default, so this should be fine. If you need them to go somewhere else you can use the --prefix=... option to bjam, e.g. if you want them in /usr/lib you can do --prefix=/usr. If the package you're building needs more boost libraries you can remove --with-iostreams and then they'll all be built (or replace iostreams with the name of each other library you need).
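For instance, to drop the libraries into /usr/lib instead (still a sketch, not a recommendation to overwrite distro-managed files), the same command just gains the prefix:
sudo bjam --prefix=/usr --with-iostreams --layout=tagged variant=debug threading=single install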
A side note: I had to install the libbz2-dev package to get boost iostreams to build - it's easy to miss the error here if you build all of boost as there's so much output!
