The (slightly) outdated pyrocksdb documentation says:
"If you do not want to call make install export the following environment variables:"
$ export CPLUS_INCLUDE_PATH=${CPLUS_INCLUDE_PATH}:`pwd`/include
$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:`pwd`
$ export LIBRARY_PATH=${LIBRARY_PATH}:`pwd`
But the installation instructions for RocksDB do not seem to mention any sort of install target!
Is there an accepted procedure for installing RocksDB from source?
My thoughts are to just copy the contents of the include directory from the rocksdb directory into somewhere like /usr/local/include and copy the librocksdb.so and librocksdb.a files into /usr/local/lib. Is this an acceptable method?
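Concretely, the manual copy I have in mind would look something like this (just a sketch; the exact file names depend on how the library was built):
$ sudo cp -r include/rocksdb /usr/local/include/
$ sudo cp librocksdb.a librocksdb.so /usr/local/lib/
$ sudo ldconfig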
Note: Exporting environment variables was the less preferable method for me, as I built rocksdb in a directory inside my home folder; I am hoping for a cleaner solution (interpret that how you want).
RocksDB has recently gained an install target. If you use the latest version, you should be able to run make install in the RocksDB source directory.
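A minimal sketch of what that looks like from the RocksDB source directory (the exact targets may differ between versions):
$ make static_lib
$ sudo make install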
There is no install target in the current Makefile.
This breaks the long-established conventions for writing Makefiles (or pretty much every other build system...); it should be considered a defect.
Without spending a lot of time analysing the build I can't be sure, but the install target should be something like:
prefix=/usr/local
bindir=$(prefix)/bin
# Normally you'd write a macro for this; 'lib' for 32-bit, 'lib64' for 64...
libdir=$(prefix)/lib64
includedir=$(prefix)/include

# Define this to be the directory (or directories) the headers are installed into.
# This should not include the 'include' element:
#   include/rocksdb/stuff -> rocksdb/stuff
HEADER_DIRS=...

# Define this so all paths are relative to the $CWD/include directory,
# so include/rocksdb/foo.h -> HEADER_FILES=rocksdb/foo.h
HEADER_FILES=...

.PHONY: install
install: $(TOOLS) $(LIBRARY) $(SHARED) $(MAKEFILES)
	mkdir -p $(DESTDIR)$(bindir)
	mkdir -p $(DESTDIR)$(libdir)
	mkdir -p $(DESTDIR)$(includedir)
	for tool in $(TOOLS); do \
		install -m 755 $$tool $(DESTDIR)$(bindir); \
	done
	# No, libraries should NOT be executable on Linux.
	install -m 644 $(LIBRARY) $(DESTDIR)$(libdir)
	install -m 644 $(SHARED3) $(DESTDIR)$(libdir)
	ln -s $(SHARED3) $(DESTDIR)$(libdir)/$(SHARED2)
	ln -s $(SHARED2) $(DESTDIR)$(libdir)/$(SHARED1)
	for header_dir in $(HEADER_DIRS); do \
		mkdir -p $(DESTDIR)$(includedir)/$$header_dir; \
	done
	for header in $(HEADER_FILES); do \
		install -m 644 include/$$header $(DESTDIR)$(includedir)/$$header; \
	done
This will then allow you to install the files into /usr/local by simply doing:
make install
However, the reason it's so heavily parameterised is so you can change the destination folder without having to modify the Makefile. For example, to change the destination to /usr, you simply do:
make prefix=/usr install
Alternatively, if you'd like to test the installation process without messing with your filesystem, you could do:
make DESTDIR=/tmp/rocksdb_install_test prefix=/usr install
This would put the files into /tmp/rocksdb_install_test/usr, which you can then check to see if they're where you want them to be... when you're happy, you can just do rm -Rf /tmp/rocksdb_install_test to clean up.
The variables I've used are essential for packaging with RPM or DEB.
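For illustration (the paths and macros here are generic assumptions, not taken from any particular package), an RPM spec's %install section typically leans on exactly these variables:
%install
make DESTDIR=%{buildroot} prefix=/usr libdir=/usr/lib64 install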
I am using Ubuntu 16.04.
DEBUG_LEVEL=0 make shared_lib install-shared
This way, the library is built and installed in release (production) mode.
If you want to save time, you can specify the number of processors used in the build by passing -j[n]; in my case, -j4:
DEBUG_LEVEL=0 make -j4 shared_lib install-shared
On Ubuntu this is sufficient, but on an Ubuntu Docker image you should also specify where the library was installed:
export LD_LIBRARY_PATH=/usr/local/lib
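In a Dockerfile that could look roughly like this (the base image and install path are assumptions):
FROM ubuntu:16.04
# ... build and install RocksDB with 'make shared_lib install-shared' here ...
ENV LD_LIBRARY_PATH=/usr/local/lib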
Hope this helps.
Kemper
Related
Is there a way to have a 'watch' target in my Makefile which would keep looping and rebuilding the project every time a source file changes?
I have this for my LaTeX project:
.PHONY: monitor
monitor:
	while true; do \
		inotifywait -e modify -q *.tex *.cls; make all; \
	done
Interesting arguments:
-q for quiet
-r for recursive (if you want to watch the whole src folder)
-e to list specific events (if your editor does more file operations and retriggers the build way too often)
--exclude to exclude some files or directories (if your src folder contains build artifacts), to make sure the build itself will not retrigger this loop, which would effectively be an infinite loop without any delay (see the combined example below)
More arguments here (inotify tools are amazing):
https://linux.die.net/man/1/inotifywait
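Putting a few of those flags together, a watch rule for a whole source tree might look like this (the src directory and the exclude pattern are assumptions):
.PHONY: watch
watch:
	while true; do \
		inotifywait -q -r -e modify,create,delete --exclude '(^|/)build/' src; \
		make all; \
	done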
Depending on your distribution you might have to install a separate package; on my Debian I had to do
sudo apt-get install inotify-tools
Every now and then a new tarball or a new xyHub/Lab repository needs to be built. They usually come with a Makefile, or an Autotools/CMake/XY generator provides one on the fly. As the maintainers most likely use another operating system or distribution than the one I am currently running, the assumptions that went into their Makefiles usually do not fit my filesystem hierarchy (lib vs. lib64, bin vs. sbin, /usr/lib vs. /lib, and so on). As the final command in the build sequence usually is
sudo make install
it is quite annoying to move thousands of files to the correct place, or, even worse, to determine which files of my distribution were overwritten. Here GNU Make's dry-run mode comes in very handy. Running
sudo make -n install
first saves me the trouble of cleaning up my file system, by just printing all the commands from all active GNU Make recipes without executing them. In the case of a handwritten or Autotools-generated Makefile this works as intended. If the Makefile contains something like:
# PREFIX is an environment variable, but if it is not set, then set a default value
ifeq ($(PREFIX),)
PREFIX := /usr/local
endif

install: unixlib.a
	install -d $(DESTDIR)$(PREFIX)/lib/
	install -m 644 unixlib.a $(DESTDIR)$(PREFIX)/lib/
	install -d $(DESTDIR)$(PREFIX)/include/
	install -m 644 unixlib.h $(DESTDIR)$(PREFIX)/include/
I would see exactly what would happen: every install/cp/mv command would be printed with full path information. If I made a mistake with the install prefix in the configure step, I would see it there. If the default in the Makefile is weird because it comes from another OS, I would see it there too.
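For the handwritten Makefile above, the dry run prints the fully expanded commands, roughly like this (assuming the default PREFIX and an already-built unixlib.a):
$ make -n install
install -d /usr/local/lib/
install -m 644 unixlib.a /usr/local/lib/
install -d /usr/local/include/
install -m 644 unixlib.h /usr/local/include/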
Now in case of a CMake-generated Makefile this is different. Doing
mkdir build && cd build
cmake ..
make
sudo make -n install
only produces output that ends in
...
make -f CMakeFiles/Makefile2 preinstall
/usr/bin/cmake -E cmake_echo_color --switch= --cyan "Install the project..."
/usr/bin/cmake -P cmake_install.cmake
As these commands are only printed, not executed, I do not get all the cp/mv/mkdir/install/etc. commands that I would like to see first, before I let the Makefile touch the file system.
Is there a way to get the list of commands that would be executed from the install target in a CMake-generated Makefile, as is the case with handwritten or Autotools-generated ones?
Is there a way to get the list of commands that would be executed from the install target.
Actually, the core part of installation process is contained in the file cmake_install.cmake (which is created in the build directory). This file is processed as CMake script using cmake -P flow of the cmake executable.
The script cmake_install.cmake performs the installation of files with the install command. The semantics of the install command as used by the script differ from those described in the documentation: internally, CMake uses some undocumented features of the command.
But it shouldn't be too hard to understand the cmake_install.cmake script in general and deduce the install paths from it.
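If the goal is mainly to see which files would land where, one option (a general sketch, not specific to any project) is to stage the install into a scratch directory via DESTDIR, which CMake-generated install targets honour, and then inspect the result:
$ make DESTDIR=/tmp/install_preview install
$ find /tmp/install_preview -type f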
While in a conda environment (source activate), how can I make install into the environment library directories (lib, bin, etc.) and not the system directories?
Note that I do NOT want answers related to conda-build.
Use the -C (change directory) argument to tell make to use a different directory:
make -C $CONDA_PREFIX/lib install
From the manual:
-C dir, --directory=dir
Change to directory dir before reading the makefiles or doing anything else.
I've written an SCons build chain for a little C project, but I'm afraid users won't like to be told "You should install SCons first. Besides, it's really cool!" (especially my professor, as he's kind of from the old guard).
Is there a way I can set up a Makefile that will wrap scons, not requiring it to be installed on the target system?
After looking for such a solution some time ago, I ended up writing a Makefile for this purpose.
Because SCons also comes as a drop-in userspace package, scons-local (see the download page), one can fetch it and run it. Here is a dissected and commented version of my Makefile, which I also uploaded as a gist.
all: get_scons
	@$(SCONS_EXE)

↑ The default action depends on scons being available, and simply runs the scons command (set later in the script); the @ symbol prevents make from printing the command.
SCONS_VERSION=2.3.4

scons-local-%.tar.gz:
	curl -L http://sourceforge.net/projects/scons/files/scons-local/$(SCONS_VERSION)/scons-local-$(SCONS_VERSION).tar.gz > scons-local-$(SCONS_VERSION).tar.gz
	touch scons-local-$(SCONS_VERSION).tar.gz

scons-local: scons-local-$(SCONS_VERSION).tar.gz
	mkdir -p scons-local
	tar xzf scons-local-$(SCONS_VERSION).tar.gz --directory scons-local
	touch scons-local
↑ Set up the rules for fetching the tarball and unpacking it into the scons-local directory.
NATIVE_SCONS=$(strip $(shell which scons 2>/dev/null))

ifeq ($(NATIVE_SCONS),)
SCONS_EXE=python2 ./scons-local/scons.py
get_scons: scons-local
	@echo "Couldn't find an installation of SCons, using a local copy"
else
SCONS_EXE=$(NATIVE_SCONS)
get_scons:
	@echo "Found SCons installation at $(SCONS_EXE)"
endif
↑ Look for the scons executable in the search path (using the which command): if it is available, set up the get_scons target to simply print that it was found. If it is not available, make the get_scons target depend on the scons-local target defined earlier.
clean:
	$(SCONS_EXE) -c
	rm -rf scons-local
	rm -f scons-local-*.tar.gz

.PHONY: all clean get_scons
↑ Finally, set up the clean target, which delegates to scons and deletes the local copy of SCons afterwards. The .PHONY rule tells make that the listed targets do not correspond to files being created.
At this point, one could add more proxy rules of the kind:
mytarget: get_scons
	@$(SCONS_EXE) mytarget

This will invoke scons with the corresponding target.
Hope this is useful; feel free to correct me in case there's something wrong (I'm actually not a Makefile expert, and I'm trying not to become one by using SCons instead :P )
I downloaded GCC 4.6.2 (with GMP, MPFR, and MPC) and did a build. I can see the g++ executable in the build/gcc directory.
When I try to use it
./g++ test.cpp
I get the following error:
g++: error trying to exec 'cc1plus': execvp: No such file or directory
How to resolve this?
How to use the newly built g++ by default?
PS.
I followed these steps to install and I didn't see any error.
$ export CC=/usr/bin/gcc-4.2
$ export CXX=/usr/bin/g++-4.2
$ export CPP=/usr/bin/cpp-4.2
$ export LD=/usr/bin/ld # not /usr/bin/gcc-4.2!!
Clean also your $PATH as much as possible:
$ export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/texbin:/usr/X11/bin
(I don't know exactly if this is necessary, but it works fine for me, and when you close the Terminal the PATH reverts to its original setting.)
1. Download the GCC, GMP, MPFR, and MPC sources. The links are in the original post.
2. Save everything in, say, ~/Downloads (or any directory you prefer).
3. Start the Terminal and run the following commands (change the filenames according to the version you have downloaded):
$ cd # go to your home directory
$ mkdir src ; cd src
$ tar -xzf ~/Downloads/gcc-4.6.1.tar.gz # change the path if you have saved the sources elsewhere!
$ tar -xjf ~/Downloads/gmp-5.0.2.tar.bz2
$ tar -xzf ~/Downloads/mpc-0.9.tar.gz
$ tar -xzf ~/Downloads/mpfr-3.0.1.tar.gz
$ cd gcc-4.6.1
$ ln -s ../gmp-5.0.2 gmp
$ ln -s ../mpc-0.9 mpc
$ ln -s ../mpfr-3.0.1 mpfr
4. Now create a build directory in ~/src but **outside** the gcc source tree, so that it can easily be cleaned up to restart everything from scratch:
$ cd ~/src
$ mkdir build ; cd build
$ ../gcc-4.6.1/configure
$ make
$ make install
Congratulations on getting GCC 4.6.2 built on Mac OS X 10.7 - I was not successful (even with the 4.6.1 compiler that I'd built OK). If I had notes about what went wrong, I'd share war stories with you.
GCC is compiled to be installed in some particular location (/usr/local by default, I believe). When you run it from the build area, you have to redirect it so it finds its executables in the correct alternative location; there are options to do that.
Run g++ --help. The -B option is probably what you need. My G++ 4.6.1 gives this output:
$ g++ -print-search-dirs
install: /usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/
programs: =/usr/gcc/v4.6.1/libexec/gcc/x86_64-apple-darwin11.1.0/4.6.1/:/usr/gcc/v4.6.1/libexec/gcc/x86_64-apple-darwin11.1.0/4.6.1/:/usr/gcc/v4.6.1/libexec/gcc/x86_64-apple-darwin11.1.0/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/../../../../x86_64-apple-darwin11.1.0/bin/x86_64-apple-darwin11.1.0/4.6.1/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/../../../../x86_64-apple-darwin11.1.0/bin/
libraries: =/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/../../../../x86_64-apple-darwin11.1.0/lib/x86_64-apple-darwin11.1.0/4.6.1/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/../../../../x86_64-apple-darwin11.1.0/lib/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/../../../x86_64-apple-darwin11.1.0/4.6.1/:/usr/gcc/v4.6.1/lib/gcc/x86_64-apple-darwin11.1.0/4.6.1/../../../:/lib/x86_64-apple-darwin11.1.0/4.6.1/:/lib/:/usr/lib/x86_64-apple-darwin11.1.0/4.6.1/:/usr/lib/
$
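For instance, if cc1plus ended up in the same build/gcc directory as the uninstalled g++ (typically the case in a GCC build tree), something along these lines may work; the paths here are assumptions:
$ cd build/gcc
$ ./g++ -B. test.cpp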
I specified --prefix=/usr/gcc/v4.6.1 when I ran configure. (Tip: The configured prefix should either not exist when you do the build or the path should not involve any symlinks - because GCC will build the 'real path' into the binary (resolving all symlinks), not the value given on the command line. This matters if you have any plans to use the code on different machines.)