I'm trying to use libpcap that was compiled with pf_ring.
I got the sources from ntop and compiled them.
However, there's something I don't understand (sorry for the newbie linking question):
I wanted to verify that my application uses the correct pcap version (the one with pfring), so I ran ldd on my binary.
The output showed only pfring.so and not pcap at all, although I dynamically linked against both libraries.
I looked at the Makefile of libpcap and saw that it links statically against pfring.a.
So I thought I wouldn't have to link with pfring at all, since it's already part of pcap, but then I got undefined reference errors.
Does anyone know why I get the undefined reference errors, and why I don't see libpcap in the ldd output when I link to it dynamically?
Thanks,
Ron
First of all, make sure you did all of the following steps:
# Installation
sudo su
cd kernel; make install
cd ../userland/lib; make install
insmod ./kernel/pf_ring.ko
Then remove the current libpcap and all of its dependencies from your system.
The pfring-enabled libpcap is under /userland/libpcapx, as you know.
If you are using the pf_ring-enabled libpcap in your application, simply link libpcap.a into your program. Since libpcap.a is a static archive, it will not show up in ldd output; ldd only lists dynamically linked libraries.
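A minimal sketch of such a link line (the paths are hypothetical; adjust them to your PF_RING checkout). Note that libpfring.a must still be named explicitly: a static archive does not record its own dependencies, which is also why linking libpcap.a alone produces undefined references to the pfring symbols.

# hypothetical paths into the PF_RING source tree
gcc -o capture capture.c \
    /path/to/PF_RING/userland/libpcap/libpcap.a \
    /path/to/PF_RING/userland/lib/libpfring.a \
    -lpthread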
I'm trying to use pp (the Perl compiler) to create an application that can run independently of the installed Perl library and interpreter.
It successfully creates a compiled executable, although I had to use the -x and -c options to get it to find dependencies. It runs on my machine, but when I try it on another machine I get the error below, so clearly there is still some unresolved dependency:
501 Protocol scheme 'https' is not supported (LWP::Protocol::https not installed)
I am running it on macOS 10.14.1 if that makes any difference. Thanks!
LWP::Protocol::https is loaded dynamically when needed, so pp has no way of knowing it's needed by default.
Solution 1
Pass -x to pp, and make sure the module is actually loaded during the trial run pp uses to determine which modules to include. This would probably be achieved by using LWP to make an HTTPS request during that run. --xargs=... might come in useful for this.
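For example, a sketch assuming your script takes the target URL as its first command-line argument (myapp.pl is a placeholder name):

# run the script once under -x so pp sees LWP::Protocol::https being loaded
pp -x --xargs='https://example.com' -o myapp myapp.pl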
Solution 2
Pass -M LWP::Protocol::https to pp. You could also pass -M 'LWP::Protocol::**' to get all the protocol handlers you have installed.
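A sketch of that invocation (again, myapp.pl is a placeholder):

# bundle the HTTPS handler explicitly, without relying on a trial run
pp -M LWP::Protocol::https -o myapp myapp.pl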
Solution 3
Add use LWP::Protocol::https (); to your script or to an included module. Including a comment indicating why you are doing this would be appropriate.
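Whichever solution you choose, it is worth first checking that the module is actually installed on the build machine, for example with a one-liner like this:

# prints the installed version, or fails if the module is missing
perl -MLWP::Protocol::https -e 'print $LWP::Protocol::https::VERSION, "\n"'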
You were building Net::SSLeay on macOS 10.14, linking it against libssl.44.dylib, which is not present on the macOS 10.12 machine where you try to run it.
I've found it annoying having to switch between build and test systems to find out which of the libraries are missing or incompatible and need to be packed.
I am now using the following strategy:
I use perlbrew instead of system perl.
For alien dependencies I use homebrew instead of the system libraries.
I build the packed executable using pp and run the resulting program with DYLD_PRINT_LIBRARIES=YES set in the environment (on the development machine).
I examine the list of loaded libraries and add all those referenced in the Homebrew directory tree (/usr/local/opt/ and /usr/local/Cellar/ in my case) using pp -l /full/path/name -l ...
I rebuild the executable.
I still check on a target machine before deploying, but chances are very high now that it just works.
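A sketch of that loop; the OpenSSL library paths below are just examples, the actual list comes from the DYLD_PRINT_LIBRARIES output:

# on the development machine: see which dylibs actually get loaded
export DYLD_PRINT_LIBRARIES=YES
./myapp 2>&1 | grep /usr/local

# rebuild, packing each Homebrew library that showed up
pp -l /usr/local/opt/openssl/lib/libssl.dylib \
   -l /usr/local/opt/openssl/lib/libcrypto.dylib \
   -o myapp myapp.pl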
I am trying to build a library with a different build system, but files in the library require a config.h header file that is generated after running the configure scripts generated by autoconf.
This is the sequence of steps I am following to try to generate the config.h file that is needed:
autoreconf -ivf
./configure --disable-dependency-tracking
The build system guarantees that the gflags library will be linked and that its headers will be available at preprocessing time. But the configure script exits with the following error:
configure: error: Please install google-gflags library
Is there some way I can get the list of required libraries (such as gflags) and then pass arguments to the configure script that tells it to assume that this library exists on the system? I went through the help output for both autoreconf and ./configure and wasn't able to figure this out.
Sorry for the long explanation of the problem; I am very new to autoconf, etc.
The answer to your question is: no, it is not possible to get a list of dependencies from autotools.
Why?
Well, autotools doesn't track dependencies at all.
Instead, it checks whether specific features are present on the system (e.g. a given header-file; or a given library file).
Now a specific header file can come from a variety of sources, e.g. depending on your distribution the foo.h header can be installed via
libfoo-dev (Debian and derivatives)
foo-devel (Fedora)
foo (upstream)
...
In your specific case, the maintainers of your project output a nice error message telling you to install a given package by name.
The maintainers of your project also chose to abort with a fatal error if a given dependency is not available.
The reason might well be that the project simply won't work without that dependency, and that it is impossible to compile the program without it.
Example
Your project might be written in C++ and thus require a C++ compiler.
Obviously there is little use in passing some flags to ./configure so that it assumes a C++ compiler is available if in reality there is none.
There is hope
However, not all is bad.
Your configure script might well have the ability to disable certain features (that appear to be hard requirements by default).
Just check ./configure --help and look for flags like
--enable-FOO
--disable-FOO
--with-BAR
--without-BAR
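A quick way to scan for those switches (a sketch):

# list only the optional-feature and optional-package switches
./configure --help | grep -E -- '--(enable|disable|with|without)-'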
Automation?
One thing to know about autotools is that configure really is a program (its source code being configure.ac) written in some arcane programming language (involving bash and m4).
This means that it can have practically any behavior, and there is no single standard way to achieve "dependency tracking".
What you're trying to do will not work, as umläute already said. On the other hand, depending on the package you're trying to build, you may be able to tell ./configure that a given library is there even if it isn't.
For instance, if the script uses pkg-config to check for the presence of a library, you can set FOO_CFLAGS and FOO_LIBS to override the presence check, effectively telling it "yes, those packages are there, you just don't know how to find them". But these variables are very package-specific, so you may have to provide more information if that's what you're looking for.
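As a sketch for the gflags check above (the GFLAGS_ prefix is an assumption; the actual variable names depend on how the check is named in configure.ac, and ./configure --help lists the ones the script honors):

# pretend gflags is present and point configure straight at it
./configure GFLAGS_CFLAGS='-I/opt/gflags/include' \
            GFLAGS_LIBS='-L/opt/gflags/lib -lgflags'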
I'm tackling the problem of compiling the vmime library using this guide with MinGW. As the guide states, first I need to compile the libiconv library with these commands (yep, I'm new to MinGW):
$ tar -xvvzf libiconv-1.13.1.tar.gz
$ cd ./libiconv-1.13.1
$ ./configure --prefix=/mingw #configures makefile, use /mingw as a prefix
$ make
$ make install
After all these commands, libiconv.dll.a appears in the libiconv-1.13.1\lib\.libs directory. Also, after the compilation a /bin directory appears, containing only one library: libcharset-1.dll.
My question is: how do I know whether the library compiled properly, without errors? Should I check the output from the MSYS console? There are tons of checks, and it seems like a pretty tedious task. Thanks in advance, glad to hear any advice!
You're building a GNU Autotools package.
./configure generates the makefile(s) needed by make to build the library on your particular system. If it thinks the library can't be built on your system, it will tell you why. It might just miss some reason why you can't build the library, because the library developers have to script the tests that it runs and might overlook some necessary ones; but if it misses something, then make will fail.
make executes all the commands necessary to build the library on your system. If any of them fail, then make will fail, and will tell you so unmistakably.
Likewise, make install does everything necessary to install the library under the default or specified prefix path.
Classically, Unix tools (like the autotools) will inform you when something goes wrong and not inform you that nothing went wrong.
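So in practice you can simply chain the steps and rely on their exit codes, e.g.:

# each step runs only if the previous one succeeded
./configure --prefix=/mingw && make && make install && echo "build OK"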
When I run ./configure on OpenSolaris in an attempt to install software, I get the following error:
error: C compiler cannot create executables
Then I checked on the net and found claims that it's due to a missing module called gcc-lib6-dev. But how can I install it?
Sounds like you found an answer for a completely different OS that doesn't apply to OpenSolaris (lib6 sounds like an old Linux release). Look in the generated config.log for more detailed error messages that give better information as to the actual problem.
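For example, a sketch of pulling out the relevant part (the real compiler error appears in the lines just before the failing check is recorded):

# show the context leading up to the failure in config.log
grep -n -B 15 'cannot create executables' config.log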
I have a project which requires OpenCL. I have installed CUDA and OpenCL on my machine, but when I 'make' my project the following error occurs:
CL/cl.h: No such file or directory
I know that I can create a symbolic link (on my Unix (Ubuntu) system) to fix the problem:
ln -s /usr/include/nvidia-current/CL
But I consider this a quick fix and not the correct solution. I would like to handle this in my Makefile (I guess) so that a simple "make" command would compile. How could I do this?
You need to pass an appropriate -I option to the compiler (by setting CPPFLAGS or CFLAGS, for example). -I/usr/include/nvidia-current sounds like it'd work.
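For instance, if the Makefile follows the common convention of using CPPFLAGS in its compile rules, you can pass the path on the command line without editing anything (a sketch):

# works as long as the Makefile honors the conventional CPPFLAGS variable
make CPPFLAGS='-I/usr/include/nvidia-current'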
I saw this thread: compile opencl program using CL/cl.h file.
I installed CUDA 7.5 and added the link below in /usr/include; it works for my OpenCL program. It looks like CUDA forgot to create this link during installation.
ln -s /usr/local/cuda-7.5/include/CL /usr/include
Are you using an Ubuntu or Debian distro? Then you can now use this package:
sudo apt-get install opencl-headers