Checking for a C library - bash

What is the best method in bash to check to see if a C library is installed?
Per comments, to be more clear:
I run an install script that looks for the libexpat header file (expat.h)
Currently it is doing this via:
if ! locate libexpat.so 1> /dev/null; then
However, it should be looking for expat.h, and this approach still requires the mlocate database to be updated first.
Any better way?

You can do as autoconf does (or, more accurately, configure scripts generated by autoconf do), and test whether you can compile a C program that uses the library.
If you don't have a compiler or the development tools installed, your options are more limited.
It also isn't clear whether you want to search out-of-the-way locations for the library, or if you are only concerned with it being installed in the main library directories. Also, are you looking only for shared libraries, or are you looking for static libraries? The naming conventions differ wildly depending on what you're up to, and which machine type you are doing it on (Linux and Solaris vs AIX vs Mac OS X vs HP-UX (older, for PA-RISC) vs Windows, for instance).
If you're looking for static libraries, you'll be hard-pressed to use them without the compiler (and probably the headers). If you're looking for shared libraries, you can use those without the development paraphernalia if you compile the program on some other machine and copy it to the target machine.
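For illustration, here is a minimal sketch of that configure-style probe in plain bash, assuming gcc is on the PATH and libexpat is the library being tested for (swap in whatever header and library your script actually needs):
# write a throwaway test program that uses the library's header and API
cat > conftest.c <<'EOF'
#include <expat.h>
int main(void) { XML_ParserCreate(NULL); return 0; }
EOF
# if it compiles and links, both the headers and the library are usable
if gcc conftest.c -lexpat -o conftest 2>/dev/null; then
    echo "expat development files found"
else
    echo "expat development files missing"
fi
rm -f conftest conftest.c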

The best approach I found was from @cnicutar's comment, thanks.
Specifically to do this in a bash script:
# whereis strips the .h extension and (with -b) searches its standard binary
# locations; its output looks like "expat: /path/...", so cut -c8- drops the
# 7-character "expat: " prefix and keeps the path
expat_loc=$(whereis -b expat.h | cut -c8-)
if [[ -e "$expat_loc" ]]; then
    # something something
fi

If you need the library at runtime, you need a specific version present for whatever you are going to run; try running that thing.
If you need the headers, try to compile something that includes them.
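For the runtime case, a sketch (assuming a glibc system where ldconfig is available, possibly only as /sbin/ldconfig) is to query the dynamic loader cache instead of the mlocate database:
# ldconfig -p prints the shared libraries currently known to the loader cache
if ldconfig -p | grep -q 'libexpat\.so'; then
    echo "libexpat runtime library is installed"
fi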

Related

JSON-C build on Windows Platform

Disclaimer: please read the question carefully; it has a twist, so read it to the end.
JSON-C is one of the most popular libraries for working with JSON from C. A quick outline of the current work: the code I am building is multi-platform, Linux and Windows are the currently supported platforms, and I have a small issue with the Windows side of the JSON-C part.
I'm using Cygwin for Windows development, and when I compile the JSON-C code with CMake as per the instructions on its GitHub page, it works out quite well and the build system is able to generate DLLs for Windows. But if you have worked with Cygwin, you know that whatever is built with Cygwin depends on its run-time environment (cygwin1.dll) (Why does GCC-Windows depend on cygwin?) and is not an independent DLL that can be moved to a different system with the same architecture. With this dependency on Cygwin, if my project is built on Windows I have to either carry around the Cygwin run-time environment or make sure Cygwin is installed on the target system for smooth execution. That is a dependency I do not wish to have for my project; it can ruin the user experience.
So what I want help with here: is there a way to build JSON-C independent of Cygwin (its run-time environment)?
NOTE: I already know that if one wishes to create such an independent DLL for Windows using Cygwin, it can be done with a few compiler parameters and some additional macros placed in front of the function declarations, as described here: Creating a DLL in GCC or Cygwin?
But I don't see such support in the JSON-C source code for Windows, so I was wondering whether the JSON-C dev team has made some provision for this via the build system; if so, I'm keen to know about it.
PS: I have not dipped into the JSON-C build system yet because of my other development work, so if anyone out there (my beloved community) has anything on this, please share; that would be terrific.
EDIT
Forgot to mention the version I'm using :p
json-c-0.13.1-20180305
I am able to build JSON-C 0.14-20200419 (from https://github.com/json-c/json-c) under MSYS2 with MinGW-w64 - both static and shared - using these instructions (replace /usr/local as needed):
INSTALLPREFIX=/usr/local
mkdir -p build_static build_shared &&
cmake.exe -Wno-dev -GNinja -DCMAKE_INSTALL_PREFIX:PATH=$INSTALLPREFIX -DCMAKE_BUILD_TYPE:STRING=Release -DBUILD_SHARED_LIBS:BOOL=OFF -DBUILD_TESTING:BOOL=OFF -S. -Bbuild_static &&
cmake.exe -Wno-dev -GNinja -DCMAKE_INSTALL_PREFIX:PATH=$INSTALLPREFIX -DCMAKE_BUILD_TYPE:STRING=Release -DBUILD_SHARED_LIBS:BOOL=ON -DBUILD_TESTING:BOOL=OFF -S. -Bbuild_shared &&
ninja -Cbuild_static install/strip &&
ninja -Cbuild_shared install/strip &&
echo Success
If you don't have Ninja, you can probably also use -G "MSYS Makefiles" and run make instead of ninja.
Note that MinGW-w64 is different from Cygwin in that it compiles to native Windows binaries without dependencies on a compatibility layer (like Cygwin's cygwin1.dll).
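If you want to verify that yourself, you can list the DLLs a built library imports from an MSYS2 shell; the install path and DLL name below are only examples, adjust them to whatever your build actually produced:
# a MinGW-w64 build should import only Windows system DLLs (KERNEL32.dll,
# msvcrt.dll, ...) and never cygwin1.dll
objdump -p /usr/local/bin/libjson-c-5.dll | grep "DLL Name"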

How to run a C program in android-x86 terminal?

I have a C program which was compiled with gcc on Ubuntu. I want to run that executable in the Android terminal. When I run it, it shows either "file or directory is not found" or "not executable:ELF32".
I want to run the code in the Android terminal. Is there any way, or any flags for gcc (or another compiler), so that I can run my code in the Android terminal?
Android does not use the same system libraries as Ubuntu, so they will not be found.
There are two solutions:
Copy the libraries you need.
If you can place them in the same filesystem locations they have in Ubuntu, great; otherwise you'll need to run ld-linux.so manually and tell it where to find the libraries (see the sketch after this list). Or you could relink the program so that it expects to find the dynamic linker and libraries in a non-standard place. You might also use a chroot, but that requires root, and you'd need to find a chroot binary that works.
Use a static link.
This usually just means passing -static to GCC. You get a much larger binary that should be entirely self-contained, with no dependencies. It requires that static versions of all your libraries are available on your build system. Also, some features (such as DNS lookup) always expect a shared library, so they won't work this way.
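As a sketch of the first option, assuming you have copied the Ubuntu libraries and the dynamic linker itself to /data/local/tmp/lib on the device and your program to /data/local/tmp (all paths here are illustrative):
# invoke the copied dynamic linker directly and point it at the copied libraries
# (use ld-linux-x86-64.so.2 instead if the program is 64-bit)
/data/local/tmp/lib/ld-linux.so.2 --library-path /data/local/tmp/lib /data/local/tmp/myprog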
Even then, you should expect some Linux features to not work. Basically, anything that requires hardware features or configuration files in /etc is going to need a lot of effort. There are various projects that have done this already (search "linux chroot android").
I'm not sure what the "not executable:ELF32" message means, but you should check whether you're building 32 or 64-bit executables, and which the Android binaries are using (file <whatever> should tell you).
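As a sketch of the second option, run on the Ubuntu build machine (the program name is illustrative):
gcc -static -o myprog myprog.c   # add -m32 (needs gcc-multilib) if the device wants 32-bit binaries
file myprog                      # should report a statically linked ELF of the expected width
# then copy myprog to the device (e.g. to /data/local/tmp) and run it from the terminal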

meaning of ./configure --with-ssl=openssl

I followed this install wget tutorial,
After I ran this
./configure --with-ssl=openssl
It ran so many checks. What exactly did it do? Did it change anything on my system?
If it did, is it safer or more fault-proof to use a package management tool like MacPorts so that this 'configure' step is not done manually, or do those tools do the same thing in order to make wget work?
Sorry, I am pretty much a noob with shell commands.
Thanks
It's part of the build process. The configure script collects information about your system and build options into a local file, nothing more.
Typically, this script is created by autoconf and is used to figure out whether the prerequisites for a build are properly installed, etc. It records the results in files such as config.status and config.log, and also generates a Makefile and/or other build infrastructure so that make can concentrate on compiling and linking the source files.
Neither configure nor make should be expected to change anything outside of the directory tree where you run them.
Conventionally, make install will copy the final build artefacts into place so that other parts of your system can find them and use them.
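For reference, the usual sequence when building from source looks like this; only the last step touches anything outside the source tree, and the default install prefix is typically /usr/local (your tutorial may differ):
./configure --with-ssl=openssl   # inspect the system, write config.status/config.log and the Makefiles
make                             # compile wget inside the source tree
sudo make install                # copy the built wget under the configured prefix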
See also http://www.edwardrosten.com/code/autoconf/
A prepackaged binary will already have been built on a remote system before it was packaged (though there are package managers which allow or require you to build locally; Gentoo Linux famously uses the latter approach). It is often the simplest way to get a tool if you don't have special requirements, such as building against a specific SSL version, disabling SSL entirely, or getting a bleeding-edge version before anybody has packaged it.

How do I create an executable file with OpenCOBOL?

Upon finishing a COBOL program, how do I compile it into an executable file that may be run on other PCs? I'm using OpenCOBOL via cygwin.
Check out this getting started page from the user manual for OpenCOBOL:
But in case the link is broken, just do this:
$ cobc -x hello.cob
$ ./hello
cobc is the compiler. hello.cob is the source file. The output is simply the file hello which can be run by calling ./hello. The -x option is necessary to build an executable.
However, as with all compiled programs, it is compiled for the machine it was built on. It will work on machines with similar architectures, but you don't get true cross-platform ability unless you're using an interpreted language like Python or Java.
If you compile with Cygwin, the target computers also need Cygwin, or in particular the cygwin dynamic libraries along with the OpenCOBOL runtimes.
Many times, you can also compile under MinGW, which lessens the dependencies, but also lessens the available POSIX features.
Easiest path: install OpenCOBOL and Cygwin on the target machines and you'll be good to go; otherwise you'll need to produce release packages with all the dependencies and instructions for PATH settings.
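To see exactly which DLLs you would have to ship, Cygwin's cygcheck tool can list an executable's dependencies; a sketch, assuming the program built above ended up as hello.exe:
cygcheck ./hello.exe
# prints every DLL the program loads (cygwin1.dll, the OpenCOBOL/libcob runtime, ...),
# which is what you would have to package alongside it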

USB GCC Development Environment with Libraries

I'm trying to get something of an environment on a USB stick to develop C++ code in. I plan to use other computers, most of the time running Linux, to work on this from the command line using g++ and make.
The problem is that I need to use some libraries, like Lua and OpenGL, which the computers don't have. I cannot add them to the normal directories; I do not have root on these computers. Most of the solutions I've found involve putting things in /usr/lib/ and the like, but I cannot do that. I've also tried adding options like '-L/media//lib', which is where they are kept, and it didn't work: when compiling, I get the same errors I got when first switching to an OS without the libraries installed.
Is there somewhere on the computer outside of /usr/ I can put them, or a way to make gcc 'see' them?
You need more than the libraries to be able to compile code utilizing those libraries. (I'm assuming Linux here; things might be slightly different on e.g. OS X, the BSDs, Cygwin, or MinGW.)
Libraries
For development you need these 3 things when your code uses a library:
The library header files, .h files
The library development files, libXXX.so or libXXX.a typically
The library runtime files , libXXX.so.Y where Y is a version number. These are not needed if you statically link in the library.
You seem to be missing the header files(?). Add them to your USB stick, say under /media/include.
Development
Use (e.g.) the compiler flag -I/media/include when compiling source code to refer to a non-standard location of header files.
Use the compiler/linker flag -L/media/lib to refer to non-standard location of libraries.
You might be missing the first step.
Running
For dynamically linked libraries, the system will load them only from default locations, typically /lib/ and /usr/lib/.
Learn the ldd tool to help debug this step.
You need to tell the system where to load additional libraries from when you're running a program; here are three alternatives (a combined sketch follows the list):
System-wide: edit /etc/ld.so.conf, add /media/lib there, and run ldconfig afterwards.
Local, to the current shell only: set the LD_LIBRARY_PATH environment variable to refer to /media/lib: export LD_LIBRARY_PATH=/media/lib
Per executable: hard-code the non-standard library path into the executable by adding this to the linking step when creating the executable: -Wl,-rpath,/media/lib
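Putting it together, a combined sketch assuming the headers sit under /media/include, the libraries under /media/lib, and Lua as the example library (adjust the paths to the stick's real mount point and the library name to what is actually on it):
g++ -I/media/include -c main.cpp -o main.o                    # headers from the stick
g++ main.o -L/media/lib -llua -Wl,-rpath,/media/lib -o myapp  # link against, and hard-code, the stick's lib dir
ldd ./myapp                                                   # every library should resolve; nothing should say "not found"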
Etc.
There could be other reasons things are not working out. If so, show us the output of ls -l /media/lib, where you put the library header files, the command line you use to compile/link, and the exact errors you get. Common causes include:
Missing headers and/or development libraries (for dynamic libraries there is usually a symlink from libXXX.so to libXXX.so.Y; the linker needs libXXX.so and will not look directly at libXXX.so.Y).
Using libraries not compatible with your current OS/architecture (libraries compiled on one Linux distro are often not compatible with another distro, or even another minor version of the same distro).
Using a USB stick with a FAT32 filesystem: you'll get in trouble with symlinks.
