I am learning to build a compiler using LLVM as the back end.
I followed the steps in Getting Started with the LLVM System up to the section on setting up your environment.
What is the specific location for [/path/to/your/bitcode/libs]?
Is this mistake the cause of the "command not found" error I get when I type lli in a Terminal?
(I am trying to build a hello-world program so I can see the whole compilation procedure from start to finish.)
You can put LLVM_LIB_SEARCH_PATH wherever you want. For now, you probably don't need to worry about it at all; as the documentation says, it is optional. Later, you may create bitcode (i.e. compiled VM code) functions which you would like to link into the bitcode your compiler produces. For example, you may need to create some kind of standard library and runtime environment for your executables.
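For example (a sketch only -- the directory name here is made up; any directory you choose will do):
mkdir -p "$HOME/llvm-bitcode-libs"                   # illustrative location
export LLVM_LIB_SEARCH_PATH="$HOME/llvm-bitcode-libs"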
That has nothing to do with the lli not found error, which is the result of the LLVM binaries either not having been installed, or having been installed somewhere which is not in your $PATH.
By default, the llvm package will configure itself for installation under the prefix /usr/local, which means that after you gmake install you should find lli and friends in places like /usr/local/bin/lli. That may or may not be in your $PATH; to find out, type
echo "$PATH"
and see if it has :/usr/local/bin: somewhere in it. If it doesn't, then you could change your PATH:
export PATH="/usr/local/bin:$PATH"
To make that permanent, you'll have to add it to your bash startup files.
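For example, assuming your shell is bash and that it reads ~/.bash_profile on login (this varies between setups), something like this would make it stick:
# append the PATH change to your bash startup file (adjust the file name to your setup)
echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bash_profile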
But you might not want it to be installed there. I usually install software I'm playing with in my local directory tree, so that I don't have to sudo all the time. You can change the root of the installation directory tree with the --prefix argument to ./configure. (You have to do that before you build LLVM.) ./configure --help will provide some more information about configure options, but --prefix is certainly the most important one.
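As a sketch, an out-of-the-way installation into your home directory might look like this (the prefix is just an example; any directory you own will do):
./configure --prefix="$HOME/llvm"    # example prefix
gmake
gmake install
export PATH="$HOME/llvm/bin:$PATH"   # so the freshly installed lli is found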
Whatever you do, don't do it blindly. Make sure you understand what this all means before doing it. If you plan on making a compiler, you'll need to understand some of the details of a typical build and runtime environment; PATH and configure scripts are on the unfortunately long list of things you should at least be somewhat familiar with.
As I understand it, some version of LLVM is already installed on Mac OS X, so you'll need to be careful that your installation doesn't interfere with it. The fact that bash reports that lli can't be found probably indicates that not all of the tools are pre-installed, which should make things less complicated (there is less for your installation to clash with).
I'm afraid that I don't really have any experience with installing LLVM on a Mac, but if you run into specific problems (like "my compiler doesn't work after I install LLVM") then you could ask a specific question with appropriate tags.
I need to install Primer3 for my research in Windows, and I really have no idea of how to go about it. I was following the instructions mentioned here.
I'm getting to the part where I need to run
mingw32-make TESTOPTS=--windows
and I keep getting an error saying:
'mingw32-make' is not recognized as an internal or external command,
operable program or batch file.
Just for reference, I went into the MinGW Installation Manager and installed the mingw32-make packages, including the bin, doc, lang, and lic ones, because I really had no idea which one was the correct one.
If someone could help me, I would be very grateful! Installing these niche programs without an installation wizard is a challenge!
You will need to install mingw32-make. This is a
Windows port of GNU Make,
a software-build tool that is supported on all operating systems,
indeed the daddy of such tools.
But make alone will not suffice. To build primer3 you will
need a Windows port of the whole GNU toolchain for building software
from source code. Without that, running make by itself will
just expose the absence of the GCC compiler and linker that it
expects to do its bidding.
This is quite a lot of software, but it is easy and quick to install and there
are several open-source offerings. I suggest you go to TDM GCC
and download the TDM64 bundle. This will give you an executable installer.
Just run it and you will end up with the complete GNU toolchain, including
mingw32-make, in your chosen installation directory.
It will also install in your Windows launch menu the MinGW command prompt.
Launch this and you will be presented with a Windows commandline console
with its environment set up to find and run any of the GNU tools.
In this console change directory to your primer3-X.Y.Z/test directory
and then run mingw32-make TESTOPTS=--windows as per documentation.
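In other words, something along these lines at the MinGW command prompt (the path is illustrative; use wherever you unpacked primer3):
cd C:\path\to\primer3-X.Y.Z\test
mingw32-make TESTOPTS=--windows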
Be forewarned that the self-tests of primer3 that are executed to
verify the build may take half an hour to an hour to run, depending on your
hardware, but they will finish successfully with the steps I've
described, barring problems specific to your machine. It is a foolproof, simple build.
All the built executables are deposited in the primer3-X.Y.Z/src
directory. You may want to move them somewhere more convenient
in your PATH.
It does seem oddly amateurish that the documentation simply
directs you to run mingw32-make with no preliminary account of
what that is or how to install it, while on the other hand it
advises that you must install perl and strongly recommends a
specific perl distribution; but evidently primer3 is open-source
scientific software and its documentation is not bad by the standard
of that genre.
I followed this tutorial for installing wget.
After I ran this
./configure --with-ssl=openssl
It ran so many checks -- what exactly did it do? Did it change anything on my system?
If it did, is it safer or less error-prone to use a package management tool like MacPorts, so that configure doesn't have to be run manually like this? Or do those tools do the same thing behind the scenes in order to make wget work?
Sorry, I'm pretty much a noob with shell commands.
Thanks
It's part of the build process. The configure script collects information about your system and build options into a local file, nothing more.
Typically, this script is created by autoconf and is used to figure out whether the prerequisites for a build are properly installed, etc. It records its results in config.status (with a log in config.log) and also typically generates a Makefile and/or other build infrastructure so that make can concentrate on compiling and linking the source files.
Neither configure nor make should be expected to change anything outside of the directory tree where you run them.
Conventionally, make install will copy the final build artefacts into place so that other parts of your system can find them and use them.
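So, for the wget build in question, the whole sequence typically looks like this; only the last step touches anything outside the source tree, which is why it usually needs sudo:
./configure --with-ssl=openssl    # inspects the system, writes build files locally
make                              # compiles and links inside the source tree
sudo make install                 # copies the finished wget into place (e.g. under /usr/local)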
See also http://www.edwardrosten.com/code/autoconf/
A prepackaged binary will already have been built on a remote system before it was packaged (though there are package managers which allow or require you to build locally; Gentoo Linux famously uses the latter approach). A package is often the simplest way to get a tool if you don't have special requirements, such as building with a specific SSL version, disabling SSL entirely, or getting a bleeding-edge version before anybody has packaged it.
So, we all know that Mountain Lion doesn't ship with X11 anymore and users needing X11 are directed to download Xquartz. Xquartz installs to /opt, but it also symlinks X11 and X11R6 to /usr. But when building software that requires linking to X11 include files, I've discovered that I must pass an environment variable adding /usr/X11/include (or /opt/X11/include) to the library search path to get ./configure to find the X11 libraries. My question is why?
I've done some research on Google (many results pointing back to Stack Overflow), and I've read Apple's documentation, and these sources all indicate that there is no equivalent in OS X to the /etc/ld.so.conf file found in many (if not all) Linux distributions. Apple even states that DYLD_LIBRARY_PATH is empty by default. However, under Lion (with Apple's last 'official' X11 installed), the same ./configure scripts would find the X11 libraries without adding anything to the library search path.
So, why can't ./configure scripts find X11 libraries in Mountain Lion without explicit modification of the library search path?
Asked more than a year ago... but as I came here with a similar problem...
Note that in the mentioned ruby question, there was no library search path being modified.
That solution just set an environment variable that is picked up by many Makefiles as the flags for the C++ compiler. That example defined the build-time -I (include) path, i.e. where to search for .h headers -- not libraries (which would be an -L option to your compiler/linker). Both are build-time options.
Whether LD_LIBRARY_PATH or DYLD_LIBRARY_PATH -- both are environment variables that are considered by the dynamic linker at runtime. (For more, see http://en.wikipedia.org/wiki/Dynamic_linker )
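Coming back to the build-time flags: if a configure script cannot find the XQuartz headers and libraries on its own, the usual convention is to pass the include and library search paths on the configure command line -- a sketch, assuming an autoconf-generated script that honors CPPFLAGS and LDFLAGS:
./configure CPPFLAGS="-I/opt/X11/include" LDFLAGS="-L/opt/X11/lib"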
I have no pre-10.8 machine at hand, but my guess is that there might have been a symlink
/usr/include/X11 -> /opt/X11/include/X11 -- otherwise I have no idea at the moment how
it could have worked before, assuming the same sources...
This is another potential solution for such problems (just fixed my realvnc build):
$ autoconf
$ ./configure
So your question of "why?" could eventually be answered with: because your sources contained a 'pre-built' configure script that was based on older autotools, which did not include
/opt/X11/include as a potential location to search for X11 includes, or simply did not get some of the above-mentioned compile-time flags right on your current system.
I have autoconf installed through Homebrew -- ah, great stuff, cheers.
I have a makefile project that I would like to port to Xcode.
I was following the instructions on:
https://developer.apple.com/library/mac/#documentation/Porting/Conceptual/PortingUnix/preparing/preparing.html
The document lists as important to install Xcode in the default "/usr" folder.
But the installer gives no such option.
And it installs a Developer folder at "/".
Is it safe to just move the whole content of "Developer" to "/usr", or should that be done during installation?
If so, how?
TFA says
If you are using makefiles for compilation, you should install Xcode in the default location (/usr). If you do not, you may have to do extra work to get your scripts to run the compiler, linker, and so on in a nonstandard location.
I'm guessing that the specific path is out of date, but the overall advice of installing Xcode in the standard place is sound. I recommend you use the Xcode installer, which puts make and gcc in the proper places and not worry about /usr in particular.
You certainly do NOT need to move /Developer to /usr. The docs must be talking about the /usr/llvm-gcc-4.2/ folder and others, which will be created when Xcode is installed into /Developer. You don't need to do anything special for that.
When Xcode tools are installed to the default location (/Developer), the installer creates aliases for developer tools in /usr locations automatically. Install and everything will just work: make, gcc, ld and everything else will appear in $PATH. In short, just install and that will be it.
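If you want to convince yourself after installing, a quick check from Terminal is enough (these are ordinary shell commands, nothing Xcode-specific):
which make gcc ld    # should all resolve, e.g. to /usr/bin/...
gcc --version        # should print the Xcode-supplied compiler version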
I don't know why the doc says it that way; it must be a mistake or a misunderstanding.
I need to build ffmpeg for Mac for converting MOV to FLV in a Java application. I made and installed LAME and then FFMPEG, but I'm confused as to what file I should grab to include with the Java application. What is the binary file? The previous version that I grabbed from the source of ffmpegX was 10mb in size, but the file that's in my /usr/local/bin is only 0.1mb. Is that the right file, or what do I need to include?
I'm not too savvy with anything that needs to be typed into Terminal, so excuse the lack of technical jargon!
Short answer: that file in /usr/local/bin is either the real binary or a soft link to the real binary. If you run ls -l /usr/local/bin any links will be displayed with an arrow to their target location. But pszilard is probably right, that file might be the actual binary, which was dynamically linked to library code.
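To see for yourself what that 0.1 MB file actually is, you can inspect it from Terminal (assuming it really is /usr/local/bin/ffmpeg and that the developer tools are installed):
ls -l /usr/local/bin/ffmpeg       # a symlink shows up with an arrow to its target
file /usr/local/bin/ffmpeg        # says whether it is a Mach-O executable, a script, etc.
otool -L /usr/local/bin/ffmpeg    # lists the shared libraries a dynamically linked binary needs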
Long answer: If you compiled from source, then you ran the following three commands
./configure
make
make install
The first one creates a configuration file called config.mak. Near the top of that file, you'll see lines similar to the following:
prefix=/usr/local
LIBDIR=$(DESTDIR)${prefix}/lib
SHLIBDIR=$(DESTDIR)${prefix}/lib
INCDIR=$(DESTDIR)${prefix}/include
BINDIR=$(DESTDIR)${prefix}/bin
DESTDIR is optional; it's irrelevant unless you ran make install with an additional argument. BINDIR is the actual install location. On my system (snow leopard) that was /usr/local/bin/.
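If you want to double-check where your own build was configured to install, just look at those lines in the config.mak that configure generated, for example:
grep -E '^(prefix|BINDIR)=' config.mak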
If you're still having trouble, just don't install the build. If you run
make clean
make
The binary will be in your build folder.
Don't use MacPorts or Fink. You'll be happier in the long run if you compile from source yourself. If you insist on using a package manager, try Homebrew.
I'm not a Mac expert by far, but I have a few tips. If your build was dynamically linked and the other one was statically linked, that might explain the size difference.
As for the location, what did you use: MacPorts, Fink, or source? If you built from source, it depends on what options you used :) MacPorts and Fink have their own specific locations for binaries (I don't remember them anymore, but the documentation should have the info; otherwise the big G has it ;)
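One more hedged note, since ffmpeg's build options change between versions: if the goal is a single self-contained binary to ship with the Java application, configuring for static linking (check ./configure --help for the exact flags in your version, and note that external libraries such as LAME must themselves be available as static libraries) looks roughly like this:
./configure --enable-static --disable-shared    # build ffmpeg's own libraries statically
make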