So I'm following the install instructions for Bazel 2.0, and basically it seems like all I have to do is download the ".exe" file, add it to the PATH, and then I can use it from Windows PowerShell (probably Bash too, although I haven't tried). What I want to know is - does the ".exe" file do any manipulation of my system (outside of the obvious compiling work) or download anything else under the hood? I ask because I want to try it out while working on a restricted computer system, as I'm sure some of you have encountered before.
It will extract itself into the location where it also (unless configured otherwise) keeps its build output. By default this is under the current user's home directory. The location can be changed with the --output_user_root startup option or the TEST_TMPDIR environment variable. You can check out the docs for a more detailed description.
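For example, a minimal sketch of pointing the output root somewhere else via the startup option (the path and target label here are just placeholders):

bazel --output_user_root=D:/bazel_out build //some:target

Note that --output_user_root is a startup option, so it goes before the command (build, test, etc.).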
Adding to Ondrej K.'s answer:
Yes, you just download the .exe and add it to your PATH. Do not run it from Bash though, because that's broken. (I'm linking to the documentation at master as of 2020-02-28, with 2.1.0 being the most recent version. The current master will become the release doc for 2.2.0.)
Yes, Bazel will download stuff. This includes tools for the languages you build (e.g. Java), and also external dependencies of the project.
Yes, Bazel will write to disk even if you just run it once: as Ondrej K. wrote, it will extract itself to a directory.
Do not set TEST_TMPDIR to tell Bazel where to run. Setting this environment variable will make Bazel believe it's running inside a test, and it will significantly reduce its resource use and change its behavior in subtle ways you probably don't want. (If you want to limit its resource use, you can do so with several flags; see --jobs, --local_ram_resources, and --local_cpu_resources.)
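For instance, a hedged sketch of a build that caps parallelism and local resource estimates (the target label and numbers are illustrative, not recommendations):

bazel build //some:target --jobs=4 --local_cpu_resources=2 --local_ram_resources=2048

Here --local_ram_resources is given in MB; check bazel help build for the exact semantics in your version.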
I need to install Primer3 for my research in Windows, and I really have no idea of how to go about it. I was following the instructions mentioned here.
I'm getting to the part where I need to run
mingw32-make TESTOPTS=--windows
and I keep getting an error saying:
'mingw32-make' is not recognized as an internal or external command,
operable program or batch file.
Just for reference, I went into the MinGW Installation Manager and got the mingw32-make packages, including the bin, doc, lang, and lic ones, because I really had no idea which one was the correct one.
If someone could help me, I would be very grateful! Installing these niche programs without an installation wizard is a challenge!
You will need to install mingw32-make. This is a Windows port of GNU Make, a software-build tool that is supported on all operating systems, indeed the daddy of such tools.
But make alone will not suffice. To build primer3 you will need a Windows port of the whole GNU toolchain for building software from source code. Without that, running make by itself will just expose the absence of the GCC compiler and linker that it expects to do its bidding.
This is quite a lot of software, but it is easy and quick to install and there are several open-source offerings. I suggest you go to TDM GCC and download the TDM64 bundle. This will give you an executable installer. Just run it and you will end up with the complete GNU toolchain, including mingw32-make, in your chosen installation directory.
It will also install the MinGW command prompt in your Windows launch menu. Launch this and you will be presented with a Windows command-line console with its environment set up to find and run any of the GNU tools. In this console, change directory to your primer3-X.Y.Z/test directory and then run mingw32-make TESTOPTS=--windows as per the documentation.
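Concretely, the session in the MinGW command prompt would look roughly like this (the install path is a placeholder for wherever you extracted primer3):

cd C:\primer3-X.Y.Z\test
mingw32-make TESTOPTS=--windows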
Be forewarned that the self-tests of primer3 that are executed to verify the build may take half an hour to an hour to run, depending on your hardware, but they will finish successfully with the steps I've described, barring problems specific to your machine. It is a foolproof, simple build.
All the built executables are deposited in the primer3-X.Y.Z/src directory. You may want to move them somewhere more convenient in your PATH.
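For example, rather than moving the files, you could append the src directory to your user PATH from a Windows console. This is only a sketch with a placeholder path, and note that setx affects only consoles opened afterwards:

setx PATH "%PATH%;C:\primer3-X.Y.Z\src"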
It does seem oddly amateurish that the documentation simply directs you to run mingw32-make with no preliminary account of what that is or how to install it, while on the other hand it advises that you must install Perl and strongly recommends a specific Perl distribution; but evidently primer3 is open-source scientific software, and its documentation is not bad by the standards of that genre.
I am learning to build a compiler using LLVM as the back end.
I followed the steps in Getting Started with the LLVM System up to "Setting Up Your Environment".
What is the specific location for [/path/to/your/bitcode/libs] ?
Did this mistake cause the "command not found" error when I type lli in a Terminal?
// I am trying to build a hello world program to see the whole compilation procedure.
You can put LLVM_LIB_SEARCH_PATH wherever you want. For now, you probably don't need to worry about it at all; as the documentation says, it is optional. Later, you may create bitcode (i.e. compiled VM code) functions which you would like to link into the bitcode your compiler produces. For example, you may need to create some kind of standard library and runtime environment for your executables.
That has nothing to do with the lli not found error, which is the result of the LLVM binaries either not having been installed, or having been installed somewhere which is not in your $PATH.
By default, the llvm package will configure itself for installation under the prefix /usr/local, which means that after you gmake install you should find lli and friends in places like /usr/local/bin/lli. That may or may not be in your $PATH; to find out, type
echo "$PATH"
and see if it has :/usr/local/bin: somewhere in it. If it doesn't, then you could change your PATH:
export PATH="/usr/local/bin:$PATH"
To make that permanent, you'll have to add it to your bash startup files.
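For example, assuming you use the default bash shell on OS X (which reads ~/.bash_profile):

echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bash_profile

Then open a new terminal, or run source ~/.bash_profile, for the change to take effect.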
But you might not want it to be installed there. I usually install software I'm playing with in my local directory tree, so that I don't have to sudo all the time. You can change the root of the installation directory tree with the --prefix argument to ./configure. (You have to do that before you build LLVM.) ./configure --help will provide some more information about configure options, but --prefix is certainly the most important one.
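A sketch of what an install into your home directory might look like (the prefix is just an example):

./configure --prefix="$HOME/llvm"   # choose an install root you can write to
gmake
gmake install
export PATH="$HOME/llvm/bin:$PATH"  # so the shell can find lli and friends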
Whatever you do, don't do it blindly. Make sure you understand what this all means before doing it. If you plan on making a compiler, you'll need to understand some of the details of a typical build- and runtime- environment; PATH and configure scripts are on the unfortunately long list of things you should at least be somewhat familiar with.
As I understand it, some version of LLVM is already installed on Mac OS X, so you'll need to be careful that your installation doesn't interfere. The fact that bash reports that lli can't be found probably indicates that not all the tools are installed, which will make things less complicated.
I'm afraid that I don't really have any experience with installing LLVM on a Mac, but if you run into specific problems (like "my compiler doesn't work after I install LLVM") then you could ask a specific question with appropriate tags.
I followed this "install wget" tutorial.
After I ran this
./configure --with-ssl=openssl
It ran so many checks; what exactly did it do? Did it change anything in my system?
If it did, is it safer or more foolproof to use a package management tool like MacPorts, so that 'configure' is not done manually like this? Or do those tools do the same thing in order to make wget work?
Sorry, I am pretty much a noob with shell commands.
Thanks
It's part of the build process. The configure script collects information about your system and build options into a local file, nothing more.
Typically, this script is created by autoconf and is used to figure out whether the prerequisites for a build are properly installed, etc. It will collect this information into files such as config.log and config.status, and will also possibly generate a makefile and/or other build infrastructure so that make can concentrate on compiling and linking the source files.
Neither configure nor make should be expected to change anything outside of the directory tree where you run them.
Conventionally, make install will copy the final build artefacts into place so that other parts of your system can find them and use them.
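Putting it together, a typical from-source build looks roughly like this; only the last step (run with sudo) writes outside the source tree:

./configure --with-ssl=openssl   # inspects the system, writes config.status and Makefiles
make                             # compiles and links inside the source tree
sudo make install                # copies the results into /usr/local by default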
See also http://www.edwardrosten.com/code/autoconf/
A prepackaged binary will already have been built on a remote system before it was packaged (though there are package managers which allow or require you to build locally; Gentoo Linux famously uses the latter approach) and is often the simplest way to get a tool if you don't have special requirements, such as building with a specific SSL version, or disabling SSL entirely, or getting a bleeding edge version before anybody has packaged it.
I am new to using Coverity and this might not be a very challenging question, but I would appreciate it greatly if someone could guide me through the setup process.
I first ran the following command:
cov-configure --compiler /usr/bin/gcc --comptype gcc
This created a few files pertaining to the above command in my /config directory.
The real problem occurs when I run the cov-install-gui command to set up the Defect Manager and the database: I am not sure what to input for the --datadir option. When I passed in an empty directory (as a mere attempt), it complained that coverity_db does not exist within the empty directory.
It's not clear to me where I can find the coverity_db directory or how to install it.
I feel like I am missing something from the cov-configure command, but I am not sure.
Also, I am using Linux CentOS 5.4 and Coverity Prevent 4.5.
Thanks in advance
You are using an old and no longer supported version of Coverity Prevent (4.5 or older) since you are referencing the Defect Manager.
The current version is 6.0, so you should not be using the version that you are.
The answer to your question is that the data directory is just any directory that will be used to hold the results and GUI files, so you can specify any path that doesn't already exist and cov-install-gui will create the directory and the files it needs inside it.
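As a sketch only (the path is a placeholder, and your Prevent release may require further options; check the cov-install-gui documentation), the invocation would look something like:

cov-install-gui --datadir /home/me/coverity-data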
Hi guys: I recently (accidentally) removed all folders/files from my .vim folder on Mac OS X (home directory).
I am trying to add in the Clojure Vim plugin (VimClojure) - it's simply a folder which you are supposed to "drop into .vim/plugins".
I have added it, but I don't see any changes to the syntax highlighting when I launch vim. I'm not sure whether vim "sees" the plugin or not.
I'm on OS X.
Any ideas on how to debug the plugin? In particular:
1) How does Vim look for plugins?
2) Are there files which need to be in $HOME/.vim/?
3) Is it sufficient to simply unzip a new plugin into $HOME/.vim/plugins when installing a standard vim plugin?
Thanks
About debugging: in order to see whether vim has loaded your plugin you can use :scriptnames and also breakadd file /path/to/your/plugin (or breakadd file *your_plugin_name.vim: I never used absolute paths so I do not know what breakadd will do in this case). Other questions:
1) Described in :h initialization, precisely :h load-plugins.
2) Vim does not need any files at all (except the vim executable, used shared libraries, the dynamic linker and the kernel, of course).
3) Follow the installation instructions. Normally plugins are either extracted to ~/.vim or distributed as a single file that should go to either ~/.vim/plugin (no s!), ~/.vim/colors, ~/.vim/ftplugin or such. I guess you should try to extract it to ~/.vim/plugin, but if the archive contains some special directories like plugin/, ftplugin/, colors/, after/ (see /usr/share/vim/vim73 for a list), it is likely that it should go to ~/.vim (see the sketch after this list). Also consider using vim-addon-manager; if the plugin was posted on vim.org, VAM is likely to be able to install it.
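For example, a hedged sketch of installing an archive that ships those special directories (plugin/, ftplugin/, ...) directly; the archive name is a placeholder:

mkdir -p ~/.vim
unzip vimclojure.zip -d ~/.vim

Afterwards, start vim, open a Clojure file and run :scriptnames to confirm the plugin files were actually loaded.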
A few points.
1) How does vim look for plugins? See :help startup to see where and when vim looks for files to load.
2) Not really. Anything there is just personal customization. Vim will run fine without a .vim folder.
3) That all depends on the plugin. It sounds to me like the VimClojure plugin may be a little misleading. Do you have a link to the source you are using?
In any case, the first step I always take when attempting to debug a script is to check the output of :scriptnames. This command will show you what scripts vim has loaded for the current session. If you see none of the files shipped with VimClojure, you probably made a mistake during the installation.
Another tip is that you really should look in to using a plugin manager such as vundle or vim-addon-manager, or at least the runtimepath manager pathogen. This seems to be the way of the future for vim configuration these days and it makes installing and managing plugins much easier. They also help to keep your .vim folder clean and organized.
The VimClojure directory should either be extracted on top of your .vim folder, or into a bundle folder if you're using something like pathogen (which you should!). If you're starting from scratch, consider starting with vimclojure-easy (not to toot my own horn), which is a basic, full install of VimClojure with instructions.
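For reference, a hedged sketch of the pathogen-style layout (pathogen.vim obtained from the pathogen project; the archive name and paths are placeholders):

mkdir -p ~/.vim/autoload ~/.vim/bundle
cp /path/to/pathogen.vim ~/.vim/autoload/         # pathogen itself
unzip vimclojure.zip -d ~/.vim/bundle/vimclojure  # the plugin in its own bundle dir
echo 'execute pathogen#infect()' >> ~/.vimrc      # enable pathogen at startup

With that in place, anything unpacked into its own directory under ~/.vim/bundle is added to vim's runtimepath at startup.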