I recently got upgraded from Go 1.8.4 to 1.9 without knowing it. Compilation speed wasn't affected (at least not noticeably), but I had problems with tools like guru, so I uninstalled 1.9 and re-installed 1.8.4. Afterwards, go run foo.go became pretty slow. I suspect the older version of the compiler cannot use the cache left behind by 1.9 and has to recompile everything from scratch, but I have no proof.
Is my suspicion correct? If so, is there a way I can reset the compiler cache?
Delete the folder $GOPATH/pkg. That is the package cache folder.
If you run the compiler with the -v flag it will list all the packages being compiled. If it keeps compiling the same packages that you have not changed then you know it is not using the cache.
In the past I have found that when compiling code with run/build the compiler does not cache packages, but it does cache them when using install.
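A rough sketch of resetting the cache and then watching what gets rebuilt (assuming a Unix-like shell and a workspace under $GOPATH):
rm -rf "$GOPATH/pkg"   # wipe the compiled-package cache
go build -v ./...      # -v prints every package as it is (re)compiled
go install ./...       # install writes compiled packages back into $GOPATH/pkg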
First off, I know there are several posts similar to this, but I am going to ask anyway. Is there a known problem with boost program_options::options_description in the Debian "Buster" and Boost 1.67 packages?
I have code that was developed back on Debian 7; the system was upgraded to 8.3, then 8.11, and is currently using Boost 1.55.
The code builds and runs. Now I upgrade the system to Debian Buster with Boost 1.67 and get link errors for an unresolved reference to options_description(const std::string&, unsigned int, unsigned int) along with several other program_options functions. All of the unresolved references, except the options_description one, come from Boost calling another Boost function, so they are not even called directly from my code. boost_program_options IS on the link line.
I am not a novice; I understand link order, and this has nothing to do with link order.
I am going to try getting the source code for Boost and building it to see if that works; if not, I will build a system from scratch and test against that.
Since this is all on a closed network, simply saying "try a newer version of Boost or Debian" is not an option: I am contractually obligated to use Debian "Buster" and Boost 1.67 as the newest revisions, so anything newer than what is packaged in Buster is out of the question without having a new contract drafted and approved, which could take months.
So to the point of this question, is there an issue with the out of the box version of Boost in Buster?
I don't think there's gonna be an issue with the package in Buster.
My best guess is one of the following:
you're relinking old objects against the new libraries, and they don't match (did you do a full make clean, for example, to eliminate this possibility?). Build systems often do not track complete header dependencies, so the build system might not notice that the Boost headers changed and that the objects need to be rebuilt.
if that doesn't explain it, there could be another version of Boost on the include path, leading to the same problem as under #1 even when rebuilding. You could establish this by inspecting the command lines (make -Bsn or compile_commands.json, for example, depending on your tools). Another trick is to include boost/version.hpp and see what BOOST_VERSION evaluates to (see the sketch below).
Lastly, there could be a problem where the library was built with a different compiler version or different compiler flags, leading to incompatible symbols (this is a QoI issue that you might want to report to the Boost developers).
This is assuming ABI/ODR issues, in case you want to validate this possibility.
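A minimal sketch of both checks, assuming g++ and Debian's usual multiarch library path (adjust the .so path to your system):
# print the Boost version the compiler actually picks up from the include path
printf '#include <boost/version.hpp>\nBOOST_LIB_VERSION\n' | g++ -E -x c++ - | tail -n 1
# list the options_description constructors the installed library actually exports
nm -DC /usr/lib/x86_64-linux-gnu/libboost_program_options.so.1.67.0 | grep 'options_description::options_description'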
I love coding in Haskell, but I am often on a computer where I cannot install software and which has some restrictions on what you can run. I would like to write Haskell code and test it while on this computer. Does anyone know of a version of Haskell, interpreted or compiled, written in Java, JavaScript, Ruby, Python, or another interpreted language available in the default install on a Mac? A standalone version of Haskell which can be installed at the user level works too, but compiling Haskell myself is not an option.
The GHC binary distributions (the ones that come as tarballs, not installers) can all be trivially installed locally.
./configure --prefix=$HOME/ghc
make install
Then update your path to include $HOME/ghc/bin
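For example (assuming a bash- or zsh-style shell):
export PATH="$HOME/ghc/bin:$PATH"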
If you want cabal, get the tarball from hackage, then untar it and run bootstrap.sh.
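A rough sketch of that step (the version number is only illustrative; use whichever tarball you actually downloaded):
tar xzf cabal-install-1.16.0.2.tar.gz
cd cabal-install-1.16.0.2
./bootstrap.sh                          # installs cabal under ~/.cabal by default
export PATH="$HOME/.cabal/bin:$PATH"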
GHC works really well as a local install. In fact, I never use it as a system install.
I do this on my workstation, too, so that the distribution I'm on (Debian in my case) doesn't suddenly start upgrading stuff without me noticing in a simple apt-get upgrade.
This solution installs a full ghc and haskell-platform under a local prefix, plus a ~/.cabal prefix.
First of all, I have a ~/local directory that I use in order to put custom-compiled programs in my home directory. I usually dislike the sudo make install step, because I'm giving some random Makefile root access to my system, and that makes me feel queasy.
Then I download the ghc binary distribution from the ghc site. NOTE that I linked you to 7.4.2. I hear there's some segfault bug on Mac OS X, but I'm not aware of the details. You should check that out or get the newer ghc instead, but be aware that there are many packages on hackage that are not yet fixed to work with 7.6. Also, ignore that "STOP!" warning, you're the 1% who actually want a non-distrib GHC binary.
You can just cd into the ghc directory, then do ./configure --prefix=$HOME/local/haskell or so, followed by make install (no compiling necessary, it's just going to install, not compile.)
At this point, you should add ~/local/haskell/bin to your path. Here's the code that I put in my ~/.zshrc, which will add all ~/local/*/bin directories to your path.
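The snippet itself didn't make it into this post; a minimal zsh sketch that does the same thing, assuming the ~/local layout described above, might look like this:
# prepend every existing ~/local/*/bin directory to PATH; (N) makes the glob expand to nothing if there are no matches
for dir in ~/local/*/bin(N); do
  PATH="$dir:$PATH"
done
export PATH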
You can then get the Haskell Platform, and do the same ./configure --prefix=$HOME/local/haskell && make && make install dance. This step will need compilation. It means that you will need some header libraries installed. I find the random openGL headers that are necessary particularly annoying.
You can also of course skip haskell-platform, and just download cabal-install directly, then install what you need. You should in any case not forget to add ~/.cabal/bin to your $PATH!
Do a cabal update and you should be good to go.
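For example (the package name here is purely illustrative):
cabal update
cabal install text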
NOTE: there's one important part that the binary distribution of GHC needs, which can sometimes be a pita on old Linux systems: libgmp. It's linked dynamically against it, and if you get some errors about the shared libgmp not being found on OS X, too, you can… well, ask that question in a comment, and I shall explain how to get there. Basically, you'll have to compile libgmp + deps yourself.
But I don't think that should be a problem on OS X. It's just been a problem on a couple old debian boxes I've tried this on.
For single files, you can use codepad.
I've been working with OpenCV 2.3.1 and MS Visual Studio 2010 for a while now and have it set up on multiple PCs. In the past I also had an installation of OpenCV 2.1.0 on one of my PCs. My problem is that on the PC where I had installed OpenCV 2.1.0, cxcore210.lib and cv210.lib are listed as inherited values in Linker >> Input >> Additional Dependencies.
The problem is that when I try building a program on this PC with OpenCV 2.3.1 (I've set up all the linker settings and such correctly, and on my "clean" PC it works fine), it keeps asking for these two lib files. Of course I could install OpenCV 2.1.0 again and link to those files, but that's not really what I want since I'm working with OpenCV 2.3.1.
I've tried reinstalling Visual Studio, but this doesn't solve the problem either. OpenCV 2.1.0 is uninstalled and the Path settings are deleted as well. Does anyone know why it keeps looking for cxcore210.lib and cv210.lib as inherited values, and how can I get rid of them?
That's because your project still thinks you are using OpenCV 2.1. You need to go to the project settings under Linker > Input > Additional Dependencies and replace cxcore210.lib and cv210.lib with their respective v2.3.1 counterparts, which are:
opencv_core231.lib opencv_highgui231.lib
You might need to add other libraries like opencv_imgproc231.lib and maybe others, depending on what your program uses from OpenCV. A lot of things changed between these versions.
Also, if you installed OpenCV 2.3.1 in a different directory than the one used for v2.1 you will have to adjust a few more things in the project settings:
The path to the headers: C/C++ > General > Additional Include Directories
and probably the path to the libraries: Linker > General > Additional Library Directories
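For illustration only (these values assume the OpenCV 2.3.1 package was extracted to C:\OpenCV2.3.1 and that you build 32-bit with the VC10 toolchain; adjust them to your actual install):
Additional Include Directories: C:\OpenCV2.3.1\build\include
Additional Library Directories: C:\OpenCV2.3.1\build\x86\vc10\lib
Additional Dependencies: opencv_core231.lib;opencv_highgui231.lib;opencv_imgproc231.lib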
This tutorial shows step by step how to configure these and much more.
Hmm, it seems to work for now; I think I managed to find somewhat of a workaround. I went to Linker > Input > Additional Dependencies and unchecked the box "Inherit from parent or project defaults". Both lib files stay listed as inherited values, but at least I can properly build and run the project without getting an error telling me to point to cxcore210.lib and cv210.lib.
My specs:
OS: Ubuntu 10.04 LTS amd64
fpc: 2.4.0
lazarus: 0.9.28
I'm trying to compile a WebLaz project just by creating one and then compiling.
Somehow the compiler gets all lost when determining which httpd and fpapache units to use.
I've found similar problems in the forums:
mod_helloworld.lpr Can't find fpapache Unit ...
I NEED HELP with fpweb ...
After trying some of the solutions provided there I'm still at this point:
The project compiles fine if I only have httpd22 under the compiled units and the source for the packages. Alas, it then completely fails to link.
With the original fpc/lazarus folder structure (having all of HTTPD13, HTTPD20 and HTTPD22 untouched in both locations, units and source), the compiler complains that the checksum of httpd has changed and then fails to find fpapache's source.
It finds httpd.pas under httpd20, but then it only works with the folders for 2.2.
I'm completely lost as how to compile this using the WebLaz component, what am I missing?
Probably you need to select the version you want, and then rebuild the relevant Lazarus parts, so that the packages get built with the then-selected Apache version.
AFAIK the selection of the httpd version is simply a matter of changing the order; it doesn't mean that all versions are supported at once, the way e.g. mysqlconnection does.
From what I could gather from the (very verbose) output of the Test button in "Compiler Options", neither of these options is defined:
FPCAPACHE_1_3
FPCAPACHE_2_0
So this means that, in /etc/fpc.cfg:
#IFDEF FPCAPACHE_1_3
-Fu/usr/lib/fpc/$fpcversion/units/$fpctarget/httpd13/
#ELSE
#IFDEF FPCAPACHE_2_0
-Fu/usr/lib/fpc/$fpcversion/units/$fpctarget/httpd20/
#ELSE
-Fu/usr/lib/fpc/$fpcversion/units/$fpctarget/httpd22/
#ENDIF
#ENDIF
The #IFDEF test falls through to httpd22 by default.
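(If you actually wanted to force one of the other versions instead, a hypothetical approach would be to define the corresponding symbol yourself, e.g. as a custom option under Project->Compiler Options...->Other:)
-dFPCAPACHE_2_0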
Nonetheless, having:
/usr/lib/fpc/2.4.0/units/x86_64-linux/httpd20
/usr/lib/fpc/2.4.0/units/x86_64-linux/httpd22
in the compiler's path for compiled units means that it will find httpd20 first.
This means it will try to load the 2.0 version and not the 2.2 version of the compiled units.
So the first solution is to delete or move the first folder (httpd20) out of the system's unit path.
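A minimal sketch of moving it aside (the backup location is arbitrary):
sudo mv /usr/lib/fpc/2.4.0/units/x86_64-linux/httpd20 ~/fpc-httpd20-backup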
This will let you compile, but alas it will not link on a 64-bit system (I'm testing on an AMD64 system, so I'm not going to presume it works elsewhere).
The process ends with a hint to add -fPIC to the compiler options.
If you go to Project->Compiler Options...->Other, you can add it in the lower text box.
Voila, it's working.
When compiling from source, I never know which configure flags to use to optimize the compilation for my environment. Assume the following hardware/OS:
Single Core, 2 GHz Intel
512 MB RAM
Debian 4
I usually just go with
./configure --prefix=/usr/local
Should I be doing anything else?
I always use Debian packages. Compiling from source can break your development environment through library conflicts, and such problems are hard to detect.
You might want to check out the few options below, which may be required by a Ruby on Rails environment, in which case they should be compiled in. Just make sure the directories correspond to your current setup.
--with-openssl-dir=/usr --with-readline-dir=/usr --with-zlib-dir=/usr
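Putting that together with the prefix from the question (illustrative only; the /usr directories assume that's where the development headers for OpenSSL, readline, and zlib live):
./configure --prefix=/usr/local --with-openssl-dir=/usr --with-readline-dir=/usr --with-zlib-dir=/usr
make
sudo make install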
I recommend mixing in a few packages from Debian Unstable feeds. They tend to be pretty stable, despite the name. They're also very up to date.