From the Armadillo install instructions:
If you are using sparse matrices, also install ARPACK and SuperLU.
Caveat: only SuperLU version 4.3 can be used!
On the SuperLU site there are three versions available: SuperLU, SuperLU_MT and SuperLU_DIST. Since the requirement is version 4.3, does that mean that only the single-threaded version can be linked against Armadillo? If so, that would mean that for sparse matrices only a single PC can be used, and I cannot make use of the cluster available at my university.
I want to download Julia (the latest version is 1.0) from Anaconda. However, you can also download it from https://julialang.org/. My questions are: What are the differences between the two ways of installing Julia? Can I install, for example, DifferentialEquations.jl or Symata.jl without problems if I choose Anaconda? If I choose Anaconda, how good is the package management?
The only benefits of Anaconda are, as far as I'm aware, that it automatically selects the right binary for your OS, and it likely has a slightly easier updating experience (for the language itself). However, it does not seem to support Windows (https://anaconda.org/conda-forge/julia), so if you happen to be on that platform, you are out of luck. I would recommend grabbing the binary from the website directly; the installation process is very straightforward.
The management of Julia packages will still happen inside Julia. Julia 1.0 has a very good package manager called Pkg. You can read more on installing packages within Julia at https://docs.julialang.org/en/v1.0.0/stdlib/Pkg/.
I'm looking into performing object detection (not just classification) using CNNs; I currently only have access to Windows platforms but can install Linux distributions if necessary. I would like to assess a number of existing techniques, but most available code is for Linux.
I am aware of the following:
Faster RCNN (CNTK, Caffe w/ Matlab)
R-CNN (Matlab with loads of toolboxes)
R-FCN (Caffe w/ Matlab)
From what I can see, there are no TensorFlow implementations for Windows currently available. Am I missing anything, or do I just need to install Ubuntu if I want to try more?
EDIT: A Windows version of YOLO can be found here: https://github.com/AlexeyAB/darknet
There is a TensorFlow implementation for Windows, but honestly it's always one or two steps behind the Linux one.
One thing I would add to your list (which was not already mentioned) is Mask R-CNN.
You can also look for implementations of it for TensorFlow, like this one: https://github.com/CharlesShang/FastMaskRCNN
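To give an idea of what the TensorFlow route looks like in practice, here is a minimal inference sketch. It assumes TensorFlow 1.x and a frozen detection graph exported with the TensorFlow Object Detection API; the model path is hypothetical, and the tensor names are the ones conventionally used by those exported graphs, so other graphs may differ.

    import numpy as np
    import tensorflow as tf

    # Hypothetical path to a frozen detection graph exported with the
    # TensorFlow Object Detection API -- substitute your own model.
    PATH_TO_FROZEN_GRAPH = "ssd_mobilenet/frozen_inference_graph.pb"

    # Load the frozen graph into a new tf.Graph.
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    # Run detection on a single dummy image (replace with a real HxWx3 uint8 array).
    image = np.zeros((300, 300, 3), dtype=np.uint8)
    with tf.Session(graph=detection_graph) as sess:
        image_tensor = detection_graph.get_tensor_by_name("image_tensor:0")
        boxes = detection_graph.get_tensor_by_name("detection_boxes:0")
        scores = detection_graph.get_tensor_by_name("detection_scores:0")
        classes = detection_graph.get_tensor_by_name("detection_classes:0")

        out_boxes, out_scores, out_classes = sess.run(
            [boxes, scores, classes],
            feed_dict={image_tensor: np.expand_dims(image, axis=0)})

        print(out_boxes.shape, out_scores.shape, out_classes.shape)

A script like this runs unchanged on Windows and Linux, which is the practical upside of the TensorFlow option over the Caffe/Matlab-based ones.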
I'm trying to install Python NumPy from source with the LAPACK and ATLAS libraries. I have realized that ATLAS itself contains a LAPACK library. However, if I compile it (ATLAS only), the resulting library is about 0.5 MB, whereas when Netlib LAPACK is built, the library liblapack.a is more than 13 MB. This leads me to the following questions:
Questions regarding numpy/scipy:
1. Can I install NumPy/SciPy with only Netlib's LAPACK, or with only the ATLAS library?
2. (If the answer to 1 is yes) If only the ATLAS library is installed (no Netlib LAPACK), are there any disadvantages (performance, unavailable functions, ...)?
3. Is there any performance review of how NumPy/SciPy do without LAPACK/ATLAS installed?
4. Which of NumPy or SciPy makes more use of ATLAS/LAPACK? Is there any significant difference?
Thanks!
ATLAS is not a full LAPACK implementation. It only provides a few routines that are optimized.
This ATLAS page explains how to build a full LAPACK that also uses ATLAS.
From the page:
ATLAS natively provides only a relative handful of the routines which comprise LAPACK.
The SciPy homepage tells you that you need LAPACK for SciPy, but not for NumPy:
Before building, you will also need to install packages that NumPy and SciPy depend on
BLAS and LAPACK libraries (optional but strongly recommended for NumPy, required for SciPy): typically ATLAS + LAPACK, or MKL, or ACML
[...]
To summarize: if you want SciPy, you need LAPACK. If you want a faster LAPACK, you might want to install ATLAS, too. If you only want NumPy, LAPACK is not required, but it is considered a good idea by the SciPy people.
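As a quick way of checking what an existing build actually linked against, and of getting a rough feel for the performance difference, something like the following sketch can help. It only assumes NumPy and SciPy are importable; the matrix size is arbitrary.

    import time
    import numpy as np
    import scipy

    # Show which BLAS/LAPACK backend NumPy was built against
    # (look for atlas, lapack, openblas or mkl sections in the output).
    np.show_config()

    # SciPy reports its own build configuration separately.
    scipy.show_config()

    # Rough sanity check: a large matrix multiply is dominated by the
    # BLAS backend, so its timing reflects the quality of that backend.
    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    t0 = time.time()
    np.dot(a, b)
    print("2000x2000 matmul took %.2f s" % (time.time() - t0))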
I upgraded my machine from WinXP to Win7, and at the same time installed Lattice Diamond 3.1. My more complex simulations hang: Active-HDL uses 100% CPU time and is obviously in an infinite loop. Stupidly, I don't have the installer for Lattice Diamond 2.1 or 2.2, and unbelievably Lattice only allows you to download the latest version. No fallbacks!
Does anyone have an installation file for Lattice Diamond 2.1 or, at a pinch, 2.2? I can provide an FTP server to put it on if someone has one. I know it's a big file, probably 1 GB+.
Actually, I was able to just copy the Active-HDL 9.2 directory from Win7 in a VirtualBox on another machine and overwrite the Active-HDL 9.4 directory. I still wouldn't mind an old installation file, but at least I can simulate now. And in Diamond 3.1 it's actually possible to eliminate the bkm warnings and errors. There were too many bugs in 2.1; tech support actually admitted my warnings were Diamond bugs, not flaws in my code.
Actually, I found you can download older versions from Support -> Software Archive.
Actual link:
http://www.latticesemi.com/en/Support/SoftwareArchive.aspx
Initial note: the question mentions AIX because it is the original context, but the question really pertains to gcc itself, most probably regardless of the platform.
AIX is supposed to be backwards binary compatible: a C program compiled on AIX 5.1 will run as is on 5.2, 5.3, 6.1 and 7.1.
In my understanding gcc should be built to target a specific system (whether the current one or another one in the case of cross-compiling). So, gcc built on AIX 6.1 targets AIX 6.1, and produces binaries usable on both 6.1 and 7.1 thanks to binary compatibility.
Yet gcc itself, built on AIX 6.1, is a 6.1 program, so it should execute on 7.1 as is. Of course, if I compile a program with it on 7.1, that program might get linked against libraries, or use headers, specific to 7.1, thus making the resulting binary require 7.1. So, as far as I understand it, I should be able to run a gcc built on AIX 6.1 on a 7.1 machine and produce maybe non-optimal yet perfectly valid binaries, although they would require 7.1 as a side effect of linking.
This looks too much like rainbows and unicorns dancing in glittery skies. I smell something fishy but lack any knowledge of gcc innards. Please mighty crowd, enlighten me.
tl;dr: Can gcc built on and targeting a version N of an OS/platform be run and used on version N+1 by virtue of platform binary compatibility to produce binaries running on version N+1? If not, what mechanism would prevent it?
Here's enlightenment: your question is way too general. In order to answer it, someone would have to have knowledge of
the operating systems you care about
the OS versions you care about
the gcc versions you care about
and then research the binary compatibility in this three-dimensional matrix.
Mechanisms preventing binary compatibility are too numerous to list and correlate directly with your OS and compiler vendor's ingenuity at breaking it. Some of the more common and well-documented ones are the official deprecation of API calls, the removal of previously shipped compatibility libraries, and bridges being burnt outright, such as the move from a.out to ELF.