Potential issues when uninstalling and reinstalling Anaconda

Unfortunately my base environment has become corrupt and I need to uninstall and reinstall Anaconda to fix the issue (not ideal!). I was reading the documentation:
https://docs.anaconda.com/anaconda/install/uninstall/
I am unable to "conda install anaconda-clean" because of the issue with my base environment, which leaves me with Option A:
"Open the Terminal.app or iTerm2 terminal application, and then remove
your entire Anaconda directory, which has a name such as anaconda2,
anaconda3, or ~/opt. Enter rm -rf ~/anaconda3 to remove the
directory."
-
"This will leave a few files behind, which for most users is just
fine."
What I want to check is whether these few files that are left behind are going to create any issues when I reinstall Anaconda.

The anaconda-clean function is basically a list of files to delete and an interactive for-loop. One can easily do the same thing manually.
Here is the list (which notably has not changed since 2016):
FILES = [
'.anaconda', '.astropy', '.continuum',
'.conda', '.condamanager', '.condarc',
'.enthought', '.idlerc', '.glue', '.ipynb_checkpoints', '.ipython',
'.jupyter', '.matplotlib', '.python-eggs',
'.spyder2', '.spyder2-py3', '.theano',
]
As always, back stuff up first.
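Since anaconda-clean essentially just iterates over that list, a minimal manual equivalent in the shell might look like the sketch below. It is only a sketch; it assumes the dotfiles live directly under $HOME and that you want a backup copy before deleting anything:
for f in .anaconda .astropy .continuum .conda .condamanager .condarc \
         .enthought .idlerc .glue .ipynb_checkpoints .ipython \
         .jupyter .matplotlib .python-eggs .spyder2 .spyder2-py3 .theano; do
    if [ -e "$HOME/$f" ]; then
        cp -a "$HOME/$f" "$HOME/${f}.anaconda-clean-backup"   # keep a backup copy
        rm -rf "$HOME/$f"                                     # then remove the original
    fi
done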
(Opinionated) Advice
Since you're starting fresh, I strongly encourage installing Mambaforge (a Miniforge variant that includes Mamba in the base) and avoiding putting anything but Conda infrastructure in the base env. If you need Anaconda, simply create an env with it, e.g.,
conda create -n foo anaconda

Related

How to copy all Conda packages from one env to the base env?

I have an environment called envname, but I would like its packages to be available in the base environment. How can I do this without reinstalling each of them?
Word of Caution
Be very careful when tinkering with the base env. It's where the conda package lives and so if it breaks, the Conda installation will break. This is a very tedious situation to recover from, so I generally recommend against using the base env for anything other than running conda update -n base conda.
That said, one should only try the following for sharing between two non-base envs.
Copying (Linking) Packages Across Envs
One way would be to export an env, let's call it foo, out as a YAML:
conda env export -n foo > foo.yaml
And then ask the other env, let's call it bar, to attempt to install all the packages:
Warning: Conda will attempt the following command without requesting approval!
conda env update -n bar -f foo.yaml
Note that if the foo env has conflicting packages, they will all supersede whatever was in the bar env (if resolvable). To be cautious, you should probably do a diff first, to see what is going to get overwritten. E.g.,
conda env export -n bar > bar.yaml # this is also useful as backup
diff -u bar.yaml foo.yaml
A major thing to check for is the python version. They should match up to and including the minor version (e.g., 3.6.x and 3.6.y are okay; 3.6 and 3.7 are not).
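One quick way to eyeball the pinned Python versions in both exports (assuming the usual layout of conda env export output, where pinned packages appear as - python=<version>=<build> entries under dependencies):
grep '^ *- python=' foo.yaml bar.yaml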
To err on the side of caution, one should probably manually remove from the YAML any packages that would be reversions; however, this could lead to conflicts.
The deletions will not have an effect unless also using the --prune argument (essentially that would completely overwrite bar with foo).
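For completeness, the prune variant would look like the command below; use it with care, since (assuming foo.yaml is a complete export) it makes bar mirror foo exactly and drops anything not listed:
conda env update -n bar -f foo.yaml --prune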
Hopefully all these qualifications and warnings make the point: it could be a mess. It is usually better practice to thoughtfully design a fresh env from the start.

Cannot run git clone or pip install commands

So I'm pretty new to the whole Windows repo cloning thing. I installed Python 2.7, added the path to my Windows cmd, and I still cannot run any git clone commands. It shows the following output:
git clone
File "<stdin>", line 1
git clone
^
I've been scouring the internet for an answer but apparently it should work if I use cmd.
Any help would be appreciated!
I had the same issue recently with 3.7, so I created a new Windows user profile and it worked. A clean new Windows profile is kind of nice, even though it is a pain to set up again.
Make sure no other versions of Python are installed, or at least that none of them is affecting the path where you need your pip installs to be saved. Python can be installed in a few different locations, and occasionally it turns up in very obscure places. Check where it was saved on your PC; it is probably somewhere under C:\, for example under your user folder or Program Files. I would use the search in the good old GUI and search under the C drive (if you have multiple HDDs/SSDs, pick the one used for the C drive; if nothing comes up, try the other drives).
You're looking for a folder with "Python Foundation" in its name; I can't remember the entire name, but it is very long. This is where things are stored and where the path should be sending modules, or at least to the correct folder inside it.
Also try it from any other Python versions installed. If you had 3.6 and then got 3.7, that doesn't mean 3.6 has been deleted, and it doesn't mean your path isn't still set to 3.6 while you are using 3.7. The same goes for Python 2; many people have both installed. The pip commands also vary between Python versions (pip3 is used for Python 3, I believe; I am getting Windows and Linux a little mixed up right now).
If all else fails, do it the old-fashioned way: find the module, download it, and move it into the Python folder I mentioned above. The Python homepage should have a page (or a note on the main page) letting you know where installs are known to be saved. Search for how to see the path pip is using, or how to see whether pip is installed, where it is, and what paths are set.
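A quicker sanity check, for what it's worth: these standard commands show which Python and pip your shell actually resolves to (the exact output depends on your setup):
python --version
python -m pip --version                          # reports pip's version and which Python it belongs to
python -c "import sys; print(sys.executable)"    # prints the full path of the interpreter being used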

How to rebuild or clone base conda install?

I have a large conda installation that is being used by multiple users. I'm having problems with it where it seems to be getting fragile. I'd like to rebuild it from scratch. I can do a conda list and get a list of packages, but the dependencies will all be random. If I just run a script to install that list, I get constant messages of upgrading and downgrading versions etc.
Is there a way to create a "smart" list of my packages to do an efficient rebuild?
EDIT
Nehal suggested conda list --export. That gives me a list of the form:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
<package>=<version>
<package>=<version>
...
I was able to make one like that with just conda list and then awk, but it did have some duplicates and packages that caused errors.
Nevertheless, how do I then use this to rebuild the install, rather than creating an environment as the list header says?
I tried
conda install $(cat packagelist | tr "\n" " ")
But I got some inconsistencies. Could it be my channel priorities?
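One possible direction, offered as a sketch rather than a verified recipe: capture an explicit spec (exact package URLs, so no solver is involved) from the old install, install a fresh Miniconda or Anaconda at a new prefix, and then replay that spec into the new base env:
conda list --explicit > spec-file.txt            # run this in the old, fragile install
# install a fresh Miniconda/Anaconda elsewhere, then in the new install:
conda install --name base --file spec-file.txt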

package download fails, "GOPATH not set." Why?

OS: Ubuntu 12.04
Go version reporting: 1.1.1
Action:
I have configured the .profile to contain the following lines:
export GOPATH="$HOME/workspace"
export PATH=$PATH:$GOPATH/bin
I have ensured that they are set in the Go configuration by running "go env". However, when I try to run the command, the screen reports as shown in the image below:
Possible constraining issues:
1) The box originally had Go v1.0 on it and I upgraded it to Go version 1.1.1. Not sure that should mean anything... but if there is some twin-configuration madness at work, that may explain why it's not working despite the path being set.
2) I had the export commands in the .profile file, but I see some threads say to put them in .bashrc; trying either still gives the same problem.
Do I need to uninstall Go 1.0? I just assumed version 1.1.1 would override it, but that could be wrong. Ideally I wanted to uninstall Go entirely and then install version 1.1.2, but I couldn't find anything at golang.org on uninstalling, assuming that does solve the problem.
Thanks in advance for any assistance.
As the commenter above stated, you should not use sudo with go get. When you do, you have the root user's environment (which doesn't have your GOPATH) and any files or directories it creates won't be editable by your user. In the past, the go get command would not warn about not having a $GOPATH and so it was easier to get tripped up by this.
To fix your permissions, run the following command to change ownership back to your user:
sudo chown -R "$USER:" "$GOPATH"
You should only ever need to run a plain go get, because you can (and should) set your $GOPATH to a directory you control. Be sure to read How To Write Go Code, and in particular its discussion of GOPATH.
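As a quick sanity check (a sketch; the import path below is a placeholder, not something from the question), verify the environment as your normal user and fetch without sudo:
echo "$GOPATH"           # should print your workspace path
go env | grep GOPATH     # should agree with the line above
go get <import/path>     # placeholder import path; run it without sudo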

What files did `make install` copy, and where?

Is there a way to get a list of filenames/paths that make install copies to the filesystem? Some packages come with a MANIFEST file, but not the ones that I am working with.
I was just investigating this myself while compiling a custom version of QEMU. I used the following method to work out what was installed and where (as well as using it as a basis for a .deb file):
mkdir /tmp/installer
./configure --target-list=i386-softmmu
make
sudo make install DESTDIR=/tmp/installer
cd /tmp/installer
tree .
tree is a utility that recursively displays the contents of a directory in a visually appealing manner (sudo apt-get install tree for Debian/Ubuntu users).
Hope that helps someone... it took me a bit of poking around to nut it out, but I found it quite a useful way of visualising what was going on.
The most fool-proof way is to use chroot: have "make install" run inside a chroot jail; compute a list of the files that you had before the installation, and compare that to the list of files after the installation.
Many installations support a --prefix configure option and/or a DESTDIR environment variable. You can use those for a lighter-weight version of chroot (trusting that the installation will fail if it tries to write to a location outside these, provided you run the installation as a fairly unprivileged user).
Another approach is to replace the install program. Many packages support an INSTALL environment variable that, well, is the install program to use; there are tracing versions of install around.
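As a sketch of that idea (it assumes the package's Makefile actually honours the INSTALL variable, which not all do), you could point INSTALL at a tiny wrapper that logs each invocation before delegating to the real install:
cat > /tmp/tracing-install <<'EOF'
#!/bin/sh
# log the full install invocation, then hand off to the real install
echo "install $*" >> /tmp/install.log
exec /usr/bin/install "$@"
EOF
chmod +x /tmp/tracing-install
make install INSTALL=/tmp/tracing-install
cat /tmp/install.log     # the destinations recorded here are what got installed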
make uninstall might show the files as it removes them, if the author of the build instructions provided the information needed to allow an uninstall (it has been a while since I have done one, so I can't say for sure).
Also make -n install will do a "dry run" of the install process and it may be reasonable to extract the information from its results.
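A rough way to skim that dry run for the copy steps (again a sketch: it assumes the install target uses install or cp, and the grep pattern may need adjusting for your project):
make -n install | grep -E '(^|[[:space:]])(install|cp) '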
It differs for every project that you run 'make install' on. The files which are installed are controlled by the install target in the Makefile being used. Your best bet is to open the Makefile and search for 'install:' - from there you can see what files will be copied out to your system.
Take a snapshot of the contents of the install location before installing
Install
Compare the current contents with the old contents.
Example:
./configure --prefix /usr/local
make -j`nproc`
find /usr/local | sort -u > /tmp/snapshot1
make install
find /usr/local | sort -u > /tmp/snapshot2
comm -13 /tmp/snapshot{1,2} # prints lines unique to snapshot2, i.e. the files added by `make install`
If the install program you're using doesn't support DESTDIR or --prefix (or an equivalent), I have found that it may be possible to identify new files as follows:
Start with as clean a system as possible (a fresh VM image is preferable)
Compile the software, wait a few minutes.
Install the software package.
Find files modified within the past 5 minutes: sudo find / -mmin -5 -type f (the find command has a ton of parameters for querying based on file modification / creation times, but this worked pretty well for me; you just need to narrow the time span so that you pick up the files created by the installer but nothing else).
