Conda custom channel on Windows

I've created a custom channel on a windows box following the steps detailed here.
Now I'd like to access it from a different machine but the channel parameter is a URI and I don't know what form it should take with Windows.
Here's the command I tried to execute:
conda search -c file://machine\C\channel --override-channels scipy
which failed with the following error message:
Fetching package metadata: Error: Invalid index file

I have been trying to do the same thing, and the answer by Paul made me a bit pessimistic.
It turns out that it is possible to use a UNC-path.
After trying a few hundred combinations of slashes and backslashes, I found this combination to work:
conda search -c "file://\\DOMAIN\SERVER\SHARE\conda\channel" --override-channels
Similarly,
conda config --add channels "file://\\DOMAIN\SERVER\SHARE\conda\channel"
adds the channel to your config file.
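For reference, after running that command your .condarc should contain an entry along these lines (a sketch; the defaults entry appears only if it was already configured):
channels:
- file://\\DOMAIN\SERVER\SHARE\conda\channel
- defaults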

Let's say that your custom channel is located in the following directory:
N:\conda\channel. Then we would expect to see the following in this directory:
(1) the win-64 subdirectory;
(2) the index files inside it (in this case in N:\conda\channel\win-64\): repodata.json and repodata.json.bz2;
(3) any packages that you have added to your channel.
A search on this channel for the scipy package, ignoring all other channels, would look like this:
conda search -c file://N:\conda\channel --override-channels scipy
Did you add the scipy package into your custom channel? If you did, then did you run conda index on that directory?
I'm a little confused by your directory structure, but if your channel is machine\C\channel, then what happens when you do dir machine\C\channel?
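For reference, building and querying such a channel from scratch might look like the following sketch (the package filename and the N: drive are only examples, and depending on your conda-build version conda index may expect the channel root rather than the platform subdirectory):
mkdir N:\conda\channel\win-64
copy scipy-1.1.0-py37_0.tar.bz2 N:\conda\channel\win-64\
conda index N:\conda\channel\win-64
conda search -c file://N:\conda\channel --override-channels scipy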

I had no success with the other answers, though the answer by @gDexter42 pointed me in the right direction. Perhaps the API has changed. Testing several different options, I was somewhat surprised to find that:
you can use / or \ interchangeably
you do not need to escape spaces
you do not need to enclose paths with spaces in quotes
After creating a custom channel in a network-accessible directory, you can search for a conda package using the file path, omitting the file:// prefix mentioned in other posts and in the documentation.
For a UNC Path:
$ conda search -c //my_nas/some/path with spaces/channel --override-channels
or
$ conda search -c \\my_nas\some\path with spaces\channel --override-channels
If the folder is local, or you have mounted a network directory to a local path (D:\ in this example), you would use that file path.
$ conda search -c D:/some/path with spaces/channel --override-channels
or
$ conda search -c D:\some\path with spaces\channel --override-channels
I tested these commands using both Git Bash for Windows and Anaconda Prompt (which I think is just cmd.exe with the path modified so base/root is the active environment).
Note that if you then want to add that path to your .condarc file, you can use the same path.
channels:
- \\my_nas\some\path with spaces\channel # UNC
- D:/some/path with spaces/channel # local drive
- defaults # this gives defaults lower priority
ssl_verify: true
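You can then verify that conda picked these channels up with:
conda config --show channels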

If you are trying to search for a conda package in a local directory (not on a UNC share), the following two approaches worked for me.
Navigate to the drive containing the package and try
conda search -c file://folder_path/channel --override-channels
The better way is to drop the file:// prefix, which allows you to search from any drive:
conda search -c Drive://folder_path/channel --override-channels
Thus, if you are searching from the D: drive, you would type:
conda search -c D://folder_path/channel --override-channels

If your conda channel is at C:\conda-channel then do:
conda search -c file://\conda-channel --override-channels
(The path \conda-channel is drive-relative, so run this from a prompt whose current drive is C:.) There is currently a bug in conda 4.6+ where file://C:\conda-channel will not work, because the parser removes the colon. Downgrading to 4.5 is dangerous and can mess up your installation.

Related

Conda - gather tarballs with dependencies for offline install [duplicate]

Download conda data science libraries without extracting the packages

I want to create a Python environment with the data science libraries NumPy, Pandas, PyTorch, and Hugging Face transformers. I use Miniconda to create the environment and to download and install the libraries. conda install has a flag, --download-only, to download the required packages without installing them, so that they can be installed afterwards from a local directory. However, even when conda just downloads the packages without installing them, it also extracts them.
Is it possible to download the packages without extracting them and extract them afterwards before installation?
There is no simple command in the CLI to prevent the extraction step. The extraction is regarded as part of the FETCH operation to populate the package cache before running the LINK operation to transfer the package to the specified environment.
The alternative would be to do something manually. Naively, one could search Anaconda Cloud and manually download, however, it would probably be better to go through the solver to ensure package compatibility. All the info for operations to be run can be viewed by including the --json flag. This could be filtered to just the tarball URLs and then downloaded directly. Here's a script along these lines (assuming Linux/Unix):
File: conda-download.sh
#!/bin/bash -l
# Dry-run (-d) the solve for a throwaway environment, emitting JSON;
# extract the package tarball URLs and download them with wget.
conda create -dn null --json "$@" |\
grep '"url"' | grep -oE 'https[^"]+' |\
xargs wget -c
which can be used as
./conda-download.sh -c conda-forge -c pytorch numpy pandas pytorch transformers
that is, it accepts all arguments conda create would, and will download all the tarballs locally.
Ignoring Cached Packages
If you already have some packages cached then the above will not redownload them. Instead, if you wish to download all tarballs needed for an environment, then you could use this alternate version which overrides the package cache using an empty temporary directory:
File: conda-download-all.sh
#!/bin/bash -l
# Point the package cache at an empty temporary directory so that
# every required tarball is downloaded, even if already cached.
tmp_dir=$(mktemp -d)
CONDA_PKGS_DIRS=$tmp_dir conda create -dn null --json "$@" |\
grep '"url"' | grep -oE 'https[^"]+' |\
xargs wget -c
rm -r "$tmp_dir"
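Once the tarballs are downloaded, one way to consume them offline later is to arrange them as a local channel (a sketch, assuming a linux-64 platform and that conda-build is installed, which provides conda index; the directory names are only examples):
mkdir -p channel/linux-64
mv *.tar.bz2 channel/linux-64/
conda index channel
conda create -n myenv --offline -c file://$PWD/channel numpy pandas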
Do you really want to use conda-pack? That lets you archive a conda environment for reproduction without using the internet or re-solving dependencies. To just prevent re-solving you can also use conda env export --explicit, but that still ties you to the source (internet or local disk repository).
If you have a static (read-only) environment and want to really reduce Docker image size, you can volume mount the environment at runtime. You would need to match the file paths (i.e., /opt/anaconda => /opt/anaconda).
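If conda-pack does fit your use case, the flow is roughly the following (a sketch based on the conda-pack docs; the environment name and paths are examples):
conda pack -n myenv -o myenv.tar.gz
# on the target machine:
mkdir -p /opt/myenv
tar -xzf myenv.tar.gz -C /opt/myenv
source /opt/myenv/bin/activate
conda-unpack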

Gcloud components not being installed on local machine

I'm trying to use gcloud components install to install anthoscli and kpt on my local machine, but even though they are installed, every time I try to run them as commands (e.g. anthoscli apply) my zsh shell says there's no such command (even though kubectl works fine).
I tried to find where the component binaries are installed so I could point to them in my .zshrc file, but I couldn't find anything online that gives their directory. The components work as normal in my Google Cloud Shell but not locally; any ideas?
You just need to find the command and include its directory in your $PATH variable.
To find the command anthoscli you should use the command:
find / -name "*anthoscli*" -type f
With the result, you can add the following line in your ~/.zshrc:
PATH="$PATH:<directory_found_in_previous_command>"
Then simply "reload" your zsh configuration with:
source ~/.zshrc
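Alternatively, rather than searching the whole filesystem, gcloud can report its own install location, and components installed via gcloud components install land in the SDK's bin directory. Something like this should work (assuming a standard SDK install):
export PATH="$PATH:$(gcloud info --format='value(installation.sdk_root)')/bin"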

Why do I need to create the symlinks, and what does folder/in/path correspond to, when installing AWS CLI 2 on macOS for the current user?

I am trying to install AWS CLI 2 for the current user on macOS, as per the documentation:
https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html#cliv2-mac-install-cmd-current-user
The AWS CLI got installed correctly, but I am not able to understand the fourth point: why do I need to create the symlinks, and what does folder/in/path correspond to?
4. Finally, you must create a symlink file in your $PATH that points to the actual aws and aws_completer programs. Because standard user permissions typically don't allow writing to folders in the path, the installer in this mode doesn't try to add the symlinks. You must manually create the symlinks after the installer finishes. If your $PATH includes a folder you can write to, you can run the following command without sudo if you specify that folder as the target's path. If you don't have a writable folder in your $PATH, then you must use sudo in the commands to get permissions to write to the specified target folder.
$ sudo ln -s /folder/installed/aws-cli/aws /folder/in/path/aws
$ sudo ln -s /folder/installed/aws-cli/aws_completer /folder/in/path/aws_completer
There are two ways to configure the path of the aws program, which lives under the aws-cli folder.
First way: add the path of the aws-cli folder to your PATH variable with the following command:
export PATH=$PATH:$HOME/aws-cli  # assuming aws-cli is installed at $HOME
This is sufficient to start using the aws command.
Second way: your PATH variable already contains /usr/local/bin, and that folder holds links to executable programs. Creating a symlink to aws-cli/aws inside it is another way for your system to find the AWS CLI. It is more robust, since there is no direct dependency on the PATH variable, and it is what the AWS documentation is referring to. So in my case the commands would look like:
$ sudo ln -s /Users/akshayjain/aws-cli/aws /usr/local/bin/aws
$ sudo ln -s /Users/akshayjain/aws-cli/aws_completer /usr/local/bin/aws_completer
With either way, you can confirm your installation with the following command: aws --version
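To double-check which binary the shell now resolves, you can run (output will vary by machine):
$ which aws
$ aws --version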

How does my system know to look in a deleted folder for a binary?

If I try to run virtualenv, I get this message:
$ virtualenv
-bash: /Users/me/Library/Python/3.6/bin/virtualenv: No such file or directory
It's not surprising that this happens, because I removed these directories at an earlier point when trying to clean different Python versions off my computer. However, how does my system know to look in that directory for virtualenv? I've looked in my bash profile, and there is no mention of virtualenv there.
When you type something, your command interpreter has to search for the command. Of course it cannot try every possible directory on your system, so it gives the user a way to control the search. This is the purpose of the PATH environment variable:
$ echo $PATH
will show you the actual value, which looks like dir1:dir2:...:dirn, meaning that commands are searched for in dir1, then dir2, and so on. You have to remove the entry /Users/me/Library/Python/3.6/bin/ from it. The best way is to edit the .bashrc or .bash_profile file that sets this variable permanently, then reconnect.
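For example, if your ~/.bash_profile contains a line like this (hypothetical), delete the offending directory from it and open a new terminal:
export PATH="/Users/me/Library/Python/3.6/bin:$PATH"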
