I have code that was developed on Google Colab and now I want to run it on a local machine or on a server. The problem is that my code has a lot of dependencies and it's getting difficult to prepare a virtual/conda environment. The code works perfectly on Colab. So is there any way to capture an image of that environment, so that it's easier for me to run it wherever I want?
You can run the command !pip freeze and you will get a list of all installed packages and their versions.
You can copy and paste this list into a requirements.txt on your local machine, or you can download the file with these steps:
Execute in one cell: !pip freeze >> requirements.txt
Go to the left panel
Right-click on the created file and select Download
The last step is to create the environment; you can use conda or virtualenv. Then install the packages with pip: pip install -r requirements.txt
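For example, on the local machine the whole sequence could look like this (a rough sketch; the environment name colab-env is just a placeholder, and Colab-only packages such as google-colab may need to be removed from requirements.txt by hand):
python3 -m venv colab-env            # or: conda create -n colab-env python=3.10
source colab-env/bin/activate        # on Windows: colab-env\Scripts\activate
pip install -r requirements.txt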
I have a notebook that works fine in Google Colab. I am not able to properly create an Anaconda environment with the packages due to dependency issues. Is there a way to install all the required packages from Colab into a local Windows Anaconda environment? Using pip freeze gives a list that is appropriate for Linux but not for Windows.
In Google Colab, run the following in a cell:
!pip freeze >> requirements.txt
All the packages that were installed will be listed in this file, which you can then download.
Later, on your local machine, run:
pip install -r requirements.txt
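If the pinned Linux versions are what breaks the install on Windows, one workaround (sketched here on the assumption that your code does not depend on exact versions) is to strip the version pins when exporting from Colab, so that pip on Windows resolves versions built for Windows; Colab-only packages may still need to be deleted from the file by hand:
!pip freeze | sed 's/==.*$//' > requirements.txt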
I tried to follow the documentation and got stuck at this point:
Open a terminal and follow the instructions to configure a new Python virtual environment and install the google-assistant-library.
The link at this point redirects to a general page (Introduction to the Google Assistant Library) rather than to the instructions.
I think it is missing an explanation of what it means to open the terminal and the exact steps to be followed.
Is the link really correct?
Maybe I need help using the console correctly, but I am not getting it from that poor documentation.
I can connect to the Raspberry Pi with a serial-to-USB cable and PuTTY. But I simply do not know what point 11 and onwards means...
Any idea?
Thank you
It looks like the links in the Assistant SDK docs were modified, but it should be pointing to this page:
sudo apt-get update
sudo apt-get install python3-dev python3-venv # Use python3.4-venv if the package cannot be found.
python3 -m venv env
env/bin/python -m pip install --upgrade pip setuptools
source env/bin/activate
python -m pip install --upgrade google-auth-oauthlib[tool]
google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype \
--save --headless --client-secrets /path/to/client_secret_client-id.json
This will save the credentials at ~/.config/google-oauthlib-tool/credentials.json, which you can then copy into your project in order to authenticate the Google Assistant.
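For example, to pull the file into your project directory (a sketch; /path/to/your-project is just a placeholder):
cp ~/.config/google-oauthlib-tool/credentials.json /path/to/your-project/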
As Nick and proppy noted, one step is to obtain an authorization code to be used in later steps. Unfortunately the documentation skips a few very important steps, which can lead to confusion. Sadly, Google did not simplify the process of integrating the Assistant into the same development environment; I hope they will fold this clumsy process into Android Studio as with other services.
If you are developing under Windows you need to:
use a Linux environment and follow the steps in the console of that Linux PC (not in the Android Things console of the Raspberry Pi!), or install Python on Windows. I used Raspbian on my RP3 to do the Linux version of the procedure...
install the Python environment first in the Linux PC console:
sudo apt-get update
sudo apt-get install python3-dev python3-venv
python3 -m venv env
env/bin/python -m pip install --upgrade pip setuptools
source env/bin/activate
in this Python environment, install google-auth-oauthlib, which will generate the credentials file:
python -m pip install --upgrade google-auth-oauthlib[tool]
change directory to the place where you saved the JSON file downloaded in the step before step 11 of the documentation, e.g.
cd /home/pi/Downloads/
run the Google auth tool with the path to your downloaded JSON file (including its long name; replace idxxx with your ID):
google-oauthlib-tool --client-secrets /home/pi/Downloads/client_secret_client-idxxx.json --scope https://www.googleapis.com/auth/assistant-sdk-prototype --save --headless
a link will be generated in the console. Open the link in a browser; you will be prompted to let the tool use your account, and then you will receive an authentication code. Enter this code at the prompt back in the console.
find the generated credentials file in the folder shown in the console and continue with the original documentation steps
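A quick way to verify that the credentials were written (assuming the default location used by google-oauthlib-tool with --save):
ls -l ~/.config/google-oauthlib-tool/credentials.json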
I would like to create a Conda environment from a .yaml file on an offline machine (i.e. no Internet access). On an online machine this works perfectly fine:
conda env create -f environment.yaml
However, it doesn't work on an offline machine as the packages are then not found. How do I do this?
If that's not possible, is there another easy way to get my complete Conda environment onto an offline machine (including both conda- and pip-installed packages)?
Going through the packages one by one and installing them from the .tar.bz2 files works, but it is quite cumbersome, so I would like to avoid that.
If you can use pip to install the packages, you should take a look at devpi, particularly its server. devpi can cache packages normally installed from PyPI, so they are only actually retrieved on the first install. You have to configure pip to retrieve the packages from the devpi server.
As you don't want to list all the packages and their dependencies by hand, you should, on a machine connected to the internet:
install the devpi server (I have that running in a Docker container)
run your installation
examine the devpi repository and gather all the .tar.bz2 and .whl files out of there (you might be able to tar the whole thing)
On the non-connected machine:
Install the devpi server and client
use the devpi client to upload all the packages you gathered (using devpi upload) to the devpi server
make sure you have pip configured to look at the devpi server
run pip, it will find all the packages on the local server.
devpi has a small learning curve, which is already worth traversing because of the speed-up and the ability to install private packages (i.e. ones not uploaded to PyPI) as a normal dependency, by just building the package and uploading it to your local devpi server.
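As a minimal sketch of the offline side, assuming the devpi server runs on its default port 3141 on localhost and pip should use the standard root/pypi index, pip can be pointed at it like this:
pip install --index-url http://localhost:3141/root/pypi/+simple/ -r requirements.txt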
I guess Anthon's solution above is pretty good, but just in case anybody is interested in an easy solution that worked for me:
I first created a .yaml file specifying the environment using conda env export > file.yaml. Following the instructions on http://support.esri.com/en/technical-article/000014951, I then:
downloaded all the necessary installation files for the conda-installed packages and created a channel from those files (I just adapted the code from the link above to work with my .yaml file instead of the conda list file they used)
downloaded the necessary files for the pip-installed packages by looping through the pip entries in the .yaml file and using pip download on each of them
created separate conda and pip requirement lists from the .yaml file
created the environment with conda create, using the offline flag, the conda requirements file and my custom channel
finally installed the pip requirements with pip install, passing the pip requirements file and the folder containing the pip installation files via the option --find-links
That worked for me. The only problem is that pip download can only fetch binaries (wheels) if you need to specify a different operating system than the one you are running on, and for some packages no binaries are available. That was okay for me for now, as the target machine has the same characteristics, but it might become a problem in the future, so I am planning to look into the solution suggested by Anthon.
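For reference, the pip half of that workflow can look roughly like this (a sketch; pip-requirements.txt and the ./pip-packages folder are hypothetical names):
pip download -r pip-requirements.txt -d ./pip-packages                      # on the online machine
pip install -r pip-requirements.txt --no-index --find-links ./pip-packages  # on the offline machine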
I am very new to python and plan to use psychopy quite a lot. I am on a work computer but have full admin rights.
Psychopy came with python version 2.7.11 and includes setuptools already.
I am trying to install the selenium module, but having trouble getting pip to work at all.
In cmd, it is recognising the 'python' command, so I know python is in my path.
I get the message "can't open file 'pip': [Errno2] No such file or directory" from:
python pip install selenium
I get " 'pip' is not recognised as an internal or external command" from:
pip install selenium
When I change directory to where pip is located, I get:
Fatal error in launcher: Unable to create process using '"'
Using pip2 makes no difference.
It seems a simple thing but where am I going wrong with this?!
I never really got to the bottom of this, but this is what I found out, and here are the commands that worked for me on Windows. Be aware that I am far from an expert!
To run Python scripts (*.py) from the command line (cmd), C:\PsychoPy2 and C:\PsychoPy2\DLLs need to be in the path. ('Path' lists directories that can be accessed globally, i.e. without changing the prompt to the relevant directory first.)
To check, open cmd and either type echo %PATH% or just type python. (If Python starts, the prompt will change to >>>. You can exit by typing quit().)
To add to the path, open the properties of Computer, then Advanced system settings, then Environment Variables (or use the command sketched below).
To check that pip.exe (a sort of installation wizard) is installed, either search for the file or look in C:\PsychoPy2\Scripts. This directory may also need to be in the path.
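As an alternative to the GUI, the directories can be appended from cmd with setx (a sketch; the C:\PsychoPy2 paths are assumptions based on a default install, setx truncates values longer than 1024 characters, and a new cmd window is needed before the change is visible):
setx PATH "%PATH%;C:\PsychoPy2;C:\PsychoPy2\DLLs;C:\PsychoPy2\Scripts"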
To reinstall the latest versions of pip and setuptools, I went to cmd and typed:
python -m pip install -U pip setuptools
If the same command did not work for other modules (which in my case was due to network access), then I downloaded the wheel file (*.whl) for that module (from its website) and ran the following:
python -m pip install c:/modulename.whl
These may not be the correct ways of doing things, but they worked for me when I couldn't get other ways to work!
I've just had the exact same issue with the pip install and a conflict with PsychoPy installations. I think it's because Python automatically wants to use the path that has been set by PsychoPy, so it can't get to the 'pip' folders, which for me remain in a temporary/hidden location. This wasn't intuitive for me; on any machine without PsychoPy, Python just 'works' when you download it.
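A quick way to see which python and pip Windows is actually resolving from the path (run in a fresh cmd window) is:
where python
where pip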
I use IPython Notebook and at times need to install new Python packages like plotly, scikit, etc. I have already tried using the most popular methods, pip and easy_install, to install the packages directly from cmd in Windows, but neither works. Here is the error that I get:
C:\Users\xxxx>pip install plotly
Fatal error in launcher: Unable to create process using '"'
And with easy_install, I get an error as well.
Is there a third way of installing packages?
Maybe by manually installing the package after downloading the .tar.gz file?
I found the answer. In case both pip and easy_install fail, there is a third way (as simple as pip and easy_install).
Step 1) Go to https://pypi.python.org/ to find the desired Python module/package and download the *.tar.gz file (example name: plotly-1.9.6.tar.gz) and save it anywhere.
Step 2) Unzip the file to get plotly-1.9.6. Inside this folder you will find setup.py. Open cmd, browse to the folder containing setup.py, and use this command:
python setup.py install
And you are good.
If you know of any fourth method, do share it here.
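One possible fourth method, sketched here on the assumption that the Python interpreter itself runs fine and only the pip.exe launcher is broken: invoke pip through the interpreter and point it straight at the downloaded archive, which skips the manual unzip step (the path below is just an example):
python -m pip install C:\Users\xxxx\Downloads\plotly-1.9.6.tar.gz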