In my Dockerfile, I create a Conda environment and install all the packages I need. At the end of the Dockerfile, I would like to start a service when the container is created.
The original Dockerfile does not use a Conda environment; the commands look like:
EXPOSE 8868
CMD ["/bin/bash","-c","hub install deploy/hubserving/ocr_system/ && hub serving start -m ocr_system"]
I would like to modify the commands like so:
activate myenv
hub install and hub serving start
How do I activate the Conda environment in the container?
If you know where your conda is located (let's say /opt/miniconda3/condabin/conda), then you could directly use conda run. Something like:
CMD ["/opt/miniconda3/condabin/conda", "run", "--no-capture-output",
     "hub", "install", "deploy/hubserving/ocr_system/", "&&",
     "hub", "serving", "start", "-m=ocr_system"]
The argument delimiting might need some adjusting, but that's the spirit of it. If this is a non-base env, then you may also need a --name|-n argument to the conda run.
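For example, a full sketch of the Dockerfile tail, assuming the env is named myenv and conda lives at /opt/miniconda3. Docker's exec form requires double-quoted JSON, and && is only understood by a shell, so the sketch wraps the two hub commands in /bin/bash -c:
EXPOSE 8868
# conda run executes the given command inside myenv; bash -c keeps the && working
CMD ["/opt/miniconda3/condabin/conda", "run", "--no-capture-output", "-n", "myenv", "/bin/bash", "-c", "hub install deploy/hubserving/ocr_system/ && hub serving start -m ocr_system"]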
So, I guess this is done in a really simple way, but I do not have experience with it, so I do not understand what the issue is. Let's say I have a conda environment example_env and a Singularity image example.simg.
I can run the image:
singularity run --nv /mnt/appl/singularity_images/pytorch-19.03-py3.simg
Once I am in the Singularity container, I can write conda activate example_env, and it works for me without any problem.
Now, I want to switch from the interactive session to a script. So, instead of using singularity run and entering the interactive shell, I have tried singularity exec: singularity exec example.simg bash scripts/train.sh, where train.sh contains only one command (just for now, of course): conda activate example_env.
However, now it does not work, and gives me the following error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. To initialize your shell, run $ conda init <SHELL_NAME>
If I try to follow the error message and add the conda init command, it does not help.
What is wrong, and how can I run conda activate with singularity exec?
I don't think you ran the echo commands as shown in that Stack Overflow answer.
I think you should check out the following.
Activate conda environment on execution of Singularity container in Nextflow
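Another common workaround is to source conda's shell setup script at the top of train.sh itself, since singularity exec starts a fresh non-interactive shell in which conda init has never taken effect. A minimal sketch, assuming conda is installed at /opt/conda inside the image (adjust the path to your image):
#!/bin/bash
# make `conda activate` available in this non-interactive shell
source /opt/conda/etc/profile.d/conda.sh
conda activate example_env
With that in place, singularity exec example.simg bash scripts/train.sh activates the environment before the rest of the script runs.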
OS: Windows 10
It's fine to create an env with either a name or a path, but it doesn't work with both a name and a path:
Command:
conda create --name myname --prefix D:\proj\myconda\myname
Error:
conda create: error: argument -p/--prefix: not allowed with argument -n/--name
So how to create an env both with a specific name and path?
The benefits of that would be:
It's more convenient to remember a short nickname for the env.
The environment can be moved to another drive, saving space on the default C: system drive in Windows.
Create a folder wherever you want to keep your environment files; inside the folder, run:
conda create --prefix=yourenvname
When you wish to use this env, move to the folder in which you ran the previous command and do:
source activate ./yourenvname
Or
You can run:
conda create --prefix=path/yourenvname
This will create an environment named "yourenvname" in the specified path.
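For instance, a hypothetical session (the folder and env name are placeholders):
cd ~/projects
conda create --prefix=envs/myproject
source activate ./envs/myproject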
Create conda environment with prefix:
conda create --prefix=path/to/envname # C:\path\to\envname for Windows users
Make sure that the parent directory of the prefix is added to the envs_dirs configuration (envs_dirs entries are directories that contain environments, so it is the parent that goes in the list):
conda config --append envs_dirs path/to # again, change if using Windows
Tip: this also means that if you are keeping multiple environments under one directory (e.g. /path/to holds multiple conda environments), this single entry is enough, and conda will pick up on all environments stored in /path/to.
After this step, conda should recognize envname as one of your named environments. You can verify with:
conda info --envs
which should return something like:
# conda environments:
#
envname /path/to/envname
Finally, to activate the environment, all you should need to do is
conda activate envname # replace envname with your environment name from prefix
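Applied to the Windows paths from the question, the whole flow would be a sketch like:
conda create --prefix D:\proj\myconda\myname
conda config --append envs_dirs D:\proj\myconda
conda activate myname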
Is it possible to start an IPython shell (in the terminal) within a conda env or virtualenv?
The IPython shell should belong to the respective environment.
I know a way to start a Jupyter notebook within the env, by creating a kernelspec for the virtual env and then choosing that env's kernel within the Jupyter notebook.
here is the link : http://help.pythonanywhere.com/pages/IPythonNotebookVirtualenvs
But this only sets up the Jupyter notebook for the current environment. Is there a way to do the same for the IPython shell?
The answer given by Grisha Levit almost solved the problem, so I am writing up the complete details of how to set up an IPython console within a specific environment.
1.) Activate the virtual env:
source activate <environment-name>
2.) From within the virtual env:
jupyter kernelspec install-self --user
3.) This will create a kernelspec for your virtual env and tell you where it is:
Installed kernelspec pythonX in home/username/.local/share/jupyter/kernels/pythonX
Here, pythonX is the version of Python in the virtualenv.
4.) Copy the new kernelspec somewhere useful. Choose a kernel_name for your new kernel that is not python2 or python3 or one you've used before and then:
mkdir -p ~/.ipython/kernels
mv ~/.local/share/jupyter/kernels/pythonX ~/.ipython/kernels/<kernel_name>
5.) If you want to change the name of the kernel that IPython shows you, edit ~/.ipython/kernels/<kernel_name>/kernel.json and change the JSON key called display_name to a name that you like (see the sample kernel.json after these steps).
6.) Run the jupyter/ipython console within the virtualenv:
jupyter console --kernel <kernel-name>
7.) This will start the jupyter console/shell for the current virtualenv. You can also see the kernel in the IPython notebook menu (Kernel -> Change kernel) and switch to it (you may need to refresh the page before it appears in the list). IPython will remember which kernel to use for that notebook from then on.
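For reference, the kernel.json from step 5 might look like the following after editing (the interpreter path and display name here are purely illustrative):
{
  "argv": ["/home/username/venvs/myenv/bin/python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
  "display_name": "Python (myenv)",
  "language": "python"
}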
I know a way to start a Jupyter notebook within the env, by creating a kernelspec for the virtual env and then choosing that env's kernel within the Jupyter notebook.
You just need to do the same thing, but using console instead of notebook.
For example:
ipython console --kernel python2
If I create a new Conda environment, for example conda create --name Test, how can I launch Jupyter or QtConsole using the Test environment?
Do I need to manually launch it from the command line rather than using the shortcut?
Do I need to manually launch it from the command line rather than using the shortcut
That would be the easiest way to do it.
One alternative is creating a custom .bat file which activates the Test environment and then launches the QtConsole application.
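For example, a minimal sketch of such a .bat (the Anaconda install path is an assumption; adjust it to your machine):
@echo off
rem activate the Test environment, then launch QtConsole
call C:\ProgramData\Anaconda3\Scripts\activate.bat Test
jupyter qtconsole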
Yesterday I asked about how to make a Docker image with a Dockerfile.
This time I want to add a follow-up question.
Say I want a Docker image based on Ubuntu 14.04, with postgresql-9.3.10 and Java JDK 6 installed, a file copied to a specific location, and a user created in the image.
Can I combine several Dockerfiles as needed into one image (the Dockerfiles for postgresql, Java, copying the file, and creating the user combined into one Dockerfile)?
For example, I made one Dockerfile, "ubuntu", which contains the following commands, starting from the top:
# Create dockerfile
# get OS ubuntu to images
FROM ubuntu: 14:04
# !! then add the commands from the Dockerfiles at the following links, one block per Dockerfile
# command on dockerfile postgresql-9.3
https://github.com/docker-library/postgres/blob/ed23320582f4ec5b0e5e35c99d98966dacbc6ed8/9.3/Dockerfile
# command on dockerfile java
https://github.com/docker-library/java/blob/master/openjdk-6-jdk/Dockerfile
# create a user on images ubuntu
RUN adduser myuser
# copy file/directory on images ubuntu
COPY /home/myuser/test /home/userimagedockerubuntu/test
# ?
CMD ["ubuntu:14.04"]
Please help me
No, you cannot combine multiple Dockerfile.
The best practice is to:
start from an image that already includes what you need, like this postgresql image, which is already based on ubuntu.
That means that if your Dockerfile starts with:
FROM orchardup/postgresql
You would be building an image which already contains ubuntu and postgresql.
COPY or RUN what you need in your Dockerfile, for example for openjdk6:
RUN \
apt-get update && \
apt-get install -y openjdk-6-jdk && \
rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-6-openjdk-amd64
Finally, your default command should run the service you want:
# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
But since the Dockerfile of orchardup/postgresql already contains a CMD, you don't even have to specify one: you will inherit from the CMD defined in your base image.
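Putting those pieces together, a minimal sketch of the combined Dockerfile (it assumes the file test sits in your build context):
FROM orchardup/postgresql
# install Java
RUN apt-get update && \
    apt-get install -y openjdk-6-jdk && \
    rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-6-openjdk-amd64
# create a user and copy the file into its home directory
RUN adduser myuser
COPY test /home/myuser/test
# no CMD needed: it is inherited from the base image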
I think nesting multiple Dockerfiles is not possible due to the layer system. You may, however, outsource tasks into shell scripts and run those in your Dockerfile.
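For instance (setup.sh is a hypothetical script holding your install steps):
COPY setup.sh /tmp/setup.sh
RUN chmod +x /tmp/setup.sh && /tmp/setup.sh && rm /tmp/setup.sh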
In your Dockerfile please fix the base image:
FROM ubuntu:14.04
Further, your CMD is invalid. You may want to execute a bash shell with CMD ["bash"] so that you have something to work with.
I would suggest you start with the documentation on Dockerfiles, as you clearly missed it; it contains the answers to all of your questions, and even to questions you haven't thought to ask yet.