I have to connect to a server where my user has access to a small partition at /home/users/user_name, where I have a limited disk quota, and a bigger partition at /big_partition/users/user.
After logging into that server I start out in /home/users/user_name. From there I do the following steps.
cd /big_partition/users/user
conda create --prefix=envs python=3.6
On the 4th line of the output it says Package plan for installation in environment /big_partition/users/user/envs:, which is OK.
I press y, and now I get the following message.
OSError: [Errno 122] Disk quota exceeded: '/home/users/user_name/.conda/envs/.pkgs/python-3.6.2-0/lib/python3.6/unittest/result.py'
Can anyone help me understand how I can move the .conda folder from /home/users/user_name to /big_partition/users/user at the moment when I create this environment?
Configure Environment and Package Default Locations
I'd guess that, despite your efforts to put your environments on the large partition, there is still a default user-level package cache and that is filling up the home partition. At minimum, set up a new package cache and a default environments directory on the large partition:
# create a new pkgs_dirs (wherever, doesn't have to be hidden)
mkdir -p /big_partition/users/user/.conda/pkgs
# add it to Conda as your default
conda config --add pkgs_dirs /big_partition/users/user/.conda/pkgs
# create a new envs_dirs (again wherever)
mkdir -p /big_partition/users/user/.conda/envs
# add it to Conda as your default
conda config --add envs_dirs /big_partition/users/user/.conda/envs
Now you don't have to fuss around with the --prefix flag any more: your named environments (conda create -n foo) will be created inside this directory by default, and you can activate them by name instead of by directory (conda activate foo).
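For example, a quick way to check that the new defaults were picked up and then use them (foo is just a placeholder name):
# the new locations should be listed
conda config --show pkgs_dirs envs_dirs
# create and activate an environment by name; it lands on the big partition
conda create -n foo python=3.6
conda activate foo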
Transferring Previous Environments and Package Cache
Unfortunately, there's no great way to move Conda environments across filesystems without destroying the hardlinks. Instead, you'll need to recreate your environments. Since you may or may not want to bother with this, I'm only going to outline it (with a rough command sketch after the steps); I can elaborate if needed.
Archive environments. Use conda env export -n foo > foo.yaml (One per environment.)
Move package cache. Copy contents of old package cache (/home/users/user_name/.conda/envs/.pkgs/) to new package cache.
Recreate environments. Use conda env create -n foo -f foo.yaml.
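Roughly, the commands for those steps might look like this (foo is a placeholder environment name; the paths are the ones from above):
# 1. archive each environment
conda env export -n foo > foo.yaml
# 2. copy the old package cache into the new one
cp -r /home/users/user_name/.conda/envs/.pkgs/* /big_partition/users/user/.conda/pkgs/
# 3. recreate the environment from the archive
conda env create -n foo -f foo.yaml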
Again, you could just skip this altogether. This is mainly if you want to be very thorough about transferring and not having to redownload stuff for environments you already created.
After this you can delete some of the stuff under the old ~/.conda/envs/pkgs folder.
I found the solution. All I need to do is export CONDA_ENVS_PATH with the path where I want the .conda directory to be.
export CONDA_ENVS_PATH=.
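If you want this to survive new logins, one option (my assumption, not part of the original answer) is to set it in your shell startup file and point it at the big partition; Conda should also honor CONDA_PKGS_DIRS for the package cache:
# in ~/.bashrc on the server
export CONDA_ENVS_PATH=/big_partition/users/user/.conda/envs
export CONDA_PKGS_DIRS=/big_partition/users/user/.conda/pkgs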
In the documentation for Elasticsearch 7 it says explicitly:
For the package distributions, the config directory location defaults to /etc/elasticsearch. The location of the config directory can also be changed via the ES_PATH_CONF environment variable, but note that setting this in your shell is not sufficient. Instead, this variable is sourced from /etc/default/elasticsearch (for the Debian package) and /etc/sysconfig/elasticsearch (for the RPM package). You will need to edit the ES_PATH_CONF=/etc/elasticsearch entry in one of these files accordingly to change the config directory location.
Is there a way to specify my own path to a different /etc/default/elasticsearch file for a package distribution installation? I already tried adding the following to my systemd service file, which points at the EnvironmentFile I want, but it still uses /etc/default/elasticsearch when the service comes up.
[Service]
...
EnvironmentFile=-/etc/default/elasticsearch-development
The answer to this (at least for version 7.7.0) is that you can't do it seamlessly with any option or environment variable, because none is provided. However, it is possible to edit the file /usr/share/elasticsearch/elasticsearch-env that comes with the package installation and replace line 81 with the following:
if [ ! -z "$ES_DEFAULT" ]; then
  source "$ES_DEFAULT"
else
  source /etc/default/elasticsearch
fi
Then it is possible to set ES_DEFAULT in the system service file to point to a different /etc/default file.
[Service]
...
Environment=ES_DEFAULT=/etc/default/elasticsearch-development
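After changing the unit file, systemd has to reload it before the new environment takes effect (standard systemd steps, nothing Elasticsearch-specific):
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch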
I'm looking for ways to optimize the build time of our singularity HPC containers. I know that I can save some time by building them layer by layer. But still, there is room for optimization.
What I'm interested in is using/caching whatever makes sense on the host system.
CCache for C++ build artifact caching
git repo cloning
APT package downloads
I did some experiments but haven't succeeded on any of these points.
What I found so far:
CCache
I install ccache in the container and instruct the build system to use it. I know that because I'm running singularity build with sudo, the cache would be under /root. But after running the build, /root/.ccache is empty. I verified the generated CMake build files, and they definitely use ccache.
I even created a test recipe whose %post section just runs
touch "$HOME/.ccache/test"
but the test file did not appear anywhere on the host system (not in /root and not in my user's home). Does the build step mount a container-backed directory at /root instead of the host's /root?
Is there something more needed to be done to utilize ccache?
Git
People suggest running e.g. git-cache-http-server (https://stackoverflow.com/a/43643622/1076564) and using git config --global url."http://gitcache:1234/".insteadOf https://.
Since Singularity can read parts of the host filesystem, I think there could even be a way to make this work without a proxy program. However, if the host git repos are not inside $HOME or /tmp, how can Singularity access them during build? singularity build has no --bind flag to specify additional mount directories. And using the %files section in the recipe sounds inefficient, since it would copy everything each time the build is run.
APT
People suggest using e.g. squid-deb-proxy (https://gist.github.com/dergachev/8441335). Again, since Singularity is able to read host filesystem files, I'd like to just utilize the host's /var/cache/apt. But /var is not mounted into the container by default. So the same question again: how do I mount /var/cache/apt during container build time? And is it a good idea overall? Wouldn't it damage the host's APT cache, given that both host and container are based on the same Ubuntu version and architecture?
Or does singularity do some clever APT caching itself? I've just noticed it downloaded 420 MB of packages in 25 seconds, which is possible on my connection, but not very probable given the standard speed of ubuntu mirrors.
Edit: I've created an issue on singularity repo: https://github.com/hpcng/singularity/issues/5352 .
As far as I know, there is no mechanism of caching the singularity build when building from a definition file. You can cache the download of the base image, but that's it.
There is a GitHub issue about this, where one of the main developers of Singularity gave the following reply:
You can build a Singularity container from an existing container on disk. So you could build your base container and save it and then modify the def file to build from the existing container to save time while you prototype.
But since Singularity does not create layers there is really no way to implement this as Docker does.
One point about your question:
I know that I can save some time by building them layer by layer
Singularity does not have a concept of layers, so this does not apply here. Docker uses layers, and those are cached.
The workflow I typically follow when building Singularity images is to first create a Docker image from a Dockerfile and then convert that to a Singularity image. The Docker build step has caching, so that might be useful to you.
# Build Docker image
docker build --tag my_image:latest .
# Convert to Singularity format
sudo singularity build my_image.sif docker-daemon://my_image:latest
This sounds like unnecessary optimization. As mentioned, you can build from a Docker image, which can take advantage of some layer caching. If you plan on a lot of iteration, you can either do that with a base Docker container or create the Singularity image as a sandbox and write it out to a read-only SIF once it is working as you like. If you are making frequent code changes, you can mount the source in when running the image until it is finalized.
Singularity does some caching on the host OS, by default to $HOME/.singularity/cache (generally in /root since most of the time it's a sudo singularity build ...). You can see more detail using singularity --verbose or singularity --debug. I believe this is mostly for caching images / layers from other formats, but I've not looked too in depth at it.
Building does not mount the host filesystem and cannot be made to do so, to the best of my knowledge. This is by design, for reproducibility. You could copy files (e.g., the apt cache) into the image in the %files block, but that seems very hackish; it's questionable whether it would be any faster, and it opens the possibility of some strange bugs.
The %post steps are built in isolation within the container and nothing is mounted in, so again it won't be able to take advantage of any caching on the host OS.
It turns out there is a way to utilize some caches on the host. As stated by one of the Singularity developers in the linked issue, the host's /tmp is mounted during the %post phase of the build. It is not possible to mount any other directory.
So utilizing the host's caches is all about making the data accessible from /tmp.
CCache
Before running the build command, mount the ccache directory into /tmp:
sudo mkdir /tmp/ccache
sudo mount --bind /root/.ccache /tmp/ccache
Then add the following line to your recipe's %post and you're done:
export CCACHE_DIR=/tmp/ccache
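As a quick sanity check (assuming ccache is also installed on the host), you can inspect the cache on the host after a build:
# the cache dir should now contain entries
sudo ls /root/.ccache
# and the statistics should show hits / cache size growing
sudo sh -c 'CCACHE_DIR=/root/.ccache ccache -s'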
I'm not sure how sharing the cache with your user and not root would work, but I assume the documentation on sharing caches could help (especially setting umask for ccache).
APT
On the host, bind the apt cache dir:
sudo mkdir /tmp/apt
sudo mount --bind /var/cache/apt /tmp/apt
In your %setup or %post, create container file /etc/apt/apt.conf.d/singularity-cache.conf with the following contents:
Dir{Cache /tmp/apt}
Dir::Cache /tmp/apt;
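One way to create that file is from %setup, where $SINGULARITY_ROOTFS points at the container root on the host (a sketch under the assumption that the bind mount above is already in place):
mkdir -p "${SINGULARITY_ROOTFS}/etc/apt/apt.conf.d"
cat > "${SINGULARITY_ROOTFS}/etc/apt/apt.conf.d/singularity-cache.conf" <<'EOF'
Dir{Cache /tmp/apt}
Dir::Cache /tmp/apt;
EOF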
Git
The git-cache-http-server should work seamlessly - host ports should be accessible during the build. I just did not use it in the end, as it doesn't support SSH auth. Another way would be to manually clone all repos to /tmp and then clone them in the build process with the --reference flag, which should speed up the clone.
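For the manual approach, a sketch could look like this (the repository URL and the /tmp/git-cache path are placeholders):
# on the host, before the build: keep a bare mirror somewhere under /tmp
git clone --mirror https://example.com/some/repo.git /tmp/git-cache/repo.git
# in %post: borrow objects from the mirror to speed up the clone
git clone --reference /tmp/git-cache/repo.git https://example.com/some/repo.git repo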
I have to change my Cygwin Local Package Directory, which earlier happened to be C:\Users\username\Downloads.
Folders like http%3a%2f%2fcygwin.mirror.constant.com%2f are all in place in my new directory for the Local Package Directory.
How can I do that? (I cannot find where Cygwin stores this setting.)
Running setup from the new location tries to download everything from the internet all over again instead of continuing to use the previously downloaded packages.
The information is in /etc/setup/setup.rc:
$ head setup.rc
last-cache
e:\downloads\cygwin_cache
last-mirror
http://mirrors.kernel.org/sourceware/cygwin/
net-method
Direct
last-action
Download,Install
mirrors-lst
....
Please note that setup just proposes the settings based on the last run; you can always change them by typing new values.
I'm a beginner with NativeScript. I have correctly set the ANDROID_HOME environment variable, which returns my SDK path when I echo $ANDROID_HOME, but despite this I still get: The ANDROID_HOME environment variable is not set or it points to a non-existent directory. You will not be able to perform any build-related operations for Android
But if I put my project in the same directory as the SDK directory, I get:
Cannot resolve the specified connected device by the provided index or identifier. To list currently connected devices and verify that the specified index or identifier exists, run 'tns device'
I also notice that after each computer restart the environment variables disappear and I must repeat the same process. I have edited the .profile, .bashrc and .zshrc files for the environment variables, with the same result.
Please tell me what's wrong... Thanks in advance.
My ~/.bashrc file:
export ANDROID_HOME=/home/user/Android/Sdk
export PATH=$PATH:/home/user/Android/Sdk/tools
export PATH=$PATH:/home/user/Android/Sdk/platform-tools
export LD_LIBRARY_PATH=/home/user/Android/Sdk/emulator/lib64
In /home/user/Android/Sdk there should be tools and platform-tools folders.
That's enough for me. (Linux Mint 18)
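As a quick sanity check (using the paths from the question's ~/.bashrc):
echo $ANDROID_HOME          # should print /home/user/Android/Sdk
ls /home/user/Android/Sdk   # should list tools/ and platform-tools/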
Maybe this information will be useful to someone:
Linux environment variables configuration files
.bashrc
This file holds variables for a particular user. It is loaded every time the user creates a terminal session, that is, opens a new terminal. All the environment variables created in this file take effect every time a new terminal session begins.
.bash_profile
These variables take effect every time the user connects remotely over SSH. If this file is missing the system will look for .bash_login or .profile.
/etc/environment
This file is for creating, editing and deleting any environment variables at the system level. The environment variables created in this file are available for the entire system, for each user and even for a remote connection.
/etc/bash.bashrc
System-wide bashrc. This file is executed for each user, each time they create a new terminal session. This only works for local sessions; when connected over the Internet, such variables will not be visible.
/etc/profile
System-wide profile file. All variables from this file are accessible to any user on the system, but only if they logged in remotely. They will not be available when creating a local terminal session, that is, if you just open the terminal.
All the Linux environment variables created with these files can be deleted only by removing them from there. After each change, you need to either log out and log in again, or execute this command:
$ source file_name
So, the environment variable can be of three types:
Local environment variables
These variables are defined only for the current session. They will be irretrievably erased after the session is completed, whether it is remote access or terminal emulator. They are not stored in any files, but are created and deleted using special commands.
Custom shell variables
These shell variables in Linux are defined for a specific user and are loaded each time that user logs in using a local terminal or connects remotely. Such variables are usually stored in configuration files: .bashrc, .bash_profile, .bash_login, .profile or other files located in the user's directory.
System environment variables
These variables are available throughout the system, for all users. They are loaded when the system starts from the system configuration files: /etc/environment, /etc/profile, /etc/profile.d/, /etc/bash.bashrc.
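Tying this back to the question, a user-level variable is what you want for ANDROID_HOME; a minimal sketch, using the path from the question:
# local: only lives in the current terminal session
export ANDROID_HOME=/home/user/Android/Sdk
# user-level: persists for every new terminal of this user
echo 'export ANDROID_HOME=/home/user/Android/Sdk' >> ~/.bashrc
source ~/.bashrc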
If you are using nvm to manage different nodejs version, then try disabling nvm and using only one global nodejs version.
Regarding the environment variables being volatile, make sure that you track down the proper profile file that is being parsed and place your changes there.
It would help if you can be more specific about your current platform. Then, people will be able to respond with more precision.
Hi, I solved my problem by adding the environment variables in the profile file:
export ANDROID_HOME=~/Android/Sdk
export PATH=$PATH:~/Android/Sdk/tools
export PATH=$PATH:~/Android/Sdk/platform-tools
Then I erased all the paths generated by command-line entries from my terminal in the .bashrc file (I think that was the source of the problem). Finally it works well. Thanks a lot to everybody for your help.
I have executed following commands (on Windows, using Git Bash) in the directory D:\vagrant\precise32\02-lamp\
$ vagrant box add precise32 http://files.vagrantup.com/precise32.box
$ vagrant init precise32
$ vagrant up
Note: I haven't changed the original Vagrantfile.
I thought the directory D:\vagrant\precise32\02-lamp\ would be the location of the VDI-like file, but it is not; the working directory only serves as the shared folder.
I found the location of the Vagrant box
C:\Users\USER\.vagrant.d\boxes\precise32\0\virtualbox
According to Where is Vagrant saving changes to the VM, I found the location of the virtual hard drive file in the VirtualBox GUI, which is:
C:\Users\USER\VirtualBox VMs\02-lamp_default_1458429875795_57100\
I would like to put this file not in the system drive C:\ but in the data drive which is D:\.
How to set such vagrant configuration?
For VirtualBox, you can change the location of what is known as the Default Machine Folder through the GUI's Preferences dialog box.
This guide, while a couple of years old, works fine; I followed it last week to move an existing Vagrant/VirtualBox drive to a new location.
EDIT
I have quoted the steps from the above link/guide, for posterity:
Move ~/.vagrant.d to the external drive. I renamed it vagrant_home so I'd be able to see it without ls -a.
Set VAGRANT_HOME to /path/to/drive/vagrant_home in ~/.bash_profile.
Open the VirtualBox app, open Preferences, and set its Default Machine Folder to /path/to/drive/VirtualBox VMs.
Close VirtualBox. Move your VirtualBox VMs folder to the drive. Reopen VirtualBox. You'll see your VMs are listed as "inaccessible". Remove them from the list.
For each VM in your VirtualBox VMs folder on the external drive, browse to its folder in Finder and double-click the .vbox file to restore it to the VirtualBox Manager. (Is there an easier method than this?)
Finally, move any existing Vagrant directories you've made with vagrant init (these are the directories with a Vagrantfile in each) to the external drive. Since these directories only store metadata you could leave them on your main drive, but it's nice to keep everything together so you could fairly easily plug the whole drive into another machine and start your VMs from there.
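Since the question is about Windows, the equivalent of the VAGRANT_HOME step above could be done like this (D:\vagrant_home is just a placeholder path on the data drive):
# from a cmd/PowerShell prompt: persists for your user account
setx VAGRANT_HOME "D:\vagrant_home"
# or, for the current Git Bash session only
export VAGRANT_HOME=/d/vagrant_home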
It is also possible to do this via CLI for when you ALWAYS want to change where Virtualbox creates the VMs during import (because Virtualbox usually wants to put them in a single place rather than tracking wherever they live on the disk the way VMware does it).
Note that changing this setting via the GUI or CLI does NOT move existing VMs; it simply sets a new path to be used by the next machine imported/created. If you have existing machines you want to move, you can shut down and close all instances of VirtualBox, then use mv /old/path /new/path from a cmd/shell window (or cut and paste the folder to the new location in the GUI), change machinefolder to that path, and reopen VirtualBox; it should detect all the existing VMs.
Using the CLI makes it much easier to script/automate if you have a large number of users needing to move the VMs path out of their home directory to avoid huge files getting automatically backed up. The "best" place for the VMs depends a little bit on your system, but /usr/local/ can be a good place to create a new folder on macOS or Linux.
# Look at the current path
vboxmanage list systemproperties | grep machine
# Output (commented for easier copying and pasting of commands)
# Default machine folder: /Users/<YourUser>/VirtualBox VMs
# Set it to a different folder in your home aka ~
# If you user has access to the path and can create files/folders, then
# the folder doesn't need to exist beforehand, Virtualbox will create it
vboxmanage setproperty machinefolder ~/VirtualMachines
# No output produced
vboxmanage list systemproperties | grep machine
# Output (commented for easier copying and pasting of commands)
# Default machine folder: /Users/<YourUser>/VirtualMachines
You can also set it to a folder outside of home, but this usually requires creation of the folder and the permissions to be fixed before Virtualbox can use it.
# [Optional] Only needed if moving out of the home directory to
# a place the user doesn't have permission to access by default
sudo mkdir -p /usr/local/VirtualMachines && \
sudo chown -R ${USER} /usr/local/VirtualMachines
# If you add : like this `${USER}:` to the above, instead of
# setting the group to admin or wheel it will use the user's default group
# which can be seen by running `id -g`
vboxmanage setproperty machinefolder /usr/local/VirtualMachines
# No output produced
vboxmanage list systemproperties | grep machine
# Output (commented for easier copying and pasting of commands)
# Default machine folder: /usr/local/VirtualMachines
If you change your mind you can easily set it back to the default, but you'll need to move your VMs back again yourself.
vboxmanage setproperty machinefolder default
vboxmanage list systemproperties | grep machine
# Output (commented for easier copying and pasting of commands)
# Default machine folder: /Users/<YourUser>/VirtualBox VMs
For each VM in your VirtualBox VMs folder on the external drive, browse to its folder in Finder and double-click the .vbox file to restore it to the VirtualBox Manager. (Is there an easier method than this?)
There is an easier way...
Go into the VirtualBox Manager GUI, click Machine > Add, and locate the .vbox file you'd like to add back.