Running ddev with Colima on an external hard disk or anywhere besides the home directory

I'm trying to run ddev outside my home directory, with Colima as the Docker platform. I'm wondering how I can make it run fully from an external hard disk.
I added a Drupal 9 site on my external hard disk and ran ddev config.
I then started it with ddev start.
I tried installing the Drupal site using the following command but got an error:
$ ddev drush si --site-name="Drupal 9" --account-name="admin" --account-pass="admin"
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "/mnt/ddev_config/.global_commands/web/drush": stat /mnt/ddev_config/.global_commands/web/drush: no such file or directory: unknown
Failed to run drush si --site-name=Drupal 9 --account-name=admin --account-pass=admin: exit status 126
Is there a way to have ddev run fully from the external hard disk? Thanks!

Colima by default mounts only the home directory; you can make it mount other paths via the config file. You'll want to experiment with the mounts configuration in colima.yml. The default is:
# Colima default behaviour: $HOME and /tmp/colima are mounted as writable.
# Default: []
mounts: []
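For example, the config-file equivalent of the --mount flags shown below would make the external volume writable inside the VM. This is a sketch: the exact mounts schema varies between Colima versions, so check the template that colima start --edit opens on your install.
mounts:
  - /Volumes/ExtremeSSD/Projects:w
  - ~:w
Restart Colima (colima stop && colima start) for mount changes to take effect.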

Thanks! I was able to get it running using the following:
$ colima start --cpu 4 --memory 8 --dns 8.8.8.8 --dns 1.1.1.1 --disk 128 --mount "/Volumes/ExtremeSSD/Projects:w" --mount "~:w"
I also needed to run ddev composer install inside the container instead of on the host so that permissions are handled correctly; otherwise the binaries in vendor/bin end up non-executable. I usually run composer install on the host because it's faster.
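In other words, dependency installation goes through the web container rather than macOS directly, so files under vendor/ are created by the container user with the right permissions on the mounted volume:
$ ddev composer install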

Related

How to launch LXD container in WSL2?

I've set up WSL 2:
PS C:\Users\User> wsl --list --verbose
  NAME            STATE    VERSION
* Ubuntu-18.04    Running  2
However, when attempting to create a container from WSL, the following error is returned:
$ lxc launch ubuntu:18.04 test
Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: no such file or directory
How do I launch an LXD container from WSL 2? From my understanding it should be possible, given that WSL 2 runs a full Linux kernel.
Adding sudo before the command works. This needs to be done for every lxc/lxd command, as in the example below.
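For example:
$ sudo lxc launch ubuntu:18.04 test
Alternatively (a general LXD convention, not mentioned in this thread), adding your user to the lxd group removes the need for sudo; log out and back in afterwards:
$ sudo usermod -aG lxd $USER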

Cannot run JavaFX app on docker for more than a few minutes

I developed an application used as a communication service for a separate web app. I had zero issues "dockerizing" the web app, but the service is proving to be a nightmare. It is based on JavaFX, and there is a property the user can set in the config file so that the app does not initialize any windows, menus, containers, etc. This "headless" mode (not sure that is truly headless...) effectively turns the service app into a background service.

Let me also preface this by saying that the app works absolutely flawlessly on my Windows 10 machine, and that I have deployed it on several other machines (all non-dockerized) with no issues.
Here is the Dockerfile I came up with:
FROM openjdk:13.0.1-slim
RUN apt-get update && apt-get install -y libgtk-3-0 libglu1-mesa
VOLUME /tmp
ADD Some_Service-0.0.1-SNAPSHOT.jar Some_Service-0.0.1-SNAPSHOT.jar
ADD lib lib
ADD config.properties config.properties
ENTRYPOINT ["java", "--module-path", "lib/javafx-sdk-13", "-jar", "Some_Service-0.0.1-SNAPSHOT.jar"]
I then use this command to run the container:
docker run -t --name Some_Service -e DISPLAY=192.168.1.71:0.0 -e SERVICE_HOME= --link mySQLMD:mysql some_service
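The image itself would have been built beforehand with something like the following; the tag is inferred from the run command above rather than stated in the question:
$ docker build -t some_service .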
Assuming VcXsrv is running on my PC, the app starts correctly, although it does give these warnings when first starting:
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Prism-ES2 Error : GL_VERSION (major.minor) = 1.4
The issue is that it only works for about two minutes. Eventually the container hits this error and crashes:
Gdk-Message: 15:28:54.770: java: Fatal IO error 11 (Resource temporarily unavailable) on X server 192.168.1.71:0.0.
I understand the initial messages are due to the container having no NVIDIA driver, but the fallback to the software pipeline seems to work fine. Honestly, I have no idea what could cause the fatal IO error. I have tried different hosts running Docker and the same issue happens.
Any idea how to fix this? Even better, any idea how to make a JavaFX app truly headless so that none of this needs to be initialized? When running headless I still use Tasks and other JavaFX classes, so I can't simply drop the dependency...
Install xvfb in your container; this creates a virtual screen. Change your Dockerfile to:
FROM openjdk:13.0.1-slim
RUN apt-get update && apt-get install -y libgtk-3-0 libglu1-mesa xvfb
VOLUME /tmp
ADD Some_Service-0.0.1-SNAPSHOT.jar Some_Service-0.0.1-SNAPSHOT.jar
ADD lib lib
ADD config.properties config.properties
ENV DISPLAY=:99
ADD run.sh /run.sh
RUN chmod a+x /run.sh
CMD /run.sh
Add a new bash script to your project folder and name it run.sh:
#!/bin/bash
# remove a stale X lock file; needed when the docker container is restarted
rm -f /tmp/.X99-lock
Xvfb :99 -screen 0 640x480x8 -nolisten tcp &
java --module-path lib/javafx-sdk-13 -jar Some_Service-0.0.1-SNAPSHOT.jar
Don't forget to remove -e DISPLAY=192.168.1.71:0.0 from your docker run command.
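On the "truly headless" follow-up: JavaFX's Monocle glass platform can run with no X server at all, provided your JavaFX build includes the headless Monocle port (these system properties are standard Monocle switches, but whether the stock JavaFX 13 SDK ships that port is an assumption, so treat this as a sketch):
java --module-path lib/javafx-sdk-13 \
  -Dglass.platform=Monocle -Dmonocle.platform=Headless -Dprism.order=sw \
  -jar Some_Service-0.0.1-SNAPSHOT.jar
With that in place, neither Xvfb nor the DISPLAY variable is needed.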

Docker Toolbox: Could not locate Gemfile since Host directory is not mounted to Home directory

I have Docker Toolbox installed on my local machine and I'm trying to run Ruby commands to perform database migrations. I am using the following Docker commands within the Docker Toolbox Quickstart Terminal:
docker-compose run app /usr/local/bin/bundle exec rake db:migrate
docker-compose run app bundle exec rake db:create RAILS_ENV=production
docker-compose run app /usr/local/bin/bundle exec rake db:seed
However, after these commands are called, I get the following error:
Could not locate Gemfile or .bundle/ directory
Within Docker Toolbox, I am in my project's directory (C:\project) when I run these commands.
After doing some research, it appears that I need to mount my host directory somewhere inside my home directory.
So I tried the following docker --mount commands:
docker run --mount /var/www/docker_example/config/containers/app.sh:/usr/local/bin
docker run --mount /var/www/docker_example/config/containers/app.sh:/c/project
Both of these commands are giving me the following error:
invalid argument "/var/www/docker_example/config/containers/app.sh:/usr/local/bin" for --mount: invalid field '/var/www/docker_example/config/containers/app.sh:/usr/local/bin' must be a key=value pair
See 'docker run --help'
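For reference, --mount takes comma-separated key=value fields rather than the colon-separated -v syntax; a bind mount would look like the following (paths illustrative):
docker run --mount type=bind,source=/c/project,target=/var/www/docker_example ...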
Here is what I have in my docker-compose.yml file:
docker-compose.yml:
app:
  build: .
  command: /var/www/docker_example/config/containers/app.sh
  volumes:
    - C:\project:/var/www/docker_example
  expose:
    - "3000"
    - "9312"
  links:
    - db
  tty: true
Any help would be greatly appreciated!
The issue is because you are running on Windows. You need a shared folder between your Docker machine (the VM) and the host machine. On my Mac, for example, /Users is shared as /Users inside the VM, which means that when I do
docker run -v ~/test:/test ...
it will share /Users/tarun.lalwani/test inside the VM to /test inside the container. Since /Users inside the VM is shared with my host, this works perfectly. But if I do
docker run -v /test:/test ...
then even if /test exists on my Mac it won't be shared, because the host mount path is resolved on the Docker host server (the VM), not on the Mac.
So in your case you should check which folder is shared, and at what path. Assuming C:\ is shared at /c, you would use the command below to get your files inside the container:
docker run -v /c/Project:/var/www/html ..
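Applied to the compose file above, the volumes entry would become /c/project:/var/www/docker_example. You can confirm what the Toolbox VM actually sees before editing (docker-machine and the machine name default are assumptions about a stock Toolbox setup):
$ docker-machine ssh default ls /c/project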

How to run a library in docker - confusion

I am trying to use Microsoft's CNTK library on a Mac; for this purpose I am using Docker. I am not an expert in all this, though, so I am having a hard time figuring out how to make it work.
From my understanding, Docker provides a way to run an app in a virtualized environment, without having to virtualize the entire operating system. So you download (or create) images, and you run them in "containers".
Alright, so I have followed the required steps to make the CNTK library work on Docker, and if I list the images, I find:
$: docker images
REPOSITORY       TAG      IMAGE ID       CREATED        SIZE
microsoft/cntk   latest   c2c192036e19   7 days ago     5.92 GB
ubuntu           14.04    7c09e61e9035   5 weeks ago    188 MB
hello-world      latest   48b5124b2768   2 months ago   1.84 kB
At this point I want to run one of the tutorials in the cntk repository. I have downloaded the master branch of the repository to my desktop and try to run one of the examples in the "Tutorials" folder, but I get the following error:
terminal~ username$ docker run -w /Users/username/Desktop/CNTK-master/Tutorials microsoft/cntk configFile=lr_bs.cntk
container_linux.go:247: starting container process caused "exec: \"configFile=lr_bs.cntk\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"configFile=lr_bs.cntk\": executable file not found in $PATH".
ERRO[0001] error getting events from daemon: net/http: request canceled
terminal~ username$
Essentially I call docker run with the -w flag to tell it where the files are, but it does not work. I tried searching online, but it's not clear to me how to solve the issue. Should I create a new image? Should I call the docker run command with different parameters?
The -w flag sets the working directory, which is just the default directory inside the container. Your directory is on the host, so it won't work here. Instead you need to use volumes to mount your host directory into the container. The final paragraph in the document you link has an example:
$ docker run --name cntk_container1 -ti -v /project1/data:/data -v /project1/config:/config cntk bash
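Applied to the tutorial folder from the question, that pattern would look something like the following. The container-side path /tutorials is an illustrative choice, and whether the cntk binary is on the image's PATH is an assumption; the documented route is to start bash as above and run it from inside:
$ docker run -ti -v /Users/username/Desktop/CNTK-master/Tutorials:/tutorials -w /tutorials microsoft/cntk bash
# then, inside the container:
$ cntk configFile=lr_bs.cntk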

Can't mount Windows host directory to Docker container

I'm on Windows 10 Pro with Docker version 1.12.0-rc3-beta18 (build 5226). I would like to use Docker for PHP development on my Windows machine. I tried all possible (I hope) variations of mounting a host directory into a Docker container:
//c/Users/...
/c/Users/...
//C/Users/...
/c/Users/...
c:/Users/...
c:\Users...
"c:\Users..."
None of these variants launches the container. Yes, docker run creates the container and I can see it with docker ps --all, but I can't start it. E.g. I tried a simple documentation example:
docker run -d -P -v "C:\temp":/opt/webapp training/webapp python app.py
and
docker logs e030ba0f7807
replies with
python: can't open file 'app.py': [Errno 2] No such file or directory
What happened?
If you are using Docker with docker-machine, you need to first register c:\temp as a shared folder in VirtualBox.
See "docker with shared folder d drive"
From within a docker-machine ssh session:
sudo touch /mnt/sda1/var/lib/boot2docker/bootlocal.sh
Add to that file:
mkdir -p /mnt/temp
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` temp /mnt/temp
That path would then be accessible through /mnt/temp for instance.
The same applies for C:\Users, which is already shared as c/Users. It is accessible at /c/Users.
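The registration step itself happens on the Windows host, with the VM powered off; on a stock Toolbox install the VM is named default (the names here are assumptions):
VBoxManage sharedfolder add default --name temp --hostpath "C:\temp" --automount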
With Hyper-V, see "Running Docker on Hyper-V" from Henning M Stephansen:
Hyper-V is a more isolated and restrictive environment than VMWare or VirtualBox is, so there’s no concept of shared folders.
However we can mount and access Windows shares from our Docker VM.
The first thing you need to do is to share a folder. This folder can be restricted to just your user.
If the VM has access to the network through an External Virtual Switch or an Internal Virtual Switch you should be able to mount your folder from the docker VM.
To be able to mount a Windows share from Boot2Docker/Tiny Core Linux we need to install an additional module (this might be included in your image):
wget http://distro.ibiblio.org/tinycorelinux/5.x/x86/tcz/cifs-utils.tcz
tce-load -i cifs-utils.tcz
Now we can mount the shared folder using the following command:
sudo mount -t cifs //HOST-IP-HERE/SharedFolderPath /path/where/we/want/it/mounted -o username=HOST_USERNAME_HERE
