microk8s ctr image ls does not show my images - microk8s

My microk8s runs in my Windows WSL 2 instance. I need to use my WSL IP address to access the container registry, e.g. 172.21.115.208:32000.
After I pushed an image to that registry, I could not find it with microk8s ctr images ls, but I can find it through the REST API, e.g. http://172.21.115.208:32000/v2/_catalog.
I think the ctr images ls command is looking at a different registry on my system, but I should have only one container registry on my system.
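For reference, a minimal sketch of the commands being compared, assuming an image tag like myapp:v1 (the tag is an assumption; the IP is the WSL 2 address from the question). Note that ctr lists containerd's local image store rather than the registry's contents:
docker push 172.21.115.208:32000/myapp:v1        # push to the registry over the WSL address
microk8s ctr images ls                           # list containerd's local image store
curl http://172.21.115.208:32000/v2/_catalog     # query the registry catalog directly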

Related

How to use vscode remote containers with docker machine (and docker toolbox)?

I use Windows 7 and can't install Docker for Windows, so I use Docker Toolbox.
Docker Toolbox is not supported by Microsoft Visual Studio Code for Remote Container Development.
But I need to use this functionality with Docker Toolbox.
There is a still-unresolved issue on GitHub: https://github.com/microsoft/vscode-remote-release/issues/95
Docker Toolbox was a product based on docker-machine and VirtualBox that uses a local VM. That VM shares your whole user profile by default, so you can share any folder in your profile with a container in the VM using a path like /c/Users/<profile_name>/folder/a/b.
Warning: be careful about sharing your entire user profile with an image you don't trust.
Steps to enable VS Code remote containers when using docker-machine
You need to start your docker-machine (tested with VS Code 1.40.2+; see the sketch after these notes).
In your .devcontainer.json you can override the workspace mount (see the devcontainer.json reference for details):
"workspaceMount":
"src=//c/Users/yourusername/git/reponame,dst=/workspaces/reponame,type=bind,consistency=delegated"
VS Code looks for the default workspace inside the container in /workspaces, under the same name as the original, and opens it automatically, but you can override this in .devcontainer.json if you need to, or open it manually.
Important: your repository should always be inside your Windows user profile (%userprofile%). This is a requirement of Docker Toolbox's default shares.
Note: the problem with Docker Toolbox is that Visual Studio Code doesn't support the docker-machine path style for mounting volumes by default, but this workaround can help you.
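A hedged sketch of the start-up step, assuming the Docker Toolbox machine has the default name "default":
docker-machine start default            # boot the Docker Toolbox VM
eval "$(docker-machine env default)"    # point the docker CLI at that VM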
Updated 2020/05/13
Tested with 1.44: it still works, but you can't use an environment variable to configure mount paths yet, so each developer has to customize the local path of the repo after cloning the repository.
Updated 2020/10/29
Microsoft has added information about how to use VS Code remote containers with Docker Machine to their documentation. The docs leave you to guess what kind of path you should use, because they don't assume the docker-machine environment is a local VM; that is where you may find this answer useful.

Docker: Mount volume on Windows host - Windows Container

I seem to be running the least-supported combination of Docker: I'm running on a Windows host machine with a Windows container.
I want to mount C:\temp -> C:\temp
I have tried
docker run ... -v C:\temp:C:Temp
docker run ... -v C:/temp:C/:Temp
docker run ... -v //C:\temp://C/:Temp
I'm supposed to go to Docker settings > "Shared Volumes", but that option is not available for Windows containers.
When I try to read from that directory in my application, the directory does not exist.
I too was not able to mount the C drive inside a Windows container to my host OS C drive. The solutions given for mounting drives seem to relate to Linux containers running on Windows, which is not what is expected here.
There is a workaround I used to access the host C drive from inside the container, copy files to a folder outside the container, and watch them in Explorer (if that is the behavior that serves your purpose). Below are the steps I performed.
1) docker exec -it <container name/id> powershell (opens PowerShell inside the container)
2) net use X: \\SERVER\Share (X is the name of the network drive; you can choose any letter. SERVER is the IP address of your host machine, and Share is the folder on the host machine, in your case C:\temp.) Example: net use X: \\XXX.XXX.XXX.XXX\c$\temp. You will be prompted to enter your username and password.
3) Once you have this set up, you can browse the network drive (X:). Anything in your host temp folder will be accessible inside your container. You can copy files and folders to and from this network drive, and they will be visible in your C:\temp folder on your host machine.
Hope this helps!
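Put together, a hedged sketch of the steps above (the container name mycontainer, the host IP, and the user name are placeholders):
docker exec -it mycontainer powershell                # step 1: open PowerShell inside the container
net use X: \\XXX.XXX.XXX.XXX\c$\temp /user:HOSTUSER   # step 2: map the host's C:\temp; you will be prompted for the password
dir X:\                                               # step 3: browse the mapped host folder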

How to include IP of Docker host in docker-compose.yml?

I want to include support for Xdebug in a PHP Docker container; as part of this, I need to specify the IP of the Windows machine running the Docker container via XDEBUG_CONFIG=remote_host=${HOST_IP}. Currently HOST_IP is specified manually in a .env file, but I'd like to automate this to reduce the setup steps for other users.
My problem is that I can't seem to find a way to easily determine the IP of the host machine. It also needs to work on both Windows and Linux Docker hosts, as not all users use Windows as their desktop environment. I also can't use ${HOSTNAME}, as it fails to resolve in DNS.
Does anyone have any suggestions on how to achieve this?
Edit 2: updating this answer for newer versions of Docker. From 18.03 onward, Docker for Windows and the other Docker platforms have been updated to include a cross-platform hostname for the Docker host, host.docker.internal - which is bloody helpful.
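Applied to the Xdebug case from the question, a hedged docker-compose.yml sketch (the service name php and the image are assumptions):
services:
  php:
    image: your-php-image            # placeholder image name
    environment:
      - XDEBUG_CONFIG=remote_host=host.docker.internal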
You might try a formatted docker info:
https://docs.docker.com/engine/reference/commandline/info/#format-the-output
docker info --format '{{json .}}'
docker info --format '{{json .path.to.ip}}'
E.g. in a (single-host) Docker Swarm you can get the host IP with:
docker info --format '{{json .Swarm.NodeAddr}}'
Via command substitution, stored in a variable:
docker_host_ip=$(docker info --format '{{json .Swarm.NodeAddr}}')
I could not try it on Windows or on Docker without Swarm, but docker info should work across platforms.
Update (prompted by the comments):
Not really (syntactically) beautiful, but you can use --format '{{index path arrayIndex "Key"}}' with docker network inspect to access the first element of an array (index 0) and then the map inside this array via its key ("Gateway"):
docker network inspect docker_dockernet --format '{{index .IPAM.Config 0 "Gateway"}}'
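For example, a hedged sketch that writes the value into the .env file the question mentions (the network name is an assumption; substitute yours, e.g. docker_dockernet):
HOST_IP=$(docker network inspect bridge --format '{{index .IPAM.Config 0 "Gateway"}}')
echo "HOST_IP=${HOST_IP}" > .env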

Where exactly, are files in docker container stored on the host machine

I am using Docker on Windows. Using Kitematic, I have created an Ubuntu container. This Ubuntu image has PostgreSQL installed on it.
I'm wondering whether there is any way to access the PostgreSQL configuration files in the container from the host (the Windows machine)?
Where exactly does the container store its file system on the host machine?
I assume it would be part of an image file in VMDK format.
Please correct me if I'm wrong.
Wondering if there is any possibility to access the postgres configuration files available in the container from the host (windows machine)
That is not how Docker would allow you to modify a file in a container.
For that, you should mount a host (Windows) folder when starting your container (docker run -v).
See "Mount a host directory as a data volume"
docker run -d -P --name web -v /c/Users/<myACcount>/src/webapp:/opt/webapp training/webapp python app.py
Issue 247 mentions ~/Library/Application Support/Kitematic for App data, and ~/Kitematic "for easy access to volume data".
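Applied to the question's PostgreSQL container, a hedged sketch (the host folder, image name, and in-container config path are assumptions; the actual path depends on how PostgreSQL was installed in the image):
docker run -d --name pg -v /c/Users/<myAccount>/pgconf:/etc/postgresql <your-postgres-image>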

How to run multiple Docker Containers in different terminals using Docker compose or with Shell?

I have to pull a Docker image from Docker Hub and run multiple peers as containers.
Currently I open a terminal manually and execute my docker run command on the downloaded image, but I plan to automate this process: if I (or a user) want 2 peers to run, I should be able to pass IP address and port information to the docker run command and start these peers in different terminals without any manual steps.
After executing these commands, I should be able to store the IP addresses and port numbers in a JSON file for further transactions.
Could you please help me? Thanks!
I found a quick solution to the above problem. Below is the command I applied:
docker run -d <IMAGE NAME> /bin/bash
The above command runs the container as a background process. Also, I retrieve the network details by applying:
docker inspect <Container ID>
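Building on that, a hedged sketch that starts two peers and records their addresses as JSON, one object per line (the image name peer-image is a placeholder, and it assumes the image's default command starts the peer):
for i in 1 2; do
  cid=$(docker run -d peer-image)
  ip=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$cid")
  printf '{"peer": %s, "ip": "%s"}\n' "$i" "$ip"
done > peers.json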
