So I have Docker set up on OS X using boot2docker. Everything works; when I run docker run -i -t base /bin/bash the prompt shows up... but it is EXTREMELY slow. By slow, I mean that if I type one character, it takes about 30 seconds to a minute for it to show up on the screen. I checked Activity Monitor to make sure my system wasn't low on memory, but that was not the case: it showed around 85% of memory idle while this process was running. Is anyone else on OS X experiencing issues like this? Any input would be appreciated.
I've experienced the very same problem and, as Julian already stated, it's a known issue. But there is one post in that issue thread that worked for me (well, at least an adapted version of it).
./boot2docker stop     # stop a currently running daemon instance
./boot2docker delete   # remove the VM
rm -rf boot2docker.iso # in my case I had a (very old) ISO image
At this stage any new attempt to re-initialize the boot2docker VM failed for me, so I was forced to reinstall boot2docker itself (and, as it turned out, I had a very old version). You can do this via Homebrew or with the new installer (the solution demonstrated in the Docker documentation doesn't work anymore). In any case, don't forget to set the DOCKER_HOST variable correctly as explained in the documentation:
export DOCKER_HOST=tcp://127.0.0.1:4243
After re-installing boot2docker the following commands should work again:
./boot2docker init # fetches a brand-new VM image and initializes it
./boot2docker up   # now we're back in business
That did it for me; the performance is now as expected:
docker run -i -t --rm dockerfile/ubuntu /bin/bash
gives me an (almost) instant bash prompt.
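To double-check that the fresh VM is healthy, a quick sanity check like the one below should answer without any noticeable delay (run from the directory containing the boot2docker binary, with DOCKER_HOST set as above):
./boot2docker status   # should report "running"
docker info            # should respond promptly and list the daemon details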
Related
Problem:
I started my system as usual, but Docker Desktop doesn't work, WSL doesn't respond to commands, and there is a process called "Vmmem" using 25% of my memory. I have tried a bunch of things but nothing seems to work.
System Attributes:
Windows 10 Pro (10.0.19045.2486)
Docker Desktop: 4.15
WSL: 1.0.3.0
More context:
Recently I was having trouble with my Docker setup. I have one particular container that was "crashing" Docker. It was not throwing any exception, but after some event (that I couldn't pin down) all the other containers were unreachable, and any attempt to stop/start another container would result in "Error: 500 failed to respond...". When this happens I usually just restart the system and everything works fine, but today that wasn't the case. I restarted and noticed that the "Vmmem" process was already running at 25% (it usually only reaches this point at the end of the day), Docker Desktop could not start the Docker backend, and when I tried running wsl -l -v I got no response. I can use some docker commands like docker -v, but docker compose up doesn't work at all.
What I've tried:
Restart the system again (nothing changed, still starting with 25% memory usage)
Deactivate Hyper-V (nothing happened)
Stop/start the Docker service using net start/stop <service> (it gave a response but didn't solve the problem; a sketch of these commands is shown after this list)
Uninstall Docker Desktop (it crashed before even starting the uninstall process)
Terminate WSL with wsl -t Ubuntu (got no response from wsl)
Overwrite the installation with Docker 4.16 (it got stuck on "Preparing for update... / Stopping VM and preparing for update")
Forcefully kill "Vmmem" (got an Access denied error)
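For reference, the service/WSL commands mentioned in that list look roughly like this; the service name com.docker.service is an assumption (it is what Docker Desktop's backend service is usually called), so adjust it if yours differs, and run everything from an elevated prompt:
wsl --shutdown                # ask WSL to stop all distros and its utility VM
net stop com.docker.service   # stop Docker Desktop's Windows service
net start com.docker.service  # start it again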
Edit 1:
I finally managed to install Docker Desktop 4.16, but the problem continues: the system starts with 25% Vmmem memory usage and Docker Desktop is not able to start its backend.
The "Vmmem" process represents the memory and CPU consumed by all of the virtual machines running on your Windows PC, so there is a possibility that VM processes are still running. I recommend you try launching these commands from the console:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and delete them.
If this doesn't work, I recommend you enter your BIOS settings and disable virtualization; that way those processes will stop. Then you can enable it again and try. I wish you luck and I hope this resolves the issue.
Steps that I did to be able to stop the "Vmmem" process and install Docker Desktop again (a rough sketch of the matching commands follows the list):
disable Hyper-V
disable virtualization (BIOS)
restart system
at this point the "Vmmem" problem was gone
uninstall docker desktop
rm all wsl instances
enable Hyper-V
enable hypervisorlaunchtype
restart system
enable virtualization (BIOS)
install wsl Ubuntu instance
install Docker Desktop
Maybe some steps listed here are redundant, but that is what I did. I hope it helps if other people are running into the same problem.
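As mentioned above, here is a rough sketch of the commands behind those steps; run them from an elevated PowerShell, and note that the distro name Ubuntu and the Hyper-V feature name are assumptions that may differ on your setup:
bcdedit /set hypervisorlaunchtype off                                      # turn the hypervisor launch type off before rebooting
dism.exe /Online /Disable-Feature /FeatureName:Microsoft-Hyper-V-All       # disable the Hyper-V feature
wsl --list --verbose                                                       # see which WSL instances exist
wsl --unregister Ubuntu                                                    # remove a WSL instance (repeat per distro)
dism.exe /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V-All /All   # re-enable Hyper-V afterwards
bcdedit /set hypervisorlaunchtype auto                                     # re-enable the hypervisor launch type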
I am trying to set up ROS and Gazebo in a VM running Ubuntu.
The goal is to simulate the TurtleBot with the open manipulator.
I installed everything without any issues.
However, I am not able to launch the TurtleBot environment in Gazebo (like here: http://emanual.robotis.com/docs/en/platform/turtlebot3/simulation/)
$roslaunch turtlebot3_fake turtlebot3_fake.launch
results in Gazebo staying forever in the "loading your world" state. After some time, it stops responding.
Launching the empty world, however, works.
I am using ROS 1 with Gazebo 7.0
My hardware setup:
MacBook Pro 13" 2019 with 16 GB RAM
Parallels VM: 3D virtualization ON, no performance limit, 4 CPU cores, 12 GB RAM enabled
Thank you so much for your help.
After every change you make, source your bash and make sure to run:
catkin_make
If you've done this already, check whether ROS is installed properly by running
roscore
in one terminal and leave it running.
After that, try to launch your TurtleBot from another terminal.
If it doesn't work even though you have installed all of the needed things, I think the problem is with your VM; I'd recommend running ROS on Ubuntu from a bootable USB stick.
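To make that concrete, the full sequence would look roughly like this; the workspace path ~/catkin_ws and the TURTLEBOT3_MODEL value are assumptions based on the standard TurtleBot3 tutorial setup:
cd ~/catkin_ws && catkin_make                      # rebuild the workspace after any change
source ~/catkin_ws/devel/setup.bash                # re-source the environment
export TURTLEBOT3_MODEL=burger                     # model name assumed; required by the TurtleBot3 launch files
roscore                                            # terminal 1: keep the ROS master running
roslaunch turtlebot3_fake turtlebot3_fake.launch   # terminal 2 (source the workspace there as well)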
cd ~/.gazebo/
mkdir -p models                                     # create the local model cache if it doesn't exist yet
cd models/
wget http://file.ncnynl.com/ros/gazebo_models.txt   # list of model archive URLs
wget -i gazebo_models.txt                           # download every archive in the list
ls model.tar.g* | xargs -n1 tar xzvf                # unpack each downloaded archive
Try this: Gazebo tries to download the model packages on its first launch, which is why it hangs. You need an internet connection for that, and pre-fetching the models as above may take a few minutes.
I'm running docker on Windows.
When I run a command like docker ps -a, or even just docker --version, there is a 10-second pause before it produces any output. This seems to happen only when the machine has been idle for a while (maybe 20 seconds or more?). Subsequent executions of docker --version are then quick and responsive, but if I let the machine sit for a bit, it goes back to exhibiting the long pause before output; if I run another docker command immediately after, it responds quickly again.
I looked through my PATH to make sure there was nothing weird, and I checked power/performance settings. I don't really know where else to look. It's only Docker that is behaving like this; other apps on the machine work fine, so I don't think it has anything to do with the disk or memory. I have 32 GB of RAM and less than 10 GB of it is in use.
This seems a bit mysterious, I know, I'm sorry. Any suggestions of things that I could try to get more information and ultimately figure this out?
It's annoying that half the time when I execute a docker command I have to needlessly wait 10 seconds before the command will start.
Docker version 18.06.0-ce, build 0ffa825
Windows 10 Enterprise
I am trying to run any GUI container I can on macOS. With every container I try (Firefox, Chrome, Tor, Spotify, etc.) I always get the error "Error: cannot open display". And it's not specific to docker run commands where I pass the environment flag with my $DISPLAY; when I try to run xhost + I get the same error.
I have a fresh XQuartz installation, and it is up and running. I have turned on "Allow connections from network clients" under Security. I've tried building my own images and pulling Jessie Frazelle's images. I don't suspect it is a Docker issue or the Dockerfiles; it is something on the host, my laptop, and I can't seem to figure out what.
MacOS Sierra 10.12.5
Docker 17.12.0 Stable
XQuartz 2.7.11 (xorg-server 1.18.4)
My local $DISPLAY is set to :0.0
So I finally got this to work, and it seems it was pretty simple. I am not certain how this actually fixes the issue, but the containers work now.
How I fixed it was by opening XQuartz and then opening the "Terminal" app from its "Applications" menu, then running the command export DISPLAY=192.168.1.X:0, followed by xhost +. It output something like "access control disabled, clients can connect from any host". After that I was able to run my docker run commands to launch the desired GUI containers.
I am still uncertain why this works there and not when running the commands from my laptop's Terminal app; it must be something I have set in my local env. Hopefully this helps someone else out who may be running into the same issues.
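For reference, a docker run invocation along those lines looks roughly like this; the IP address is a placeholder for your Mac's LAN address and jess/firefox is just one example GUI image:
export DISPLAY=192.168.1.X:0                            # your Mac's LAN IP, as above
/opt/X11/bin/xhost +                                    # allow network clients (note: this disables access control)
docker run --rm -e DISPLAY=192.168.1.X:0 jess/firefox   # any GUI image should work the same way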
Based on @Byron's answer, I found out that I could get it to work by running these two commands in the normal terminal:
export DISPLAY=:0
/opt/X11/bin/xhost +
I installed Docker Toolbox 1.11.1 on my Mac OS 10.11 and it does start Docker via Kitematic; if I click "Docker CLI" it will start a terminal where docker is properly running (docker version returns info and succeeds).
Still, if I try to do the same from a normal console it fails to detect Docker, and I want docker to be available in any console window, started at login time, automatically or on demand. Once started, I expect to be able to use it from any console.
I guess this part was missing from the tutorials and I would like to find a solution for it. How can I do this?
This is what docker-machine is for. Your Docker instance is running in a virtual machine, and you have to set a few environment variables to connect to it (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH and DOCKER_MACHINE_NAME). If you run eval $(docker-machine env [machine name]), this will set those variables automatically for you, assuming the VM is up. You could then put that line into your bash profile for automatic setup.
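For context, docker-machine env default prints something like the following (the IP, path and machine name are just example values); the eval above simply applies these exports to your current shell:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell:
# eval $(docker-machine env default)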
Check out the docs here https://docs.docker.com/machine/overview/
Also, there is a native version of Docker for OSX (currently in limited beta) which removes the need for docker machine, so hopefully in the near future none of this will be necessary.
I was able to come up with some code that works across all tested platforms, including OS X:
docker version > /dev/null || {
    # in case docker-machine is in use and the environment isn't set yet (OS X)
    eval "$(docker-machine env default)"
}
# keep this here; it will return a non-zero exit code if docker is not usable
docker version