My Kubernetes cluster on Docker for Desktop on Mac is non-responsive.
So I tried to reset Kubernetes as suggested in
delete kubernetes cluster on docker-for-desktop OSX
The results are:
All Kubernetes resources are deleted
Kubernetes restart hangs
GUI to disable Kubernetes is grayed out and non-responsive
I would like to avoid resetting Docker so I can keep my image repository.
How do I manually remove Kubernetes from the Docker VM?
You can try disabling Kubernetes in Docker's settings file. You can find the settings file at ~/Library/Group\ Containers/group.com.docker/settings.json. Set the kubernetesEnabled property to false:
"kubernetesEnabled" : false,
I ended up in a situation where k8s was partly deleted and Docker would not start. Restarting and/or changing this setting helped and did not delete any images. I was not able to reproduce the situation later.
Also make sure you are running the latest version of Docker.
How about this?
docker rm -f $(docker ps -aq)
This forcibly removes all containers (your images are kept).
I can't give you a technical answer that immediately fixes your problem, and this text is too long for a comment... but as someone who also had the same issue (couldn't disable k8s without a factory reset in Docker for Mac), my recommendation is:
Is it really worth it for you to keep the image repository? Consider: what's a container? A program. It's not a VM. Would you back up your ls, ssh, vim... binaries when you want to reinitialize your OS? No, right? This is the same: you should view a container like any other binary.
The odds are that if you mess around with manual actions, you will end up with a Docker daemon in an undesired state. So, IMO, just go ahead and purge Docker for Mac and start over; it's not really a big deal.
If you have tons of your own images, you can rebuild them right away. If you have tons of downloaded images, consider this a good opportunity to do some cleaning. Also, note that images are built in layers, so if your images are correctly built to leverage layering, the rebuild process will be quite fast.
To remove the Kubernetes cluster from Docker Desktop you need to run: rm -rf ~/.kube
I'm using Docker Desktop on an M1 Mac and I was playing around with allocating different amounts of resources, and I have somehow ended up breaking the application. Docker Desktop is constantly stopping/starting, and I cannot open Preferences; I only get the spinner and cannot proceed.
I have reinstalled the app multiple times, tried all the deletion scripts I could find online, and deleted all Docker-related files from the Library and ~/.docker, but upon reinstalling the app, my previous settings are still there (Starting Docker upon login is enabled, which is disabled by default). I have also restarted my computer multiple times and tried resetting to factory settings and uninstalling from the Troubleshoot menu, but to no avail.
What could I do? Thanks!
It is possible that something is set in your "defaults" that is upsetting Docker.
If I generate a list of all "defaults" domains on my Mac, and search for docker, I get the following:
defaults domains | tr ',' '\n' | grep -i docker
com.docker.docker
com.electron.dockerdesktop
So, if all else is lost, you might consider using the defaults command to delete those two domains and hope that Docker starts with sensible defaults, or then reinstall. I haven't tried this, as my Docker instance is working nicely.
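If you decide to try that, the commands would look roughly like this (untested on my side; quit Docker Desktop first and double-check the domain names against your own grep output):
# Delete Docker's stored preference domains so it starts with defaults.
defaults delete com.docker.docker
defaults delete com.electron.dockerdesktop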
I simply cannot find a solution for this "Unable to connect to the server EOF" error; hopefully we solve this and it helps somebody in the future searching for this issue. I tried to include all the information in the screenshot. Let me know if more information is required.
I have tried to add the information requested; let me know if there's anything else you need or want me to try.
Please try to get the logs of the kube-dns Docker container directly, as below:
Enable the 'Show system containers' option in Docker Desktop settings.
Check if the kube-dns container was ever run:
docker ps -a --filter name=k8s_kubedns_kube-dns --format "table {{.ID}}\t{{.Image}}"
You should see similar command output to this:
CONTAINER ID   IMAGE
9009731e183d   80cc5ea4b547
Get kube-dns container logs with:
docker logs {your_container_id} --tail 100
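If you don't want to copy the container ID by hand, the two steps can be combined into one line (assuming at least one matching container exists):
# Grab the most recent kube-dns container ID and tail its logs.
docker logs "$(docker ps -aq --filter name=k8s_kubedns_kube-dns | head -n 1)" --tail 100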
I had this error (with Docker Desktop on Windows 11, using WSL 2), and the solution for me was to use the Settings page (in DD), then choose "Kubernetes" on the left, then choose "Reset Kubernetes cluster". That warned that all stacks and k8s resources would be deleted, but as nothing else had worked for me (and I wasn't worried about losing much), I tried it, and it did solve the problem.
FWIW, k8s had been working fine for me for some time. It was just that one day I found I was getting this error. I searched and searched and found lots of folks asking about it but not getting answers, let alone this answer (even here). To be clear, I had tried restarting Docker Desktop, restarting the host machine, even downloading and installing an available DD update (I was only a bit behind), and nothing worked. But the reset did.
I realize some may fear losing something, but again it worked and was worth it for me. Hope that may help others.
I'm a junior web developer working in a small web agency. We work on Windows 10, with WampServer, mainly on PrestaShop, WordPress and Symfony websites. For every task I am given a "ticket" for which I must develop on a new branch (if needed). When the work is done I merge my branch into the develop branch, which is hosted on a preproduction server, and if it is considered OK, it is then merged into the master branch, which is the website in production hosted on another server.
I was given the task to do some research on Docker and to find how it could improve our workflow.
But from what I've understood so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
But given the fact that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
Also, could Docker be of use for sharing projects between teammates (we all work on Windows)? For example, if a developer is working on a website locally, can they create a container and its image that could be used instantly, as is and without configuration, by another developer to work on their part?
But from what I've understood so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
No, Docker containers are not only useful for testing.
When building a correct workflow with Docker you can achieve 100% parity between development, staging and production if they all use the same Docker images.
But given the fact that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
This preproduction server, aka what is normally called a staging server, should also use Docker to run the code.
Also, could Docker be of use for sharing projects between teammates (we all work on Windows)? For example, if a developer is working on a website locally, can they create a container and its image that could be used instantly, as is and without configuration, by another developer to work on their part?
Yes... You create base Docker images with only the stuff necessary to run in production, and from them you can build other Docker images with the necessary developer tools.
Production image company/app-name:
FROM php:alpine
# add your other production dependencies here
Assuming one builds that Docker image with the name company/app-name, the image for development is then built on top of it.
Development image company/app-name-dev:
FROM company/app-name
# add here only developer tools
Now the developer uses both images, company/app-name and company/app-name-dev, during development, and on the staging server only the company/app-name Docker image will be used to run the code.
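To make the relationship between the two images concrete, building them might look something like this (the Dockerfile file names and tags are just an illustration):
# Build the production image from the base Dockerfile.
docker build -t company/app-name -f Dockerfile .
# Build the development image on top of it from a separate Dockerfile.
docker build -t company/app-name-dev -f Dockerfile.dev .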
After some months of working in this flow you may even feel confident enough to start using company/app-name to deploy the app to production, and then you have 100% parity between development, staging and production.
Take a look at the Php Docker Stack for some inspiration; I built it with this goal in mind at my latest job, but I ended up leaving the company while we were in the process of adopting it in development...
But don't put all the services you need in one single Docker image, because that is bad practice in Docker; instead use one service per Docker image, like one service for PHP, another for the database, another for the Nginx server, etc. See here how several services can be composed together with Docker Compose.
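As a rough sketch of what that composition could look like for this stack (service names, images, ports and paths here are assumptions, not taken from the Php Docker Stack):
# Write an illustrative docker-compose.yml with one service per image, then start the stack.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: company/app-name-dev
    volumes:
      - ./:/var/www/html
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
docker compose up -d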
I need some straight answers about this, as the current Docker info and general web info mix up Hyper-V and VMware information.
I have installed Docker on my Windows 10 Pro machine. I do not have VMware/VirtualBox installed; I don't need it since I have Hyper-V, right? I can use Docker on a Linux Ubuntu box fairly well and I (think!) understand volumes. Sorry for the background...
I am developing a Node app and I simply want to have a volume within my Linux container mapped to a local directory on my Windows machine. This should be simple, but every time I run my (let's say Alpine Linux) container with '-v /c/Users:/somedata', the /somedata directory within the Linux container is empty.
I just don't get this functionality on Windows. If you have a decent link I would be very grateful, as I have been going over the Docker docs for two days and I feel I am getting nowhere!
If volumes are not supported between Windows and Linux because of the OS differences, would the answer be to use COPY within a Dockerfile and simply copy my dev files into the container being created?
MANY MANY THANKS IN ADVANCE!
I have been able to get a mapping to work, but I don't really know what the rules are yet.
(I am using Docker for windows 1.12.0-beta21 (build: 5971) )
You have to share the drive(s) in your docker settings (this may require logging in)
The one or two times I've gotten it to work, I used
-v //d/vms/mysql:/var/lib/mysql
(where I have a folder D:\vms\mysql)
(note the "//d" to indicate the drive letter)
I am trying to reproduce this with a different setup, though, and I am not having any luck. Hopefully the next release will make this even easier for us!
OK. I am playing around with different tools to prepare a dev environment. Docker is a nice option. I created the whole dev environment in Docker and can build a project in it.
The source code for this project lives outside of the Docker container (on the host). This way you can use your IDE to edit it and use Docker just to build it.
However, there is one problem
a) Docker on OS X uses VM (VirtualBox VM)
b) File sharing is reasonably slow (way slower than file IO on host)
c) The project has something like a gazillion files (which exacerbates problems #a and #b).
If I move the source code into Docker, I will have the same problem in the IDE (it will have to access shared files and it will be slow).
I heard about some workaround to make it fast. However, I can't seem to find any information on this subject.
Update 1
I used the Docker file sharing feature (meaning I ran):
docker run -P -i -v <VMDIR>:<DOCKERDIR> -t <imageName> /bin/bash
However, sharing between VM and Docker isn't a problem. It's fast.
The bottleneck is sharing between the host and the VM.
The workaround I use is not to use boot2docker but instead to have a Vagrant VM provisioned with Docker. There is no such big penalty for mounting folders host->vagrant->docker.
On the downside, I have to pre-map folders to Vagrant (basically my whole work directory) and pre-expose a range of ports from the Vagrant box to the host to have access to the Docker services directly from there.
On the plus side, when I want to clean up unused Docker garbage (images, volumes, etc.) I simply destroy the Vagrant VM and re-create it again :)
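Concretely, that clean-up boils down to something like:
# Tear down the VM (taking the Docker images and volumes inside it along) and rebuild it.
vagrant destroy -f && vagrant up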
Elaboration
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "trusty-docker"
config.vm.box_url = "https://oss-binaries.phusionpassenger.com/vagrant/boxes/latest/ubuntu-14.04-amd64-vbox.box"
config.vm.provision "docker"
#by default we'll claim ports 9080-9090 on the host system
for i in 9080..9090
config.vm.network :forwarded_port, guest: i, host: i
end
#NB: this folder mapping will not have the boot2docker issue of slow sync
config.vm.synced_folder "~/work", "/home/vagrant/work"
end
Having that:
host$ vagrant up && vagrant ssh
vagrant$ docker run -it --rm -v $(pwd)/work:/work ubuntu:12.04 find /work
This is unfortunately a typical problem Windows and OS X users are currently struggling with, and one that cannot be solved trivially, especially in the case of Windows users. The main culprit is VirtualBox's vboxfs, which is used for file sharing and which, despite being incredibly useful, results in poor filesystem I/O.
There are numerous situations in which working on the project sources inside the guest VM is brought to a crawl, the main two being scores of 3rd-party sources introduced by package managers and Git repositories with a sizable history.
The obvious approach is to move as many of the project-related files as possible out of vboxfs and somewhere else into the guest. For instance, symlinking the package manager directory into the project's vboxfs tree, with something like:
mkdir /var/cache/node_modules && ln -s /var/cache/node_modules /myproject/node_modules
This alone improved the startup time from ~28 seconds down to ~4 seconds for a Node.js application with a few dozen dependencies running on my SSD.
Unfortunately, this is not applicable to managing Git repositories, short of splatting/truncating your history and committing to data loss, unless the Git repository itself is provisioned within the guest, which forces you to have two repositories: one to clone the environment for inflating the guest and another containing the actual sources, where consolidating the two worlds becomes an absolute pain.
The best way to approach the situation is to either:
drop vboxfs in favor of a shared transport mechanism that results in better I/O in the guest, such as the Network File System. Unfortunately, for Windows users, the only way to get NFS service support is to run the enterprise edition of Windows (which I believe will still be true for Windows 10).
revert to mounting raw disk partitions into the guest, noting the related risks of giving your hypervisor raw disk access
If your developer audience is wholly comprised of Linux and OS X users, option 1 might be viable. Create a Vagrant machine and configure NFS shares between your host and guest and profit. If you do have Windows users, then, short of buying them an enterprise license, it would be best to simply ask them to repartition their disks and work inside a guest VM.
I personally use a Windows host and have a 64 GB partition on my SSD that I mount directly into my Arch Linux guest and operate from there. I also switched to GPT and UEFI and have an option to boot directly into Arch Linux in case I want to circumvent the overhead of the virtualized hardware, giving me the best of both worlds with little compromise.
Two steps can improve your performance quite well:
Switch to NFS. You can use this script to do this.
Switch your VBox NIC driver to FAST III. You can do this on your default machine by running:
VBoxManage modifyvm default --nictype1 Am79C973
VBoxManage modifyvm default --nictype2 Am79C973
I run a simple watch script that kicks off an rsync inside my container(s) from the shared source-code volume to a container-only volume whenever anything changes. My entry-point only reads from the container-only volume so that avoids the performance issues, and rsync works really well, especially if you set it up correctly to avoid things like .git folders and libraries that don't change frequently.
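A minimal polling sketch of that idea (the real script reacts to changes; the paths and excludes here are assumptions):
# Periodically mirror the shared source volume into a container-only path.
while true; do
  rsync -a --delete --exclude '.git' --exclude 'node_modules' /shared-src/ /container-src/
  sleep 2
done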
A couple of pieces of info which I found:
I started to use Vagrant to work with VirtualBox and install Docker in it. It gives more flexibility.
Default sharing in Vagrant VirtualBox provisioning is very slow.
NFS sharing is much faster. However, it can still be reasonably slow (especially if your build process creates files which need to be written back to this share).
Vagrant 1.5+ has an rsync option (to use rsync to copy files from host to VM). It's faster because it doesn't have to write back any changes (see the commands right after this list).
This rsync option has auto-sync (to continuously sync changes).
This rsync option consumes a lot of CPU, and people came up with a gem to overcome that (https://github.com/smerrill/vagrant-gatling-rsync).
So, Vagrant + VirtualBox + Rsync shared folder + auto rsync + vagrant gatling looks like a good option for my case (still researching it).
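In practice, with the synced folder set to the rsync type in the Vagrantfile, that comes down to running on the host:
# One-off sync of the folder from host to VM.
vagrant rsync
# Watch for changes and keep syncing them automatically (Vagrant 1.5+).
vagrant rsync-auto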
I tried vagrant-gatling. However, it results in non-deterministic behavior. I never know whether new files were copied into the VM or not. It wouldn't be a problem if it took 1 second. However, it may take 20 seconds, which is too much (a user can switch windows and start a build when the new files haven't been synced yet).
Now, I am thinking about some way to copy over ONLY the files which changed. I am still in the research phase. The idea would be to use FSEvents to listen for file changes and send over only the changed files. It looks like there are some tools around which do that.
BTW, Gatling internally uses FSEvents. The only problem is that it triggers a full rsync (which goes and starts comparing dates/times and sizes for 40k files).
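A minimal per-file sketch of that FSEvents idea, using fswatch (the paths, SSH port and VM address are assumptions, and deletions are not handled):
# Push each changed file to the VM as soon as fswatch reports it.
cd ~/work || exit 1
fswatch -r "$PWD" | while read -r changed; do
  rel=${changed#"$PWD"/}
  rsync -aR -e "ssh -p 2222" "$rel" vagrant@127.0.0.1:/home/vagrant/work/
done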