How to reset Docker preferences on Mac - macos

I'm using Docker Desktop on an M1 Mac and I was playing around with allocating different amounts of resources, and I have somehow ended up breaking the application. Docker Desktop is constantly stopping/starting, and I cannot open Preferences: I only get the spinner and cannot proceed.
I have reinstalled the app multiple times, tried all the deletion scripts I could find online, and deleted all Docker-related files from the Library and ~/.docker, but upon reinstalling the app my previous settings are still there (starting Docker upon login is enabled, which is disabled by default). I have also restarted my computer multiple times and tried resetting to factory settings and uninstalling from the Troubleshoot menu, but to no avail.
What could I do? Thanks!

It is possible that something is set in your "defaults" that is upsetting Docker.
If I generate a list of all "defaults" domains on my Mac, and search for docker, I get the following:
defaults domains | tr ',' '\n' | grep -i docker
com.docker.docker
com.electron.dockerdesktop
So, if all else fails, you might consider using the defaults command to delete those two domains and hope that Docker starts with sensible defaults, or then reinstall. I haven't tried this, as my Docker instance is working nicely.
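If you go that route, here is a minimal sketch, assuming the same two domains show up on your machine as in the listing above:
osascript -e 'quit app "Docker"'        # quit Docker Desktop before touching its preferences
defaults delete com.docker.docker
defaults delete com.electron.dockerdesktop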

Related

Manual uninstall of Docker Desktop on Windows?

I tried uninstalling Docker Desktop in the traditional way (Control Panel -> Programs -> Programs and Features) but ran into some complications. I now have a bunch of Docker files scattered across my machine (e.g. in AppData/Roaming), and the docker command still works, but Docker no longer appears in Programs and Features, so I can't retry a traditional uninstall.
The Docker Desktop icon is still there, but when I click it I get a "Docker failed to initialize" error. When I run the installer, it hangs. Is there a way to manually uninstall whatever Docker files I have left, the way the traditional uninstall would? Or just reset it somehow so I can use the application again?

How to manually remove the Kubernetes cluster from Docker

My Kubernetes cluster in Docker Desktop on Mac is unresponsive.
So I tried to reset Kubernetes as was suggested in
delete kubernetes cluster on docker-for-desktop OSX
The results are:
All Kubernetes resources are deleted
Kubernetes restart hangs
GUI to disable Kubernetes is grayed out and unresponsive
I would like to avoid resetting Docker so I can keep my image repository.
How do I manually remove Kubernetes from the Docker VM?
You can try disabling Docker's Kubernetes in the settings file. You can find the settings file at ~/Library/Group\ Containers/group.com.docker/settings.json. Set the kubernetesEnabled property to false:
"kubernetesEnabled" : false,
I ended up in a situation where k8s was partly deleted and Docker was not able to start. Restarting and/or changing this setting helped and did not delete images. I was not able to reproduce the situation later.
Also make sure you are running the latest version of Docker.
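If you prefer making that edit from a terminal, here is a minimal sketch; it assumes jq is installed (e.g. via Homebrew) and that Docker Desktop has been quit first:
# rewrite settings.json with kubernetesEnabled set to false (assumes jq is installed)
SETTINGS=~/Library/Group\ Containers/group.com.docker/settings.json
jq '.kubernetesEnabled = false' "$SETTINGS" > /tmp/settings.json && mv /tmp/settings.json "$SETTINGS"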
How about this?
docker rm -f $(docker ps -aq)
This force-removes all containers.
I can't give you a technical answer that immediately fixes your problem, and this text is too long for a comment... but as someone who had the same issue (couldn't disable k8s without a factory reset in Docker for Mac), my recommendation is:
Is it really worth it to you to keep the image repository? Consider: what's a container? A program. It's not a VM. Would you back up your ls, ssh, vim... binaries when you want to reinitialize your OS? No, right? It's the same here: you should view a container like another binary.
Odds are that if you mess with manual actions, you will end up with a Docker daemon in an undesired state. So, IMO, just go ahead and purge Docker for Mac and start over; it's not really a big deal.
If you have tons of your own images, you can rebuild them right away. If you have tons of downloaded images, consider this a good opportunity to do some cleaning. Also, note that images work in layers, so if your images are correctly built to take advantage of layers, the rebuild process will be quite fast.
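If there are a handful of images you genuinely want to keep across a purge, one option is docker save/load; the image names below are placeholders:
# export the images you care about to a tar archive (image names are examples)
docker save -o my-images.tar myapp:latest postgres:13
# ...after the reset, load them back:
docker load -i my-images.tar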
To remove the Kubernetes cluster configuration from Docker Desktop, you can run: rm -rf ~/.kube (note that this deletes your kubectl configuration for all clusters, so back it up first if you need it).

Docker Toolbox with Visual Studio - Volume sharing is not enabled

I'm trying to get Docker support working with Visual Studio 2017 for a .NET Core 2.0 web app running in Linux containers. I'm working on a machine with Windows 7, so I must use Docker Toolbox with VirtualBox. I've already checked this question: How to get docker toolbox to work with .net core 2.0 project, but I got stuck on the following problem when trying to run it with VS:
Volume sharing is not enabled. Enable volume sharing in the docker ce
for windows settings
As far as I know, there is a default volume mounted under C:\Users, so my project files should be copied somewhere under this folder if I don't want to mount any other volume. So I copied them there.
When I check the settings of my VirtualBox VM, the folder seems to be shared.
I can even cd into this folder from the command line, but I still can't get past this problem. Any ideas?
Finally I got this running. The error message coming from VS is very misleading; it has nothing to do with volume sharing. Eventually I realized the problem was in running the debugger, because when I ran the solution with Ctrl + F5 everything was OK and the container started correctly. The problem occurred only when running with F5 and trying to attach the debugger.
Then I found some clues in the console output. VS tries to download some tooling for debugging containers with a PowerShell script named GetVsDbg.ps1. When running this script I could observe errors like:
Add-Type : Cannot add type. The assembly
'System.IO.Compression.FileSystem' could not be found.
Finally I fixed the issue by updating my PowerShell version, which was somehow in conflict with the .NET Framework installed on my machine.
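If you suspect the same mismatch, a quick way to check which PowerShell and CLR versions you are on:
$PSVersionTable.PSVersion    # PowerShell version
$PSVersionTable.CLRVersion   # .NET CLR version it runs on (Windows PowerShell only)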
Well, in my case it turned out that I had changed my Windows password and Docker wasn't able to get access.
So it was just:
Uncheck shared drives
Apply
Check again; enter the new password
Restart Docker
Checking the drive you want to share in the Docker settings (Shared Drives) and clicking Apply helped me get rid of this error. It might ask for your network credentials; just enter them if the prompt pops up.
I fixed it by running the following command in PowerShell:
docker network create nat
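You can verify the network exists afterwards with:
docker network ls   # the list should now include a network named "nat"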
I got the same issue attempting to publish an Azure Function App to a Container Registry.
The newer version of Docker Desktop for Windows, 2.3, has a new interface. I had to go to Resources | File Sharing and add a new folder. This resolved the issue.

Docker won't start on Windows: Not Enough memory to start docker

I am trying to get started with Docker on Windows. My machine has 4GB of RAM and a 1.9GHz - 2.5GHz Intel i5 processor, running Windows 10 Pro x64. I know these aren't powerful specs, but I would have thought I should be able to run Docker?
However, having downloaded Docker, I get the error message:
Not Enough memory to start docker
I have seen various forum posts and GitHub issues about this and followed all the advice I could find, such as modifying the settings in Docker.
They also mentioned changing the settings of the Hyper-V VM; however, that VM seems to be deleted and recreated with the Docker-specified settings on every attempted launch. I tried 2048MB, 1792MB, 1536MB, 1280MB and 1024MB of RAM, all of which failed.
What else can I do? Surely I can run Docker in some form on my machine? NB: I have closed all non-essential background apps. There don't seem to be many other suggestions for what seems to be a fairly common issue where the given solutions don't work.
I also encountered the same problem. I tried everything from giving dynamic memory to enabling and disabling Hyper-V, and much more, but with all that I had no success.
Then I tried these steps for
"Docker won't start on Windows: Not enough memory to start Docker":
From the system tray menu, right-click the Docker icon
Select Switch to Windows containers...
Restart the system.
There you go: after restarting, your Docker status should show as Docker is running
PS: Switching back to Linux containers should work now after switching to Windows containers, for most users, as rfay said.
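If the tray menu itself is unresponsive, the same switch can be triggered from an elevated PowerShell prompt; the path below assumes the default install location:
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon   # toggles between Windows and Linux containers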
Update (May 01, 2019)
If, despite the above methods, you're still unable to start Docker on your Windows machine, try the following:
Download RAMMap from Microsoft's official website
Open the application and select the Empty menu
From the sub-menu list, select the first option, Empty Working Sets
Now refresh it by pressing F5
Now try running Docker; I believe this should work.
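To see how much physical memory is actually free before retrying, here is a quick PowerShell check (both values are reported in KB):
Get-CimInstance Win32_OperatingSystem | Select-Object FreePhysicalMemory, TotalVisibleMemorySize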
I solved this issue by right-clicking the Docker tray icon, choosing Settings, and opening the "Advanced" section.
Then I lowered the memory from the default 2048 to 1536, and it works like a charm.
Another option is to try switching to Windows containers, then restart the machine and switch back to Linux containers.
For reference, my Docker settings had the Advanced tab open with Memory set to 1536, and my laptop has 4GB of RAM.
The virtual machine "MobyLinuxVM" was also running.
I hope this helps someone one day, even if it was a late answer :)
If you are on Windows and get this error:
1. Go to the Search box, type Hyper-V Manager, and click on it.
2. In the window that opens, select MobyLinuxVM (normally the same name if running Docker on Windows).
3. Right-click it and open Settings.
4. In the settings window for MobyLinuxVM, go to the Memory tab in the left pane.
5. Check the Dynamic Memory checkbox, set the minimum value to some lower amount (say 512) and the maximum value to the desired one.
6. Click Apply.
Now it will start running, and after a few minutes it will take the amount of memory it requires.
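The same change can be scripted with the Hyper-V PowerShell module, run as Administrator while the VM is stopped; the VM name and values are the ones from the steps above:
Set-VMMemory MobyLinuxVM -DynamicMemoryEnabled $true -MinimumBytes 512MB -MaximumBytes 2048MB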
It may not be a problem of RAM; it may be the disk space allocated to Docker.
That would mean there is not enough space for Docker to create an image or perform any other Docker-related operations.
Open Docker settings >> Advanced >> Disk image max size.
Increase this size and apply the changes.
Docker will restart automatically, and then you're good to go.
In Settings, I did a reset to factory defaults and restarted the laptop.
It worked for me.
Posting what worked for me:
Open the Resources settings in Docker
Set memory to the lowest setting, in my case 1024MB
Open Task Manager and verify that at least the amount of memory specified above is free
Restart Docker and switch to Linux containers
In my case this worked because I was using almost all of my RAM with VS Code and Firefox, so I closed them, tried again, and it worked.
Have you looked at the NUMA spanning setting in your Hyper-V settings? If not, take a look; I bet that will solve your issue.
By default, Windows Server enables NUMA spanning, which provides the most flexibility, as virtual machines (VMs) can access and use memory in any NUMA node. But it may result in lower performance compared to forcing VMs to use memory on the same NUMA node as the processor cores.
By disabling NUMA spanning, you ensure that VMs use memory and processor cores in the same NUMA node, giving the best performance.
This should only be changed if, as an administrator, you feel comfortable with NUMA and the implications of disabling it, and if you have some additional management suite that can help ensure the best configuration.
To configure NUMA spanning, open the Hyper-V Settings, select the NUMA Spanning option, and disable it. I struggled with this issue for a week and resolved it by disabling NUMA.
I am sure this would be resolved by disabling NUMA in Hyper-V Manager.
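For reference, NUMA spanning can also be toggled from an elevated PowerShell prompt; the Hyper-V management service may need a restart for the change to take effect:
Set-VMHost -NumaSpanningEnabled $false   # disable NUMA spanning on this host
Restart-Service vmms                     # Hyper-V Virtual Machine Management service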
I lowered my memory and swap to the lowest they would go, as well as the disk image size to 32GB, and it finally started, without switching to Windows containers or having to reboot.
To fix this issue, you need to do the following (but first see Note #4 below):
1. Find the DockerDesktopVM virtual drive. To find its path, open Hyper-V Manager, open the settings of DockerDesktopVM, and look for the path. Usually it is in the vm-data folder under the Docker Desktop ProgramData folder.
2. Switch to Windows containers via the tray icon.
3. Usually this file is locked. To unlock it, what worked for me was turning off all services whose names start with Hyper-V, plus the Docker services: Docker and Docker Desktop. The Docker Desktop UI should also be closed via a right-click on its tray icon.
4. Back up the DockerDesktopVM file!!!
5. Once the DockerDesktopVM file is backed up, the whole vm-data folder containing it may be deleted (be aware and careful: this file contains all your containers and images).
6. Start all the services again and run the Docker Desktop UI.
7. Switch back to Linux containers.
8. At this moment you will see the settings in the Docker Desktop UI, and a new DockerDesktopVM file will have been created in the vm-data folder.
9. Stop all the services again and replace the new DockerDesktopVM file with the old file you backed up in step 4.
10. Start all the services and the Docker Desktop UI.
Note #1: Most of the difficulty was with the locked DockerDesktopVM file. A reboot is not required during manipulations with the locked file.
Update: This file may be accidentally attached as a disk to the host system. In that case, open diskmgmt.msc on the host server; the disk will be listed there. Right-click it and choose Detach; it prompts for confirmation that you have the correct file. At that point, Process Explorer confirms that the file is no longer open by PID 4 (NT Kernel & System) and you can work freely with the .vhdx file.
Update 2: Alternatively, run the command net stop vmms, manipulate the file, and then start vmms back with the command net start vmms (origin: https://community.spiceworks.com/topic/603713-solved-vhdx-can-t-be-deleted).
Update 3: In any case, the vhdx file may be locked because the VM is still running or hung. To determine this, open the vhdx file's permissions and look in the list of users for one with a strange name similar to a GUID; this is NT VIRTUAL MACHINE\{GUID}, the virtual user under which your VM's process runs in Windows. You can then find the vmwp.exe process running under this user in Task Manager -> Details. Alternatively, you can find this process in the latest version of Process Explorer via the Find Handle or DLL section with the search keyword 'vhdx'. Kill this process, and the vhdx file will be unlocked.
Note #2: Since you backed up your DockerDesktopVM.vhdx file, you can probably also reset Docker to defaults (for instance after step 7), or just reinstall Docker Desktop.
Note #3: Sometimes DockerDesktopVM.vhdx gets unlocked when it is deleted from the Hyper-V Manager UI.
Note #4: If your Docker was somehow able to start with the wrong settings before, but now is not able to start, you can probably skip all the manipulations above: just close all applications that consume a lot of memory, like Chrome, and try starting Docker again.
But the core idea is to run Docker with a fresh DockerDesktopVM file and replace it with the old one once the settings UI is unlocked.
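As a rough sketch of the stop/replace/start cycle from an elevated command prompt (service names may differ between Docker Desktop versions):
rem stop the Docker Desktop service and Hyper-V Virtual Machine Management
net stop com.docker.service
net stop vmms
rem ...back up or swap DockerDesktopVM.vhdx here...
net start vmms
net start com.docker.service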
I also had the same problem. Maybe you have another virtual machine in Hyper-V; other virtual machines need memory too. Please stop all other Hyper-V virtual machines and test again. That worked for me.
When I experienced this problem, I modified the PowerShell script MobyLinux.ps1, found in the resources folder of the Docker install at C:\Program Files\Docker\Docker\resources. Essentially I forced the value of $CPUs to 2 and $Memory to 512, which worked for my dev box's limited resources.
Now, when Docker drops the MobyLinuxVM instance in Hyper-V and re-creates it from the PowerShell script, it uses my values.
This time the VM remains up and stable, and Docker successfully switches from Windows containers to Linux containers.
Hope this helps someone.
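For reference, the edit amounts to hard-coding the two variables near the top of MobyLinux.ps1; the variable names are from the answer above, and the exact script layout may differ between Docker versions:
$CPUs = 2       # force the VM to 2 virtual CPUs
$Memory = 512   # force the VM to 512 MB of RAM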
The issue was resolved after just restarting the PC -_-. I don't know what that was about.
To begin with, I normally start by opening Visual Studio Code, then my terminal, and finally Docker Desktop (WSL2). The problem is that Visual Studio Code is a chunky memory hogger and initially requires a lot of memory to run, especially if you use the integrated terminal, multiple tabs, and ultimately multiple windows.
When I open Docker Desktop last, it gives me a not-enough-resources error. After a little testing, I found that Docker needs to load first, because it needs to obtain a certain amount of memory for your containers and images to run. So starting Docker Desktop manually (not on Windows startup), and then your other programs and tools, should work. I say "should" because everyone's environment and problems are different from mine, and I am not expecting them to be the same.
So here are the steps:
On opening your computer (mine is Windows 10 Home using WSL2), do not have Docker open on startup. Instead, run the program manually by double-clicking the icon or searching in the Start menu and clicking Docker Desktop.
Next, open Visual Studio Code and other programs after that.
Before running any commands (I run them through Node.js with commands defined in package.json), check Docker Desktop, as sometimes your containers and images are already running and you shouldn't need any commands to bring them up again.
If all this fails, try going into your settings and allocating specific memory. Check your Task Manager processes and see what is taking up all of your resources. I hope this helps. Again, everyone's environment is not the same, so do not expect results identical to mine. This SHOULD work, which doesn't mean it will. Read the documentation as well; it helps with identifying problems faster.
Just follow these steps:
Go to Troubleshoot in the Docker dashboard.
Click on Clean/Purge data.
Select all options and press Delete.
It takes a few minutes.
(That worked for me.)
I had the same problem. In my case, I had another VM running on Hyper-V that was consuming all the resources. Even after a system restart the VM was always active. I opened Hyper-V Manager and deactivated the problematic VM. Then I could start Docker properly.
My Windows 10 laptop has 8 GB of RAM. I also use virtual memory.
When I start my OS and immediately run some RAM-hungry applications, I can't start Docker until I stop most of the applications.
Also, https://stackoverflow.com/a/45816385/7082956 helped me as well.
This may happen because your RAM is not free at the time you start Docker.
I had 20 browser tabs open, which left no free RAM, so I closed all the tabs, refreshed the computer several times, and tried restarting once again, and it worked for me.
I faced the same issue: Docker out of memory on Windows.
I solved it with the following three steps:
1. Quit Docker Desktop by right-clicking its tray icon.
2. Run Docker Desktop as Administrator.
3. Restart your Windows system.
Now Docker should work properly. This solution worked for me. :)
Problem:
Installed Docker Desktop.
Got an out-of-memory error upon starting with a Linux instance.
Details:
OS: Windows 10 Professional
Host: Lenovo ThinkPad Carbon X1, 4GB RAM
Docker Desktop: Version 2.1.0.1 (37199)
Docker advanced settings:
CPUs: 2
Memory: 2048MB (this is the maximum)
Swap: 2048MB
Disk Image Size: 59.6GB (4MB used)
Hyper-V settings for DockerDesktopVM:
Settings > Memory > RAM: 2048MB (tried to increase to 4096; still doesn't work)
Settings > Memory > Enable Dynamic Memory (checked/unchecked; neither works)
Under variations of the above settings, Docker Desktop gives this error when starting/restarting:
Not enough memory to start Docker Desktop
You are trying to start Docker Desktop but you don't have enough memory.
Free some memory or change your settings.
The problem resolutions reported in the following links, e.g. starting with a Windows instance and then switching back to Linux, don't work for me, regardless of how much memory I allocate via Hyper-V or Docker settings.
It is utterly frustrating, because apparently people report being able to start Linux instances on host machines with 4GB of RAM. So I wonder what I am doing wrong.
Resources researched/ tried:
https://forums.docker.com/t/not-enough-memory-to-start-docker/13512/24
Docker won't start on Windows: Not Enough memory to start docker
Questions:
Can I even run Docker Desktop with linux instance on my host machine?
If (1) is yes, then what settings will allow me to do this?

Using WebStorm (JetBrains) with SSHFS mounted development server (Mavericks, OSXFUSE)? Constantly dismounts drive

UPDATE: I saw that someone was trying to use PyCharm with SSHFS and JetBrains said: "no". Perhaps this just won't work?
I'm trying to work with WebStorm on an SSHFS-mounted disk at a client's office I'm working at; I've never used SSHFS before. I am using OS X 10.9.2, installed SSHFS through Homebrew, and installed OSXFUSE.
The SSHFS mount dismounts periodically in any case, but since I started trying to use WebStorm with it, it dismounts every time I start WebStorm and WebStorm starts scanning the files on the SSHFS disk. WebStorm gives the message "external file changes sync may be slow: Project files cannot be watched (are they under network mount?)" and if I try to open files it freezes. The SSHFS disk meanwhile has been dismounted. If I remount via terminal, WebStorm isn't happy and either freezes or just sits there.
I set up the WebStorm project using "New project from existing files". Is there a way to set it up using SSHFS as a server? Beyond the login and password for the SSHFS disk I don't have any other server-specific info, but perhaps I could get it.
Thanks for any help!
This is how I operate, and maybe it can help you. If there's a config setting I seem to have glossed over, just ask and I'll fix this up. But all in all, this is wonderfully successful:
My build environment is tucked away on a Linux distro, but my development environment is co-located on a Mac Desktop (when I'm at work) and a Mac Air (when I'm at home). My projects are enormous, and contractually I can't move the code to any machine where it might be accessible if my laptop is stolen. So I pretty much have to use ssh (and sshfs) to get anything done.
When I am at home, and I sit down to work, I manually initiate the VPN -- since there are so many variations, I'll assume you know how to do this part.
I open a terminal and invoke:
caffeinate &
because I hate getting disconnected whenever the computer goes into screen saver. This may be why you get disconnected? I leave this terminal open whenever I'm developing. I also use tmux so that my terminal session can be shared between computers. Anyway...
I have a mount point set up between the server and the client, and a script that I run when the mount point goes down (customize for your own setup):
# force-unmount any stale mounts, then remount the remote home over SSHFS
umount -f /Volumes/$MOUNTDIR/
umount -f /Users/$HOMEUSER/$MOUNTDIR
mkdir -p /Users/$HOMEUSER/$MOUNTDIR
sshfs $HOMEUSER@$SERVERADDR:/usr/$HOMEUSER/$MOUNTDIR /Users/$HOMEUSER/$MOUNTDIR
I then launch WebStorm, PyCharm, ADS, or IntelliJ (I'm a JetBrains fan).
At this point you can open the directory within $MOUNTDIR and start working. If you find that you need to run builds, here's a tip: do not build locally. Instead, use SSH to issue the build commands (or run scripts) on the server. The overhead of syncing after the build has run is most likely far less than fetching and writing all of the steps of the build.
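For example, rather than building on the mounted copy, kick the build off remotely; the path and build command below are placeholders, reusing the variables from the remount script above:
ssh $HOMEUSER@$SERVERADDR "cd /usr/$HOMEUSER/$MOUNTDIR && make"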
I only find I get disconnected if I lose the VPN. I used to get disconnected whenever the computer would sleep. Caffeinate fixed that.
For reasonably sized projects, this is probably all you need. What follows is an optimization; only do it if you are having headaches:
To speed up load times, what I do is create a local project that is not part of the mount. There is a .idea directory that gets created, and written to a lot, at the base of the first directory you open as a project. Inside this directory are lots of files that get written to frequently, and depending on your network speed, this might cause grief. It does mean some settings have to be maintained everywhere you go, but in my case it's a small price to pay for big performance gains.
So because I do this, I'll have to manually add directories to my project (Under Preferences/Directories). But if you work with huge APIs, you might be doing this anyway. I am careful to mark directories I don't need to reference as 'excluded', to make life easier on the indexer. I work in a shared directory structure with thousands of other employees, and I make sure the streams don't cross.
Now I have many many thousands of files, and it is true that sync can be slow. But sync is only triggered when you leave the app and come back in. And honestly, it's not that terrible, so long as you have a reasonable internet connection.
I hope this helps. Once I started using this as my workflow, I never went back.
