VMware Workstation and Device/Credential Guard are not compatible - cmd

I have been running VMware for the last year with no problems. Today I opened it up to start one of my VMs and got an error message (see screenshot).
I followed the link in the error and went through the steps; at step 4 I need to mount a volume using "mountvol".
When I try to mount a volume using mountvol X: \\?\Volume{5593b5bd-0000-0000-0000-c0f373000000}\ it keeps saying "The directory is not empty". I even created a 2 GB partition and still get the same message.
My questions:
How can I mount the volume when mountvol insists the directory is not empty, even though it is?
Why did Device/Credential Guard enable itself automatically, and how can I get rid of it or disable it?

There is a much better way to handle this issue. Rather than removing Hyper-V altogether, you can create an alternate boot entry that temporarily disables it when you need to use VMware, as shown here...
http://www.hanselman.com/blog/SwitchEasilyBetweenVirtualBoxAndHyperVWithABCDEditBootEntryInWindows81.aspx
C:\>bcdedit /copy {current} /d "No Hyper-V"
The entry was successfully copied to {ff-23-113-824e-5c5144ea}.
C:\>bcdedit /set {ff-23-113-824e-5c5144ea} hypervisorlaunchtype off
The operation completed successfully.
Note: the ID generated by the first command is what you use in the second one. Don't just run it verbatim.
When you restart, you'll then just see a menu with two options...
Windows 10
No Hyper-V
So using VMWare is then just a matter of rebooting and choosing the No Hyper-V option.
If you want to remove a boot entry again, you can use the /delete option of bcdedit.
First, get a list of the current boot entries...
C:\>bcdedit /v
This lists all of the entries with their IDs. Copy the relevant ID, and then remove it like so...
C:\>bcdedit /delete {ff-23-113-824e-5c5144ea}
As mentioned in the comments, you need to do this from an elevated command prompt, not PowerShell. In PowerShell the command will fail, because the curly braces are interpreted by the shell.
Update:
It is possible to run these commands in PowerShell if the curly braces are escaped with backticks (`). Like so...
C:\WINDOWS\system32> bcdedit /copy `{current`} /d "No Hyper-V"
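If you script this switch often, you can also capture the generated ID automatically instead of copying it by hand. A minimal PowerShell sketch (assumes the English-language "The entry was successfully copied to {...}." output shown above):
$out = bcdedit /copy '{current}' /d "No Hyper-V"
$id = [regex]::Match($out, '\{[0-9a-f-]+\}').Value   # pull the new entry's GUID out of the message
bcdedit /set $id hypervisorlaunchtype off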

Device/Credential Guard is a Hyper-V-based Virtual Machine/Virtual Secure Mode that hosts a secure kernel to make Windows 10 much more secure.
...the VSM instance is segregated from the normal operating
system functions and is protected from attempts to read information in
that mode. The protections are hardware assisted, since the hypervisor
is requesting the hardware treat those memory pages differently. This
is the same way that two virtual machines on the same host cannot
interact with each other; their memory is independent and hardware
regulated to ensure each VM can only access its own data.
From here, we now have a protected mode where we can run security
sensitive operations. At the time of writing, we support three
capabilities that can reside here: the Local Security Authority (LSA),
and Code Integrity control functions in the form of Kernel Mode Code
Integrity (KMCI) and the hypervisor code integrity control itself,
which is called Hypervisor Code Integrity (HVCI).
When these capabilities are handled by Trustlets in VSM, the Host OS
simply communicates with them through standard channels and
capabilities inside of the OS. While this Trustlet-specific
communication is allowed, having malicious code or users in the Host
OS attempt to read or manipulate the data in VSM will be significantly
harder than on a system without this configured, providing the
security benefit.
Running LSA in VSM causes the LSA process itself (LSASS) to remain in
the Host OS, and a special, additional instance of LSA (called LSAIso
– which stands for LSA Isolated) is created. This is to allow all of
the standard calls to LSA to still succeed, offering excellent legacy
and backwards compatibility, even for services or capabilities that
require direct communication with LSA. In this respect, you can think
of the remaining LSA instance in the Host OS as a ‘proxy’ or ‘stub’
instance that simply communicates with the isolated version in
prescribed ways.
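If you are curious whether this isolated LSA instance is active on your own machine, you can look for the LsaIso process from PowerShell (a quick sketch; LsaIso.exe is the process name the isolated instance runs under):
Get-Process -Name LsaIso -ErrorAction SilentlyContinue   # returns a process only when Credential Guard is running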
Hyper-V and VMware could not run at the same time until 2020, when VMware started using the Windows Hypervisor Platform to co-exist with Hyper-V, beginning with version 15.5.5.
How does VMware Workstation work before version 15.5.5?
VMware Workstation traditionally has used a Virtual Machine Monitor
(VMM) which operates in privileged mode requiring direct access to the
CPU as well as access to the CPU’s built in virtualization support
(Intel’s VT-x and AMD’s AMD-V). When a Windows host enables
Virtualization Based Security (“VBS“) features, Windows adds a
hypervisor layer based on Hyper-V between the hardware and Windows.
Any attempt to run VMware’s traditional VMM fails because being inside
Hyper-V the VMM no longer has access to the hardware’s virtualization
support.
Introducing User Level Monitor
To fix this Hyper-V/Host VBS compatibility issue, VMware’s platform
team re-architected VMware’s Hypervisor to use Microsoft’s WHP APIs.
This means changing our VMM to run at user level instead of in
privileged mode, as well as modifying it to use the WHP APIs to manage
the execution of a guest instead of using the underlying hardware
directly.
What does this mean to you?
VMware Workstation/Player can now run when Hyper-V is enabled. You no
longer have to choose between running VMware Workstation and Windows
features like WSL, Device Guard and Credential Guard. When Hyper-V is
enabled, ULM mode will automatically be used so you can run VMware
Workstation normally. If you don’t use Hyper-V at all, VMware
Workstation is smart enough to detect this and the VMM will be used.
System Requirements
To run Workstation/Player using the Windows Hypervisor APIs, the
minimum required Windows 10 version is Windows 10 20H1 build
19041.264. VMware Workstation/Player minimum version is 15.5.5.
To avoid the error, update your Windows 10 to version 2004/build 19041 (May 2020 Update) and use at least VMware 15.5.5.
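If you want to check whether your host already meets that minimum before updating VMware, a small PowerShell sketch (CurrentBuild and UBR are standard registry values; UBR holds the revision, e.g. the .264 part):
$cv = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
$ok = ([int]$cv.CurrentBuild -gt 19041) -or ([int]$cv.CurrentBuild -eq 19041 -and $cv.UBR -ge 264)
"Host build $($cv.CurrentBuild).$($cv.UBR) meets the 19041.264 minimum: $ok"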

I'm still not convinced that Hyper-V is the thing for me, even after last year's Docker trials and tribulations. I guess you won't want to switch very frequently either, so rather than creating a new boot entry and confirming the boot default (or waiting out the timeout) on every boot, I switch on demand from an elevated console with:
bcdedit /set hypervisorlaunchtype off
Another reason for this post -- to save you some headache: you might think you switch Hyper-V back on with an "on" argument? Nope. Too simple for MiRKoS..t. It's auto!
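So to switch the hypervisor back on, it's:
bcdedit /set hypervisorlaunchtype auto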
Have fun!
G.

To make it super easy:
Just download this script directly from Microsoft.
Run your PowerShell as an admin and then execute the following commands:
To verify whether DG/CG is enabled: DG_Readiness.ps1 -Ready
To disable DG/CG: DG_Readiness.ps1 -Disable
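If PowerShell refuses to run the downloaded script, you may additionally need to unblock it and relax the execution policy for the current session only (a sketch, using the script name from above):
Unblock-File .\DG_Readiness.ps1
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process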

For those who might be encountering this issue after recent changes to your computer involving Hyper-V: you'll need to disable it while using VMware or VirtualBox, as they don't work together. Windows Sandbox and WSL 2 need the Hyper-V hypervisor on, which currently breaks VMware. Basically, you'll need to run the following commands to enable/disable Hyper-V services on the next reboot.
To disable Hyper-V and get VMware working, in PowerShell as admin:
bcdedit /set hypervisorlaunchtype off
To re-enable Hyper-V and break VMware for now, in PowerShell as admin:
bcdedit /set hypervisorlaunchtype auto
You'll need to reboot after that. I've written a PowerShell script that will toggle this for you and confirm it with dialog boxes. It even self-elevates to Administrator, so you can just right-click and run the script to quickly change your Hyper-V mode. It could easily be modified to reboot for you as well, but I personally didn't want that to happen. Save this as hypervisor.ps1 and make sure you've run Set-ExecutionPolicy RemoteSigned so that you can run PowerShell scripts.
# Get the ID and security principal of the current user account
$myWindowsID = [System.Security.Principal.WindowsIdentity]::GetCurrent();
$myWindowsPrincipal = New-Object System.Security.Principal.WindowsPrincipal($myWindowsID);
# Get the security principal for the administrator role
$adminRole = [System.Security.Principal.WindowsBuiltInRole]::Administrator;
# Check to see if we are currently running as an administrator
if ($myWindowsPrincipal.IsInRole($adminRole))
{
    # We are running as an administrator, so change the title and background colour to indicate this
    $Host.UI.RawUI.WindowTitle = $myInvocation.MyCommand.Definition + " (Elevated)";
    $Host.UI.RawUI.BackgroundColor = "DarkBlue";
    Clear-Host;
}
else {
    # We are not running as an administrator, so relaunch as administrator
    # Create a new process object that starts PowerShell
    $newProcess = New-Object System.Diagnostics.ProcessStartInfo "PowerShell";
    # Specify the current script path and name as a parameter, with support for scripts with spaces in the path
    $newProcess.Arguments = "-windowstyle hidden & '" + $script:MyInvocation.MyCommand.Path + "'";
    # Indicate that the process should be elevated
    $newProcess.Verb = "runas";
    # Start the new process
    [System.Diagnostics.Process]::Start($newProcess);
    # Exit from the current, unelevated, process
    Exit;
}
Add-Type -AssemblyName System.Windows.Forms
# Read the current hypervisorlaunchtype from the boot configuration
# (assumes the value appears in the bcdedit output, i.e. it has been set at least once)
$state = bcdedit /enum | Select-String -Pattern 'hypervisorlaunchtype\s*(\w+)\s*'
if ($state.matches.groups[1].ToString() -eq "Off") {
    $UserResponse = [System.Windows.Forms.MessageBox]::Show("Enable Hyper-V?", "Hypervisor", 4)
    if ($UserResponse -eq "YES") {
        bcdedit /set hypervisorlaunchtype auto
        [System.Windows.Forms.MessageBox]::Show("Enabled Hyper-V. Reboot to apply.", "Hypervisor")
    }
    else {
        [System.Windows.Forms.MessageBox]::Show("No change was made.", "Hypervisor")
        exit
    }
}
else {
    $UserResponse = [System.Windows.Forms.MessageBox]::Show("Disable Hyper-V?", "Hypervisor", 4)
    if ($UserResponse -eq "YES") {
        bcdedit /set hypervisorlaunchtype off
        [System.Windows.Forms.MessageBox]::Show("Disabled Hyper-V. Reboot to apply.", "Hypervisor")
    }
    else {
        [System.Windows.Forms.MessageBox]::Show("No change was made.", "Hypervisor")
        exit
    }
}

The simplest solution for this issue is to download the "Device Guard and Credential Guard hardware readiness tool" to correct the incompatibility:
https://www.microsoft.com/en-us/download/details.aspx?id=53337
Decompress the zip; inside you will find the script DG_Readiness_Tool_v3.6.ps1.
Execute DG_Readiness_Tool_v3.6.ps1 with PowerShell.
Now you should be able to power on your virtual machine normally.

I don't know why, but version 3.6 of DG_Readiness_Tool didn't work for me. After I restarted my laptop the problem still persisted. I was looking for a solution and finally came across version 3.7 of the tool, and this time the problem went away.
Here you can find latest powershell script:
DG_Readiness_Tool_v3.7

I also struggled a lot with this issue. The answers in this thread were helpful but were not enough to resolve my error. You will need to disable Hyper-V and Device Guard as the other answers have suggested; more info on that can be found here.
I am including the changes needed in addition to the answers provided above. The link that finally helped me was this.
My answer summarizes only the difference between the rest of the answers (i.e. disabling Hyper-V and Device Guard) and the following steps:
If you used Group Policy, disable the Group Policy setting that you
used to enable Windows Defender Credential Guard (Computer
Configuration -> Administrative Templates -> System -> Device Guard
-> Turn on Virtualization Based Security).
Delete the following registry settings:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA\LsaCfgFlags
HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\DeviceGuard\EnableVirtualizationBasedSecurity
HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\DeviceGuard\RequirePlatformSecurityFeatures
Important:
If you manually remove these registry settings, make sure to delete
them all. If you don't remove them all, the device might go into
BitLocker recovery.
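From the same elevated prompt, those three deletions can also be scripted with reg delete (a sketch mirroring the paths above; /f suppresses the confirmation prompt):
reg delete "HKLM\System\CurrentControlSet\Control\LSA" /v LsaCfgFlags /f
reg delete "HKLM\Software\Policies\Microsoft\Windows\DeviceGuard" /v EnableVirtualizationBasedSecurity /f
reg delete "HKLM\Software\Policies\Microsoft\Windows\DeviceGuard" /v RequirePlatformSecurityFeatures /f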
Delete the Windows Defender Credential Guard EFI variables by using
bcdedit. From an elevated command prompt (opened as administrator), type
the following commands:
mountvol X: /s
copy %WINDIR%\System32\SecConfig.efi X:\EFI\Microsoft\Boot\SecConfig.efi /Y
bcdedit /create {0cb3b571-2f2e-4343-a879-d86a476d7215} /d "DebugTool" /application osloader
bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} path "\EFI\Microsoft\Boot\SecConfig.efi"
bcdedit /set {bootmgr} bootsequence {0cb3b571-2f2e-4343-a879-d86a476d7215}
bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} loadoptions DISABLE-LSA-ISO
bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} device partition=X:
mountvol X: /d
Restart the PC.
Accept the prompt to disable Windows Defender Credential Guard.
Alternatively, you can disable the virtualization-based security
features to turn off Windows Defender Credential Guard.

Install the latest VMware Workstation (version 15.5.5 or newer), which has support for Hyper-V-enabled hosts.
With the release of VMware Workstation/Player 15.5.5 or newer, we are
very excited and proud to announce support for Windows hosts with
Hyper-V mode enabled! As you may know, this is a joint project from
both Microsoft and VMware.
https://blogs.vmware.com/workstation/2020/05/vmware-workstation-now-supports-hyper-v-mode.html
I installed VMware Workstation Pro 16.1.0 and it fixed my issue; now I am using Docker and VMware at the same time, even with Windows Hyper-V mode enabled.

Windows 1909 (18363.1377)
In my case I was using Windows 1909; Device Guard was disabled and so was Hyper-V. While trying Docker I installed and enabled WSL 2. After uninstalling WSL from Control Panel and disabling it from PowerShell, my VMware started working again.
The following is the command to disable WSL. Run it in PowerShell as admin:
dism.exe /online /disable-feature /featurename:Microsoft-Windows-Subsystem-Linux
Then uninstall WSL from Control Panel and reboot your system.
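Should you want WSL back later, the same feature can be re-enabled with the matching dism command (a sketch; same feature name as above):
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux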

If you are someone who keeps an open, customized "Run as administrator" command-line window at all times, you can optionally set up the following aliases/macros to simplify the commands mentioned by @gue22: disabling the Hyper-V hypervisor when you need VMware Player or Workstation, then enabling it again when done.
doskey hpvEnb = choice /c:yn /cs /d n /t 30 /m "Are you running from elevated command prompt" ^& if not errorlevel 2 ( bcdedit /set hypervisorlaunchtype auto ^& echo.^&echo now reboot to enable hyper-v hypervisor )
doskey hpvDis = choice /c:yn /cs /d n /t 30 /m "Are you running from elevated command prompt" ^& if not errorlevel 2 ( bcdedit /set hypervisorlaunchtype off ^& echo.^&echo now reboot to disable hyper-v hypervisor )
doskey bcdL = bcdedit /enum ^& echo.^&echo now see boot configuration data store {current} boot loader settings
With the above in place, you just type "hpvenb" [hypervisor enabled at boot], "hpvdis" [hypervisor disabled at boot] and "bcdl" [boot configuration data list] to execute the on, off, and list commands.
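Since doskey macros only live for the current console session, you can persist them by dumping them to a file once and reloading that file in new sessions (a sketch; the file location is arbitrary):
doskey /macros > "%USERPROFILE%\macros.txt"
doskey /macrofile="%USERPROFILE%\macros.txt"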

Well, boys and girls, after reading through the release notes for build 17093 in the wee small hours of the night, I have found the change that affects my VMware Workstation VMs, causing them not to work: it is the Core Isolation setting under Device Security, under Windows Security (the new name for the Windows Defender page) in Settings.
By default it is turned on. When I turned it off and restarted my PC, all my VMware VMs resumed working correctly. Perhaps a per-device option could be incorporated in the next build, allowing us to test individual devices' or apps' responses with Core Isolation on or off per device or app as required.

Here are proper instructions so that everyone can follow.
First, download the Device Guard and Credential Guard hardware readiness tool from this link: https://www.microsoft.com/en-us/download/details.aspx?id=53337
Extract the zip's contents to some location like C:\guard_tool.
Note the name of the .ps1 file; in my case it's v3.6, so the file is DG_Readiness_Tool_v3.6.ps1.
Next, click on the Start menu, search for PowerShell, then right-click it and run as administrator.
In the blue terminal, enter the command cd C:\guard_tool (replace the path after cd with the location you extracted the tool to).
Now enter the command: .\DG_Readiness_Tool_v3.6.ps1 -Disable
After that, reboot the system.
While your system is restarting, at boot time it will show a notification with a black background asking you to verify that you want to disable these features; press F3 to confirm.

QUICK SOLUTION, EVERY STEP:
This fixes the following error in VMware Workstation on a Windows 10 host:
Transport (VMDB) error -14: Pipe connection has been broken.
In the Run box type gpedit.msc, then go to (if there is no Device Guard entry, see point 3):
1- Computer Configuration
2- Administrative Templates
3- System - Device Guard (if there is no Device Guard entry: download https://www.microsoft.com/en-us/download/100591, install it, and copy "c:\Program Files (x86)\Microsoft Group Policy\Windows 10 November 2019 Update (1909)\PolicyDefinitions" to c:\windows\PolicyDefinitions)
4- Turn On Virtualization Based Security
Now double-click that and set it to "Disabled".
Open Command Prompt as administrator and type the following:
gpupdate /force [don't do this if you don't have Device Guard, else it will come back on]
Open Registry Editor and go to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\DeviceGuard. Add a new DWORD value named EnableVirtualizationBasedSecurity and set it to 0 to disable it.
Next go to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA. Add a new DWORD value named LsaCfgFlags and set it to 0 to disable it. (Both registry edits can also be scripted; see the reg add sketch after these steps.)
In the Run box, type "Turn Windows features on or off", then uncheck Hyper-V and restart the system.
Open Command Prompt as an administrator and type the following commands:
bcdedit /create {0cb3b571-2f2e-4343-a879-d86a476d7215} /d "DebugTool" /application osloader
bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} path "\EFI\Microsoft\Boot\SecConfig.efi"
bcdedit /set {bootmgr} bootsequence {0cb3b571-2f2e-4343-a879-d86a476d7215}
bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} loadoptions DISABLE-LSA-ISO,DISABLE-VBS
bcdedit /set hypervisorlaunchtype off
Now, Restart your system
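For reference, the two registry edits in the steps above can be scripted from the same elevated prompt (a sketch; /f overwrites without prompting):
reg add "HKLM\System\CurrentControlSet\Control\DeviceGuard" /v EnableVirtualizationBasedSecurity /t REG_DWORD /d 0 /f
reg add "HKLM\System\CurrentControlSet\Control\LSA" /v LsaCfgFlags /t REG_DWORD /d 0 /f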

I had the same problem. I had VMware Workstation 15.5.4 and Windows 10 version 1909, and installed Docker Desktop.
Here is how I solved it:
Installed the new VMware Workstation 16.1.0
Updated my Windows 10 from 1909 to 20H2
As the VMware guide says in this link:
If your host has Windows 10 20H1 build 19041.264 or newer,
upgrade/update to Workstation 15.5.6 or above.
If your host has Windows 10 1909 or earlier, disable Hyper-V on the host to resolve this issue.
Now VMware and Hyper-V can run at the same time, and I have both Docker and VMware on my Windows machine.

Related

Detecting the Windows HyperVisor via CLI

How do I detect if the hypervisor is active on Windows via a CLI command?
I run a Vagrant-based project that uses VirtualBox, and sometimes we encounter issues with the Windows hypervisor.
The problem is that the only way we can reliably tell whether it's turned on is by checking whether Hyper-V is ticked in the Windows Features dialog. But there are times when Hyper-V is unticked, yet a hypervisor is present because it's needed for other Windows features.
For example, when using the Windows Subsystem for Linux, when the Virtual Machine Platform is turned on, or with certain security options.
With VirtualBox 5.2 this was easy: it would fail to create VMs. But VirtualBox 6 uses this hypervisor if it's present.
So, either via powershell, or the command line, how can I determine if the Windows hypervisor is present and active?
Note that I am not testing whether the Hyper-V product is active; it is possible for there to be a hypervisor with Hyper-V turned off.
It can be queried via WMI property Win32_ComputerSystem.HypervisorPresent. In PowerShell that would be
(Get-WmiObject Win32_ComputerSystem).HypervisorPresent
Or use gcim instead of Get-WmiObject.
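For example, the gcim form (gcim is the built-in alias for Get-CimInstance, which also works on PowerShell 6+ where Get-WmiObject is no longer available):
(Get-CimInstance Win32_ComputerSystem).HypervisorPresent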
I tried restarting Windows with bcdedit /set {current} hypervisorlaunchtype Off and Auto, and it returned False and True respectively.
Interestingly, the HypervisorPresent property is hidden from the default Get-WmiObject Win32_ComputerSystem output. To see hidden fields, call one of these:
gcim Win32_ComputerSystem | Format-List * -Force
gcim Win32_ComputerSystem | Format-Table * -Force
Sources:
https://devblogs.microsoft.com/scripting/use-powershell-to-detect-if-hypervisor-is-present/
https://devblogs.microsoft.com/scripting/powertip-display-hidden-properties-from-object/
Right-click Start > Run > msinfo32
At the bottom of the initial view pane (System Summary) you will see the line "A hypervisor has been detected. Features required for Hyper-V will not be displayed."

Docker : Hyper-V was unable to find a virtual switch with name "DockerNAT"

I updated my Docker Desktop app (version 2.0.0.3) on Windows 10 Pro, but since then my Docker is not starting and throws the following error:
Hyper-V\Get-VMNetworkAdapter : Hyper-V was unable to find a virtual switch with name "DockerNAT".
At C:\Program Files\Docker\Docker\resources\MobyLinux.ps1:121 char:25
+ ... etAdapter = Hyper-V\Get-VMNetworkAdapter -ManagementOS -SwitchName $S ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (DockerNAT:String) [Get-VMNetworkAdapter], VirtualizationException
+ FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVMNetworkAdapter
I followed the steps mentioned in the link (Docker on Windows 10 can't start up after deleting MobyLinuxVM in Hyper-V manually), but it did not fix the issue.
I have also tried disabling, restarting, and then re-enabling the Hyper-V and Containers options using "Turn Windows features on or off" under "Control Panel\Programs\Programs and Features".
But I am still not able to start my Windows Docker app, which keeps throwing:
Hyper-V was unable to find a virtual switch with name "DockerNAT".
at New-Switch, <No file>: line 121
at <ScriptBlock>, <No file>: line 411
I also faced this issue once. I tried several workarounds but nothing worked. The issue was that the MobyLinuxVM could not create the DockerNAT switch; as a result the Docker service could not be started.
The working solution was to reset my network settings. I cannot remember whether I had to remove all network-related entries in Computer Management in order to re-initialize them from scratch.
Important: you will lose all user-defined network-related settings. Try it if everything else fails.
Edit: Another thing you can try is to restart the Hyper-V management service by executing the following commands in an admin shell:
net stop vmms
net start vmms
Found in related github issue
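Equivalently, from an admin PowerShell (vmms is the Hyper-V Virtual Machine Management service):
Restart-Service -Name vmms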
Open the Hyper-V Manager and check in the "Virtual Switch Manager" whether you can see DockerNAT there or not; Docker for Windows creates this switch when it starts, before creating the MobyLinux VM.
If your PowerShell script is not creating this switch, then try to create it directly there.
I was facing the same issue after updating the Docker version, and it got resolved by the following steps. Please note that I have the following OS running on my machine:
Edition Windows 10 Enterprise
Version 1903
Os Build 18362.295
1:- Open "Window Security"
2:- Open "App & Browser control"
3:- Click "Exploit protection settings" at the bottom
4:- Switch to "Program settings" tab
5:- Locate "C:\WINDOWS\System32\vmcompute.exe" in the list and expand it
6:- Click "Edit"
7:- Scroll down to "Code flow guard (CFG)" and uncheck "Override system settings"
8:- Start vmcompute from powershell "net start vmcompute"
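If you prefer to script steps 5-7 instead of clicking through Windows Security, the ProcessMitigations module can reportedly make the same change (a sketch, untested here; run from an admin PowerShell):
Set-ProcessMitigation -Name vmcompute.exe -Disable CFG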
None of these worked for me. I have tried countless possible solutions reported by others. In the end, this rather old post helped:
https://forums.docker.com/t/latest-failed-docker-update-makes-hyper-v-unable-to-create-virtual-ethernet-switch-0x80041002/44109
So to fix the issue:
uninstall crippled Docker for Windows
remove both Hyper-V and Containers features then reboot
add Hyper-V and Containers features back then reboot
reinstall Docker for Windows then start it
Hope this helps!
Running the MOFCOMP command and a reboot fixed this problem for me.
Running this command: (Command Prompt as administrator)
MOFCOMP %SYSTEMROOT%\System32\WindowsVirtualization.V2.mof
Then restart
(https://community.spiceworks.com/how_to/122307-fix-error-managing-hyper-v-server-2012-r2-from-windows-10)
To solve the issue, follow the steps described in Microsoft's document below:
https://support.microsoft.com/en-us/help/3101106/you-cannot-create-a-hyper-v-virtual-switch-on-64-bit-versions-of-windo
Then restart your PC.
After restarting
Open Hyper-V Manager
Goto Virtual Switch Manager
Create new Internal virtual switch with name DockerNAT
Start your docker
I had the same problem on Windows 10, and after installing "MicrosoftEasyFix20159.mini.diagcab" my problem was solved. I think that instead of manually creating a new internal virtual switch named DockerNAT, installing this Microsoft Easy Fix works.
My Docker Desktop gave me a similar error. It was exactly this:
The virtual switch 'DockerNAT' cannot be deleted because it is being
used by running virtual machines or assigned to child pools.
My solution was:
Open Hyper-V Manager
Shut down the default machine (or whatever your docker-machine is called)
Then try to open Docker Desktop
I hope this is helpful for someone.

How to reset bcdedit /dbgsettings in windows?

In my test system, I have modified busparams to 0.40.0. Now, if I have to reset it to the default value, is there a reset/delete-like command that will delete all dbgsettings parameters that are currently set?
bcdedit -deletevalue {dbgsettings} busparams
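Running bcdedit /dbgsettings with no arguments afterwards displays the current values, so you can confirm busparams is gone:
bcdedit /dbgsettings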
Although bcdedit /debug off does reset debugging, the Windows system still thinks that the Kernel Debugger is enabled.
We use a VPN to connect to our Windows activation servers, and neither the native VPN nor our GlobalProtect VPN will connect with the Kernel Debugger enabled.

devcon disable cannot disable device not found

I'm on Windows 8.1, trying to disable my clickpad programmatically. I've installed the correct x64 version of devcon as noted here. I can find the correct device, but devcon disable with the same parameters fails.
PS C:\...\7600.16385.win7_wdk.100208-1538\tools\devcon\amd64> .\devcon.exe disable 'ACPI\SYN1ECA*'
ACPI\SYN1ECA\4&22077A96&0 : Disable failed
No matching devices found.
Which is rather confusing. It obviously finds the right device, but then reports "No matching devices found". What the heck?
Please note that I am aware of this similar question but, in addition to not having an accepted answer, that question has a different error and is likely using the wrong version of devcon.
"No matching devices found" is the way Windows tells you that it cannot find or access the devices you are looking for. There can be a couple of causes for this:
Incorrect permissions, caused by not running the command prompt/BAT file as an administrator. Simply right-click the relevant access method and select "Run as administrator".
Incorrect access, caused by running the wrong version of devcon.exe. As a remnant of the shift to 64-bit computing there are two versions of devcon located in the Tools folder, one for x86 and one for x64; ensure that you are running the correct version for your computer and you should be able to perform your tasks without issue.
You are using the wrong "spelling" in your command.
This should work:
devcon.exe disable "ACPI\SYN1ECA*"
If you already found the exact device you want to disable you can do it like this:
devcon.exe disable "#<instace ID>"
In your case:
devcon.exe disable "#ACPI\SYN1ECA\4&22077A96&0"
If this also doesn't work you should use the remove command. remove works almost always, but the device will be back after you restart the system.
devcon.exe remove "#<instance ID>"
"No matching devices found." is a confusing way for devcon to tell you that you are running the command without elevation. This is without elevation:
devcon restart "PCI\VEN_10EC&DEV_8168&SUBSYS_85051043&REV_09"
PCI\VEN_10EC&DEV_8168&SUBSYS_85051043&REV_09\4&21A1C3AE&0&00E5: Restart failed
No matching devices found.
This is with elevation:
devcon restart "PCI\VEN_10EC&DEV_8168&SUBSYS_85051043&REV_09"
PCI\VEN_10EC&DEV_8168&SUBSYS_85051043&REV_09\4&21A1C3AE&0&00E5: Restarted
1 device(s) restarted.
To elevate right click on command prompt and select "run as administrator".
Have a look at this Super User question.
Summary:
Download the correct devcon version (x86/x64) and run the devcon commands in cmd.exe with administrative privileges.
To block/unblock:
USB\VID_1C4F&PID_0002&MI_01\6&1578F7C2&0&0001 : USB storage device
%windir%\system32\devcon.exe disable *VID_1C4F*
and
%windir%\system32\devcon.exe enable *VID_1C4F*
Sometimes devcon does not disable:
USB\VID_1C4F&PID_0002&MI_01\6&1578F7C2&0&0001 : Disabled
HID\VID_1C4F&PID_0002&MI_00\7&2B89365C&0&0000 : Disable failed
In this case, the only solution is to replace the "disable" command with "remove":
%windir%\system32\devcon.exe remove *VID_1C4F*
HID\VID_1C4F&PID_0002&MI_00\7&2B89365C&0&0000 : Removed
1 device(s) were removed.
But devcon is not a permanent solution for locking and unlocking devices.
The test is that you can lock a USB device and then run the batch script renewusb_2k.bat, and you will see that the script reinstalls the USB drivers and the locked USB device becomes accessible again.
A programmatic approach in Python. What ended up working for me as well was, of course, running my app as administrator and the remove-device(s)/rescan trick:
import os
import re

# Matches FX3 USB devices in devcon output, e.g. "USB\VID_...\...  : FX3"
DevConFX3Regex = re.compile(r'(?P<device_id>USB[^\s]*)\s+ : FX3')
# Matches any device whose description mentions a COM port, e.g. "... (COM3)"
DevConCOMRegex = re.compile(r'(?P<device_id>[^\s]*)\s+ : .*\(COM[0-9]{1,3}\).*')

def auto_repair_usb_com_ports():
    # Dump every device devcon knows about into a text file
    os.system('devcon findall * > DevCon.txt')
    with open('DevCon.txt', 'r') as devcon_file:
        devcon_text = devcon_file.read()
    # Remove all FX3 devices; the leading # tells devcon the argument
    # is an instance ID rather than a hardware ID pattern
    for match in DevConFX3Regex.finditer(devcon_text):
        device_id = match.group('device_id')
        print(device_id)
        os.system(f'devcon remove "#{device_id}"')
    # Remove all devices exposing a COM port
    for match in DevConCOMRegex.finditer(devcon_text):
        device_id = match.group('device_id')
        os.system(f'devcon remove "#{device_id}"')
    # Rescan so Windows re-detects and reinstalls the removed devices
    os.system('devcon rescan')

Does anyone here uses Linux host/VMWare/VirtualKD debug environment?

Has anyone had a successful experience with a VirtualKD setup on a Linux host running VMware Workstation 8 (with Win7 guests)?
Despite the fact that there are a lot of admiring comments about the 'speed' and 'other benefits' of VirtualKD, most of them come from Windows/VirtualBox users, and I really don't want to waste my time trying to get it working on an unsupported configuration.
P.S. The official VirtualKD forum has a similar thread that has gone unanswered for two years, so I decided to ask for reviews here.
P.P.S. My actual problem is that VMware's socket-based COM port debugging is very slow; it takes 10 to 20x more time to copy debug output from the debuggee to the debugger machine than it takes to print the same output to DbgView.
Has anyone had a successful experience with a VirtualKD setup on a Linux host running VMware Workstation 8 (with Win7 guests)?
VirtualKD is a Windows-only application. The poster on the forum has worked around the problem of it being Windows-only by redirecting a Unix socket to TCP, therefore allowing Windows clients to connect over the network.
I've used socat to successfully bridge two VMs using a tcp socket. I created pipes in /tmp and ran socat between them; one VM can then debug the other.
In my case, because I'd configured the debugger to use serial connections, I was rate-limited by the serial connection. I haven't tried the VirtualKD-style setup; however, my bet is it won't work. From the VirtualKD explanation of its internals on VMware, the client-side code is basically using hypervisor-provided functionality. VirtualBox has to be patched by VirtualKD; I expect this is to provide such functionality to VirtualBox clients.
The bad news is this means, ultimately, that the Linux host hypervisor (VMware/VirtualBox on the host) must know to process that information and pass it out to the appropriate location. By default, it won't know how to do this.
I have a successful experience running it on Windows if anyone is looking for that:
Install VirtualBox 5.x or lower and create a virtual machine with a Windows .iso as a SATA device, and set it up
Download VirtualKD-3.0
Open the VM and run vminstall.exe on the guest
On admin cmd on the guest: bcdedit /dbgsettings SERIAL DEBUGPORT:1 BAUDRATE:115200
Shut down VM, close VirtualBox and kill the VBoxSVC.exe process
Run VirtualIntegration.exe. If it crashes open an admin cmd and cd to C:\Program Files\Oracle\VirtualBox and then type vboxmanage setextradata <VMNAME> VBoxInternal/Devices/VirtualKD/0/Config/Path <VirtualKD-3.0 folder> i.e. vboxmanage setextradata Windows7 VBoxInternal/Devices/VirtualKD/0/Config/Path C:\Users\lewis\Downloads\VirtualKD-3.0
Open vmmon64.exe and set the debugger path i.e C:\Program Files\Debugging Tools for Windows (x64)\windbg.exe, and then select WINDBG.EXE and start debugger automatically
Launch the VM, highlight the VirtualKD entry, press F8 and disable driver signature enforcement, and you will soon break into the debugger at nt!RtlpBreakWithStatusInstruction, a debugger symbol for the first address of DbgBreakPointWithStatus, which is called from InitBootProcessor. That is the breakpoint you'd get from sxe ibp;.reboot. There is an earlier breakpoint of sxe ld:nt.
You will need to unpatch the VM in order to boot it without vmmon open. VirtualKD is good for logging with debugging-protocol packets and automating the windbg connection, but you can't boot-debug with it. For boot debugging to work you will need to create a COM1 serial port on the VM and set it to create a pipe \\.\pipe\pipename. You then need to connect to the pipe via windbg manually. Make sure that you have run bcdedit /bootdebug /on && bcdedit /bootdebug {bootmgr} /on && bcdedit /set {bootmgr} debugtype serial && bcdedit /set {bootmgr} baudrate 115200 && bcdedit /set {bootmgr} debugport 1 on the guest before booting.
