How does Windows 10 task manager detect a virtual machine?

The Windows 10 task manager (taskmgr.exe) knows if it is running on a physical or virtual machine.
If you look in the Performance tab you'll notice that the number of processors label either reads Logical processors: or Virtual processors:.
In addition, if running inside a virtual machine, there is also the label Virtual machine: Yes.
See the following two screen shots:
My question: is there a documented API call that taskmgr uses to make this kind of detection?
I had a very short look at the disassembly and it seems that the detection code is somehow related to GetLogicalProcessorInformationEx and/or IsProcessorFeaturePresent and/or NtQuerySystemInformation.
However, I don't see how (at least not without spending several more hours analyzing the assembly code).
And: this question is IMO not a duplicate of existing questions like How can I detect if my program is running inside a virtual machine?, since I did not see any code comparing SMBIOS table strings or CPU vendor strings against known strings typical for hypervisors ("qemu", "virtualbox", "vmware"). I'm not ruling out that a lower-level API implementation does that, but I don't see that kind of code in taskmgr.exe.
Update: I can also rule out that taskmgr.exe uses the CPUID instruction (with EAX=1, checking the hypervisor bit 31 in ECX) to detect a hypervisor.
Update: A closer look at the disassembly showed that there is indeed a check for bit 31, just not done that obviously.
I'll answer this question myself below.

I've analyzed the x64 taskmgr.exe from Windows 10 1803 (OS Build 17134.165) by tracing back the writes to the memory location that is consulted at the point where the Virtual machine: Yes label is set.
The variable's value is determined by the return code of the function WdcMemoryMonitor::CheckVirtualStatus.
Here is the disassembly of the first use of the cpuid instruction in this function:
lea eax, [rdi+1] // results in eax set to 1
cpuid
mov dword ptr [rbp+var_2C], ebx // save CPUID feature bits for later use
test ecx, ecx
jns short loc_7FF61E3892DA // negative value check equals check for bit 31
...
return 1
loc_7FF61E3892DA:
// different feature detection code if hypervisor bit is not set
So taskmgr is not using any hardware strings, MAC addresses or other more sophisticated techniques; it simply checks whether the hypervisor bit (CPUID leaf 0x01, ECX bit 31) is set.
The result can of course be fooled: adding -hypervisor to QEMU's -cpu parameter, for example, disables the hypervisor CPUID flag, which results in Task Manager no longer showing Virtual machine: Yes.
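For example, an illustrative QEMU invocation (the disk image, memory size and accelerator options are placeholders for whatever you normally use):
qemu-system-x86_64 -enable-kvm -cpu host,-hypervisor -m 2048 -hda disk.img
With the flag removed, the guest's Task Manager reports a physical machine even though it is virtualized.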
And finally here is some example code (tested on Windows and Linux) that perfectly mimics Windows task manager's test:
#include <stdio.h>

#ifdef _WIN32
#include <intrin.h>
#else
#include <cpuid.h>
#endif

int isHypervisor(void)
{
#ifdef _WIN32
    int cpuinfo[4];
    __cpuid(cpuinfo, 1);
    if (cpuinfo[2] >> 31 & 1)    /* ECX bit 31: hypervisor present */
        return 1;
#else
    unsigned int eax, ebx, ecx, edx;
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    if (ecx >> 31 & 1)           /* ECX bit 31: hypervisor present */
        return 1;
#endif
    return 0;
}

int main(int argc, char **argv)
{
    if (isHypervisor())
        printf("Virtual machine: yes\n");
    else
        printf("Virtual machine: no\n"); /* actually "maybe" */
    return 0;
}

Related

User space CR3 value when PTI is enabled

While executing in kernel mode, is there any way to get the user-space CR3 value when Page Table Isolation (PTI) is enabled?
In current Linux, see arch/x86/entry/calling.h for the asm .macro SWITCH_TO_USER_CR3_NOSTACK and related macros to see how Linux flips between the kernel and user CR3, and the earlier comment on the constants it uses:
/*
* PAGE_TABLE_ISOLATION PGDs are 8k. Flip bit 12 to switch between the two
* halves:
*/
#define PTI_USER_PGTABLE_BIT PAGE_SHIFT
#define PTI_USER_PGTABLE_MASK (1 << PTI_USER_PGTABLE_BIT)
#define PTI_USER_PCID_BIT X86_CR3_PTI_PCID_USER_BIT
#define PTI_USER_PCID_MASK (1 << PTI_USER_PCID_BIT)
#define PTI_USER_PGTABLE_AND_PCID_MASK (PTI_USER_PCID_MASK | PTI_USER_PGTABLE_MASK)
It looks like the kernel CR3 is always the lower one, so setting bit 12 in the current CR3 always makes it point to the user-space page directory. (If the current task has a user-space, and if PTI is enabled. These asm macros are only used in code-paths that are about to return to user-space.)
.macro SWITCH_TO_USER_CR3_NOSTACK scratch_reg:req scratch_reg2:req
...
mov %cr3, \scratch_reg
...
.Lwrcr3_\#:
/* Flip the PGD to the user version */
orq $(PTI_USER_PGTABLE_MASK), \scratch_reg
mov \scratch_reg, %cr3
These macros are used in entry_64.S, entry_64_compat.S, and entry_32.S in paths that return to user-space.
There's presumably a cleaner way to access user-space page tables from C.
Your best bet might be to look at the page-fault handler to find out how it accesses the process's page table. (Or mmap's implementation of MAP_POPULATE).
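To illustrate the arithmetic, here is a minimal sketch (not actual kernel code; it just mirrors the PTI_USER_PGTABLE_MASK define above and ignores the PCID bits):
#include <stdint.h>

#define PTI_USER_PGTABLE_BIT   12                        /* PAGE_SHIFT */
#define PTI_USER_PGTABLE_MASK  (1ULL << PTI_USER_PGTABLE_BIT)

/* The kernel PGD sits in the lower 4k half of the 8k PGD allocation,
   so setting bit 12 of a kernel CR3 value yields the user-space CR3. */
static inline uint64_t kernel_to_user_cr3(uint64_t kernel_cr3)
{
    return kernel_cr3 | PTI_USER_PGTABLE_MASK;
}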

Cortex M0 HardFault_Handler and getting the fault address

I'm having a HardFault when executing my program. I've found dozens of ways to get PC's value, but I'm using Keil uVision 5 and none of them has worked.
As far as I know I'm not in a multitasking context, and PSP contains 0xFFFFFFF1, so adding 24 to it would cause overflow.
Here's what I've managed to get working (as in, it compiles and executes):
enum { r0, r1, r2, r3, r12, lr, pc, psr };
extern "C" void HardFault_Handler()
{
    uint32_t *stack;
    __ASM volatile("MRS stack, MSP");
    stack += 0x20;
    pc = stack[pc];
    psr = stack[psr];
    __ASM volatile("BKPT #01");
}
Note the "+= 0x20", which is here to compensate for C function stack.
Whenever I read the PC's value, it's 0.
Would anyone have working code for that?
Otherwise, here's how I do it manually:
Put a breakpoint on HardFault_Handler (the original one)
When it breaks, look at MSP.
Add 24 to its value.
Dump memory at that address.
And there it is, 0x00000000.
What am I doing wrong?
A few problems with your code
uint32_t *stack;
__ASM volatile("MRS stack, MSP");
MRS supports register destinations only. Your assembler might be clever enough to transfer it through a temporary register first, but I'd like to see the machine code generated from that.
If you are using some kind of multitasking system, it might use PSP instead of MSP. See the linked code below on how one can distinguish that.
pc = stack[pc];
psr = stack[psr];
This uses the previous values of pc and psr as indices. It should be
pc = stack[6];
psr = stack[7];
Whenever I read the PC's value, it's 0.
Your program might actually have jumped to address 0 (e.g. through a null function pointer), tried to execute the value found there, which was probably not a valid instruction but the initial SP value from the vector table, and faulted on that. This code
void (*f)(void) = 0;
f();
does exactly that; I'm seeing 0x00000000 at offset 24.
Would anyone have working code for that?
This works for me. Note the code choosing between psp and msp, and the __attribute__((naked)) directive. You could try to find some equivalent for your compiler, to prevent the compiler from allocating a stack frame at all.
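Along those lines, here is a minimal sketch of that approach for a GCC-style toolchain (arm-none-eabi-gcc or armclang; function names are illustrative, and Keil's older armcc needs its own embedded-assembler equivalent). The naked wrapper checks bit 2 of EXC_RETURN in LR to decide whether the fault frame was stacked on MSP or PSP, then hands that pointer to a C function:
#include <stdint.h>

void HardFault_HandlerC(uint32_t *frame)
{
    volatile uint32_t stacked_pc  = frame[6];  /* PC at the time of the fault */
    volatile uint32_t stacked_psr = frame[7];  /* xPSR at the time of the fault */
    (void)stacked_pc;
    (void)stacked_psr;
    __asm volatile("BKPT #1");                 /* stop here and inspect in the debugger */
}

__attribute__((naked)) void HardFault_Handler(void)
{
    __asm volatile(
        "mrs  r0, msp                 \n"
        "movs r1, #4                  \n"
        "mov  r2, lr                  \n"
        "tst  r1, r2                  \n"  /* EXC_RETURN bit 2: 0 = MSP, 1 = PSP */
        "beq  1f                      \n"
        "mrs  r0, psp                 \n"
        "1:                           \n"
        "ldr  r1, =HardFault_HandlerC \n"
        "bx   r1                      \n"
        ".ltorg                       \n"
    );
}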

How does gcc know the register size to use in inline assembly?

I have the inline assembly code:
#define read_msr(index, buf) asm volatile ("rdmsr" : "=d"(buf[1]), "=a"(buf[0]) : "c"(index))
The code using this macro:
u32 buf[2];
read_msr(0x173, buf);
I found the disassembly is (using gnu toolchain):
mov eax,0x173
mov ecx,eax
rdmsr
mov DWORD PTR [rbp-0xc],edx
mov DWORD PTR [rbp-0x10],eax
My question: since 0x173 is less than 0xffff, why does gcc not use mov cx, 0x173? Does gcc analyze the following rdmsr instruction? Does gcc always know the correct register size?
It depends on the size of the value or variable passed.
If you pass a "short int" it will set "cx" and read the data from "ax" and "dx" (if buf is a short int, too).
For char it would access "cl" and so on.
So "c" refers to the "ecx" register, but this is accessed with "ecx", "cx", or "cl" depending on the size of the access, which I think makes sense.
To test this, you can try passing (unsigned short)0x173; it should change the code.
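A minimal sketch of that test (assuming x86-64 gcc; rdmsr is privileged, so compile with -S and compare the generated loads instead of running it):
#include <stdint.h>

void read_msr_u32(uint32_t index, uint32_t buf[2])
{
    /* 32-bit operand: gcc loads the full ecx, as in the disassembly above */
    asm volatile ("rdmsr" : "=d"(buf[1]), "=a"(buf[0]) : "c"(index));
}

void read_msr_u16(uint16_t index, uint32_t buf[2])
{
    /* 16-bit operand: gcc emits a 16-bit move into cx, which would be wrong
       here, since RDMSR reads all 32 bits of ECX */
    asm volatile ("rdmsr" : "=d"(buf[1]), "=a"(buf[0]) : "c"(index));
}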
There is no analysis of the inline assembly (in fact, after text substitution it is copied directly to the output assembly, syntax errors included). There is also no default register size that depends on whether you have a 32- or 64-bit target; that would be far too limiting.
I think the answer is that the current default operand size is 32-bit. In 64-bit long mode the default operand size is also 32-bit, unless you use a REX.W prefix.
Intel specifies the RDMSR instruction as using (all of) ECX to determine the model specific register. That being the case, and apparently as specified by your macro, GCC has every reason to load your constant into the full ECX.
So the question about why it doesn't load CX seems completely inappropriate. It looks like GCC is generating the right code.
(You didn't ask why it stages the load of ECX inefficiently by using EAX; I don't know the answer to that).

_asm int 5H: can Visual Studio be used to execute this instruction?

_asm int 5h usually works as Print Screen. How can I check this? And not only Print Screen: can any interrupt, such as reboot (int 19h), be triggered from an application?
I tried this code for a reboot:
int _tmain(int argc, _TCHAR* argv[])
{
    //_asm mov al, 2
    _asm int 19h    // reboot
    //_asm in 3
}
It gives an access violation.
None of the BIOS or MS-DOS interrupts (int 0x10 through 0x33 and a few rarely used ones with larger numbers) will work in a Windows application; they only work in DOS programs. Windows provides that functionality to Windows applications through different mechanisms, and these BIOS/DOS interrupts are simply not supported: they cause an exception, which typically results in the OS terminating your program.
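As an example of those different mechanisms, here is a minimal sketch of a reboot through the Win32 API instead of int 19h (error handling omitted; the process needs the SE_SHUTDOWN_NAME privilege, and you link against user32 and advapi32):
#include <windows.h>

int main(void)
{
    HANDLE token;
    TOKEN_PRIVILEGES tp;

    /* Enable the shutdown privilege for this process */
    OpenProcessToken(GetCurrentProcess(),
                     TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token);
    LookupPrivilegeValue(NULL, SE_SHUTDOWN_NAME, &tp.Privileges[0].Luid);
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);

    /* EWX_REBOOT really restarts the machine if the call succeeds */
    ExitWindowsEx(EWX_REBOOT, SHTDN_REASON_MAJOR_OTHER);
    return 0;
}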
Generally speaking, these interrupts are protected (assuming you're not running in real-mode DOS). Perhaps int 5h might work, because it was the interrupt for the Print Screen key.

SDL memory leaks and Visual Leak Detector

Alright, so I think my program might have a memory leak. It's an SDL application, and it seems to have grown too large for me to manually pinpoint the leak. I searched around for a Windows equivalent of Valgrind (I'm running Windows 7 x64 and using Visual Studio 2010), and eventually came across Visual Leak Detector. Unfortunately, it doesn't seem to want to generate any output.
I set up another project, an empty console application, and set up VLD the same way as in my SDL app. Upon running the program, VLD worked perfectly and caught every memory leak that I threw at it. But in the SDL app, it just outputs "Visual Leak Detector Version 2.2 installed." at the beginning of the debug session and nothing else, even when I intentionally created a memory leak right in the main function.
As far as I can tell, it might have to do with SDL messing with the program entry point. But that's just a guess. Is there any way to get VLD to work with SDL?
You could try Deleaker. It is a powerful tool for debugging memory leaks.
I had a similar problem using the SDL library as well. In my case, though, I was trying to use the default memory-leak detection of Visual Studio 2010 because I didn't want to use a third-party library/application.
Fixing the issue
If, after all the required includes, defines and function calls, you still don't see any memory leaks printed out, it might be that your Runtime Library is not set properly.
Double-check that you have the debug version of the Runtime Library instead of the non-debug one (/MT or /MD):
Multi-threaded Debug (/MTd)
Multi-threaded Debug DLL (/MDd)
The compiler defines _DEBUG when you specify the /MTd or /MDd option. These options specify debug versions of the C run-time library. See the _DEBUG reference on MSDN.
Thus, the _DEBUG symbol must be defined in order to enable CRT code.
[...] When _DEBUG is not defined, calls to _CrtSetDbgFlag are removed during preprocessing [...]. See MSDN reference
So building a debug build is not enough to ensure _DEBUG will be defined.
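One way to catch this early is a small compile-time guard near the CRT includes (a sketch; the message text is up to you):
#ifndef _DEBUG
#error Runtime library is not a debug version (/MTd or /MDd), so CRT leak detection is compiled out
#endif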
This is something that you usually don't change in a normal project, but following a tutorial for SDL could lead you to where I was.
Hopefully this will help someone else, or even you.
More Details below
I was following the MSDN page to enable Memory leak detection out of the box with VS 2010.
After declaring those
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
I enabled them in my code and inserted a deliberate memory leak:
int main( int argc, char* args[] )
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
    _CrtSetReportMode( _CRT_ERROR, _CRTDBG_MODE_DEBUG );

    int *pArray = (int*)malloc(sizeof(int) * 24); // Memory not freed
    return 0;
}
Nothing was printed out.
So I looked at the assembly, and it was definitely not generating the CRT code at all, as you can see:
int main( int argc, char* args[] )
{
012932F0 push ebp
012932F1 mov ebp,esp
012932F3 sub esp,0CCh
012932F9 push ebx
012932FA push esi
012932FB push edi
012932FC lea edi,[ebp-0CCh]
01293302 mov ecx,33h
01293307 mov eax,0CCCCCCCCh
0129330C rep stos dword ptr es:[edi]
_CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
_CrtSetReportMode( _CRT_ERROR, _CRTDBG_MODE_DEBUG ); // Nothing in both case!
int *pArray = (int*)malloc(sizeof(int) * 24);
0129330E mov esi,esp
01293310 push 60h
01293312 call dword ptr [__imp__malloc (129E4CCh)]
01293318 add esp,4
0129331B cmp esi,esp
0129331D call #ILT+580(__RTC_CheckEsp) (1291249h)
01293322 mov dword ptr [pArray],eax
Then, I realized that the _DEBUG symbol was probably not getting defined.
