_asm int 5h: can I use Visual Studio to execute this instruction? - winapi

_asm int 5h usually works as Print Screen. How can I verify this? And not only Print Screen: can any interrupt, like reboot (int 19h) and so on, be invoked from an application?
I tried to write code for a reboot:
int _tmain(int argc, _TCHAR* argv[])
{
    //_asm mov al, 2
    _asm int 19h // reboot
    //_asm in 3
}
It gives an access violation.

None of the BIOS or MS-DOS interrupts (int 0x10 through 0x33 and a few rarely used ones with higher numbers) will work in a Windows application; they only work in DOS programs. Windows exposes the equivalent functionality to Windows apps through different mechanisms, and these BIOS/DOS interrupts are simply not supported there. In a Windows app they raise an exception, which typically results in the OS terminating your program.

Generally speaking, these interrupts are protected (assuming you're not running in real-mode DOS). Perhaps int 5h might appear to work because it was the interrupt raised when the Print Screen key was pressed.
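To see what actually happens instead of just letting the OS kill the process, you can wrap the instruction in structured exception handling and inspect the exception code. A minimal sketch (assuming a 32-bit MSVC build, since inline _asm is x86-only; the exact exception code you get is not guaranteed):
#include <stdio.h>
#include <windows.h>

int main(void)
{
    __try
    {
        _asm int 19h // privileged in protected mode; Windows raises an exception instead of rebooting
    }
    __except (EXCEPTION_EXECUTE_HANDLER)
    {
        // Often 0xC0000005 (access violation), matching what the asker observed
        printf("interrupt trapped, exception code 0x%08X\n", (unsigned)GetExceptionCode());
    }
    return 0;
}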

Related

How does Windows 10 task manager detect a virtual machine?

The Windows 10 task manager (taskmgr.exe) knows if it is running on a physical or virtual machine.
If you look in the Performance tab you'll notice that the number of processors label either reads Logical processors: or Virtual processors:.
In addition, if running inside a virtual machine, there is also the label Virtual machine: Yes.
See the following two screenshots (not reproduced here).
My question is whether there is a documented API call that taskmgr uses to make this kind of detection.
I had a very short look at the disassembly and it seems that the detection code is somehow related to GetLogicalProcessorInformationEx and/or IsProcessorFeaturePresent and/or NtQuerySystemInformation.
However, I don't see how (at least not without spending some more hours of analyzing the assembly code).
And: this question is, IMO, not related to existing questions like How can I detect if my program is running inside a virtual machine?, since I did not see any code comparing SMBIOS table strings or CPU vendor strings against known strings typical for hypervisors ("qemu", "virtualbox", "vmware"). I'm not ruling out that a lower-level API implementation does that, but I don't see this kind of code in taskmgr.exe.
Update: I can also rule out that taskmgr.exe is using the CPUID instruction (with EAX=1 and checking the hypervisor bit 31 in ECX) to detect a virtual machine.
Update: A closer look at the disassembly showed that there is indeed a check for bit 31, just not done that obviously.
I'll answer this question myself below.
I've analyzed the x64 taskmgr.exe from Windows 10 1803 (OS Build 17134.165) by tracing back the writes to the memory location that is consulted at the point where the Virtual machine: Yes label is set.
That variable's value is determined by the return code of the function WdcMemoryMonitor::CheckVirtualStatus.
Here is the disassembly of the first use of the cpuid instruction in this function:
lea eax, [rdi+1] // results in eax set to 1
cpuid
mov dword ptr [rbp+var_2C], ebx // save CPUID feature bits for later use
test ecx, ecx
jns short loc_7FF61E3892DA // negative value check equals check for bit 31
...
return 1
loc_7FF61E3892DA:
// different feature detection code if hypervisor bit is not set
So taskmgr is not using any hardware strings, MAC addresses or other sophisticated techniques, but simply checks whether the hypervisor bit (CPUID leaf 0x01, ECX bit 31) is set.
The result is not reliable, of course, since e.g. adding -hypervisor to QEMU's -cpu parameter disables the hypervisor CPUID flag, which results in Task Manager no longer showing Virtual machine: Yes.
And finally here is some example code (tested on Windows and Linux) that perfectly mimics Windows task manager's test:
#include <stdio.h>
#ifdef _WIN32
#include <intrin.h>
#else
#include <cpuid.h>
#endif

/* Returns 1 if CPUID leaf 1 reports the hypervisor-present bit (ECX bit 31). */
int isHypervisor(void)
{
#ifdef _WIN32
    int cpuinfo[4];
    __cpuid(cpuinfo, 1);
    if (cpuinfo[2] >> 31 & 1)
        return 1;
#else
    unsigned int eax, ebx, ecx, edx;
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    if (ecx >> 31 & 1)
        return 1;
#endif
    return 0;
}

int main(int argc, char **argv)
{
    if (isHypervisor())
        printf("Virtual machine: yes\n");
    else
        printf("Virtual machine: no\n"); /* actually "maybe" */
    return 0;
}

Running code at memory location in my OS

I am developing an OS in C (and some assembly, of course), and now I want to allow it to load and run external programs placed on the RAM disk. I have assembled a test program as raw machine code with nasm using '-f bin'. Everything else I found on the subject is about loading code while running Windows or Linux. I load the program into memory using the following code:
#define BIN_ADDR 0xFF000

int run_bin(char *file) //Too many hacks at the moment
{
    u32int size = 0;
    char *bin = open_file(file, &size);
    printf("Loaded [%d] bytes of [%s] into [%X]\n", size, file, bin);
    char *reloc = (char *)BIN_ADDR; //no malloc because of the org statement in the prog
    memset(reloc, 0, size);
    memcpy(reloc, bin, size);
    jmp_to_bin();
}
and the code to jump to it:
[global jmp_to_bin]
jmp_to_bin:
jmp [bin_loc] ;also tried a plain jump
bin_loc dd 0xFF000
This caused a GPF when I ran it. I could give you the registers at the GPF and/or a screenshot if needed.
Code for my OS is at https://github.com/farlepet/retro-os
Any help would be greatly appreciated.
You use identity mapping and a flat memory space, hence address 0xFF000 is going to be in the BIOS ROM range. No wonder you can't copy anything there. Better change that address ;)
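For example, a minimal sketch of that change (0x200000 is only a hypothetical address; pick any physical RAM region your kernel identity-maps and does not already use, and keep the program's org directive and the asm stub's bin_loc in sync with it):
#define BIN_ADDR 0x200000 /* hypothetical: free, identity-mapped RAM instead of the ROM area at 0xFF000 */

int run_bin(char *file)
{
    u32int size = 0;
    char *bin = open_file(file, &size);
    if (!bin || size == 0)
        return -1;                     /* file missing or empty */
    char *reloc = (char *)BIN_ADDR;
    memcpy(reloc, bin, size);          /* writes to the ROM range are silently ignored; RAM actually takes the copy */
    ((void (*)(void))reloc)();         /* or keep jmp_to_bin, with bin_loc updated to the new address */
    return 0;
}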

SDL memory leaks and Visual Leak Detector

Alright, so I think my program might have a memory leak. It's an SDL application, and it seems to have grown too large for me to manually pinpoint the leak. I searched around for a Windows equivalent of Valgrind (I'm running Windows 7 x64 and using Visual Studio 2010), and eventually came across Visual Leak Detector. Unfortunately, it doesn't seem to want to generate any output.
I set up another project, an empty console application, and set up VLD the same way as in my SDL app. Upon running the program, VLD worked perfectly and caught every memory leak that I threw at it. But in the SDL app, it just outputs "Visual Leak Detector Version 2.2 installed." at the beginning of the debug session and nothing else, even when I intentionally created a memory leak right in the main function.
As far as I can tell, it might have to do with SDL messing with the program entry point. But that's just a guess. Is there any way to get VLD to work with SDL?
You could try Deleaker. It is a powerful tool for debugging memory leaks.
I had a similar problem using the SDL library as well. In my case, though, I was trying to use the default memory leak detection of Visual Studio 2010 because I didn't want to use a third-party library/application.
Fixing the issue
If, after all the required includes, defines, and function calls, you still don't see any memory leaks printed out, it might be that your Runtime Library is not set properly.
Double-check that you are using the debug version of the Runtime Library (/MTd or /MDd) instead of the non-debug one (/MT or /MD):
Multi-threaded Debug (/MTd)
Multi-threaded Debug DLL (/MDd)
The compiler defines _DEBUG when you specify the /MTd or /MDd option. These options specify debug versions of the C run-time library. See the _DEBUG reference on MSDN.
Thus, the _DEBUG symbol must be defined in order to enable CRT code.
[...] When _DEBUG is not defined, calls to _CrtSetDbgFlag are removed during preprocessing [...]. See MSDN reference
So building a debug build is not enough to ensure _DEBUG will be defined.
This is something that you usually don't change in a normal project, but following a tutorial for SDL could lead you to where I was.
Hopefully this will help someone else, or even you.
More Details below
I was following the MSDN page to enable Memory leak detection out of the box with VS 2010.
After adding these declarations:
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
I enabled them in my code and inserted a deliberate memory leak:
int main( int argc, char* args[] )
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
    _CrtSetReportMode( _CRT_ERROR, _CRTDBG_MODE_DEBUG );
    int *pArray = (int*)malloc(sizeof(int) * 24); // Memory not freed
    return 0;
}
Nothing was printed out.
So I looked at the disassembly, and it was definitely not generating the CRT code at all, as you can see:
int main( int argc, char* args[] )
{
012932F0 push ebp
012932F1 mov ebp,esp
012932F3 sub esp,0CCh
012932F9 push ebx
012932FA push esi
012932FB push edi
012932FC lea edi,[ebp-0CCh]
01293302 mov ecx,33h
01293307 mov eax,0CCCCCCCCh
0129330C rep stos dword ptr es:[edi]
_CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
_CrtSetReportMode( _CRT_ERROR, _CRTDBG_MODE_DEBUG ); // Nothing in both case!
int *pArray = (int*)malloc(sizeof(int) * 24);
0129330E mov esi,esp
01293310 push 60h
01293312 call dword ptr [__imp__malloc (129E4CCh)]
01293318 add esp,4
0129331B cmp esi,esp
0129331D call #ILT+580(__RTC_CheckEsp) (1291249h)
01293322 mov dword ptr [pArray],eax
Then, I realized that the _DEBUG symbol was probably not getting defined.
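One way to catch that early (a small sketch, not part of the original setup): make the build fail whenever the leak-detection code is compiled without _DEBUG, which is exactly what happens with /MT or /MD:
// Place right after the crtdbg includes shown above:
#if !defined(_DEBUG)
#error "_DEBUG is not defined - CRT leak detection is compiled out. Build with /MTd or /MDd."
#endif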

GetIpAddrTable() leaks memory. How to resolve that?

On my Windows 7 box, this simple program causes the memory use of the application to creep up continuously, with no upper bound. I've stripped out everything non-essential, and it seems clear that the culprit is the Microsoft Iphlpapi function "GetIpAddrTable()". On each call, it leaks some memory. In a loop (e.g. checking for changes to the network interface list), it is unsustainable. There seems to be no async notification API which could do this job, so now I'm faced with possibly having to isolate this logic into a separate process and recycle the process periodically -- an ugly solution.
Any ideas?
// IphlpLeak.cpp - demonstrates that GetIpAddrTable leaks memory internally: run this and watch
// the memory use of the app climb up continuously with no upper bound.
#include <stdio.h>
#include <windows.h>
#include <assert.h>
#include <Iphlpapi.h>
#pragma comment(lib,"Iphlpapi.lib")

void testLeak() {
    static unsigned char buf[16384];
    DWORD dwSize(sizeof(buf));
    if (GetIpAddrTable((PMIB_IPADDRTABLE)buf, &dwSize, false) == ERROR_INSUFFICIENT_BUFFER)
    {
        assert(0); // we never hit this branch.
        return;
    }
}

int main(int argc, char* argv[]) {
    for ( int i = 0; true; i++ ) {
        testLeak();
        printf("i=%d\n",i);
        Sleep(1000);
    }
    return 0;
}
@Stabledog:
I ran your example, unmodified, for 24 hours but did not observe the program's Commit Size increasing indefinitely. It always stayed below 1024 kilobytes. This was on Windows 7 (32-bit, without Service Pack 1).
Just for the sake of completeness, what happens to memory usage if you comment out the entire if block and the sleep? If there's no leak there, then I would suggest you're correct as to what's causing it.
Worst case, report it to MS and see if they can fix it - you have a nice simple test case to work from which is more than what I see in most bug reports.
Another thing you may want to try is to check the error code against NO_ERROR rather than a specific error condition. If you get back a different error from ERROR_INSUFFICIENT_BUFFER, there may be a leak on that path:
DWORD dwRetVal = GetIpAddrTable((PMIB_IPADDRTABLE)buf, &dwSize, false);
if (dwRetVal != NO_ERROR) {
    printf("ERROR: %d\n", dwRetVal);
}
I've been all over this issue now: it appears that there is no acknowledgment from Microsoft on the matter, but even a trivial application grows without bounds on Windows 7 (not XP, though) when calling any of the APIs which retrieve the local IP addresses.
So the way I solved it -- for now -- was to launch a separate instance of my app with a special command-line switch that tells it to retrieve the IP addresses and print them to stdout. I scrape stdout in the parent app; the child exits, and the leak problem is contained.
But it wins "dang ugly solution to an annoying problem", at best.
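For reference, that workaround looks roughly like this (a minimal sketch; the --dump-ips switch and the queryIpAddressesViaChild name are made up here, since the real flag wasn't given): spawn the child with its stdout redirected to an anonymous pipe, read the addresses, and let the child exit so any leaked memory goes with it.
#include <windows.h>
#include <stdio.h>

// Re-launches this same executable with a hypothetical --dump-ips switch
// and prints whatever the child writes to stdout.
void queryIpAddressesViaChild(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // inheritable handles
    HANDLE hRead = NULL, hWrite = NULL;
    if (!CreatePipe(&hRead, &hWrite, &sa, 0))
        return;
    SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0);   // parent's read end is not inherited

    char exePath[MAX_PATH];
    GetModuleFileNameA(NULL, exePath, MAX_PATH);
    char cmdLine[MAX_PATH + 32];
    _snprintf(cmdLine, sizeof(cmdLine) - 1, "\"%s\" --dump-ips", exePath);
    cmdLine[sizeof(cmdLine) - 1] = '\0';

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = hWrite;
    si.hStdError = hWrite;
    PROCESS_INFORMATION pi;
    if (CreateProcessA(NULL, cmdLine, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
    {
        CloseHandle(hWrite);            // so ReadFile sees EOF once the child exits
        char buf[512];
        DWORD n;
        while (ReadFile(hRead, buf, sizeof(buf) - 1, &n, NULL) && n > 0)
        {
            buf[n] = '\0';
            printf("%s", buf);          // in the real app: parse the addresses here
        }
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }
    else
    {
        CloseHandle(hWrite);
    }
    CloseHandle(hRead);
}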

Critical Sections leaking memory on Vista/Win2008?

It seems that using Critical Sections quite a bit in Vista/Windows Server 2008 leads to the OS not fully regaining the memory.
We found this problem with a Delphi application and it is clearly because of using the CS API. (see this SO question)
Has anyone else seen it with applications developed with other languages (C++, ...)?
The sample code was just initializing 10,000,000 critical sections, then deleting them. This works fine in XP/Win2003 but does not release all the peak memory in Vista/Win2008 until the application has ended.
The more you use critical sections, the more memory your application retains for nothing.
Microsoft has indeed changed the way InitializeCriticalSection works on Vista, Windows Server 2008, and probably also Windows 7.
They added a "feature" that retains some memory used for debug information when you allocate a bunch of critical sections. The more you allocate, the more memory is retained. It might be asymptotic and eventually flatten out (I'm not fully sold on that).
To avoid this "feature", you have to use the new API InitializeCriticalSectionEx and pass the flag CRITICAL_SECTION_NO_DEBUG_INFO.
An advantage of this call is that it can also be faster since, very often, only the spin count will be used without having to actually wait.
The disadvantages are that your old applications can be incompatible, you need to change your code, and it is now platform-dependent (you have to check the Windows version to determine which call to use). You also lose the debug information if you ever need it.
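A minimal sketch of that version-dependent switch (the InitCSNoDebugInfo wrapper name and the spin count of 4000 are just examples; the Ex function is resolved at run time so the binary still loads on XP/2003):
#include <windows.h>

#ifndef CRITICAL_SECTION_NO_DEBUG_INFO
#define CRITICAL_SECTION_NO_DEBUG_INFO 0x01000000 // defined in newer SDK headers
#endif

typedef BOOL (WINAPI *PInitCSEx)(LPCRITICAL_SECTION, DWORD, DWORD);

void InitCSNoDebugInfo(CRITICAL_SECTION* cs)
{
    // InitializeCriticalSectionEx only exists on Vista/Win2008 and later,
    // so resolve it dynamically and fall back on older systems.
    PInitCSEx pInitEx = (PInitCSEx)GetProcAddress(
        GetModuleHandleA("kernel32.dll"), "InitializeCriticalSectionEx");
    if (pInitEx)
        pInitEx(cs, 4000, CRITICAL_SECTION_NO_DEBUG_INFO); // no debug info retained
    else
        InitializeCriticalSection(cs);                     // pre-Vista fallback
}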
Test kit to freeze a Windows Server 2008:
- build this C++ example as CSTest.exe
#include "stdafx.h"
#include "windows.h"
#include <iostream>
using namespace std;
void TestCriticalSections()
{
const unsigned int CS_MAX = 5000000;
CRITICAL_SECTION* csArray = new CRITICAL_SECTION[CS_MAX];
for (unsigned int i = 0; i < CS_MAX; ++i)
InitializeCriticalSection(&csArray[i]);
for (unsigned int i = 0; i < CS_MAX; ++i)
EnterCriticalSection(&csArray[i]);
for (unsigned int i = 0; i < CS_MAX; ++i)
LeaveCriticalSection(&csArray[i]);
for (unsigned int i = 0; i < CS_MAX; ++i)
DeleteCriticalSection(&csArray[i]);
delete [] csArray;
}
int _tmain(int argc, _TCHAR* argv[])
{
TestCriticalSections();
cout << "just hanging around...";
cin.get();
return 0;
}
-...Run this batch file (needs the sleep.exe from server SDK)
@rem you may adapt the sleep delay depending on speed and # of CPUs
@rem sleep 2 on a duo-core 4GB. sleep 1 on a 4CPU 8GB.
@for /L %%i in (1,1,300) do @echo %%i & @start /min CSTest.exe & @sleep 1
@echo still alive?
@pause
@taskkill /im cstest.* /f
-...and see a Win2008 server with 8 GB and a quad-core CPU freeze before reaching the 300 launched instances.
-...repeat on a Windows 2003 server and see it handle it like a charm.
Your test is most probably not representative of the problem. Critical sections are considered "lightweight mutexes" because a real kernel mutex is not created when you initialize the critical section. This means your 10M critical sections are just structs with a few simple members. However, when two threads access a CS at the same time, in order to synchronize them a mutex is indeed created - and that's a different story.
I assume that in your real app threads do collide, as opposed to your test app. Now, if you're really treating critical sections as lightweight mutexes and create a lot of them, your app might be allocating a large number of real kernel mutexes, which are way heavier than the light critical section object. And since mutexes are kernel objects, creating an excessive number of them can really hurt the OS.
If this is indeed the case, you should reduce the usage of critical sections where you expect a lot of collisions. This has nothing to do with the Windows version, so my guess might be wrong, but it's still something to consider. Try monitoring the OS handle count and see how your app is doing.
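For the handle-count suggestion, a small sketch (the logHandleCount name is just an example; GetProcessHandleCount is available from XP SP1 onward): call it periodically around the code that takes the critical sections and watch whether the count climbs when threads start colliding.
#include <windows.h>
#include <stdio.h>

// Logs the number of handles currently open in this process.
void logHandleCount(void)
{
    DWORD handles = 0;
    if (GetProcessHandleCount(GetCurrentProcess(), &handles))
        printf("open handles: %lu\n", (unsigned long)handles);
}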
You're seeing something else.
I just built & ran this test code. Every memory usage stat is constant - private bytes, working set, commit, and so on.
int _tmain(int argc, _TCHAR* argv[])
{
    while (true)
    {
        CRITICAL_SECTION* cs = new CRITICAL_SECTION[1000000];
        for (int i = 0; i < 1000000; i++) InitializeCriticalSection(&cs[i]);
        for (int i = 0; i < 1000000; i++) DeleteCriticalSection(&cs[i]);
        delete [] cs;
    }
    return 0;
}
