I want to build a simple emulator using Intel VT-x.
Suppose we've allocated memory (guest RAM), of course from the paged pool, and built the EPT.
Everything is working.
Now the host OS (Win10 x64) decides to swap out those pages, move them, or something else, so the physical addresses in the EPT are no longer valid. The VMM needs to update the EPT entries accordingly (not present), and when the guest touches such a page, an EPT violation will occur. The VMM then performs a dummy read, so the host OS swaps the page back into memory, and marks the EPT entry present again.
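To make the intended flow concrete, here is a rough sketch of what I have in mind (EPT_PTE, InvalidateEptTlb and VirtualToPfn are hypothetical helpers, not real APIs):

    /* Called when the host may have paged out or moved the backing page. */
    void OnHostPageInvalidated(EPT_PTE *pte)
    {
        pte->ReadAccess = pte->WriteAccess = pte->ExecuteAccess = 0;   /* mark not present */
        InvalidateEptTlb();                                            /* e.g. INVEPT */
    }

    /* VM-exit handler for the resulting EPT violation. */
    void OnEptViolation(EPT_PTE *pte, void *GuestRamVa)
    {
        volatile unsigned char dummy = *(volatile unsigned char *)GuestRamVa; /* dummy read: host pages it back in */
        (void)dummy;
        pte->PageFrameNumber = VirtualToPfn(GuestRamVa);               /* re-resolve the (possibly new) PFN */
        pte->ReadAccess = pte->WriteAccess = pte->ExecuteAccess = 1;   /* present again */
    }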
So the question: is there any way to detect a swap-out or physical-address change for a particular linear address range of the current thread? Kernel or user mode. I've googled for a couple of days but found nothing.
If there is no such way, what approach can I use in this situation? Where can I read about that?
Thanks in advance!
I want to split the RAM in my PC into two parts: half for my Windows OS and the other half for an image buffer for my application. For example, my desktop has 32GB of memory, and I want to assign 16GB to Windows and reserve the other 16GB for my application's access only. Windows shouldn't touch that 16GB, but my application should use it as an image buffer. I know how to do this in Linux, but I need to do it on Windows. I think I have to configure the BIOS and implement a Windows driver that remaps the image buffer's pages for my application to access.
Is there any good way to do this?
You can do this with the Address Windowing Extensions API. Although this was originally designed for 32-bit applications, it is still available to 64-bit applications, and memory allocated this way is not available to the virtual memory management system.
However, you should note that in most cases allowing the virtual memory manager to do its job will result in better overall performance than explicitly locking down memory will.
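A minimal sketch of the AWE approach, assuming the process has been granted the "Lock pages in memory" privilege (SeLockMemoryPrivilege); error handling is trimmed and the size is illustrative, not the 16GB from the question:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        SIZE_T bytes = 64 * 1024 * 1024;                   /* 64MB for the example */
        ULONG_PTR pageCount = bytes / si.dwPageSize;
        ULONG_PTR *pfns = malloc(pageCount * sizeof(ULONG_PTR));

        /* Grab physical pages; these are taken out of normal virtual memory management. */
        if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns)) {
            printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;
        }

        /* Reserve a virtual address window and map the physical pages into it. */
        void *window = VirtualAlloc(NULL, bytes, MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
        if (!window || !MapUserPhysicalPages(window, pageCount, pfns)) {
            printf("Mapping failed: %lu\n", GetLastError());
            return 1;
        }

        memset(window, 0xAB, bytes);                       /* use the buffer */

        MapUserPhysicalPages(window, pageCount, NULL);     /* unmap */
        FreeUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns);
        free(pfns);
        return 0;
    }

The pages stay physically resident until you free them, which is exactly why the caveat above about letting the virtual memory manager do its job applies.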
How does a computer know where in the filesystem the bootloader is located? Is there a common file among all operating systems and all computers (maybe not all computers, but all architectures) that points to the bootloader? I know Raspberry Pi always loads bootcode.bin from the first partition of the SD card. Do PCs share a common file like this?
The Master Boot Record occupies the first 512 bytes (the first sector) of the first hard disk, and is the first thing loaded by the BIOS to hand control over to a program capable of booting the desired operating system. In general, a bootloader is installed in the MBR, replacing its previous contents. It is also possible (in dual-boot cases) for several of them to coexist, which is known as multi-booting.
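If you want to look at it yourself, here is a rough Windows-specific sketch that reads the first sector of the first disk and checks the 0x55AA boot signature (needs administrator rights; raw disk reads must be sector-sized, so on a 4K-native disk you would read 4096 bytes instead):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE disk = CreateFileA("\\\\.\\PhysicalDrive0", GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE,
                                  NULL, OPEN_EXISTING, 0, NULL);
        if (disk == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        unsigned char sector[512];
        DWORD read = 0;
        if (ReadFile(disk, sector, sizeof(sector), &read, NULL) && read == sizeof(sector)) {
            /* Roughly the first 446 bytes are bootstrap code, then four 16-byte
               partition table entries, then the 0x55 0xAA signature at offsets 510-511. */
            if (sector[510] == 0x55 && sector[511] == 0xAA)
                printf("Valid MBR boot signature found.\n");
            else
                printf("No MBR boot signature.\n");
        }
        CloseHandle(disk);
        return 0;
    }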
It differs among architectures, but usually there is a register from which the CPU reads its first instruction after reset to begin execution. This register often contains the bits of an assembly jump to another memory address, which is the address of the boot code. On the next clock cycle it fetches the instruction at that address, and so on.
The hardware designer has to decide how this is implemented. For example, the first instruction could be a read from an address on an EEPROM chip that contains the boot code.
As far as PCs go, the motherboard has its own boot process which loads the OS bootloader. That's why you can still start up a PC and see the BIOS without an OS installed.
Or at least that's what I remember from my Comp. Arch. class forever ago.
Given a 32-bit/64-bit processor, can a 4GB process run on 2GB of RAM? Will it use virtual memory, or will it not run at all?
This is HIGHLY platform dependent. On many 32-bit OSes, no single process can ever use more than 2GB of memory, regardless of the physical memory installed or the virtual memory allocated.
For example, my work computers use 32-bit Linux with PAE (Physical Address Extensions) to allow them to have 16GB of RAM installed. The 2GB-per-process limit still applies, however. Having the extra RAM simply allows me to have more individual processes running. 32-bit Windows works the same way.
64-bit OSes are more of a mixed bag. 64-bit Linux will allow individual processes to map memory well in excess of 32GB (but again, it varies from kernel to kernel). You will be limited only by the amount of swap (Linux virtual memory) you have. 64-bit Windows is a complete crap shoot. Certain versions will only allow 2GB per process, but most will allow >32GB, limited only by the amount of page file the user has allocated.
Microsoft provides a useful table breaking down the various memory limits on various OS versions/editions. Unfortunately, there is no such table that I can find with cursory searching for Linux, since it is so fragmented.
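If you want to see what your own system reports, a quick Windows-specific sketch:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (GlobalMemoryStatusEx(&ms)) {
            /* ullTotalVirtual is the size of this process's user address space:
               roughly 2GB on 32-bit Windows without /3GB, far larger on 64-bit systems. */
            printf("Physical RAM:            %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
            printf("Commit limit (RAM + PF): %llu MB\n", ms.ullTotalPageFile / (1024 * 1024));
            printf("User address space:      %llu MB\n", ms.ullTotalVirtual / (1024 * 1024));
        }
        return 0;
    }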
Short answer: Depends on the system.
Most 32-bit systems have a limitation of 2GB per process. If your system allows >2GB per process, then we can move on to the next part of your question.
Most modern systems use virtual memory. Yet, there are some constrained (and various old) systems that would just run out of space and make you cry. I believe uClinux supports both MMU and MMU-less architectures. Most 32-bit processors have an MMU (a few don't, see ARM Cortex-M0), and a handful of 16-bit or 8-bit parts have one as well (see Atmel ATtiny13A-MMU and Atari MMU).
Any process that needs more memory than is physically available will require some form of memory swap (e.g., a partition or a file).
Virtual memory is divided into pages. At any given moment a page resides either in RAM or in swap. Any attempt to access a memory page that is not loaded in RAM triggers an exception called a page fault, which is handled by the kernel.
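To illustrate, a small Windows-specific sketch that commits more memory than may be physically present and touches every page; the first touch of each non-resident page raises a page fault that the kernel satisfies from RAM or the page file (the 4GB size is just an example and assumes a 64-bit build):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = (SIZE_T)4 * 1024 * 1024 * 1024;    /* 4GB */
        unsigned char *p = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (!p) {
            printf("VirtualAlloc failed: %lu\n", GetLastError());  /* e.g. commit limit reached */
            return 1;
        }
        for (SIZE_T off = 0; off < size; off += 4096)
            p[off] = 1;                                  /* first touch -> page fault */
        printf("Touched %llu MB\n", (unsigned long long)(size / (1024 * 1024)));
        return 0;
    }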
A 64-bit process needing 4GB on a 64-bit OS can generally run in 2GB of physical RAM, by using virtual memory, assuming disk swap space is available, but performance will be severely impacted if all of that memory is frequently accessed.
A 32-bit process can't address the full 4GB of memory in practice (the operating system reserves some of the address space), so it won't run. Depending on the OS, you can probably run a process that needs more than 2GB but less than 3-4GB.
I am running Windows XP on an Intel x86 machine and I got an error in the instruction at memory location 0x00000001.
I am not worried about debugging the error, but I was interested to know what instructions would generally be located at the very beginning of memory.
The only processors I have written low-level code for are PIC microcontrollers, and I know that the first memory location would be a GOTO and then the interrupt vectors.
Windows guarantees that the first 64KB and the last 64KB of the user address space will always cause an access violation when read or written. This makes detecting null pointer dereferences easier.
See the graphic on that page below the heading "Free, Reserved, and Committed Virtual Memory": http://msdn.microsoft.com/en-us/library/ms810627.aspx
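You can see this from user mode with a quick Windows-specific sketch that asks the memory manager what lives at that address:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORY_BASIC_INFORMATION mbi;
        /* Query the region containing address 0x00000001; the low 64KB is
           never handed out to applications, so it should report MEM_FREE. */
        if (VirtualQuery((LPCVOID)0x00000001, &mbi, sizeof(mbi))) {
            printf("Base %p, size %llu KB, state %s\n",
                   mbi.BaseAddress,
                   (unsigned long long)(mbi.RegionSize / 1024),
                   mbi.State == MEM_FREE ? "MEM_FREE" :
                   mbi.State == MEM_RESERVE ? "MEM_RESERVE" : "MEM_COMMIT");
        }
        return 0;
    }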
That is not an actual physical address of 0x00000001. XP is a modern OS with paged virtual memory, so each application gets its own address space.
I believe that the zero page of a process's address space is deliberately inaccessible on Windows to help detect null pointer errors, in which case there's nothing there.
If a Windows application has IMAGE_FILE_LARGE_ADDRESS_AWARE set in the image header (via the /LARGEADDRESSAWARE linker flag), this is typically to allow a 32-bit application to use more than 2GB of memory (which only makes sense if the 32-bit operating system has the /3GB switch set in boot.ini). See the MSDN article on /3GB for more info.
My question is, what happens if you run this application on a system that does NOT have the /3GB switch set? Is it simply ignored? Or will the app try to use a 3GB heap and get out-of-memory errors because the user address space only has 2GB available?
I keep hearing anecdotally that the LARGEADDRESSAWARE flag is ignored on systems with a 2GB user address space, but I cannot find any official Microsoft documentation on this.
Thanks in advance.
Basically the IMAGE_FILE_LARGE_ADDRESS_AWARE flag tells the system, "I know that addresses with the high bit set are not negative, and I can handle them."
If the system is prepared to provide user-mode addresses above 2GB, then it will. If the system is not prepared to give out those addresses (i.e., a 32-bit Windows OS without the /3GB setting), the process can't get them anyway - but no harm done.
Also note that if an image has the IMAGE_FILE_LARGE_ADDRESS_AWARE bit set, it will get access to address space above 2GB on Win64 systems, which do not support (or need) the /3GB switch. A 32-bit application will get an address space of something close to 4GB, and a 64-bit application will get a huge address space - 7TB to 8TB, depending on the platform (64-bit builds set the bit by default).
http://msdn.microsoft.com/en-us/library/aa366778.aspx#memory_limits
The switch is ignored, if you can call it that. For once, Microsoft actually managed to come up with a descriptive name.
The flag means exactly what it says. This image file is aware that large addresses exist.
That is, it won't crash if it is given a pointer above the 2GB boundary.
And that's all. The OS doesn't have to treat the process specially in any way. It simply indicates that if the OS is able to provide more than 2GB of memory, this process can handle it without crashing.
You can make a simple hello-world application that never uses more than 1.5MB and still has this flag set. It doesn't mean "I want to use 3GB of memory"; it means "When I request memory, I don't care whether it's above or below the 2GB boundary".
So since the flag doesn't require the OS to do anything special, the OS simply won't do anything special if there is nothing special it can do.
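If you want to check whether a particular build actually has the flag, here is a quick sketch that reads the PE header of the running executable (alternatively, look at the image characteristics in the output of dumpbin /headers):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        BYTE *base = (BYTE *)GetModuleHandle(NULL);          /* image base of the .exe */
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
        IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);

        if (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
            printf("IMAGE_FILE_LARGE_ADDRESS_AWARE is set\n");
        else
            printf("IMAGE_FILE_LARGE_ADDRESS_AWARE is not set\n");
        return 0;
    }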