How do I find the size of a semaphore object in Windows?
I tried using sizeof(), but we cannot pass the name of the semaphore object as an argument to sizeof; it has to be the handle, and sizeof(HANDLE) gives us the size of the handle, not of the semaphore.
This is what is known as an "opaque handle". There is no way to know how big it really is, what it contains, or how any of the functions work internally. This gives Microsoft the ability to completely rewrite the implementation with each new version of Windows if they want to, without worrying about breaking existing code. It's a similar concept to having a public and private interface to a class. Since we are not working on the Windows kernel, we only get to see the public interface.
Update:
It might be possible to get a rough idea of how big they are by creating a bunch and monitoring what happens to your memory usage in Process Explorer. However, since there is a good chance that they live in the kernel and not in user space, it might not show up at all. In any case, there are no guarantees about any other version of Windows, past or future, including patches/service packs.
It's something "hidden" from you. You can't say how big it is. And it's a kernel object, so it probably doesn't even live in your address space. It's like asking "how big is the Process Table?", or "how many MB is Windows wasting?".
I'll add that I ran a small test on my 32-bit Windows 7 machine: 100,000 kernel semaphores (named X{number}, with 0 <= number < 100000) took about 4 MB of kernel memory and 8 MB of user space (both measured with Task Manager). That's roughly 40 bytes per semaphore in kernel space and 80 bytes per semaphore in user space (this is on Win32; on 64-bit it will probably double).
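For reference, a rough sketch of that experiment (my own test harness, not anything official): create a large number of named semaphores, then pause so kernel and user memory usage can be inspected in Task Manager or Process Explorer. The X{number} naming scheme is just the arbitrary one used in the test above.

#include <windows.h>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    std::vector<HANDLE> handles;
    handles.reserve(100000);

    for (int i = 0; i < 100000; ++i) {
        std::string name = "X" + std::to_string(i);
        HANDLE h = CreateSemaphoreA(nullptr, 0, 1, name.c_str());
        if (h == nullptr) {
            std::printf("CreateSemaphoreA failed at %d (error %lu)\n", i, GetLastError());
            break;
        }
        handles.push_back(h);
    }

    std::printf("Created %zu semaphores - inspect memory usage now.\n", handles.size());
    std::getchar();  // pause so the numbers can be read off Task Manager / Process Explorer

    for (HANDLE h : handles)
        CloseHandle(h);
    return 0;
}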
Related
I'm currently studying memory management of OS by the video lecture. The instructor says,
In fact, you may have, and it is quite often the case that there may
be several parts of the process memory, which are not even accessed at
all. That is, they are neither executed, loaded or stored from memory.
I don't understand this, since even in a simple C program we access its whole address space. Don't we?
#include <stdio.h>
int main()
{
    printf("Hello, World!");
    return 0;
}
Could you elucidate this? If possible, could you provide an example program in which "several parts of the process memory are not even accessed at all" when it is run?
Imagine you have a large and complicated utility (e.g. a compiler), and the user asks it for help (e.g. they type gcc --help instead of asking it to compile anything). In this case, how much of the utility's code and data is used?
Most programs have various optional parts that aren't used (e.g. maybe something that works with graphics will have some code for 16 bits per pixel and other code for 32 bits per pixel, and will determine which code to use and never touch the other). Most heap allocators are "eager" (e.g. they'll ask the OS for 20 MiB of space and then the program might only malloc() 2 MiB of it). Sometimes a program will memory map a huge file but then only access a small part of it.
Even for your trivial "hello world" example code, the virtual address space probably contains a huge (several MiB) shared library to support lots of C standard library functions (e.g. puts(), fprintf(), sprintf(), ...), and your program only uses a small part of that shared library; and your program probably reserves a conservative amount of space for its stack (e.g. maybe 20 KiB) and then only uses a few hundred bytes of it.
In a virtual memory system, the address space of the process is created in secondary store at start up. Little or nothing gets placed in memory. For example, the operating system may use the executable file as the page file for the code and static data. It just sets up an internal structure that says some range of memory is mapped to these blocks in the executable file. The same goes for shared libraries. The other data gets mapped to the page file.
As your program runs it starts page faulting rapidly because nothing is in memory and the operating system has to load it from secondary storage.
If there is something that your program does not reference, it never gets loaded into memory.
If you had a global variable declared like
char somedata[1045];
and your program never references that variable, it will never get loaded into memory. The same goes for code. If you have pages of code that don't get executed (e.g. error handling code), they do not get loaded. If you link to shared libraries, you will likely be including a lot of functions that you never use; likewise, they will not get loaded if you never execute them.
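A minimal sketch of that idea (my illustration, not from the lecture): a program with a large zero-initialized global that is never touched. The whole array counts towards the virtual size, but if you compare virtual size against resident/working-set size in top or Task Manager while the program waits for input, the array never becomes resident because no page of it is ever accessed.

#include <stdio.h>

/* Large zero-initialized global: it occupies virtual address space (BSS)
   but no physical pages are allocated for it until it is touched. */
static char never_touched[64 * 1024 * 1024];

int main(void)
{
    printf("Virtual size includes %zu bytes that are never accessed.\n",
           sizeof(never_touched));
    getchar();   /* pause here and compare virtual vs. resident size */
    return 0;
}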
To begin with, not all of the address space is backed by physical memory at all times, especially if your address space covers 2^48+ bytes, which your computer doesn't have (which is not to say you can't map most of the address space to a single physical page of memory, which would be of very little utility for anything).
And then some portions of the address space may be purposefully permanently inaccessible, like a few pages near virtual address 0 (to catch NULL pointer dereferences).
And as has been pointed out in the other answers, with on-demand loading of programs, you may have some portions of the address space reserved for your program, but if the program doesn't happen to need any of its code or data there, nothing needs to be loaded there either.
I'm disassembling "Test Drive III". It's a 1990 DOS game. The *.EXE has MZ format.
I've never dealt with segmentation or DOS, so I would be grateful if you answered some of my questions.
1) The game's system requirements mention a 286 CPU, which has protected mode. As far as I know, DOS was 90% real-mode software, yet some applications could enter protected mode. Can I be sure that the app uses the CPU in real mode only? IOW, is it guaranteed that the segment registers contain the actual paragraph address of the segment instead of an index into a segment descriptor table?
2) Said system requirements mention 1 MB of RAM. How is this amount of RAM even meant to be accessed if the uppermost 384 KB of the address space are reserved for stuff like MMIO and ROM? I've heard about UMBs (using holes in the UMA to access RAM) and about the HMA, but that still doesn't give access to the whole 1 MB of physical RAM. So, was precious RAM just wasted because its physical address happened to fall in the range reserved for the UMA? Or does the game use some crutches like LIM EMS or XMS?
3) Is CS incremented automatically when the code crosses a segment boundary? Say IP reaches 0xFFFF, what then? Does CS switch to the next segment before the next instruction is executed? The same goes for SS. What happens when SP goes all the way down to 0x0000?
4) The MZ header of the executable looks like this:
signature 23117 "0x5a4d"
bytes_in_last_block 117
blocks_in_file 270
num_relocs 0
header_paragraphs 32
min_extra_paragraphs 3349
max_extra_paragraphs 65535
ss 11422
sp 128
checksum 0
ip 16
cs 8385
reloc_table_offset 30
overlay_number 0
Why does it have no relocation information? How is it even meant to run without address fixups? Or is it built as completely position-independent code consisting of program-counter-relative instructions? The game comes with a cheat utility which is also an MZ executable. Despite being much smaller (8448 bytes, so small that it fits into a single segment), it still has relocation information:
offset 1
segment 0
offset 222
segment 0
offset 272
segment 0
This allows IDA to properly disassemble the cheat's code. But the game EXE has nothing, even though it clearly has lots of far pointers.
5) Is there even such a thing as 'sections' in DOS? I mean, a data section, a code (text) section, etc.? The MZ header points to the stack section, but it has no information about a data section. Are data and code completely mixed in DOS programs?
6) Why even have a stack section in the EXE file at all? It contains nothing but zeroes. Why waste disk space instead of just saying "start the stack here", like it is done with the BSS section?
7) MZ header contains information about initial values of SS and CS. What about DS? What's its initial value?
8) What does an MZ executable have after the EXE data? The cheat utility has a whole 3507 bytes at the end of the executable file which look like
__exitclean.__exit.__restorezero._abort.DGROUP#.__MMODEL._main._access.
_atexit._close._exit._fclose._fflush._flushall._fopen._freopen._fdopen
._fseek._ftell._printf.__fputc._fputc._fputchar.__FPUTN.__setupio._setvbuf
._tell.__MKNAME._tmpnam._write.__xfclose.__xfflush.___brk.___sbrk._brk._sbrk
.__chmod.__close._ioctl.__IOERROR._isatty._lseek.__LONGTOA._itoa._ultoa.
_ltoa._memcpy._open.__open._strcat._unlink.__VPRINTER.__write._free._malloc
._realloc.__REALCVT.DATASEG#.__Int0Vector.__Int4Vector.__Int5Vector.
__Int6Vector.__C0argc.__C0argv.__C0environ.__envLng.__envseg.__envSize
Is this some kind of debugging symbol information?
Thank you in advance for your help.
Re. 1. No, you can't be sure until you prove otherwise to yourself. One giveaway would be the presence of MOV CR0, ... in the code.
Re. 2. While marketing materials aren't to be confused with an engineering specification, there's a technical reason for this. A 286 CPU could address more than 1 MB of physical address space. The RAM was only "wasted" in real mode, and only if an EMM (or EMS) driver wasn't used. On 286 systems, the RAM past 640 KB was usually "pushed up" to start at the 1088 KB mark. The ISA and on-board peripherals' memory address space was mapped 1:1 into the 640-1024 KB window. Using that RAM from real mode required an EMM or EMS driver. From protected mode, it was simply "there" as soon as you set up the segment descriptor correctly.
If the game actually needed the extra 384 KB of RAM over the 640 KB available in real mode, it's a strong indication that it either switched to protected mode or required the services of an EMM or EMS driver.
Re. 3. I wish I remembered that. On reflection, I wish not :) Someone else please edit or answer separately. Hah, I did know it at some point in time :)
Re. 4. You say "[the code] has lots of instructions like call far ptr 18DCh:78Ch". This implies one of three things:
Protected mode is used and the segment part of the address is a selector into the segment descriptor table.
There is code there that relocates those instructions without DOS having to do it.
There is code there that forcibly relocates the game to a constant position in the address space. If the game doesn't use DOS to access on-disk files, it can remove DOS completely and take over, gaining lots of memory in the process. I don't recall whether you could exit from the game back to the command prompt. Some games were "play until you reboot".
Re. 5. The .EXE header does not "point" to any stack; there is no stack section of the kind you imply, and the concept of sections doesn't exist as far as the .EXE file is concerned. The SS register value is obtained by adding the segment the executable was loaded at to the SS value from the header.
It's true that the linker can arrange sections contiguously in the .EXE file, but such sections' properties are not included in the .EXE header. They often can be reverse-engineered by inspecting the executable.
Re. 6. The SS and SP values in the .EXE header are not file pointers. The EXE file might have a part that maps to the stack, but that's entirely optional.
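To make Re. 5 and Re. 6 concrete, here's a small sketch (using the header values quoted in the question; the load segment is hypothetical, since DOS picks it at run time) of how the initial SS:SP and CS:IP are derived from the header:

#include <stdio.h>

/* Header values from the game's EXE quoted above. */
#define HDR_SS 11422u
#define HDR_SP 128u
#define HDR_CS 8385u
#define HDR_IP 16u

int main(void)
{
    /* Hypothetical load segment chosen by DOS at run time: the segment right
       after a 256-byte (16-paragraph) PSP placed at 0x1000:0. The real value
       depends on where in conventional memory the program gets loaded. */
    unsigned load_seg = 0x1000 + 0x10;

    unsigned ss = (load_seg + HDR_SS) & 0xFFFF;  /* relocated stack segment */
    unsigned cs = (load_seg + HDR_CS) & 0xFFFF;  /* relocated code segment  */

    printf("initial SS:SP = %04X:%04X\n", ss, HDR_SP);
    printf("initial CS:IP = %04X:%04X\n", cs, HDR_IP);
    return 0;
}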
Re. 7. This is already asked and answered here.
Re. 8. This looks like a debug symbol list. The cheat utility was linked with the debugging information left in. You can have completely arbitrary data there - often it'd be various resources (graphics, music, etc.).
How to uniquely identify computer (mainboard) using C#(.Net/Mono, local application)?
Edit: We can identify the mainboard in .NET using something like this (see Get Unique System Identifiers in C#):
using System.Management;
...
ManagementObjectSearcher searcher = new ManagementObjectSearcher("select * from Win32_MotherboardDevice");
...
But unfortunately Mono does not support System.Management. How can it be done under Mono for Linux? I don't know :(
Write a function that takes a few unique hardware parameters as input and generates a hash out of them.
For example, Windows activation looks at the following hardware characteristics:
Display Adapter
SCSI Adapter
IDE Adapter (effectively the motherboard)
Network Adapter (NIC) and its MAC Address
RAM Amount Range (i.e., 0-64mb, 64-128mb, etc.)
Processor Type
Processor Serial Number
Hard Drive Device
Hard Drive Volume Serial Number (VSN)
CD-ROM / CD-RW / DVD-ROM
You can pick up a few of them to generate your unique computer identifier.
Please see: Get Unique System Identifiers in C#
You realistically have MotherboardID, CPUID, Disk Serial and MAC address; from experience, none of them is 100% reliable.
Our stats show:
Disk serial: missing 0.1%
MAC: missing 1.3%
Motherboard ID: missing 30%
CPUID: missing 99%
0.04% of machines tested yielded no information; we couldn't even read the computer name. It may be that these were some kind of virtual PC, Hyper-V or VMware instance, or maybe just very locked down? In any case, your design has to be able to cope with these cases.
Disk serial is the most reliable but easy to change; the MAC can be changed, and depending on the filtering applied when reading it, it can change when device drivers are added (Hyper-V, Wireshark, etc.).
Motherboard and CPUID sometimes return values that are invalid "NONE", "AAAA..", "XXXX..." etc.
You should also note that these functions can be very slow to call (they may take a few seconds even on a fast PC), so it may be worth kicking them off on a background thread as early as possible; you ideally don't want to be blocking on them.
Try this:
http://carso-owen.blogspot.com/2007/02/how-to-get-my-motherboard-serial-number.html
Personally though, I'd go with the hard drive serial number. If a mainboard dies and is replaced, that PC isn't valid any more. If the HDD is replaced, it doesn't matter too much because the software was on it.
Of course, on the other hand, if the HDD is just moved elsewhere, the information goes with it, so you might want to look at a combination of serial numbers, depending what you want it for.
How about the MAC address of the network card?
I know that the error I'm going to show can't be fixed through code. I just want to know why and how it is caused; I also know it's due to the JVM trying to access the address space of another program.
A fatal error has been detected by the Java Runtime Environment:
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6dcd422a, pid=4024, tid=3900
JRE version: 6.0_14-b08
Java VM: Java HotSpot(TM) Server VM (14.0-b16 mixed mode windows-x86 )
Problematic frame:
V [jvm.dll+0x17422a]
An error report file with more information is saved as:
C:\PServer\server\bin\hs_err_pid4024.log
If you would like to submit a bug report, please visit:
http://java.sun.com/webapps/bugreport/crash.jsp
The book "modern operating systems" from Tanenbaum, which is available online here:
http://lovingod.host.sk/tanenbaum/Unix-Linux-Windows.html
Covers the topic in depth. (Chapter 4 is in Memory Management and Chapter 4.8 is on Memory segmentation). The short version:
It would be very bad if several programs on your PC could access each other's memory. Actually, even within one program, even in one thread, you have multiple areas of memory that must not influence one another. Usually a process has at least one memory area called the "stack" and one area called the "heap" (commonly every process has one heap plus one stack per thread; there MAY be more segments, but this is implementation dependent and does not matter for the explanation here). On the stack, things like your function's arguments and your local variables are saved. The heap holds variables whose size and lifetime cannot be determined by the compiler at compile time (in Java, that would be everything you use the "new" operator on). Example:
public void bar(String hi, int myInt)
{
String foo = new String("foobar");
}
In this example there are two String objects (referenced by "foo" and "hi"). Both of these objects are on the heap (you know this because at some point both Strings were allocated using "new"). And in this example three values are on the stack: the values of "myInt", "hi" and "foo". It is important to realize that "hi" and "foo" don't really contain Strings directly; instead they contain a reference that tells them where on the heap the String can be found. (This is not as easy to explain using Java because Java abstracts a lot. In C, "hi" and "foo" would be pointers, which are really just integers representing the address in the heap where the actual value is stored.)
You might ask yourself why there is a stack and a heap anyway. Why not put everything in the same place. Explaining that unfortunately exceeds the scope of this answer. Read the book I linked ;-). The short version is that stack and heap are differently managed and the separation is done for reasons of optimization.
The size of stack and heap are limited. (On Linux execute ulimit -a and you'll get a list including "data seg size" (heap) and "stack size" (yeah... stack :-)).).
The stack is something that just grows, like an array that gets bigger and bigger as you append more and more data. Eventually you run out of space. In this case you may end up writing into a memory area that does not belong to you anymore, and that would be extremely bad. So the operating system notices that and stops the program if it happens. On Linux you get a "Segmentation fault" and on Windows you get an "Access violation".
In other languages, like C, you need to manage your memory manually. A tiny error can easily cause you to accidentally write into some space that does not belong to you. In Java you have "automatic memory management", which means that the JVM does all this for you. You don't need to care, and that takes loads from your shoulders as a developer (it usually does; I bet there are people out there who would disagree about the "loads" part ;-)). This means that it /should/ be impossible to produce segmentation faults with Java. Unfortunately the JVM is not perfect. Sometimes it has bugs and screws up. And then you get what you got.
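For illustration only (a contrived C snippet of my own, unrelated to the JVM crash above): this is the kind of manual memory-management mistake that produces a segmentation fault / access violation in C:

#include <string.h>

int main(void)
{
    char buffer[16];
    /* Deliberate bug: writes far past the end of 'buffer'. Sooner or later
       this touches a page the process doesn't own, and the OS kills the
       program - "Segmentation fault" on Linux, "Access violation" on Windows. */
    memset(buffer, 0, 64 * 1024 * 1024);
    return 0;
}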
Consider a complex, memory-hungry, multi-threaded application running within a 32-bit address space on Windows XP.
Certain operations require n large buffers of fixed size, where only one buffer needs to be accessed at a time.
The application uses a pattern where some address space the size of one buffer is reserved early and is used to contain the currently needed buffer.
This follows the sequence:
(initial run) VirtualAlloc -> VirtualFree -> MapViewOfFileEx
(buffer changes) UnMapViewOfFile -> MapViewOfFileEx
Here the pointer to the buffer location is provided by the call to VirtualAlloc and then that same location is used on each call to MapViewOfFileEx.
The problem is that windows does not (as far as I know) provide any handshake type operation for passing the memory space between the different users.
Therefore there is a small opportunity (at each -> in my above sequence) where the memory is not locked and another thread can jump in and perform an allocation within the buffer.
The next call to MapViewOfFileEx is broken and the system can no longer guarantee that there will be a big enough space in the address space for a buffer.
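For reference, the reserve/release/map pattern described above looks roughly like this (a sketch with error handling omitted; hMapping and bufferSize stand in for the application's file mapping handle and fixed buffer size):

#include <windows.h>

void* ReserveBufferSlot(HANDLE hMapping, SIZE_T bufferSize)
{
    // Reserve address space to discover a suitable base address...
    void* base = VirtualAlloc(nullptr, bufferSize, MEM_RESERVE, PAGE_NOACCESS);

    // ...then release it so MapViewOfFileEx can reuse the same address.
    VirtualFree(base, 0, MEM_RELEASE);

    // RACE: between VirtualFree and MapViewOfFileEx another thread may
    // allocate inside [base, base + bufferSize) and the mapping fails.
    return MapViewOfFileEx(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, bufferSize, base);
}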
Obviously refactoring to use smaller buffers reduces the rate of failures to reallocate space.
Some use of HeapLock has had some success but this still has issues - something still manages to steal some memory from within the address space.
(We tried calling GetProcessHeaps and then using HeapLock to lock all of the heaps.)
What I'd like to know is: is there any way to lock a specific block of address space in a way that is compatible with MapViewOfFileEx?
Edit: I should add that ultimately this code lives in a library that gets called by an application outside of my control
You could brute force it; suspend every thread in the process that isn't the one performing the mapping, Unmap/Remap, unsuspend the suspended threads. It ain't elegant, but it's the only way I can think of off-hand to provide the kind of mutual exclusion you need.
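A rough sketch of that brute-force approach using the Toolhelp API (remap is a placeholder for the UnmapViewOfFile/MapViewOfFileEx step; error handling is minimal):

#include <windows.h>
#include <tlhelp32.h>
#include <vector>

void WithOtherThreadsSuspended(void (*remap)())
{
    std::vector<HANDLE> suspended;
    DWORD self = GetCurrentThreadId();

    // Snapshot all threads in the system and pick out the ones in this process.
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    THREADENTRY32 te;
    te.dwSize = sizeof(te);
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == GetCurrentProcessId() && te.th32ThreadID != self) {
                HANDLE h = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
                if (h && SuspendThread(h) != (DWORD)-1)
                    suspended.push_back(h);
            }
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);

    remap();  // the UnmapViewOfFile + MapViewOfFileEx pair happens here

    for (HANDLE h : suspended) {
        ResumeThread(h);
        CloseHandle(h);
    }
}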
Have you looked at creating your own private heap via HeapCreate? You could set the heap to your desired buffer size. The only remaining problem is then how to get MapViewOfFile to use your private heap instead of the default heap.
I'd assume that MapViewOfFile internally calls GetProcessHeap to get the default heap and then requests a contiguous block of memory. You could surround the call to MapViewOfFile with a detour, i.e. rewire the GetProcessHeap call by overwriting the method in memory, effectively inserting a jump to your own code, which can return your private heap.
Microsoft has published the Detours library, which I'm not directly familiar with, however. I know that detouring is surprisingly common; security software, virus scanners, etc. all use such frameworks. It's not pretty, but it may work:
HANDLE g_hndPrivateHeap;

HANDLE WINAPI GetProcessHeapImpl() {
    return g_hndPrivateHeap;
}

struct SDetourGetProcessHeap { // object for exception safety
    SDetourGetProcessHeap() {
        // put detour in place
    }
    ~SDetourGetProcessHeap() {
        // remove detour again
    }
};

void MapFile() {
    g_hndPrivateHeap = HeapCreate( ... );
    {
        SDetourGetProcessHeap d;
        MapViewOfFile(...);
    }
}
These may also help:
How to replace WinAPI functions calls in the MS VC++ project with my own implementation (name and parameters set are the same)?
How can I hook Windows functions in C/C++?
http://research.microsoft.com/pubs/68568/huntusenixnt99.pdf
Imagine if I came to you with a piece of code like this:
void *foo;
foo = malloc(n);
if (foo)
    free(foo);
foo = malloc(n);
Then I came to you and said, help! foo does not have the same address on the second allocation!
I'd be crazy, right?
It seems to me like you've already demonstrated a clear understanding of why this doesn't work. There's a reason that the documentation for any API that takes an explicit address to map into lets you know that the address is just a suggestion, and it can't be guaranteed. This also goes for mmap() on POSIX.
I would suggest you write the program in such a way that a change in address doesn't matter. That is, don't store too many pointers to quantities inside the buffer, or if you do, patch them up after reallocation. Similar to the way you'd treat a buffer that you were going to pass into realloc().
Even the documentation for MapViewOfFileEx() explicitly suggests this:
While it is possible to specify an address that is safe now (not used by the operating system), there is no guarantee that the address will remain safe over time. Therefore, it is better to let the operating system choose the address. In this case, you would not store pointers in the memory mapped file, you would store offsets from the base of the file mapping so that the mapping can be used at any address.
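A small sketch of that offset-based style (the names are illustrative, not from any real code): records inside the mapping refer to each other by offsets from the view base, so the view can land at a different address each time it is mapped and the data still resolves correctly:

#include <cstdint>

// Hypothetical record stored inside the mapped file: it links to the next
// record by offset from the start of the mapping, never by raw pointer.
struct Record {
    std::uint32_t nextOffset;   // 0 means "no next record"
    std::uint32_t payload;
};

// Resolve an offset against whatever base the view happens to have right now.
inline Record* At(void* viewBase, std::uint32_t offset)
{
    return reinterpret_cast<Record*>(static_cast<char*>(viewBase) + offset);
}

std::uint32_t SumPayloads(void* viewBase, std::uint32_t firstOffset)
{
    std::uint32_t sum = 0;
    for (std::uint32_t off = firstOffset; off != 0; off = At(viewBase, off)->nextOffset)
        sum += At(viewBase, off)->payload;
    return sum;   // works no matter where the view was mapped this time
}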
Update from your comments
In that case, I suppose you could:
Not map into contiguous blocks. Perhaps you could map in chunks and write some intermediate function to decide which to read from/write to?
Try porting to 64 bit.
As the earlier post suggests, you can suspend every thread in the process while you change the memory mappings. You can use SuspendThread()/ResumeThread() for that. This has the disadvantage that your code has to know about all the other threads and hold thread handles for them.
An alternative is to use the Windows debug API to suspend all threads. If a process has a debugger attached, then every time the process faults, Windows will suspend all of the process's threads until the debugger handles the fault and resumes the process.
Also see this question which is very similar, but phrased differently:
Replacing memory mappings atomically on Windows