I am trying to learn OS development by building a small kernel, using the physical memory map provided by GRUB. Up to 3.5 GB of memory the results are fine, but beyond that the highest accessible memory is shown as 3 GB no matter what the actual physical memory size is. The kernel is higher half, located at 3 GB (virtual), and it runs on a single-core CPU. Can someone point out why this is happening?
The classical "3.5 GB issue" is the memory-mapped video card: its apertures (and other device MMIO) occupy the physical address range just below 4 GB, so that range is not usable RAM.
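For reference, here is a minimal sketch (not a drop-in fix) of walking the Multiboot 1 memory map that GRUB hands over. The structure layout and the type value 1 for available RAM follow the Multiboot specification; the function name highest_usable and the assumption that the map is already accessible from the kernel are mine. The addr and len fields are 64-bit, so RAM the chipset remaps above 4 GB appears as a separate entry above the MMIO hole, and keeping the arithmetic in 64-bit values lets you see it.

    /* Sketch: walk the Multiboot 1 memory map and report the highest
     * byte of available RAM. Field layout follows the Multiboot spec;
     * higher-half address translation and error handling are omitted. */
    #include <stdint.h>

    #define MULTIBOOT_MEMORY_AVAILABLE 1

    typedef struct multiboot_mmap_entry {
        uint32_t size;   /* size of this entry, excluding this field */
        uint64_t addr;   /* 64-bit physical base address             */
        uint64_t len;    /* 64-bit length in bytes                   */
        uint32_t type;   /* 1 = available RAM                        */
    } __attribute__((packed)) multiboot_memory_map_t;

    /* mmap_addr/mmap_length come from the multiboot info structure GRUB
     * passes in EBX; they are assumed to be accessible here. */
    uint64_t highest_usable(uint32_t mmap_addr, uint32_t mmap_length)
    {
        uint64_t highest = 0;
        multiboot_memory_map_t *e = (multiboot_memory_map_t *)(uintptr_t)mmap_addr;
        uint8_t *end = (uint8_t *)(uintptr_t)(mmap_addr + mmap_length);

        while ((uint8_t *)e < end) {
            if (e->type == MULTIBOOT_MEMORY_AVAILABLE) {
                uint64_t top = e->addr + e->len;  /* keep full 64-bit values */
                if (top > highest)
                    highest = top;
            }
            /* entries are variable-sized; 'size' excludes the size field itself */
            e = (multiboot_memory_map_t *)((uint8_t *)e + e->size + sizeof(e->size));
        }
        return highest;
    }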
Related
I am trying to understand OS memory management by reading Windows Internals and some other tutorials on the net, but I get confused on the topic. My doubts are: on what basis does the OS allocate space in RAM for a process? When does the OS allocate virtual memory for the process? Where does the loader first load the image, into RAM or into virtual memory?
As far as I know, the OS first creates a temporary copy (thumbnails, for example) in RAM, i.e. the file gets loaded into RAM and is immediately shown on the screen; that's how RAM is used here.
When you want to perform any action on it (delete, move, or anything else), the OS then deals with the actual storage.
Hope that helps a bit, thanks.
Suppose I open Notepad (not necessarily Notepad) and write a text file of 6 GB (again, suppose). I have no running processes other than Notepad itself, and the memory assigned to the user processes has a limit of less than 6 GB. My disk space is sufficient, though.
What happens to the file now? I know that writing is definitely possible and virtual memory may get involved, but I am not sure how. Does virtual memory actually get involved? Either way, can you please explain what happens from an OS point of view?
Thanks
From the memory point of view, Notepad allocates a 6 GB buffer in memory to store the text you're seeing. The process consists of a data segment (which includes the buffer above, but not only that) and a code segment (the Notepad native code), so the total process space is going to be larger than 6 GB.
Now, it's all virtual memory as far as the process is concerned (yes, it is involved). If I understand your case right, the process won't fit into physical memory, so it's going to crash due to insufficient memory.
the memory assigned to the user processes has a limit of less than 6 GB.
If this is a hard limit enforced by the operating system, it may at its own discretion kill the process with some error message. It may also do anything else it wants, depending on its implementation. This part of the answer disregards virtual memory or any other way of swapping RAM to disk.
My disk space is sufficient, though. What happens to the file now? I know that writing is definitely possible and virtual memory may get involved, but I am not sure how.
At this point, when your question starts involving the disk, we can start talking about virtual memory and swapping. If virtual memory is involved and the 6 GB limit applies to RAM usage, not total virtual memory usage, parts of the file can be moved to disk. These could be parts of the file currently out of view on screen, for example. The OS then manages which parts of the (more than 6 GB of) data are available in RAM, and swaps data in and out depending on what the program needs (i.e. where in the file you are working).
Does virtual memory actually get involved?
That depends on whether it is enabled in the OS in question and how it is configured.
Yes, a lot of this depends on the OS in question, its implementation, and how it handles cases like this. If the OS is poorly written, it may even crash itself.
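To make the paging idea above concrete, here is a sketch under POSIX assumptions (mmap; on Windows the analogous calls would be CreateFileMapping/MapViewOfFile). The file name big.txt is a placeholder. The point is that the whole file is mapped into virtual address space, but only the pages actually touched are brought into RAM, and the OS can evict them again under memory pressure.

    /* Sketch: map a large file and touch only a small window of it.
     * Only the accessed pages are faulted into physical RAM. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "big.txt";            /* placeholder file name */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Reserve address space for the whole file; no RAM is used yet. */
        char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touching one byte per page faults in only those pages. */
        long page = sysconf(_SC_PAGESIZE);
        unsigned long sum = 0;
        for (off_t off = 0; off < st.st_size && off < 16 * page; off += page)
            sum += (unsigned char)data[off];

        printf("checksum of the first pages: %lu\n", sum);
        munmap(data, (size_t)st.st_size);
        close(fd);
        return 0;
    }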
Before I give you a precise answer, let me explain a few things.
I would suggest opening the Linux System Monitor or the Windows Task Manager, and then launching heavy applications such as a game, Android Studio, IntelliJ, etc.
Go to the memory visualization tab. You will notice that each of the applications (processes) consumes a certain amount of memory. Okay, fine!
Some machines, if not most, support virtual memory.
It is the concept of setting aside a certain amount of hard disk space as a backup plan: if some application (process) consumes a lot of memory but is not active at the moment, it gets moved from main memory to this virtual memory to give priority to other tasks that are currently busy.
Virtual memory is slower than main memory, as it is located on the hard disk; however, it is still faster than re-fetching the data from its original place on disk.
When launching an application, not all of it is loaded into memory; only the parts that are needed at that particular time are. That is why you can play a 60 GB game on a machine that has 4 GB of RAM.
To answer your question:
If you happen to launch a piece of software that consumes all the memory resources of your machine, your machine will freeze. You will even hear the sounds made by its cooling system, which gets louder and faster.
I hope I have clarified this well.
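To illustrate the "only what is needed gets loaded" point with something you can run, here is a sketch that assumes Linux and its default overcommit behaviour; the helper resident_pages and the 1 GiB figure are just for the illustration. malloc only reserves virtual address space, and resident (physical) memory grows only as the pages are actually written.

    /* Sketch: reserve a large buffer, then watch resident memory grow
     * only as pages are touched. /proc/self/statm reports sizes in pages. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static long resident_pages(void)
    {
        long size = 0, resident = 0;
        FILE *f = fopen("/proc/self/statm", "r");
        if (f) {
            fscanf(f, "%ld %ld", &size, &resident);
            fclose(f);
        }
        return resident;
    }

    int main(void)
    {
        size_t total = 1UL << 30;              /* ask for 1 GiB of address space */
        char *buf = malloc(total);
        if (!buf) { perror("malloc"); return 1; }

        printf("after malloc:   %ld resident pages\n", resident_pages());

        memset(buf, 0xAB, total / 4);          /* touch only a quarter of it */
        printf("after touching: %ld resident pages\n", resident_pages());

        free(buf);
        return 0;
    }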
What is the minimum amount of RAM required to run the Linux kernel on an embedded device? In Linux 0.11 for the 80x86, the minimum RAM required was 2 MB, to hold the kernel data structures and interrupt vectors.
How much RAM, at minimum, does the present Linux 3.18 kernel need? Do different architectures such as x86 and ARM have different minimum RAM requirements for booting? How does one calculate this?
It's possible to shrink it down to ~600 KiB. Check the work done by Tom Zanussi from Intel.
Presentation from Tom and Wiki page about the topic.
UPDATE: Tom published interesting statistics about memory use by the different subsystems in the kernel. He did this research while he was working on the project.
Yet another interesting project is Gray486linux.
This site suggests:
A minimal uClinux configuration could be run from 4MB RAM, although the recommendation we are giving to our customers is that they should design in at least 16 MB's worth of RAM.
If you are using SDRAM, the problem would be getting a part any smaller than 16 Mb at reasonable volume cost and availability, so maybe it is a non-problem? For SRAM, however, that is a large and relatively expensive part.
eLinux.org has a lot of information on embedded kernel size, how to determine it, and how to minimise it.
It depends on how you define Linux. If you mean a current, general-purpose operating system, then we are talking about well above 100 MB of memory, better 1000 MB.
If we are talking about "Linux from Scratch", then we are also talking about how much pain you are willing to suffer. In the mid-1990s I built a Linux system by compiling every binary myself and made it run on a 386SX-16 with 1.5 MB of memory. It had a 40 MB hard drive, which was mostly empty. I compiled my own kernel 1.0.9, my own libc5, my own base tools, and SVGAlib. That system was somewhat usable for text-mode and SVGAlib applications. Increasing the memory to 2 MB helped a lot. And believe me, the system was extremely bare. Today all components need at least twice the memory, but then there is also uClibc instead of libc, and BusyBox.
With 8 MB of memory I can create a very basic system from scratch today. With 512 MB of memory you might get a somewhat modern-looking but slow desktop system.
I have the following question.
I have 2.5 GB of RAM in my computer. Is it possible that, if I allocate essentially all of the memory to one process, for example
char *buffer = malloc(2.4GB);
then no other process (Google Chrome, the Microsoft games on the computer, etc.) can run?
Probably not. First, your operating system has protections in place: malloc eventually turns into a system call in your OS, so it will fail rather than kill everything.
Second, because of virtual memory you can have more memory allocated than there is RAM, so even if your OS did let you allocate 2.5 GB it would still be able to function and run processes.
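A sketch of the scenario in the question, just to show what the failure looks like in practice (the 2.4 GB figure is taken from the question): on a typical 32-bit system the call returns NULL because there is no contiguous 2.4 GB of user address space, and either way other processes keep running because each one has its own virtual address space.

    /* Sketch: try to allocate ~2.4 GB in one block and report the result. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t want = (size_t)2400 * 1024 * 1024;   /* about 2.4 GB */
        char *buffer = malloc(want);

        if (buffer == NULL) {
            printf("malloc of %zu bytes failed\n", want);
            return 1;
        }
        printf("malloc of %zu bytes succeeded\n", want);
        /* Physical memory is only consumed once the pages are written to. */
        free(buffer);
        return 0;
    }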
While it is OS- and compiler-dependent, with Visual C++ under 32-bit Windows you will typically be unable to malloc more than 512 MB at a time. This is controlled by the preprocessor constant _HEAP_MAXREQ. For details of the approach I used to work around this limitation, see the following thread. If you go to 64 bits, this also ceases to be an issue, although you might end up using much more virtual memory than you would expect.
In an OS like Windows, where each process gets a 4 GB virtual address space (assuming a 32-bit OS), it doesn't matter how much RAM you have. In such a case malloc(2.4GB) will surely fail, as the user address space is limited to 2 GB. Even allocating 2 GB will most probably fail, because the system has to find 2 GB of contiguous virtual address space for malloc, and that much contiguous free space is nearly impossible to find due to fragmentation.
Computers work with virtual memory; this has no direct relation to the real size of RAM.
I am playing with an MSDN sample to do memory stress testing (see: http://msdn.microsoft.com/en-us/magazine/cc163613.aspx) and an extension of that tool that specifically eats physical memory (see http://www.donationcoder.com/Forums/bb/index.php?topic=14895.0;prev_next=next). I am obviously confused, though, about the differences between virtual and physical memory. I thought each process has 2 GB of virtual memory (although I have also read 1.5 GB because of "overhead"). My understanding was that some, all, or none of this virtual memory could be backed by physical memory, and that the amount of physical memory used by a process could change over time (memory could be swapped out to disk, etc.). I further thought that, in general, when you allocate memory, the operating system could use physical memory or virtual memory. From this, I concluded that dwAvailVirtual should always be equal to or greater than dwAvailPhys in the call to GlobalMemoryStatus. However, I often (always?) see the opposite. What am I missing?
I apologize in advance if my question is not well formed. I'm still trying to get my head around the whole memory management system in Windows. Tutorials/Explanations/Book recs are most welcome!
Andrew
That was only true in the olden days, back when RAM was expensive. The operating system maps pages of virtual memory to RAM as needed. If there isn't enough RAM to satisfy a program's request, it starts unmapping pages to make room. If such a page contains data instead of code, it gets written to the paging file. Whenever the program accesses that page again, it generates a paging fault, letting the operating system read the page back from disk.
If the machine has little RAM and lots of processes consuming virtual memory pages, that can cause a very unpleasant effect called "thrashing". The operating system is constantly accessing the disk and machine performance slows down to a crawl.
More RAM means less disk access. There's very little reason not to use 3 or 4 GB of RAM on a 32-bit operating system; it's cheap. Even if you do not get to use all 4 GB, not all of it will be addressable, due to hardware devices taking space on the address bus (video, mostly). But that won't change the size of the virtual memory accessible by user code, which is still 2 gigabytes.
Windows Internals is a good book.
The amount of virtual memory is limited by the size of the address space, which is 4 GB per process on a 32-bit system. From this you have to subtract the size of the regions reserved for system use and the amount of VM already used by your process (including all the libraries mapped into its address space).
On the other hand, the total amount of physical memory may be higher than the amount of virtual memory space the system has left free for your process to use (and these days it often is).
This means that if you have more than ~2 GB of RAM, you can't use all your physical memory in one process (since there's not enough virtual address space to map it to), but it can be used by many processes. Note that this limitation is removed on a 64-bit system.
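If you want to see the user-mode address range being described, a short sketch using the documented Win32 call GetSystemInfo prints the lowest and highest application addresses; on 32-bit Windows the span is roughly 2 GB (or about 3 GB for /LARGEADDRESSAWARE executables with the appropriate boot option), while on 64-bit it is vastly larger.

    /* Sketch: print the user-mode address range Windows gives this process. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        printf("lowest user address : %p\n", si.lpMinimumApplicationAddress);
        printf("highest user address: %p\n", si.lpMaximumApplicationAddress);
        return 0;
    }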
I don't know if this is your issue, but the MSDN page for the GlobalMemoryStatus function contains the following warning:
On computers with more than 4 GB of memory, the GlobalMemoryStatus function can return incorrect information, reporting a value of –1 to indicate an overflow. For this reason, applications should use the GlobalMemoryStatusEx function instead.
Additionally, that page says:
On Intel x86 computers with more than 2 GB and less than 4 GB of memory, the GlobalMemoryStatus function will always return 2 GB in the dwTotalPhys member of the MEMORYSTATUS structure. Similarly, if the total available memory is between 2 and 4 GB, the dwAvailPhys member of the MEMORYSTATUS structure will be rounded down to 2 GB. If the executable is linked using the /LARGEADDRESSAWARE linker option, then the GlobalMemoryStatus function will return the correct amount of physical memory in both members.
Since you're referring to members like dwAvailPhys instead of ullAvailPhys, it sounds like you're using a MEMORYSTATUS structure instead of a MEMORYSTATUSEX structure. I don't know the consequences of that on a 64-bit platform, but on a 32-bit platform that definitely could cause incorrect memory sizes to be reported.
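For completeness, a minimal sketch of the MEMORYSTATUSEX path the documentation recommends (dwLength must be set before the call; the 64-bit ull* fields avoid the 2 GB rounding and overflow quirks quoted above):

    /* Sketch: query memory with GlobalMemoryStatusEx instead of the
     * older GlobalMemoryStatus + MEMORYSTATUS pair. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);              /* required before the call */

        if (!GlobalMemoryStatusEx(&ms)) {
            printf("GlobalMemoryStatusEx failed: %lu\n", GetLastError());
            return 1;
        }
        printf("total physical : %llu MB\n", ms.ullTotalPhys    / (1024 * 1024));
        printf("avail physical : %llu MB\n", ms.ullAvailPhys    / (1024 * 1024));
        printf("total virtual  : %llu MB\n", ms.ullTotalVirtual / (1024 * 1024));
        printf("avail virtual  : %llu MB\n", ms.ullAvailVirtual / (1024 * 1024));
        return 0;
    }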