Understanding Negative Virtual Memory Pressure

I was re-reading Poul-Henning Kamp's paper entitled "You're Doing It Wrong" and one of the diagrams confused me.
The x-axis of Figure 1 is labeled as "VM pressure in megabytes". The author clarifies the x-axis as being "measured in the amount of address space not resident in primary memory, because the kernel paged it out to secondary storage".
I can understand zero MB of VM pressure (all of the address space is resident in primary memory).
I can understand a positive VM pressure, but I'm having a tough time picturing what negative 8 megabytes of VM pressure looks like (see the left of the x-axis of Figure 1). Putting negative 8 into the author's description leaves me with "-8 MB of address space not resident in primary memory", which doesn't make sense to me.
If I just conclude that the author accidentally negated positive numbers, the chart makes more sense, but I'm not ready to conclude that the author made the mistake; it's more likely that I have. But then as the pressure decreases, the runtime increases? That sounds counterintuitive.
I'm also not sure why there is a drastic change in the curves around -8 MB of VM pressure.
Thanks in advance!

Read "measured in the difference between amount of address space resident in primary memory and total required amount".
Word "not" somehow represents that minus sign.


Where is the early warning on the LTO tape?

SSC-5 says:
4.2.5 Early-warning
If writing, the application client needs an indication that it is approaching the end of the permissible recording area (i.e., the end of the partition (see 4.2.7)). This position, called early-warning (EW), is typically reported to the application client at a position early enough for the device to write any buffered logical objects to the medium while still leaving enough room for additional recorded logical objects (see figure 10 and figure 11). Some American National Standards include physical requirements for a marker placed on the medium to be detected by the device as early-warning.
Can anyone tell me where EW is on the LTO tape, e.g. LTO-5 or LTO-6?
Does it depend on the vendor of the tape?
Is it tens or hundreds of MBs from EW to EOP?
I can't find the reference...
Here is a direct quote from the HPE LTO-6 White Paper. Note: EWEOM stands for "Early Warning End Of Media".
The EWEOM is set to be slightly less than the native capacity of 2.5 TB for LTO-6 cartridges, as required by the LTO format. Crucially, however, the EWEOM is slightly before the actual physical end of tape, which means every LTO Ultrium format cartridge has a little bit more capacity than the stated headline figure. For LTO-6, this additional space is the equivalent of an additional 5% of capacity, although it is reserved exclusively for the system and cannot be accessed via the backup software. The excess tape is the first section of the media that is used when there are higher than expected errors, so that any rewrite and error correction takes place without losing the stated capacity of the tape.
Going back to your questions:
Can anyone tell me where EW is on the LTO tape, e.g. LTO-5 or LTO-6?
5% of additional capacity corresponds to 125 GB on LTO-6 media (2500 GB * 5% = 125 GB). This means that the position of EW (EWEOM) on LTO-6 should be located roughly 7 wraps before EOP. Note: 1 wrap = 18 GB on LTO-6. Please note that this location depends on the generation. As an example, if we assume that LTO-5 media also has 5% of additional capacity, there should be 75 GB in this region, which corresponds to roughly 4 wraps. This is just an example - I could not find the exact spare capacity of LTO-5.
Does it depend on the vendor of the tape?
Since this spare capacity is required by the LTO format, I believe the location is independent of the tape manufacturer.
Is it tens or hundreds of MBs from EW to EOP?
Once again, LTO-6 has 5% of spare capacity, which corresponds to 125 GB - far more than hundreds of MBs. I guess this margin depends on the generation, but it should be roughly a few percent. This is my best guess.
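As a sanity check on the arithmetic above, here is a small sketch; the 5% spare figure and the 18 GB-per-wrap value are the approximations quoted in this answer, not authoritative numbers:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    // Figures taken from the answer above; treat them as approximations.
    const double native_gb   = 2500.0; // LTO-6 native capacity
    const double spare_frac  = 0.05;   // spare capacity required by the format
    const double gb_per_wrap = 18.0;   // one wrap on LTO-6

    double spare_gb = native_gb * spare_frac; // 125 GB
    double wraps    = spare_gb / gb_per_wrap; // ~6.9
    printf("Spare: %.0f GB -> EW roughly %.0f wraps before EOP\n",
           spare_gb, std::round(wraps));      // ~7 wraps
    return 0;
}
```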

Logical block size on SSD

I'm currently working on a custom test/benchmark for SSD (CFast card) that runs on Win10 (written in C++). Part of the job is to read and interpret the S.M.A.R.T. attributes reported by the SSD. The one I'm interested in now is called "Total Host LBAs written", i.e. the number of LBAs written by the host system. The information I'm missing is "what is the size of memory one LBA refers to, in bytes?".
I have done some homework on how SSDs work internally, but I'm a bit confused here and hope somebody can shed some light on this; I am obviously missing something:
The FTL (Flash Translation Layer) in the SSD performs, amongst other operations (wear-leveling, garbage-collection etc.), LBA-to-physical address mapping.
The smallest memory unit that is individually readable/writable in an SSD is a page. In my case, the page is said to be 16 KiB in size. From this I would naively conclude that the LBA size will be the same as the page size, i.e. 16 KiB (or an integer multiple of it).
On the other hand, I would expect the LBA to have the size of a "sector" as reported by GetDiskFreeSpace() from WinAPI, which reports 512 B (with "SectorsPerCluster" = 8).
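For reference, this is roughly how that query looks in C++ (a minimal sketch; the drive letter "C:\\" is a placeholder for whatever volume the CFast card is mounted as):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    DWORD sectorsPerCluster = 0, bytesPerSector = 0;
    DWORD freeClusters = 0, totalClusters = 0;
    // Reports the logical sector size the file system sees -- typically
    // 512 B even when the SSD's internal flash pages are much larger.
    if (GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                          &freeClusters, &totalClusters)) {
        printf("Bytes per sector:    %lu\n", bytesPerSector);
        printf("Sectors per cluster: %lu\n", sectorsPerCluster);
    } else {
        fprintf(stderr, "GetDiskFreeSpaceA failed: %lu\n", GetLastError());
    }
    return 0;
}
```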
So, where is my thinking wrong, and what is the real LBA size I can count on (or how can I get its value)? If the LBA size were 512 B (or 8*512 B = 4 KiB), an LBA would refer to 1/32 (1/4) of my flash page, which seems inefficient. I understand there's a need to emulate older storage devices, but if writing a single LBA is allowed, what does the SSD do then? Does it cache the whole page, rewrite the 1/32 part corresponding to the LBA, write it to an empty block, and update the LBA-to-physical-address table (see the sketch below)?
Edit: sorry for using "LBA size" - I know it's not semantically 100% correct; hopefully it's understandable...
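That read-modify-write flow is indeed how sub-page writes are commonly described. A toy model of it, purely conceptual and not any vendor's actual FTL:

```cpp
#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

constexpr size_t kLbaSize     = 512;                   // logical block, as seen by the host
constexpr size_t kPageSize    = 16 * 1024;             // flash page (16 KiB)
constexpr size_t kLbasPerPage = kPageSize / kLbaSize;  // 32 LBAs per page

struct ToyFtl {
    std::vector<std::vector<uint8_t>> pages; // physical flash pages
    std::map<uint64_t, size_t> l2p;          // logical page -> physical page index

    // Writing one 512 B LBA into a 16 KiB page: read the old page,
    // patch the 1/32 slice, write the result to a fresh page, remap.
    void write_lba(uint64_t lba, const uint8_t* data) {
        uint64_t lpage = lba / kLbasPerPage;
        std::vector<uint8_t> buf(kPageSize, 0xFF);    // 0xFF = erased flash
        auto it = l2p.find(lpage);
        if (it != l2p.end()) buf = pages[it->second]; // read-modify...
        std::memcpy(&buf[(lba % kLbasPerPage) * kLbaSize], data, kLbaSize);
        pages.push_back(std::move(buf));              // ...write elsewhere
        l2p[lpage] = pages.size() - 1;                // update the mapping table
        // A real FTL would also mark the old physical page stale
        // so garbage collection can reclaim it later.
    }
};
```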

Xcode 5: What does the blue line in the memory tab stand for?

What does the blue line in the memory tab gauge represent?
IIRC, it represents the range of the last few minutes of memory utilisation in the Memory gauge and of CPU utilisation in the CPU gauge. So you know whether things have been swinging wildly back and forth or have just stayed within a narrow range.
In your case, 43 MB is what is currently being utilised, but the inner blue arc represents the range of memory utilised by your app over a certain period.
For example, the CPU is utilised in the range of 60-100% and memory in the range of 30-120 MB.

What makes a CPU architecture "X-bit"?

Warning: I'm not sure where this type of question belongs. If you know a better place for it, drop a link.
Background: Imagine you heard a sentence like this: "this computer/processor has an X-bit architecture". Now, if that computer is standard, you get a lot of information, like the maximum RAM capacity, the maximum unsigned/signed integer value and so on... But what if the computer is not standard?
The mystery: back in the '70s and '80s, the period referred to as the "8-bit era". Wait, 8-bit? Yes. So, if a CPU architecture is 8-bit, then:
The maximum RAM capacity of the computer is exactly 256 bytes.
The maximum UInt range is from 0 to 255 and the maximum signed integer range is -128 to 127.
The maximum ROM capacity is also 256 bytes, because you have to be able to jump around?
However, it's clearly not like that. Look at some technical characteristics of game consoles of that time and you will see that they exceed the 256 limit.
Quotes (http://www.8bitcomputers.co.uk/whatbasics.html):
The Sharp PC1211 is actually a 4-bit computer but cleverly glues two together to look like 8 (a computer able to add up to 16 would not be very useful!)
So if it's a 4-bit computer, why can it manipulate 8-bit integers? And another one...
The Sinclair QL is one of those computers that actually leaves the experts arguing. In parts, it is a 16 bit computer, in some ways it is even like a 32 bit computer but it holds its memory in 8 bits.
What? So why is there this mess on www.8bitcomputers.co.uk?
Generally: how is an X-bit computer defined?
The biggest data bus that it has is X bits long (then the Sinclair QL is a 32-bit computer)?
The CU functions of that computer are X bits long?
It holds its memory (in registers, ROM, RAM, whatever) in X bits?
Other definitions?
Purpose: I think that what I am designing is a 4-bit CPU. I don't really know whether it has a 4-bit architecture, because it uses a double-width ROM address and includes functions like "activate ALU" that take another 4 bits from register Y. I want to know if I can still call it a 4-bit CPU. That's it!
Thank you very much in advance :)
Whether a computer (or CPU) is X-bit is defined by whether its central units and registers, such as the CPU and ALU, are X bits wide. The addressing does not matter in defining the number X. As you mentioned, an 8-bit computer (e.g. the Motorola 68HC11 - even though it is an MCU, it can still be counted as a computer, with a CPU, I/O and memory) can have 16-bit addressing in order to increase the RAM or memory size.
The data-bus size and the register sizes of the CPU and ALU are the limiting factors in defining the X in an X-bit computer architecture. You can get more information from http://en.wikipedia.org/wiki/Word_(computer_architecture)
So the answer to your question is: "Yes, you are designing a 4-bit CPU if the registers and the data bus are 4 bits wide."
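To illustrate the "gluing" mentioned in the Sharp PC1211 quote above, here is a sketch of how a 4-bit ALU can add 8-bit integers by chaining two 4-bit additions through a carry. This is purely conceptual, not the PC1211's actual logic:

```cpp
#include <cstdint>
#include <cstdio>

// Add two 8-bit values using only 4-bit (nibble) operations and a carry,
// the trick that lets a 4-bit ALU appear to handle 8-bit integers.
uint8_t add8_on_4bit_alu(uint8_t a, uint8_t b) {
    uint8_t lo    = (a & 0x0F) + (b & 0x0F);  // low-nibble add (may produce 5 bits)
    uint8_t carry = (lo >> 4) & 1;            // carry out of the low nibble
    uint8_t hi    = ((a >> 4) & 0x0F) + ((b >> 4) & 0x0F) + carry;
    return static_cast<uint8_t>((hi << 4) | (lo & 0x0F)); // bits past 8 are dropped
}

int main() {
    printf("%u\n", add8_on_4bit_alu(200, 55)); // prints 255
    return 0;
}
```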

Calculating the maximal time to access consecutive values in a DRAM in page mode

From a 16MB DRAM, I have to calculate the maximum time it can take to read 8300 consecutive values. Here are the specifications that I have:
- the DRAM is structured as a table of 4096 x 4096 cells;
- it has a cycle time (Tc) of 65 ns;
- in page mode it has a cycle time (Tpm) of only 45 ns.
I thought it could simply be done by calculating the number of cells in the DRAM, computing the percentage that 8300 represents of the total (4096 x 4096), and then multiplying that percentage by the access time. Unfortunately that did not give me the right answer... Any help would be greatly appreciated! Thanks, guys.
There are many variables to take into account (e.g., whether open- or closed-page mode is used, the number of ranks), so the answer is memory-dependent. To get a better understanding and restate your question, please read this paper, which helped me understand RAM better:
Power and Performance Trade-Offs in Contemporary DRAM System Designs for Multicore Processors
You can search for it in Google Scholar.
Thanks.
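For the textbook version of this exercise, a common model (an assumption here, not something stated in the answer above) is that the first access to each row costs a full cycle Tc and every further access within the same open row costs only the page-mode cycle Tpm. The worst case is when the run of 8300 consecutive cells straddles the maximum number of rows:

```cpp
#include <cstdio>

int main() {
    // Assumed model: first access in each row costs Tc, the rest Tpm.
    const long   row_len = 4096;  // cells per row (4096 x 4096 array)
    const long   n       = 8300;  // consecutive values to read
    const double Tc      = 65.0;  // ns, full cycle time
    const double Tpm     = 45.0;  // ns, page-mode cycle time

    // Worst case: the run starts on the last cell of a row, so it
    // touches ceil((n - 1) / row_len) + 1 rows -- 4 rows here.
    long rows = (n - 1 + row_len - 1) / row_len + 1;

    // One full-cycle access per row opened, page mode for the rest.
    double total_ns = rows * Tc + (n - rows) * Tpm;
    printf("rows touched (worst case): %ld\n", rows);               // 4
    printf("max time: %.0f ns (~%.1f us)\n", total_ns, total_ns / 1000.0);
    // -> 4 * 65 + 8296 * 45 = 373580 ns, about 373.6 us
    return 0;
}
```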
