Would sub-sector erase increase the endurance of flash memory more than whole-sector erase? - flash-memory

I'm planning to use Spansion (now Cypress) S25FL032P flash memory (datasheet: http://www.cypress.com/file/196861/download).
The datasheet says:
Memory architecture
– Uniform 64 KB sectors
– Top or bottom parameter block (Two 64-KB sectors (top or bottom) broken down into sixteen 4-KB sub-sectors each)
– 256-byte page size
...
Erase
– Bulk erase function
– Sector erase (SE) command (D8h) for 64 KB sectors
– Sub-sector erase (P4E) command (20h) for 4 KB sectors
– Sub-sector erase (P8E) command (40h) for 8 KB sectors
So it can perform erase operations at the (1) chip, (2) 64 KB sector, (3) 8 KB sub-sector, and (4) 4 KB sub-sector level.
My question is: do both of the following consume a single P/E cycle from the perspective of the 64 KB sector?
1. A single SE command to erase the 64 KB sector.
2. Sixteen P4E (4 KB sub-sector erase) commands covering the same 64 KB sector.
Or does the second method consume 16x the P/E cycles?
I'm asking because I want to issue erase commands over as small a region as possible while maximizing cell endurance.
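For concreteness, here is a minimal C sketch of the two approaches, with print-only stand-ins for a real SPI driver. The erase opcodes (SE D8h, P4E 20h) are the ones from the datasheet excerpt above; the Write Enable (06h) / status-poll step and the choice of a bottom parameter sector as the target follow the usual SPI-NOR flow and are assumptions on my part, not something confirmed by the question:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Stand-ins for a real SPI HAL: here they just print what would be sent. */
    static void spi_write(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            printf("%02X ", buf[i]);
        printf("\n");
    }

    static void wait_while_busy(void)
    {
        /* On real hardware: poll the status register until the WIP bit clears. */
    }

    static void erase_cmd(uint8_t opcode, uint32_t addr)
    {
        const uint8_t wren = 0x06;          /* Write Enable must precede each erase */
        uint8_t cmd[4] = { opcode,
                           (uint8_t)(addr >> 16),
                           (uint8_t)(addr >> 8),
                           (uint8_t)addr };

        spi_write(&wren, 1);
        spi_write(cmd, sizeof cmd);
        wait_while_busy();
    }

    int main(void)
    {
        uint32_t sector_base = 0x000000;    /* a parameter sector, as an example */

        /* Method 1: one SE (D8h) command erases the whole 64 KB sector. */
        erase_cmd(0xD8, sector_base);

        /* Method 2: sixteen P4E (20h) commands cover the same 64 KB
         * (only legal inside the 4 KB parameter sub-sectors of this part). */
        for (uint32_t off = 0; off < 0x10000; off += 0x1000)
            erase_cmd(0x20, sector_base + off);

        return 0;
    }

Whether the second routine costs the underlying 64 KB region one P/E cycle or sixteen is exactly what the datasheet does not state, hence the question.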

Related

Question about memory space in microprocessor

My teacher has asked me to explain the difference between the maximum memory space of a 1 MB and a 4 GB microprocessor. Does anyone know how to answer this question, apart from the difference in size itself?
https://i.stack.imgur.com/Q4Ih7.png
A 32-bit microprocessor can address up to 4 GB of memory, because its registers can contain an address that is 32 bits in size. (A 32-bit number ranges from 0 to 4,294,967,295.) Each of those values can represent a unique memory location.
The 16-bit 8086, on the other hand, has 16-bit registers which only range from 0 to 65,535. However, the 8086 has a trick up its sleeve: it can use memory segments to increase this range up to one megabyte (20 bits). There are segment registers whose values are automatically bit-shifted left by 4 and then added to the regular registers to form the final address.
For example, let's look at video mode 13h on the 8086. This is the 256-color VGA standard with a resolution of 320x200 pixels. Each pixel is represented by a single byte, and the desired color is stored in that byte. The video memory is located at address 0xA0000, but since this value is greater than 16 bits, the programmer will typically load 0xA000 into a segment register like ds or es, then load 0000 into si or di. Once that is done, the program can read from [ds:si] and write to [es:di] to access the video memory. It's important to keep in mind that with this memory addressing scheme, not all combinations of segment and offset represent a unique memory location: es = A100, di = 0000 is the same as es = A000, di = 1000.
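To make the segment arithmetic concrete, here is a small C sketch of the real-mode address calculation described above (segment shifted left by 4 bits, then added to the offset), using the values from the mode 13h example:

    #include <stdio.h>
    #include <stdint.h>

    /* Real-mode physical address: segment shifted left by 4 bits, plus offset. */
    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void)
    {
        /* Both segment:offset pairs name the same byte of VGA memory. */
        printf("A000:1000 -> %05X\n", phys(0xA000, 0x1000)); /* A1000 */
        printf("A100:0000 -> %05X\n", phys(0xA100, 0x0000)); /* A1000 */
        /* Mode 13h frame buffer base: A000:0000 -> A0000 */
        printf("A000:0000 -> %05X\n", phys(0xA000, 0x0000));
        return 0;
    }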

Addressing more bytes than virtual address space (PTE size)

A computer system has a 36-bit virtual address space with a page size of 4K (small modification for hex representation), and 4 bytes per page table entry. (example found here, 2nd problem)
PTE -> 0x11223344 (32 bits)
FullAddress = (PTE << 12) + PageOffset -> 0x11223344AAA (44 bits)
But the index into the page table cannot be bigger than 2^24 (36 bits minus the 12-bit page offset leaves 24 bits).
So, let's say there is a function f that generates the PTE, f: {0,1}^24 -> {0,1}^32, which effectively allows access to 2^24 pages per process.
Bottom line, I would say that one process cannot address the full 2^44 bytes but only 2^36, and that this could potentially be beneficial when there are multiple processes.
e.g. the system could allocate different chunks of 2^36 memory to 2^8 processes.
Is this the potential benefit?
(This is for a single level of page table; for multilevel paging it will grow even bigger.)
I guess the question is similar to: does Physical Address Extension (PAE) allow a single process to utilize more than 4 GB, or does it just allow a number of processes to utilize more than 4 GB?
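As a sanity check on the arithmetic, here is a small C sketch of the single-level translation being discussed: a 36-bit virtual address split into a 24-bit page number and a 12-bit offset, and a 32-bit PTE shifted left by 12 and combined with the offset to form a 44-bit physical address. The PTE value is the one from the example above; the virtual address itself is made up:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12                      /* 4 KB pages -> 12-bit offset   */
    #define VPN_BITS   24                      /* 36 - 12 = 24-bit page number  */

    int main(void)
    {
        uint64_t vaddr = 0x123456AAAULL;       /* made-up 36-bit virtual address */
        uint32_t vpn    = (uint32_t)(vaddr >> PAGE_SHIFT) & ((1u << VPN_BITS) - 1);
        uint32_t offset = (uint32_t)(vaddr & ((1u << PAGE_SHIFT) - 1));

        /* Pretend the page-table lookup for this VPN returned the PTE from the post. */
        uint32_t pte = 0x11223344;             /* 32-bit PTE (frame number) */

        /* Physical address = (PTE << 12) | offset -> up to 44 bits. */
        uint64_t paddr = ((uint64_t)pte << PAGE_SHIFT) | offset;

        printf("VPN = %06X, offset = %03X\n", (unsigned)vpn, (unsigned)offset);
        printf("paddr = %011llX\n", (unsigned long long)paddr);
        return 0;
    }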

How is byte addressing implemented in modern computers?

I have trouble understanding how in say a 32-bit computer byte addressing is achieved:
Is the RAM itself byte addressable, meaning the first byte has address 0 and the second address 1, etc.? In that case, wouldn't it take 4 read cycles to read a 32-bit word and waste the width of the data bus?
Or does the RAM consist of 32-bit words, meaning address 0 points to the first 4 bytes and address 1 points to bytes 5 to 8? In that case I would expect the RAM interface to make byte addressing possible (from the CPU's point of view).
Think of RAM as an 8-bit-wide structure with N entries. N is often the size quoted when referring to memory (256 MB = 256M entries, 2 GB = 2G entries, etc.; B is for bytes). When you access this memory, the smallest unit you can address is one of these entries, which is 8 bits (1 byte). Since you can only access it at byte level, we call it byte-addressable memory.
Now, coming to your question about accessing this memory: we do not just access a byte. Most of the time, memory accesses go through caches, which are there to reduce memory access latency. Caches store data at a higher granularity than a byte or word, normally a multiple of words. In doing so, caches exploit a property called "locality". Locality means there is a high chance that we will access either this data item or a nearby data item very soon. So fetching not just the byte but all the adjacent bytes is not a waste. Think of it as an investment in the future; it saves you multiple data fetches that you would otherwise have done.
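To illustrate the granularity point, here is a tiny C sketch that maps byte addresses to cache lines, assuming a 64-byte line size (a common but by no means universal value, and not something stated above); bytes that fall in the same line are served by a single fill from memory:

    #include <stdio.h>

    #define LINE_SIZE 64   /* assumed cache line size in bytes; real sizes vary */

    int main(void)
    {
        /* Adjacent byte addresses map to the same cache line, so one fill
         * from memory serves all of them (spatial locality). */
        unsigned addrs[] = { 0x1000, 0x1001, 0x103F, 0x1040 };

        for (int i = 0; i < 4; i++)
            printf("byte 0x%04X -> cache line %u, offset %u\n",
                   addrs[i], addrs[i] / LINE_SIZE, addrs[i] % LINE_SIZE);
        /* 0x1000..0x103F share line 64; 0x1040 starts the next line. */
        return 0;
    }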
Memory addresses in RAM start at address 0, and they are accessed through CPU registers, which may be 8-bit or 32-bit. Based on the value in these registers, the CPU accesses a specific address. If you really need to understand how it works, write a couple of small assembly-language programs that navigate memory by reading values directly using registers and register move instructions.

How can 8086 processors access harddrives larger than 1 MB?

How can 8086 processors (or real mode on later processors) access harddrives larger than 1 MB, when they can only access 1 MB (without expanded memory) of RAM?
Access is not linear (by byte) but by sector. The sector size may be, for example, 512 bytes. The computer reads sectors into memory as needed.
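Here is a rough C sketch of the idea, with an ordinary file standing in for the drive (on a real 8086 the BIOS INT 13h disk services would perform the equivalent transfer); the file name and sector number are made up. The point is that only a sector-sized buffer ever has to fit in the 1 MB address space, no matter how large the drive is:

    #include <stdio.h>
    #include <stdint.h>

    #define SECTOR_SIZE 512

    /* Read sector 'lba' from a raw disk image into buf. */
    static int read_sector(FILE *disk, uint32_t lba, uint8_t buf[SECTOR_SIZE])
    {
        if (fseek(disk, (long)lba * SECTOR_SIZE, SEEK_SET) != 0)
            return -1;
        return fread(buf, 1, SECTOR_SIZE, disk) == SECTOR_SIZE ? 0 : -1;
    }

    int main(void)
    {
        uint8_t sector[SECTOR_SIZE];           /* only 512 bytes of RAM needed */
        FILE *disk = fopen("disk.img", "rb");  /* hypothetical disk image */
        if (!disk)
            return 1;

        /* Sector 40000 sits about 20 MB into the drive, far beyond 1 MB of RAM,
         * yet reading it only ever occupies this one small buffer. */
        if (read_sector(disk, 40000, sector) == 0)
            printf("first byte of sector 40000: 0x%02X\n", sector[0]);

        fclose(disk);
        return 0;
    }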

Operating System Logical and Physical Address Mapping

The question is:
Consider a logical-address space of 32 pages with page size 512 words,
mapped onto a physical memory of 128 frames.
I want to know if my attempted calculation below is correct. So far I have:
32 pages = 2^5 -> 5 bits
512 words = 2^9 -> 9 bits
128 frames = 2^7 -> 7 bits
How do I calculate the logical address and physical address if I do not know the word size?
Word size depends on the computer architecture. Generally, for a 32-bit CPU the word size is 32 bits (4 bytes), and for a 64-bit CPU it is 64 bits (8 bytes).
* The logical address is generated by the CPU for a particular process; you don't need to calculate anything. As the CPU generates the logical address, it is mapped to a physical address by the page map table, or by a fast cache in the memory management unit (MMU).
* With respect to the details given above, your CPU generates a logical address of 14 bits, so it can address 2^14 words of memory. Assuming your processor is 32-bit (4-byte words), that is 2^16 bytes.
* Given the 14-bit logical address, the MMU uses the upper 5 bits as the page number to look up the page map table. It then finds the frame where that page is actually located in physical memory and adds the 9-bit offset to the frame's base address to find the memory location in main memory.
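Putting the numbers together, here is a small C sketch of the translation described above: a 14-bit logical address split into a 5-bit page number and a 9-bit offset, looked up in a page table of 32 entries that maps pages onto 128 frames. The page-table contents are made up purely for illustration:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGES        32     /* 2^5  -> 5-bit page number             */
    #define PAGE_WORDS   512    /* 2^9  -> 9-bit offset (word-addressed) */
    #define FRAMES       128    /* 2^7  -> 7-bit frame number            */

    int main(void)
    {
        /* Hypothetical page table: page_table[p] = frame holding page p. */
        uint8_t page_table[PAGES];
        for (int p = 0; p < PAGES; p++)
            page_table[p] = (uint8_t)((p * 3 + 5) % FRAMES);   /* made-up mapping */

        uint16_t logical = 0x1ABC & 0x3FFF;            /* 14-bit logical address */
        uint16_t page    = logical >> 9;               /* upper 5 bits            */
        uint16_t offset  = logical & (PAGE_WORDS - 1); /* lower 9 bits            */

        /* Physical address = frame * page size + offset -> 7 + 9 = 16 bits
         * (a word address, since the page size is given in words). */
        uint32_t physical = (uint32_t)page_table[page] * PAGE_WORDS + offset;

        printf("logical 0x%04X -> page %u, offset %u -> physical 0x%04X\n",
               (unsigned)logical, (unsigned)page, (unsigned)offset,
               (unsigned)physical);
        return 0;
    }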
