What operations on flash (NOR/NAND) affect flash Program/Erase (P/E) cycles - flash-memory

I want to implement a counter that retains its value across power cycles, so I plan to use flash memory (I have the option of NOR or NAND), but my counter value will be incremented frequently. I want to minimize the number of erases (I'm considering only erases, i.e. taking bits from 0 back to 1, as affecting the flash lifespan).
For that I want to implement a tick counter, in which a sequence of bytes (around a few KBytes, depending on my counter's maximum value, usually equal to the block size) is allocated to the counter, and for each increment successive bits are cleared from 1 to 0 starting from the MSB. I will write a custom flash driver to take care of the counter operations.
Ex:
Val0: 1111 1111 1111 1111 ....
Val1: 0111 1111 1111 1111 ....
Val2: 0011 1111 1111 1111 ....
Advantage of the tick counter:
An erase is required only when we want to reset the counter to zero.
But is it possible to program a bit from 1 to 0 without erasing (NAND/NOR), and if yes, will that affect the P/E cycle count?

It depends on the part. I have seen some NOR flash parts that support overwriting a single cell up to four times before erasing. Exceeding the overwrite capability can damage the cell. I've seen NAND flash parts where the manufacturer requires that a page be written only once before erasing. I've seen some older NAND and NOR flash parts where it wasn't specified. Best to check your datasheet.
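To make the scheme concrete, here is a minimal sketch of such a driver, assuming a part that does allow clearing individual bits (1 to 0) without an erase. The flash_read_byte()/flash_program_byte() helpers and the RAM array standing in for the flash region are hypothetical stand-ins for a real driver, not any particular part's API.

#include <cstdint>
#include <cstddef>
#include <cstring>
#include <iostream>

constexpr size_t COUNTER_BYTES = 1024;        // region reserved for the counter
static uint8_t g_flash[COUNTER_BYTES];        // simulated flash region

uint8_t flash_read_byte(size_t off)               { return g_flash[off]; }
void    flash_program_byte(size_t off, uint8_t v) { g_flash[off] &= v; }  // programming can only clear bits

// Counter value = number of bits already cleared, scanning MSB-first from byte 0.
uint32_t tick_counter_read() {
    uint32_t count = 0;
    for (size_t i = 0; i < COUNTER_BYTES; ++i) {
        uint8_t b = flash_read_byte(i);
        if (b == 0x00) { count += 8; continue; }   // fully used byte
        for (int bit = 7; bit >= 0; --bit) {
            if (b & (1u << bit)) return count;     // first still-erased bit found
            ++count;
        }
    }
    return count;                                  // region exhausted
}

// Increment = clear the next erased bit; no erase is needed until the region is full.
bool tick_counter_increment() {
    uint32_t count = tick_counter_read();
    if (count >= COUNTER_BYTES * 8) return false;  // caller must erase the block and restart
    uint8_t mask = static_cast<uint8_t>(~(0x80u >> (count % 8)));
    flash_program_byte(count / 8, mask);           // write a value whose only 0 is the new bit
    return true;
}

int main() {
    std::memset(g_flash, 0xFF, sizeof(g_flash));   // simulate a freshly erased block
    for (int i = 0; i < 5; ++i) tick_counter_increment();
    std::cout << tick_counter_read() << '\n';      // prints 5
}

Whether the per-bit programming this relies on is allowed, and how many times a page may be partially programmed before an erase, is exactly the datasheet question raised above.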

Related

Basic Understanding: Can a computer skip bits/Bytes?

For the last few days I have been interested in how a computer works, and I want to understand what a PC with CPU, RAM, GPU, etc. does (on a low level) with 0s and 1s.
So 8 bits equal 1 byte (e.g. 10100110). My question is: can a computer "skip" bits, meaning jump to the next byte, based on the first (sign) bit? I mean, usually a PC processes all bits of a byte. If a PC could skip a byte based on its sign bit, it would not have to read and process the next seven bits, and would in theory be faster than if it processed every bit, especially with enormous amounts of data. I hope you can understand my thought.
For example, if the first bit is a 0, process this byte; if it is a 1, skip this byte. Or, a more useful example: when I have a table with two columns, I could mark the left column with 0s and the right with 1s, and if the left column is not equal to what I'm searching for, I skip the remaining 0s and the following 1s and then read the next 0s (the next row of my table).
Is this somehow possible with a normal PC or a custom self-built machine? Would that make my processing faster?
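As a small sketch of the idea in ordinary code: test the top bit of each byte and only process bytes whose top bit is 0. The CPU still fetches the whole byte (it cannot read less than a byte from memory); the branch only saves the work that would follow. The data values below are made up for illustration.

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    std::vector<uint8_t> data = {0x12, 0x85, 0x30, 0xF1, 0x07};
    int processed = 0;
    for (uint8_t b : data) {
        if (b & 0x80) continue;        // top bit set: skip this byte
        ++processed;                   // top bit clear: "process" it
    }
    std::cout << processed << " bytes processed\n";   // prints 3
}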

When does this self-modifying program for an accumulator architecture terminate?

I have this machine code for an accumulator architecture.
The architecture is eight-bit; the instruction encoding looks like this:
the machine code for the first instruction, for example, is 001 1 0001: 001 means LOAD, the 1 tells us that the operand is an immediate value, and 0001 is the decimal 1, so it's LOAD #1.
0---LOAD #1
1---STORE 15
2---LOAD #0
3---EQUAL #4
4---JUMP #6
5---HALT
6---LOAD 3
7---SUB #1
8---STORE 3
9---LOAD 15
10--ADD 15
11--STORE 15
12--JUMP #2
13-- 000 0 0000
14-- 000 0 0000
15-- 000 0 0000
I have to find what will be in memory cell 15 when the program stops.
But if we jump to instruction 2, the accumulator would then have the value 0 (after the LOAD #0), which will never be equal to 4, and the program will just run as an endless loop, right?
And what does the STORE 3 do if memory cell 3 is empty? Does it mean that when a memory cell is empty, its value is 0?
I cannot proceed further without answering these two questions.
I'm assuming that this is for an accumulator architecture, and I'm having to make a number of assumptions about that architecture. You really need to describe more of how your CPU works to make this an answerable question.
Yes, at #3, the accumulator will always be 0.
And yes, if instruction #3 never changes, then 0 will never equal 4 and the program will loop forever.
However, when you store to memory cell 3, I think that you end up replacing the instruction at cell 3 with what is in the accumulator.
So, the interesting question is what happens when you subtract 1 from the instruction representation of EQUAL #4.
That depends on your specific architecture, but my strong guess is that you get EQUAL #3 and store that in cell 3.
That should be enough for you to walk through and figure out when your loop terminates and what is in cell 15.
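If it helps to check the walkthrough, here is a throwaway simulator of the machine as described, under explicit assumptions: 8-bit cells laid out as a 3-bit opcode, a 1-bit immediate flag, and a 4-bit operand; LOAD is opcode 001 as stated in the question, but the other opcode numbers are invented for this sketch; EQUAL is assumed to skip the next instruction when the accumulator equals its operand; and empty cells read as 0.

#include <cstdint>
#include <iostream>

enum Op : uint8_t { HALT = 0, LOAD = 1, STORE = 2, ADD = 3, SUB = 4, EQUAL = 5, JUMP = 6 };

// Pack [3-bit opcode][1-bit immediate flag][4-bit operand] into one byte.
uint8_t enc(Op op, bool imm, uint8_t operand) {
    return static_cast<uint8_t>((op << 5) | (imm << 4) | (operand & 0x0F));
}

int main() {
    uint8_t mem[16] = {
        enc(LOAD, 1, 1),  enc(STORE, 0, 15), enc(LOAD, 1, 0),  enc(EQUAL, 1, 4),
        enc(JUMP, 1, 6),  enc(HALT, 0, 0),   enc(LOAD, 0, 3),  enc(SUB, 1, 1),
        enc(STORE, 0, 3), enc(LOAD, 0, 15),  enc(ADD, 0, 15),  enc(STORE, 0, 15),
        enc(JUMP, 1, 2),  0, 0, 0,           // cells 13-15 start empty (0)
    };
    uint8_t acc = 0, pc = 0;
    while (true) {
        uint8_t instr = mem[pc++];
        Op op       = static_cast<Op>(instr >> 5);
        bool imm    = instr & 0x10;
        uint8_t x   = instr & 0x0F;
        uint8_t val = imm ? x : mem[x];               // immediate vs. memory operand
        if      (op == HALT)  break;
        else if (op == LOAD)  acc = val;
        else if (op == STORE) mem[x] = acc;           // this is what rewrites cell 3
        else if (op == ADD)   acc += val;
        else if (op == SUB)   acc -= val;
        else if (op == EQUAL) { if (acc == val) pc++; }   // assumed: skip next instruction on equal
        else if (op == JUMP)  pc = x;
    }
    std::cout << "mem[15] = " << int(mem[15]) << '\n';    // prints the final value of cell 15
}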

Erase/Write block size of EEPROM of PIC chips

First of all, sorry for my bad English; my English skills are not that good...
Before the question, I want to explain my situation to help understanding.
I want to use EEPROM as a kind of counter.
The value of that counter will be incremented very frequently, so I have to consider the endurance problem.
My idea is to write the counter value to multiple addresses alternately, so cell wear is reduced by a factor of N.
For example, if I use a 5x area for counting:
Count 1 -> 1 0 0 0 0
Count 2 -> 1 2 0 0 0
Count 3 -> 1 2 3 0 0
Count 4 -> 1 2 3 4 0
Count 5 -> 1 2 3 4 5
Count 6 -> 6 2 3 4 5
...
So cell endurance can be extended by a factor of N.
However, AFAIK, for current NAND flash, erasing/writing data is done on a group of bytes, called a block. So, if all the bytes are within a single write/erase block, my method would not work.
So, my main question: is the erase/write operation of a PIC's EEPROM done on a group of bytes, or on a single word or byte?
For example, if it is done on a group of 8 bytes, then I should leave an 8-byte offset between each counter value to make my method work properly.
Otherwise, if it is done by byte or by word, I don't have to worry about spacing/offsets.
From the PIC24FJ256GB110 datasheet, section 5.0:
The user may write program memory data in blocks of 64 instructions
(192 bytes) at a time, and erase program memory in blocks of 512
instructions (1536 bytes) at a time.
However, you can overwrite an individual block several times if you leave the rest of the block erased (bits are one) and the previous content stays the same. Remember: you can clear a single bit in a block only once.
How much the data retention will decrease after 8 writes into a single FLASH block, I don't know!
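For what it's worth, here is a small sketch of the round-robin scheme from the question, with a RAM array standing in for the EEPROM. The eeprom_read()/eeprom_write() helpers are hypothetical stand-ins for the PIC's EEPROM routines, and SLOT_SPACING would be set to the part's real write granularity (1 if it really is byte/word writable).

#include <cstdint>
#include <cstddef>
#include <iostream>

constexpr size_t N_SLOTS      = 5;   // wear is spread over 5 locations
constexpr size_t SLOT_SPACING = 1;   // words between slots; widen if writes touch whole rows

static uint16_t g_eeprom[N_SLOTS * SLOT_SPACING] = {};   // simulated EEPROM, starts at 0

uint16_t eeprom_read(size_t idx)              { return g_eeprom[idx]; }
void     eeprom_write(size_t idx, uint16_t v) { g_eeprom[idx] = v; }

// Current value = maximum over the N slots (slots are written in round-robin order).
uint16_t counter_read(size_t* last_slot = nullptr) {
    uint16_t best = 0;
    size_t   best_slot = 0;
    for (size_t s = 0; s < N_SLOTS; ++s) {
        uint16_t v = eeprom_read(s * SLOT_SPACING);
        if (v >= best) { best = v; best_slot = s; }
    }
    if (last_slot) *last_slot = best_slot;
    return best;
}

// Each increment writes to the slot after the most recently written one,
// so every cell sees only 1/N of the write traffic.
void counter_increment() {
    size_t last;
    uint16_t value = counter_read(&last);
    size_t next = (value == 0) ? 0 : (last + 1) % N_SLOTS;
    eeprom_write(next * SLOT_SPACING, value + 1);
}

int main() {
    for (int i = 0; i < 7; ++i) counter_increment();
    std::cout << counter_read() << '\n';   // prints 7; slots hold 6 7 3 4 5
}

A real implementation would also need to handle power loss in the middle of a write (for example with a per-slot checksum), which is omitted here.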

Need Clarification on Memory Accessing (ISA/MIPS)

I'm doing a theoretical assignment where I design my own ISA. I'm doing a Memory-Memory design where the ALU receives inputs from memory and outputs back to memory without using any registers. This is an outdated method and registers are more effective now, but that doesn't matter for my assignment.
My question:
If the encoding of one of my instructions looks like this
opcode|destination|value1|value2|function
00 0001 0011 1100 00
the function "00" stands for addition and the opcode 00 stands for an ALU operation.
My RTN looks like this for that function:
Mem[0001] <--- Mem[0011] + Mem[1100]
0001, 0011, 1100 are memory addresses; what I'm trying to accomplish is to sum the values INSIDE those memory addresses and then store the result in memory address 0001 (overwriting it).
So if the value in memory address 0011 was '2' and the value in memory address 1100 was '3', my instruction would store '5' in memory address 0001.
Also, let's say I want to overwrite the value '3' that's in address 1100 with '4'. Can I just do Mem[1100] <--- 0100 (binary for 4)?
Is what I'm implementing correct? Or am I approaching memory addressing completely wrong?
These architectures usually have one accumulator. Otherwise you'd need a dual-port RAM to access two operands at the same time.
You could latch one memory value, but that's just a less versatile accumulator.
Memory writes are done on a different clock cycle / clock edge than reads.
Memory-const operations use a different opcode than memory-memory operations of the same type.
Finally, if your const is too big for your instruction size, you need to first copy the const to a memory address, then use it on a memory-memory operation.
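As a concrete illustration of the RTN above, here is a tiny sketch that decodes and executes the 16-bit memory-memory instruction from the question: opcode(2) | dest(4) | value1(4) | value2(4) | function(2), with opcode 00 meaning an ALU operation and function 00 meaning add. The field widths follow the example; how instructions are stored and what the other opcodes do is assumed.

#include <cstdint>
#include <iostream>

uint8_t mem[16] = {};   // 16 addressable cells, 4-bit addresses

// opcode(2) | dest(4) | value1(4) | value2(4) | function(2), from MSB to LSB
void execute(uint16_t instr) {
    uint8_t function =  instr        & 0x3;
    uint8_t src2     = (instr >> 2)  & 0xF;
    uint8_t src1     = (instr >> 6)  & 0xF;
    uint8_t dest     = (instr >> 10) & 0xF;
    uint8_t opcode   = (instr >> 14) & 0x3;
    if (opcode == 0 && function == 0)           // ALU op, add
        mem[dest] = mem[src1] + mem[src2];      // Mem[dest] <- Mem[src1] + Mem[src2]
}

int main() {
    mem[0x3] = 2;                               // value inside address 0011
    mem[0xC] = 3;                               // value inside address 1100
    execute(0b0000010011110000);                // 00 0001 0011 1100 00
    std::cout << int(mem[0x1]) << '\n';         // prints 5, now stored at address 0001
}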

Why do bytes exist? Why don't we just use bits?

A byte consists of 8 bits on most systems.
A byte typically represents the smallest data type a programmer may use. Depending on language, the data types might be called char or byte.
There are some types of data (booleans, small integers, etc) that could be stored in fewer bits than a byte. Yet using less than a byte is not supported by any programming language I know of (natively).
Why does this minimum of using 8 bits to store data exist? Why do we even need bytes? Why don't computers just use increments of bits (1 or more bits) rather than increments of bytes (multiples of 8 bits)?
Just in case anyone asks: I'm not worried about it. I do not have any specific needs. I'm just curious.
Because at the hardware level memory is naturally organized into addressable chunks. Small chunks mean that you can have fine-grained things like 4-bit numbers; large chunks allow for more efficient operation (typically a CPU moves things around in 'chunks' or multiples thereof). In particular, larger addressable chunks make for bigger address spaces. If I have chunks that are 1 bit, then an address range of 1 - 500 only covers 500 bits, whereas 500 8-bit chunks cover 4000 bits.
Note - it was not always 8 bits. I worked on a machine that thought in 6 bits. (good old octal)
Paper tape (~1950's) was 5 or 6 holes (bits) wide, maybe other widths.
Punched cards (the newer kind) were 12 rows of 80 columns.
1960s:
B-5000 - 48-bit "words" with 6-bit characters
CDC-6600 -- 60-bit words with 6-bit characters
IBM 7090 -- 36-bit words with 6-bit characters
There were 12-bit machines; etc.
1970-1980s, "micros" enter the picture:
Intel 4004 - 4-bit chunks
8008, 8086, Z80, 6502, etc - 8 bit chunks
68000 - 16-bit words, but still 8-bit bytes
486 - 32-bit words, but still 8-bit bytes
today - 64-bit words, but still 8-bit bytes
future - 128, etc, but still 8-bit bytes
Get the picture? Americans figured that characters could be stored in only 6 bits.
Then we discovered that there was more in the world than just English.
So we floundered around with 7-bit ASCII and 8-bit EBCDIC.
Eventually, we decided that 8 bits was good enough for all the characters we would ever need. ("We" were not Chinese.)
The IBM 360 came out as the dominant machine in the '60s-'70s; it was based on an 8-bit byte. (It sort of had 32-bit words, but that became less important than the almighty byte.)
It seemed such a waste to use 8 bits when you really needed only 7 bits to store all the characters you ever needed.
IBM, in the mid-20th century "owned" the computer market with 70% of the hardware and software sales. With the 360 being their main machine, 8-bit bytes was the thing for all the competitors to copy.
Eventually, we realized that other languages existed and came up with Unicode/utf8 and its variants. But that's another story.
A good way for me to write something late at night!
Your points are perfectly valid; however, history will always be that insane intruder who would have ruined your plans long before you were born.
For the purposes of explanation, let's imagine a fictitious machine with an architecture by the name of Bitel(TM) Inside or something of the like. The Bitel specifications mandate that the Central Processing Unit (CPU, i.e., microprocessor) shall access memory in one-bit units. Now, let's say a given instance of a Bitel-operated machine has a memory unit holding 32 billion bits (our fictitious equivalent of a 4GB RAM unit).
Now, let's see why Bitel, Inc. got into bankruptcy:
The binary code of any given program would be gigantic (the compiler would have to manipulate every single bit!)
32-bit addresses would be (even more) limited to hold just 512MB of memory. 64-bit systems would be safe (for now...)
Memory accesses would literally be a bottleneck. By the time the CPU has got all of the 48 bits it needs to process a single ADD instruction, the floppy would already have spun for too long, and you know what happens next...
Who the **** really needs to optimize a single bit? (See previous bankruptcy justification).
If you need to handle single bits, learn to use bitwise operators!
Programmers would go crazy as both coffee and RAM get too expensive. At the moment, this is a perfect synonym of apocalypse.
The C standard is holy and sacred, and it mandates that the minimum addressable unit (i.e., char) shall be at least 8 bits wide.
8 is a perfect power of 2. (1 is another one, but meh...)
In my opinion, it's an issue of addressing. To access individual bits of data, you would need eight times as many addresses (adding 3 bits to each address) compared to accessing individual bytes. The byte is generally going to be the smallest practical unit to hold a number in a program (with only 256 possible values).
Some CPUs use words to address memory instead of bytes. That's their natural data type, so 16 or 32 bits. If Intel CPUs did that it would be 64 bits.
8 bit bytes are traditional because the first popular home computers used 8 bits. 256 values are enough to do a lot of useful things, while 16 (4 bits) are not quite enough.
And, once a thing goes on for long enough it becomes terribly hard to change. This is also why your hard drive or SSD likely still pretends to use 512 byte blocks. Even though the disk hardware does not use a 512 byte block and the OS doesn't either. (Advanced Format drives have a software switch to disable 512 byte emulation but generally only servers with RAID controllers turn it off.)
Also, Intel/AMD CPUs have so much extra silicon doing so much extra decoding work that the slight difference in 8 bit vs 64 bit addressing does not add any noticeable overhead. The CPU's memory controller is certainly not using 8 bits. It pulls data into cache in long streams and the minimum size is the cache line, often 64 bytes aka 512 bits. Often RAM hardware is slow to start but fast to stream so the CPU reads kilobytes into L3 cache, much like how hard drives read an entire track into their caches because the drive head is already there so why not?
First of all, C and C++ do have native support for bit-fields.
#include <iostream>

struct S {
    // will usually occupy 2 bytes:
    // 3 bits: value of b1
    // 2 bits: unused
    // 6 bits: value of b2
    // 2 bits: value of b3
    // 3 bits: unused
    unsigned char b1 : 3, : 2, b2 : 6, b3 : 2;
};

int main()
{
    std::cout << sizeof(S) << '\n'; // usually prints 2
}
Probably the answer lies in performance and memory alignment, and the fact that (I reckon partly because the byte is called char in C) a byte is the smallest part of a machine word that can hold 7-bit ASCII. Text operations are common, so a special type for plain text has its benefits for a programming language.
Why bytes?
What is so special about 8 bits that it deserves its own name?
Computers do process all data as bits, but they prefer to process bits in byte-sized groupings. Or to put it another way: a byte is how much a computer likes to "bite" at once.
The byte is also the smallest addressable unit of memory in most modern computers. A computer with byte-addressable memory cannot store an individual piece of data that is smaller than a byte.
What's in a byte?
A byte represents different types of information depending on the context. It might represent a number, a letter, or a program instruction. It might even represent part of an audio recording or a pixel in an image.
