Possibly the most basic computer question of all time, but I have not been able to find a straightforward answer and it is driving me crazy. When a computer 'reads' a byte, does it read it as a sequential series of ones and zeros one after the other, or does it somehow read all 8 ones and zeros at once?
A computer system reads data in both ways, depending on the type of operation and how the digital system is designed. I'll explain this with a very simple example: a full adder circuit.
A full adder adds binary numbers and accounts for values carried in as well as out (Wikipedia)
Example of Parallel operation
Suppose some task requires us to add two 8-bit (1-byte) numbers, and all bits are available at the time of addition.
In that case we can design a digital system with 8 full adders (1 for each bit).
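A minimal C sketch of the parallel idea (all names here are mine, for illustration): one full adder per bit position, eight of them chained by their carries. In real hardware all eight adders exist side by side and the carry ripples through them combinationally; the loop below merely simulates that wiring.

#include <stdint.h>
#include <stdio.h>

/* One full adder: adds two bits plus a carry-in, produces a sum bit and a carry-out. */
static unsigned full_adder(unsigned a, unsigned b, unsigned cin, unsigned *cout)
{
    unsigned sum = a ^ b ^ cin;
    *cout = (a & b) | (cin & (a ^ b));
    return sum;
}

/* Eight full adders chained by their carries: a ripple-carry byte adder. */
static uint8_t add8(uint8_t x, uint8_t y)
{
    uint8_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 8; i++) {
        unsigned bit = full_adder((x >> i) & 1, (y >> i) & 1, carry, &carry);
        result |= (uint8_t)(bit << i);
    }
    return result;
}

int main(void)
{
    printf("%u\n", add8(100, 55)); /* prints 155 */
    return 0;
}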
Example of Serial Operation
In some other task, you observe that all 8 bits will not be available simultaneously.
Or you decide that having 8 separate adders is too costly, since you also need to implement other mathematical operations (like subtraction, multiplication, and division). So instead of 8 separate units you have 1 unit that processes the bits individually. In this scenario we need three storage units (shift registers): two to hold the two 8-bit operands, and one to hold the result. On each clock pulse, a single bit is shifted out of each operand register into the full adder, which performs the addition and shifts the 1-bit result into the result register.
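And a matching C sketch of the serial idea (again purely illustrative): a single full adder reused eight times, one bit shifted out of each operand register per simulated clock pulse, the sum bit shifted into the result register, and the carry held in a one-bit "flip-flop" between pulses.

#include <stdint.h>

/* Bit-serial addition: one full adder, two operand shift registers,
   one result shift register, and a one-bit carry store. */
uint8_t serial_add(uint8_t rega, uint8_t regb)
{
    uint8_t result = 0;
    unsigned carry = 0;                          /* the carry flip-flop */
    for (int clk = 0; clk < 8; clk++) {          /* one bit per clock pulse */
        unsigned a = rega & 1, b = regb & 1;     /* LSBs shifted out */
        unsigned sum = a ^ b ^ carry;            /* the single full adder */
        carry = (a & b) | (carry & (a ^ b));
        rega >>= 1;                              /* shift the operand registers */
        regb >>= 1;
        result = (uint8_t)((result >> 1) | (sum << 7)); /* shift the sum in from the top */
    }
    return result;
}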
The linked figures below contain some additional detail that is not needed for this thread, but you can study digital logic design and computer architecture if you want to go deeper into this.
Shift register
Shift register operations demo
This is really kind of outside the scope of Stack Overflow, but it brings back such fond memories from college.
It depends. Sometimes a computer reads bits one at a time. For example, older Ethernet uses Manchester code, sent one bit at a time. However, old parallel printer cables had 8 data pins, each signaling one bit, and an entire octet (byte) was sent at once.
In serial (one-bit-at-a-time) encodings, you're typically measuring transitions in the line or transitions against some well-defined clock source.
In parallel encodings, you're typically reading all the bits into a register at once and latching the register.
Look up flipflops, registers, and logic gates for information on the low-level parts of this.
Bits are transmitted one at a time in serial transmission, and by multiple numbers of bits in parallel transmission. A bitwise operation optionally processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s.
Wikipedia's article on Bit
The processor works with a defined register length: 8, 16, 32, 64 ... bits. Think of a register as a set of connections, one for each bit; that is the number of bits that will be processed at once in one processor core, one register at a time. The processor has different kinds of registers; examples are the private instruction register and the public data or address registers.
Think of it this way, at least at a physical level: In a transmission cable from point A to B (A and B can be anything, hard drive, CPU, RAM, USB, etc.) each wire in that cable can transmit one bit at a time. Both A and B have a clock pulsing at the same rate. On each pulse, the sender changes the amount of power going down each wire to signify the value of the new bit(s). So, the # of wires in the cable = the # of bits that can be transmitted each "pulse". (Note: This is a very simplified and theoretical explanation).
At a software level, in the CPU, you can never address anything smaller than a byte. You can "access" and manipulate specific bits by using the bitwise operators (& (AND), | (OR), << (left shift), >> (right shift), ^ (XOR)).
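For example, a minimal sketch (the variable names are mine) of how those operators reach individual bits inside a byte:

#include <stdint.h>

void bit_demo(void)
{
    uint8_t flags = 0;
    flags |=  (uint8_t)(1u << 3);     /* OR: set bit 3 */
    flags &= (uint8_t)~(1u << 3);     /* AND with NOT: clear bit 3 */
    int bit5 = (flags >> 5) & 1u;     /* shift + AND: read bit 5 */
    flags ^=  (uint8_t)(1u << 0);     /* XOR: toggle bit 0 */
    (void)bit5;
}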
In hardware, the number of bits being sent each pulse is completely dependent of the hardware itself.
What is the best practice for accessing a changing 32-bit register (like a counter) through a 16-bit data bus?
I suppose I have to 'freeze' or copy the 32-bit value on a read of the LSB until the MSB is also read, and vice versa on a write, to avoid data corruption if the LSB overflows into the MSB between the 2 accesses.
Is there a standard approach to this?
As suggested in both the question and Morten's answer, a second register to hold the value at the time of the read of the first half is a common method. In some MCUs this register is common to multiple devices, meaning you need to either disable interrupts across the two accesses or ensure ISRs don't touch the extra register. Writes are similarly handled, frequently in the opposite order (write second word temporary storage, then write first word on device thus triggering the device to read the second word simultaneously).
There have also been cases where you just can't access the register atomically. In such cases, you might need to implement additional logic to figure out the true value. An example of such an algorithm, assuming three reads take much less than 1<<15 counter ticks, might be:
uint16_t earlyMSB = highreg;  /* first read of the high word  */
uint16_t midLSB   = lowreg;   /* read of the low word         */
uint16_t lateMSB  = highreg;  /* second read of the high word */
/* If the low word is in its lower half, a carry into the high word may have
   just occurred, so the later high read is the consistent one; otherwise a
   carry cannot be imminent and the earlier read is safe. */
uint32_t fullword = ((uint32_t)(midLSB < 0x8000 ? lateMSB : earlyMSB) << 16) | midLSB;
Other variants might use an overflow flag to signal that the more significant word needs an increment (frequently used to implement that part of the counter in software).
There is no standard way, but an often-used approach is to make a read of one address return the first 16 bits, while the remaining 16 bits are captured at the same time and read later at another address.
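As a rough C sketch of that capture-register approach; the register names and addresses here are invented for illustration, and the hardware is assumed to latch the high half at the instant the low half is read:

#include <stdint.h>

/* Hypothetical memory-mapped registers (addresses made up for this sketch). */
#define COUNT_LO (*(volatile uint16_t *)0x40000000u) /* reading this latches the high half */
#define COUNT_HI (*(volatile uint16_t *)0x40000002u) /* returns the latched copy */

uint32_t read_counter32(void)
{
    uint16_t lo = COUNT_LO;   /* first access: low half, high half captured */
    uint16_t hi = COUNT_HI;   /* second access: the captured high half */
    return ((uint32_t)hi << 16) | lo;
}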
I have designed a simple ALU, and I generated "operation codes" using a decoder. Now I'm studying multiplexers, but I can't understand what they do in a CPU or ALU.
A really simple example: If you want to fetch a data bit from memory, a multiplexer allows you to specify an address (the input code), and the memory bit will be connected to another "pin".
So say you have 256 bits of memory and you want to connect one of them to an output pin; the multiplexer has 8 bits for input codes. You provide a code, say N, and bit N is connected through the logic gates to the output of the multiplexer. This multiplexer would have a total of 256 + 8 input lines.
I'm not sure how this would be implemented in more modern CPUs but you can probably see how several bit multiplexers could be stacked together and be used to fetch a byte from memory in parallel as well, and connected to say an arithmetic register to perform computations.
Fun right?!
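To make it concrete, a tiny C model of such a multiplexer (the function name and the packing of the 256 memory bits into an array are mine, for illustration): 256 data inputs plus an 8-bit select code in, one bit out.

#include <stdint.h>

/* Software model of a 256-to-1 multiplexer: the whole gate network
   collapses to an index operation. */
unsigned mux256(const uint8_t bits[256], uint8_t select)
{
    return bits[select] & 1u;
}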
I'm not even sure if this is possible but I think it's worth asking anyway.
Say we have 100 devices in a network. Each device has a unique ID.
I want to tell a group of these devices to do something by broadcasting only one packet (A packet that all the devices receive).
For example, if I wanted to tell devices 2, 5, 75, 116 and 530 to do something, I have to broadcast this: 2-5-75-116-530
But this packet can get pretty long if I wanted, for example, 95 of the devices to do something!
So I need a method to reduce the length of this packet.
After thinking for a while, I came up with an idea:
What if I used only prime numbers as device IDs? Then I could broadcast the product of the device IDs of the group I need, and every device would check whether the remainder of the received number divided by its own device ID is 0.
For example, if I wanted devices 2, 3, 5 and 7 to do something, I would broadcast 2*3*5*7 = 210, and each device would calculate "210 mod own ID"; only devices with IDs 2, 3, 5 and 7 get 0, so they know they should do something.
But this method is not efficient, because the 100th prime number is 541, so the broadcast number may get really big and the "mod" calculation may get really expensive (the devices have 8-bit processors).
So I just need a method for the devices to determine if they should do something or ignore the received packet. And I need the packet to be as short as possible.
I tried my best to explain the question; if it's still vague, please tell me and I will explain more.
You can just use a bit string in which every bit represents a device. Then, you just need a bitwise AND to tell if a given machine should react.
You'd need one bit per device, which would be, for example, 32 bytes for 256 devices. Admittedly, that's a little wasteful if you only need one machine to react, but it's pretty compact if you need, say, 95 devices to respond.
You mentioned that you need the device id to be <= 4 bytes, but that's no problem: 4 bytes = 32 bits = enough space to store 2^32 device ids. For example, the device id for the 101st machine (if you start at 0) could just be 100 (0b01100100) = 1 byte. You would just need to use that to figure out which byte of the packet to use (floor(100 / 8) = byte 12, i.e. the 13th byte) and bitwise AND that byte against the mask 1 << (100 % 8) = 1 << 4 = 0b00010000.
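A minimal C sketch of the receiving side (the function name and the bit/byte ordering convention are mine; device 0 is taken to be bit 0 of byte 0):

#include <stdint.h>
#include <stddef.h>

/* Does this broadcast packet address us? One bit per device. */
int addressed(const uint8_t *packet, size_t packet_len, uint16_t my_id)
{
    size_t byte_index = my_id / 8;        /* which byte holds our bit */
    if (byte_index >= packet_len)
        return 0;
    return (packet[byte_index] >> (my_id % 8)) & 1;
}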
As cobarzan said, you also can use a hybrid scheme allowing for individual addressing. In that scenario, you could use the first bit as a signal to indicate multiple- or single-machine addressing. As cobarzan said, that requires more processing, and it means the first byte can only store 7 machine signals, rather than 8.
Like Ed Cottrell suggested, a bit string would do the job. If the machines are labeled {1,...,n}, there are 2^n - 1 possible subsets (assuming you do not send requests with no intended target). So you need a data structure able to hold every possible signature of such a subset, whatever you decide the signature to be, and n bits (one for each machine) is the best one can do regarding the size of such a data structure. The evaluation performed on the machines takes constant time (on the machine with label l, just look at the l-th bit).
But one could go for a hybrid scheme. Say you have a task for one device only; then it would be a pity to send n bits (all 0s, except one). So you can take one additional bit T which indicates the type of packet. T is set to 0 if you are sending a bit string of length n as described above, or set to 1 if you are using a more appropriate scheme (i.e. fewer bits). In the case of just one machine that needs to perform the task, you could send the label of the machine directly (which is O(log n) bits long). This approach reduces the size of the packet whenever fewer than O(n/log n) machines need to perform the task. Evaluation on the machines is more expensive, though; a sketch of that parsing follows below.
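A minimal C sketch of that parsing, with the packet layout invented for illustration (type bit T in the MSB of byte 0, a 15-bit machine label in the single-target case):

#include <stdint.h>
#include <stddef.h>

/* Hybrid packet: if the top bit of byte 0 is 1, the remaining 15 bits
   carry one device label; if 0, the packet is the plain bit string
   (byte 0 then contributes only 7 device bits). */
int hybrid_addressed(const uint8_t *pkt, size_t len, uint16_t my_id)
{
    if (len == 0)
        return 0;
    if (pkt[0] & 0x80) {                       /* single-target packet */
        if (len < 2)
            return 0;
        return ((uint16_t)((pkt[0] & 0x7F) << 8) | pkt[1]) == my_id;
    }
    /* bit-string packet: devices 0..6 in byte 0, then 8 devices per byte */
    size_t byte_index = (my_id < 7) ? 0 : 1 + (my_id - 7) / 8;
    unsigned shift    = (my_id < 7) ? my_id : (my_id - 7) % 8;
    return (byte_index < len) ? ((pkt[byte_index] >> shift) & 1) : 0;
}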
I ran into a question:
In a digital system with a micro-programmed control unit, the total number of distinct operation patterns of the 32 control signals is 450. If the micro-program memory contains 1K micro-instructions, how many bits of micro-program memory are saved by using a nano memory?
1) 22 Kbits
2) 23 Kbits
3) 450 Kbits
4) 450*32 Kbits
I read in my notes that (1) is true, but I couldn't understand how to get this.
Edit: Micro-instructions are stored in the micro memory (control memory). A group of micro-instructions may occur several times in a micro-program, so more memory space is needed. By making use of nano memory we can achieve significant memory savings when a group of micro-operations occurs several times in a micro-program. For the nano technique, see:
Control Units
Back in the day, before .NET, when you actually had to know what a computer was before you could make it do stuff, this question would have gotten a ton of answers.
Except, back then, the internet wasn't really a thing, and stack overflow was not really a problem, as the concepts of a stack and a heap weren't really standard.
So just to make sure that we are in fact talking about the same thing, I will just try to explain this.
The control unit in a digital computer initiates sequences of microoperations. In a bus-oriented system, the control signals that specify microoperations are groups of bits that select the paths in multiplexers, decoders, and ALUs.
So we are looking at the control unit, and the instruction set for making it capable of actually doing stuff.
We are dealing with what steps should happen when the compiled assembly requests a bit shift, clearing a register, or similar "low level" stuff.
Some of these instructions may be hardwired, but usually not all of them.
Micro-programs
Quote: "Microprogramming is an orderly method of designing the control unit
of a conventional computer"
(http://www2.informatik.hu-berlin.de/rok/ca/data/slides/english/ca9.pdf)
The control variables, for the control unit can be represented by a string of 1’s and 0’s called a "control word". A microprogrammed control unit is a control unit whose binary control variables are not hardwired, but are stored in a memory. Before we optimized stuff we called this memory the micro memory ;)
Typically we would actually be looking at two "memories": a control memory and a main memory.
The control memory is for the microprogram, and the main memory is for instructions and data.
The process of code generation for the control memory is called microprogramming.
... ok?
Transfer of information among registers in the processor is through MUXes rather than a bus. We typically have a few registers, some of which are familiar to programmers and some of which are not. The ones that should ring a bell for most people here are the processor registers. The 4 most common processor registers are:
Program counter – PC
Address register – AR
Data register – DR
Accumulator register - AC
Examples where microcode uses processor registers to do stuff
Assembly instruction "ADD"
pseudo micro code: " AC ← AC + M[EA] " where M[EA] is data from main memory register
control word: 0000
Assembly instruction "BRANCH"
pseudo micro code "If (AC < 0) then (PC ← EA) "
control word: 0001
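For illustration only, a toy C model of a control unit dispatching on those two control words; the register names follow the list above, everything else (types, memory size, the function itself) is invented:

#include <stdint.h>

static int16_t AC, PC;   /* accumulator and program counter */
static int16_t M[256];   /* main memory */

/* Execute one micro-operation selected by its control word. */
static void execute_microop(uint8_t control_word, uint8_t EA)
{
    switch (control_word) {
    case 0x0:                    /* ADD:    AC <- AC + M[EA] */
        AC = (int16_t)(AC + M[EA]);
        break;
    case 0x1:                    /* BRANCH: if (AC < 0) PC <- EA */
        if (AC < 0)
            PC = EA;
        break;
    }
}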
Micro-memory
The micro memory only concerns how we organize what's in the control memory.
However, when we have big instruction sets, we can do better than simply storing all the instructions: we can subdivide the control memory into "control memory" and "nano memory" (since nano is smaller than micro, right? ;) )
This is good as we don't waste a lot of valuable space (chip area) on microcode.
The concept of nano memory is derived from a combination of vertical and horizontal instructions, but also provides trade-offs between them.
The Motorola M68k microcomputer is one of the earlier and more popular µComputers with this nano memory control design. There it was shown that a significant saving of memory could be achieved when a group of micro-instructions occurs often in a microprogram.
It was shown that, by structuring the memory properly, a few bits could be used to address the instructions without a significant cost in speed.
The reduction is such that only the upper log2(n) bits are required to specify a nano-address, compared to a full micro-address.
What does this mean?
Well let's stay with the M68K example a bit longer:
It had 640 instructions, of which only 280 were unique.
Had the instructions been coded as a simple micro memory, it would have taken up:
640 x 70 bits, or 44,800 bits
However, as only the 280 unique instructions require the full 70 bits, we can apply the nano memory technique: every micro word becomes an address into the 280-word nano store, and
8 < log2(280) <= 9, so 9 address bits suffice (2^9 = 512 >= 280)
640 x 9 bit micro control store, and 280 x 70 bit nano memory store
a total of 25,360 bits
or a memory saving of 19,440 bits, which could be laid out as main memory for programmers :)
this shows that the equation:
S = Hm x Wm + Hn x Wn
where:
Hm = number of high-level (micro) words
Wm = length of the high-level words
Hn = number of low-level (nano) words
Wn = length of the low-level words
S = control memory size (with the nano memory technique)
holds in real life.
Note that micro memory is usually designed vertically (Hm is large, Wm is small) and nano programs are usually the opposite (Hn small, Wn large).
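As a quick sanity check of that equation against the M68k numbers above, a trivial C sketch:

#include <stdio.h>

int main(void)
{
    long Hm = 640, Wm = 9;    /* micro store: word count, word width (nano-address bits) */
    long Hn = 280, Wn = 70;   /* nano store:  word count, word width */
    long S  = Hm * Wm + Hn * Wn;
    printf("S = %ld bits, saving %ld bits\n", S, 640L * 70 - S);
    /* prints: S = 25360 bits, saving 19440 bits */
    return 0;
}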
Back to the question
I had a few problems understanding the wording of the problem (that may be because my first language is Danish), but I still tried to make some sense of it and got to:
proposition 1:
1000 instructions
32 bits
450 uniques
µCode:
1000 * 32 = 32,000 bits
bit width required for nano memory:
log2(1000-450) > 9 => 10
450 * 32 = 14,400
(1000-450) * 10 = 5,500
32,000 - (14,400 + 5,500) = 12,100 bits saved
Which is not any of your answers.
please provide clarification?
UPDATE:
"the control word is 32 bit. we can code the 450 pattern with 9 bit and we use these 9 bits instead of 32 bit control word. reduce memory from 1000*(32+x) to 1000*(9+x) is equal to 23kbits. – Ali Movagher"
On closer inspection, the comment's coding step does work: 2^9 = 512 >= 450, so the 450 patterns can indeed be coded with 9 bits, and 1000 * (32 - 9) = 23,000 bits ≈ 23 kbits, which matches answer (2) rather than (1).
Warning: I'm not sure where this type of question belongs. If you know a better place for it, drop a link.
Background: Imagine you heard a sentence like this: "this computer/processor has an X-bit architecture". Now, if that computer is standard, you get a lot of information, like maximum RAM capacity, maximum unsigned/signed integer value and so on... But what if the computer is not standard?
The mystery: back in the 70s and 80s, the period referred to as the "8-bit era". Wait, 8-bit? Yes. So, if a CPU architecture is 8-bit, then:
The maximum RAM capacity of the computer is exactly 256 bytes.
The maximum unsigned integer range is 0 to 255 and the maximum signed integer range is -128 to 127.
The maximum ROM capacity is also 256 bytes, because you have to be able to jump around it?
However, it's clearly not like that. Look at some technical characteristics of the game consoles of that time and you will see that they exceed the 256 limit.
Quotes (http://www.8bitcomputers.co.uk/whatbasics.html):
The Sharp PC1211 is actually a 4-bit computer but cleverly glues two together to look like 8 (a computer able to add up to 16 would not be very useful!)
So if it's a 4-bit computer, why can it manipulate 8-bit integers? And another one...
The Sinclair QL is one of those computers that actually leaves the experts arguing. In parts, it is a 16 bit computer, in some ways it is even like a 32 bit computer but it holds its memory in 8 bits.
What? So why is this mess in www.8bitcomputers.co.uk?
Generally: how is an X-bit computer defined?
The biggest data bus that it has is X bits long (then Sinclair QL is a 32-bit computer)?
The CU functions of that computer are X bits long?
It holds its memory (in registers, ROM, RAM, whatever) in X bits?
Other definitions?
Purpose: I think that what I am designing is a 4-bit CPU. I don't really know whether it has a 4-bit architecture, because it uses a double-width ROM address and includes functions like "activate ALU" that take another 4 bits from register Y. I want to know if I can still call it a 4-bit CPU. That's it!
Thank you very much in advance :)
Whether a computer (or CPU) is X-bit is defined by whether its central units and registers, such as the CPU and ALU, are X bits wide. The addressing does not matter in defining the number X. As you have mentioned, an 8-bit computer (e.g. the Motorola 68HC11; even though it is an MCU, it can still be counted as a computer, with CPU, I/O and memory) can have 16-bit addressing in order to increase the RAM or memory size.
The data bus size and the register sizes of the CPU and ALU are the limiting factors in defining the X in an X-bit computer architecture. You can get more information from http://en.wikipedia.org/wiki/Word_(computer_architecture)
So the answer to your question is: "Yes, you are designing a 4-bit CPU if the registers and data bus are 4 bits wide."