8085 microprocessor connection of CPU data bus with RAM data bus

What would happen if the CPU data-bus bit 2 is connected to the RAM data-bit 5 and CPU data-bus bit 5 is connected to RAM data bit 2? Assume the rest of the connections are all right – explain.
My thoughts -
I think that bits 2 and 5 of the data would be swapped in the CPU compared to the data coming from the RAM.
I would be very grateful if you can give some more insights and ways to think about this question.

If the bits are always swapped, then we'd actually never know: when the CPU stores data in RAM and reads it back, it reads the same numeric value it wrote, which is exactly how RAM is supposed to work! We don't actually care which bit of RAM is used to store which bit of a byte. Only if the system were bit-addressable (it's not) could we even inspect which bit went where.
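A quick way to convince yourself of this: swapping bits 2 and 5 is its own inverse, so any value that goes out over the miswired bus and comes back over it is restored exactly. Here's a minimal C sketch of that round trip (the swap25 helper just models the miswiring for illustration; it's not part of any real 8085 system):

    #include <assert.h>
    #include <stdint.h>

    /* Model the miswiring: exchange bits 2 and 5 of a byte. */
    static uint8_t swap25(uint8_t b)
    {
        uint8_t bit2 = (b >> 2) & 1u;
        uint8_t bit5 = (b >> 5) & 1u;
        b &= (uint8_t)~((1u << 2) | (1u << 5));      /* clear bits 2 and 5 */
        b |= (uint8_t)((bit2 << 5) | (bit5 << 2));   /* cross them over    */
        return b;
    }

    int main(void)
    {
        for (int v = 0; v < 256; v++) {
            /* CPU writes v, RAM stores swap25(v), CPU reads it back swapped again. */
            assert(swap25(swap25((uint8_t)v)) == (uint8_t)v);
        }
        return 0;
    }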
But if there are ways for content to get into the CPU or into the RAM without being swapped, i.e. without traversing that miswired data bus, then bad things would happen. For example, if there's a ROM that was burned un-swapped, then when it's read via the miswired bus it will deliver bit-crossed instructions and data, so things won't work properly. The same goes for a DMA system that reads from the hard drive into memory, bypassing that particular miswiring initially, with the data later seen by the CPU over the miswired bus.
If there's no ROM, and all I/O comes in over a separate, properly wired I/O bus and is written into RAM by the CPU over that miswired bus, then we'd never know that the "wrong" bit was being used in each byte of RAM.

Related

How are cache blocks fetched from RAM into the CPU?

I'm learning more about the theoretical side of CPUs, and I read about how a cache can be used to fetch a line/block of memory from RAM into an area closer to the CPU that can be accessed more quickly (I think it takes fewer clock cycles because the CPU doesn't need to move the entire address of the next word into a register; it's also physically closer to the CPU).
But now I'm not clear on the implementation exactly. The CPU is connected to RAM through a data bus that could be 32 or 64 bits wide in modern machines. But an L3 cache can in some cases be as large as 32 MB, and I'm pretty convinced there aren't millions of data lines going from RAM to the CPU's cache. Even the comparatively tiny L1 cache of only a few KB would take hundreds or even thousands of clock cycles to fill from RAM through such a narrow data bus.
So what I'm trying to understand is: how exactly is a CPU cache implemented to transfer so much information while still being efficient? Are there any examples of relatively simple CPUs from past decades that I could look at to see and learn how they implemented that part of the architecture?
As it turns out, there actually is a very wide bus to move info between levels of cache. Thanks to Peter for pointing it out to me in the comments and providing useful links for further reading.
Since you want the implementation of the CPU cache and RAM (main memory), here's a helpful simulation where you can set your sizes of RAM and cache and see how they work together:
https://www3.ntu.edu.sg/home/smitha/ParaCache/Paracache/dmc.html
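If you want to see the moving parts without a simulator, here's a minimal sketch in C of how a direct-mapped cache splits an address into tag, index, and offset. The geometry (64 lines of 32 bytes) is made up for the example; on a miss, the whole 32-byte line would be filled in one wide burst rather than one word per bus transaction:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical direct-mapped cache: 64 lines of 32 bytes (2 KB total). */
    #define LINE_BYTES 32u
    #define NUM_LINES  64u

    struct cache_line {
        int      valid;
        uint32_t tag;
        uint8_t  data[LINE_BYTES];   /* filled by one wide burst from RAM */
    };

    static struct cache_line cache[NUM_LINES];

    /* Split a physical address into offset within the line, line index, and tag. */
    static void decode(uint32_t addr, uint32_t *off, uint32_t *idx, uint32_t *tag)
    {
        *off = addr % LINE_BYTES;
        *idx = (addr / LINE_BYTES) % NUM_LINES;
        *tag = addr / (LINE_BYTES * NUM_LINES);
    }

    int main(void)
    {
        uint32_t off, idx, tag;
        decode(0x12345678u, &off, &idx, &tag);
        printf("offset=%u index=%u tag=0x%x\n", off, idx, tag);

        /* A hit means cache[idx].valid && cache[idx].tag == tag. On a miss,
           the whole line is fetched at once over the wide fill bus, then: */
        cache[idx].tag   = tag;
        cache[idx].valid = 1;
        return 0;
    }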

Cache and scratchpad memories

Could someone explain what is the difference between cache memory and scratchpad memory? I'm currently learning about computer architecture.
A scratchpad is just that: a place to keep some stuff. Cache is memory you normally talk through, not at. A scratchpad is like a Post-it note, something you write on and keep with you. Cache is like paper you send off to someone else with instructions, like a memo.
Cache can be in various places and layers (L1, L2, L3...). Both scratchpad and cache are just SRAM in some chip, with an address bus, a data bus, and read/write/etc. control signals (as are many other things in a computer that may or may not be used as addressable RAM). During boot, before the RAM on the far side is initialized (the slower RAM side, the processor being the near side; typically DRAM eventually, if you have a cache, otherwise why have a cache), it may be possible to access the cache as addressable RAM. It depends very much on the system/design, though: there may be a control register that enables it to behave as a simple RAM, or there may be a mode, or its normal mode may be such that, so long as you don't address more than the size of the RAM based on its alignment (perhaps a 32K RAM between 32K boundaries), it won't try to evict anything or generate bus cycles on the DRAM/slow/far side of the cache, allowing you to use it as RAM just like a scratchpad.
BUT, the normal use case for a cache is as an ideally invisible pathway to RAM. You don't access the cache RAM using its own addressing; you use the address space of the RAM beyond it, and the cache simply allows the processor to continue without waiting for the slow RAM.
Talking about booting again: think about the kinds of things you need to do when booting, namely bringing up the DRAM controller, which is most definitely a non-trivial thing. Having some on-chip memory allows you, if nothing else, to temporarily have some RAM for a small stack and for some variables. You can, for example, use a compiled language like C, which needs at minimum some RAM for the stack and variables. Depending on space you can put some program there too, likely running much faster there than from flash. The alternative to having no RAM is likely having to write the DRAM init in assembly using only general-purpose or other registers in the processor, taking a complicated task and making it that much more difficult. Once the main system RAM is up, you may or may not choose to keep using the on-chip (scratchpad) RAM.
I would and do argue that if you want to test the DRAM to see if it is working, you must not use that RAM to test itself: the test program should not run in, nor use, the RAM under test. Scratchpad RAM on chip (or some other RAM in the address space, perhaps video card RAM, for example) could be used for the DRAM test program. Unfortunately, lots of folks will use the RAM under test to hold the stack, program, variables, and heap of the program doing the test, leaving important parts of the RAM tested with only one or a small number of patterns.
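As a sketch of what such a test might look like, here's a minimal walking-ones data-bus check plus an address-in-address pass in C. The base address and size are made up, and the point is that this code, its stack, and its variables would be linked to run from the on-chip scratchpad, never from the DRAM it is probing:

    #include <stdint.h>

    /* Hypothetical DRAM window; adjust for your board. */
    #define DRAM_BASE  ((volatile uint32_t *)0x80000000u)
    #define DRAM_WORDS (0x01000000u / 4u)          /* 16 MB under test */

    /* Returns 0 on success, -1 on the first failure found. */
    int dram_test(void)
    {
        /* Walking ones at a single location: catches stuck/shorted data lines. */
        for (uint32_t bit = 1u; bit != 0u; bit <<= 1) {
            DRAM_BASE[0] = bit;
            if (DRAM_BASE[0] != bit) return -1;
        }

        /* Write each word's own index, then verify: catches address aliasing. */
        for (uint32_t i = 0; i < DRAM_WORDS; i++)
            DRAM_BASE[i] = i;
        for (uint32_t i = 0; i < DRAM_WORDS; i++)
            if (DRAM_BASE[i] != i) return -1;

        return 0;
    }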

DMA vs Cache difference

Probably a stupid question for most who know DMA and caches... I just know that a cache stores data somewhere closer to where it's accessed, so you don't have to spend as much time on the I/O.
But what about DMA? It lets you access that main memory with less delay?
Could someone explain the differences, both, or why I'm just confused?
DMA is a hardware device that can move data to/from memory without using CPU instructions.
For instance, a hardware device (let's say, your PCI sound device) wants audio to play back. You can either:
Write a word at a time via CPU mov instructions.
Configure the DMA device. You give it a start address, a destination, and the number of bytes to copy. The transfer now occurs while the CPU does something else instead of spoon-feeding the audio device.
DMA can be very complex (scatter gather, etc), and varies by bus type and system.
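To make the second option concrete, here's a minimal sketch of programming such a transfer. The register layout and base address are invented for illustration; every real DMA controller defines its own:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers. */
    struct dma_regs {
        volatile uint32_t src;     /* source address              */
        volatile uint32_t dst;     /* destination address         */
        volatile uint32_t count;   /* number of bytes to copy     */
        volatile uint32_t ctrl;    /* bit 0 = start, bit 1 = busy */
    };

    #define DMA ((struct dma_regs *)0x40001000u)   /* made-up base address */

    static void dma_copy(uint32_t dst, uint32_t src, uint32_t bytes)
    {
        DMA->src   = src;
        DMA->dst   = dst;
        DMA->count = bytes;
        DMA->ctrl  = 1u;                 /* kick off the transfer */
        /* The CPU is now free to do other work; here we just poll,
           but a real driver would usually take a completion interrupt. */
        while (DMA->ctrl & 2u)
            ;                            /* spin until busy clears */
    }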
I fully agree with the first answer, but there are some common additions...
On most DMA hardware you can also set it up to do memory-to-memory transfers; there are not always external devices involved. Also, depending on the system, you may or may not need to sync the CPU cache in software before (or after) the transfer, since the data the DMA moves into/from memory may travel without the knowledge of the CPU cache.
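The usual software discipline around a non-coherent DMA transfer looks something like the sketch below. The cache_clean_range/cache_invalidate_range names are hypothetical stand-ins for whatever your platform provides:

    #include <stdint.h>

    /* Hypothetical platform hooks; substitute your platform's equivalents. */
    extern void cache_clean_range(void *buf, uint32_t len);       /* write dirty lines back */
    extern void cache_invalidate_range(void *buf, uint32_t len);  /* drop cached lines      */
    extern void start_dma_from_memory(void *buf, uint32_t len);
    extern void start_dma_to_memory(void *buf, uint32_t len);
    extern void wait_for_dma_done(void);

    void dma_send(void *buf, uint32_t len)
    {
        cache_clean_range(buf, len);      /* so the device sees the CPU's latest data */
        start_dma_from_memory(buf, len);
        wait_for_dma_done();
    }

    void dma_receive(void *buf, uint32_t len)
    {
        start_dma_to_memory(buf, len);
        wait_for_dma_done();
        cache_invalidate_range(buf, len); /* so the CPU doesn't read stale cached data */
    }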
The benefit of doing any DMA is that the CPU(s) is/are able to do other things simultaneously.
Of course when the CPU also needs to access the memory, only one can gain access and the other must wait.
Memory-to-memory DMA is often used in embedded systems to increase performance, or may be vital to be able to access some parts of the memory at all.
To answer the question, DMA and CPU-cache are totally different things and not comparable.
I know it's a bit late, but answering this question may help someone like me, I guess. Agreeing with the above answers, I think the question was asked in relation to cache.
So yes, a cache does store information somewhere closer to the processor than main memory; this could include the results of earlier computations. Moreover, whenever data is found in the cache (called a cache hit) the value is used directly; when it's not found (called a cache miss), the processor goes on to fetch the required value from main memory. Peripheral devices (SD cards, USB devices, etc.) can also access this data, which is why on startup we usually invalidate the cache so that the cache lines are clean. We also flush the cache on startup so that all the cached data is written back to main memory for the CPU to use, after which we proceed to reset or initialize the cache.
DMA (Direct Memory Access): yes, it does let you access the main memory. But I think the better way to put it is that it lets devices access memory directly, something that would otherwise require the processor. @Ronnie and @Yann Ramin were both correct in that DMA can be device hardware, so it can be used by your serial peripheral to reach system memory, but it can also be used for memory-to-memory transfers between two cores.
You can read up further on DMA on Wikipedia, particularly the modes in which DMA can access the system memory. I'll explain them simply:
Burst mode: the DMA controller takes full control of the bus, and the CPU is idle during this time. Data is transferred in a burst (as a whole) without interruption.
Cycle stealing mode: data is transferred one byte at a time. The transfer is slower, but the CPU is not left idle.

2 basic computer questions

Question 1:
Where exactly do the internal registers and internal cache exist? I understand that when a program is loaded into main memory it contains a text section, a stack, a heap and so on. However, are the registers located in a fixed area of main memory, or are they physically on the CPU, not residing in main memory at all? Does this apply to the cache as well?
Question 2:
How exactly does a device controller use direct memory access without using the CPU to schedule/move data between the local buffer and main memory?
Basic answer:
The CPU registers are directly on the CPU. The L1, L2, and L3 caches are often on-chip; however, they may be shared between multiple cores or processors, so they're not always "physically on the CPU." They're never part of main memory, though. The general principle is that the closer memory is to the CPU, the faster and more expensive (and thus smaller) it is. Every item in the cache has a particular main memory address associated with it (however, the same slot can be associated with different addresses at different times). There is no such association between registers and main memory, however. That is why, if you use the register keyword in C (not that it's often necessary, since the compiler is usually a better optimizer), you cannot use the & operator on that variable.
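The last point is easy to demonstrate: a register variable has no memory address, so taking its address is a constraint violation in C. A minimal example (the variable names are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        register int r = 42;   /* hint: keep r in a CPU register     */
        int m = 42;            /* ordinary variable: lives in memory */

        printf("%p\n", (void *)&m);   /* fine: m has a memory address */
        /* printf("%p\n", (void *)&r);   won't compile: a register
           variable has no address to take                            */
        return (r == m) ? 0 : 1;
    }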
The DMA controller executes the transfer directly. The CPU watches the bus so it knows when changes are made "behind its back", which invalidates its cache(s).
Even though the CPU is the central processing unit, it's not the sole "mover and shaker". Devices live on buses, along with CPUs and RAM. Modern buses allow devices to communicate with RAM without involving the CPU. Some devices are programmed simply by making changes to pieces of RAM that the devices poll. Device drivers may poll pieces of RAM that a device is writing into, but usually the CPU receives an interrupt from the device telling it that there's something ready to read in a piece of RAM.
So, in answer to your question 2, the CPU isn't involved in memory transfers across the bus, except inasmuch as cache coherence messages about the invalidation of cache lines are involved. Bear in mind that the scenarios are tricky. The CPU might have modified byte 1 on a cache line when a device decides to modify byte 40. Getting that dirty cache line out of the CPU needs to happen before the device can modify data, but on x86 anyway, that activity is initiated by the bus, not the CPU.

What are the different areas of Memory & Disk?

I'm not sure whether this is the right place to ask, nor how best to put my query.
Let me put it this way:
Main Memory starting at 0x00000 to 0xFFFFF.
Diskspace starting at 0x00000000 to 0xFFFFFFFF.
But what we're able to access will not run from the 0th byte to the last byte, right?
On the hard disk, I guess at the 0th byte we have the MBR, and somewhere we have the filesystem (which is all we can access). What else?
Similarly with main memory: we have some kernel memory and user memory (in which the processes live). What else?
My question is: what are all the regions from the 0th byte to the last byte? I don't know what to search for or where to find such information. If anyone can post some links, that would be great.
EDIT:
I'm using 32-bit x86 on Windows. Actually, I was reading a book on computer security where the author mentions that malware can live either on the disk or in memory (which is very true). But when we say a computer is infected, that doesn't mean only files (which are part of the filesystem) are infected. There are other areas which are not meant for the user, like the MBR or kernel memory.
So the question popped up in my mind: what are all such areas that I may not be aware of?
Apart from the fact that the answer to this question is highly dependent on the OS, disk space is not at all part of main memory. On Intel architectures, disk access takes some I/O address space (which is separate from the memory address space) per channel, and the exact number of words depends on the channel: IDE/ATA/SATA/SCSI. On memory-mapped architectures like the PowerPC, disk access does take some memory address space, but still not much.
To illustrate (and be warned that this is a very simplified example, not the real world), assume a memory-mapped CPU* like the PowerPC trying to access a disk with LBA addressing. The disk really only needs 2 to 3 words of memory to hold multiple gigabytes of data. That is, we only need 12 bytes to store and retrieve gigabytes of data:
2 words (8 bytes) to tell the disk where to seek to, that is, at what address we want to read from or write to.
1 word (4 bytes) to actually do the read and write. Every time you read from this address, the 2-word pointer automagically increments by 1 byte (or by 4 if you read in 32-bit words).
But the above is an abstracted view of what really happens. Most disk controllers have several more registers to control power management, disk spin speed, entering and exiting sleep modes, etc.
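Expressed in C, the simplified three-word interface above might look like this (the base address and field layout are made up for the example):

    #include <stdint.h>

    /* The simplified three-word disk interface, as a register struct. */
    struct disk_regs {
        volatile uint32_t lba_hi;  /* word 1: high half of the seek address  */
        volatile uint32_t lba_lo;  /* word 2: low half of the seek address   */
        volatile uint32_t data;    /* word 3: read/write window; the seek
                                      pointer auto-increments on each access */
    };

    #define DISK ((struct disk_regs *)0xF0000000u)   /* hypothetical mapping */

    static uint32_t disk_read_word(uint64_t lba)
    {
        DISK->lba_hi = (uint32_t)(lba >> 32);   /* tell the disk where to seek */
        DISK->lba_lo = (uint32_t)lba;
        return DISK->data;                      /* each read advances the pointer */
    }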
And what are the addresses of these memory locations? Well, it depends on what I/O channel you're talking about. The old-school ISA bus depended on the user setting jumpers on cards to set the addresses, so for those you need to ask the user. The PCI bus auto-negotiates the addresses with the disk controllers at boot time and then, depending on the architecture, either tells your BIOS what devices exist, passes them as parameters to the bootloader, or stores them in some temporary registers on the system bus. USB works like PCI but negotiates with the OS instead of the BIOS... etc.
As you can see, there is no simple answer to this, even if you limit it to a specific case like Windows 7 running on a 64-bit AMD CPU on a Dell motherboard.
*note: since you're worried about memory locations.
Your question is complex, and hard to answer without knowing whose view of memory is in scope.
Pretending we're in ring 0 with directly mapped memory, a PC-compatible has multiple memory regions: lower memory, BIOS-mapped code, I/O ports, video memory, etc. They all live in the same address space. You communicate with peripherals by reading and writing specific memory addresses (which are mapped to those components). These addresses are set up by the hardware in question and the drivers in use.
Once we enter user mode, you have to deal with virtual memory. Addresses are symbolic, and may or may not map to any particular part of physical memory. I'd suggest reading up on virtual memory.
