How are PCIe lanes distributed between CPU and peripherals? [closed]

I'm about to build a desktop computer and I'm trying to understand how these PCIe lanes are distributed. The goal is to be able to calculate how many lanes I need for a given setup. I'm looking at the Asus Z170-P motherboard, which according to the specifications [1]:
It contains the Z170 chipset.
You can read on the board that it is "CrossFireX Ready", which I believe implies you can plug in two graphics cards.
The specs say it has two PCIe x16 slots: one that works in x16 mode and another that only works in x4 mode.
First, according to the Z170 chipset specifications, it supports up to 20 PCIe lanes. However, no processor that fits into the LGA1151 socket supports 20 or more PCIe lanes [2]. Why have a chipset with support for 20 lanes when the processor can only handle up to 16?
Second, the PCIe port configurations supported by the chipset are "1x16, 2x8, 1x8+2x4". If I were to plug in two graphics cards, would they both work in x4 mode, or in x8/x4 modes? Shouldn't a motherboard designed for two graphics cards handle 32+ PCIe lanes so that both cards can work in x16 mode?

The (up to) 20 PCIe lanes from the Z170 are in addition to the 16 lanes that come directly out of the CPU; the chipset's lanes connect back to the CPU over the DMI link. On a board like this, the x16 slot is typically wired to the CPU's lanes, while the x4 slot hangs off the chipset.
I don't see any reason it wouldn't run one graphics card at x16 and one at x4. But it does seem odd to me that they call it "CrossFireX Ready" without two x16 slots.
More info on the Z170 here:
http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-2.html
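If you want to sanity-check how the lanes were actually allocated on a running Linux system, a minimal sketch like the following reads the negotiated link width of each PCIe device out of sysfs (assuming a kernel that exposes the optional current_link_width/max_link_width attributes; not every device provides them):

    #!/usr/bin/env python3
    """Report the negotiated PCIe link width of each device via sysfs.
    Assumes Linux; some devices do not expose these optional attributes."""
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        cur = dev / "current_link_width"
        mx = dev / "max_link_width"
        if cur.exists() and mx.exists():
            # e.g. "0000:01:00.0: x16 (device supports up to x16)"
            print(f"{dev.name}: x{cur.read_text().strip()} "
                  f"(device supports up to x{mx.read_text().strip()})")

A graphics card that reports x4 here while supporting x16 is sitting in the chipset-fed slot.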

Related

What are sockets, cores, threads, and CPUs? [closed]

I am currently volunteering in a lab to learn about Linux servers, and I am also interested in learning cluster computing techniques.
In this lab, they have a small cluster with one head node and two compute nodes.
When I ran the lscpu command on the head node and on compute nodes 1 and 2, it reported the following:
CPUs: 24 on the head node and on both compute nodes. Is this referring to 24 physical CPUs on the motherboard?
Sockets: 2 on each node.
Cores per socket: 6 on each node.
Threads per core: 2 on each node.
Can anyone explain these numbers?
A socket is the physical socket where a physical CPU package is placed. A normal PC has only one socket.
Cores are the number of CPU cores per package. A modern standard CPU for a standard PC usually has two or four cores.
And some CPUs can run more than one parallel thread per core. Intel (the most common CPU manufacturer for standard PCs) has either one or two threads per core, depending on the CPU model.
If you multiply the number of sockets, cores per socket, and threads per core, i.e. 2 * 6 * 2, you get the number of "CPUs": 24. These aren't real CPUs, but the number of parallel threads of execution your system can run.
Just the fact that you have 6 cores per socket is a sign you have a high-end workstation or server. The fact that you have two sockets makes it a very high-end machine; these days not even most high-end workstations have that, only servers.
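As a minimal sketch of that arithmetic, the following reads the topology fields straight out of lscpu and recomputes the logical-CPU count (this assumes util-linux's lscpu is on PATH and printing its default English labels):

    #!/usr/bin/env python3
    """Recompute lscpu's logical-CPU count from its topology fields.
    Assumes util-linux's lscpu with default (English) output labels."""
    import subprocess

    fields = {}
    out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()

    sockets = int(fields["Socket(s)"])
    cores = int(fields["Core(s) per socket"])
    threads = int(fields["Thread(s) per core"])

    # Logical CPUs = sockets x cores per socket x threads per core.
    print(f"{sockets} sockets * {cores} cores * {threads} threads = "
          f"{sockets * cores * threads} logical CPUs "
          f"(lscpu reports {fields['CPU(s)']})")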

How are instructions embedded in the processor and boards of a computer? [closed]

So the processor has a clock that ticks, and on each tick it executes a predetermined instruction. How are these instructions loaded into the processor in the first place? I am picturing a clean slate with the CPU: how do we teach or tell the CPU to do what it does?
Also, if we are starting from a clean slate, how do we load data into the computer so it recognizes binary at all?
I'm sorry if this is an overload of questions, I'm just super curious.
Instruction execution starts at a hardwired address known as the reset vector. The instructions are programmed in memory; the means by which that is done varies depending on the memory technology used and the type of processor.
For standalone CPUs with no on-chip memory, the initial code will normally be in some kind of external read-only, random-access memory (ROM), often called a boot ROM; this, for example, is the BIOS on a PC motherboard. On modern platforms these ROMs normally use a technology known as NOR flash, which can be electrically erased and reprogrammed, either by loading them into a dedicated programmer or in-circuit (so, for example, a PC can rewrite its own BIOS flash).
Microcontrollers (MCUs) with on-chip memory often have on-chip flash ROM that can likewise be electrically programmed, typically through an on-chip programming and debug interface known as JTAG (proprietary interfaces also exist on some devices). Some MCUs include mask ROM that is not rewritable and contains a simple (or primary) bootloader. Usually you can select how an MCU boots.
The job of the bootloader is to load more complex code into memory (electrically programmable ROM or RAM) and execute it. Often a primary bootloader in mask ROM is too simple and restricted to do very much, so it loads a more complete bootloader that then loads an OS. For example, a common scenario is for a processor to have a primary bootloader that loads code from a simple non-random-access memory such as NAND flash or an SD card; this may then load a more fully featured bootloader such as U-Boot, typically used to load Linux. This secondary bootloader can support more complex boot sources such as a hard disk, network, CD, or USB.
Typically, CPUs read either address zero or an address at the very top of the address space, e.g. 0xFFFF minus (vector length - 1), with more F's if your address space is larger, and take the value there as the start address of the boot loader.
There is also the possibility that the CPU starts by loading microcode into itself and then begins executing the real machine code from a predetermined address.
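To make the reset-vector idea concrete, here is a toy-machine sketch in Python; the opcodes, addresses, and memory layout are all invented for illustration and match no real architecture:

    #!/usr/bin/env python3
    """Toy machine illustrating a reset vector: the program counter starts
    at a hardwired address where the boot code lives. All opcodes and
    addresses here are made up for illustration."""

    RESET_VECTOR = 0xF0            # hardwired: execution always starts here
    memory = [0x00] * 0x100        # 256 bytes of "ROM/RAM"

    # "Program" the boot ROM with two made-up instructions.
    memory[0xF0] = 0x01            # opcode 0x01: print a greeting
    memory[0xF1] = 0xFF            # opcode 0xFF: halt

    pc = RESET_VECTOR              # the only thing the CPU "knows" at power-on
    while True:
        opcode = memory[pc]
        pc += 1
        if opcode == 0x01:
            print("hello from the boot ROM")
        elif opcode == 0xFF:
            break                  # halt

The only thing the hardware "knows" at power-on is where pc starts; everything else is whatever was programmed into the ROM.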

What are the ASM options for hardware-monitoring for ARM devices? [closed]

I'm having trouble identifying a good instruction set that is guaranteed to work on ARMv5 SoCs and above. I'm also having some issues with the syntax, since I'm used to the much simpler gcc asm syntax for x86 and the ARM one looks more complicated, but that is another topic, I guess.
What I need to do is check the features of the SoC, like the frequency, the temperatures, and the main computation features such as Thumb or NEON support.
I know that ARM basically just designs and sells the blueprints for the CPU, and the companies that buy the license are free to move bits around and make modifications. But I don't think things are that chaotic in the ARM world; at least this kind of register (hardware monitoring and features like temperature) is usually quite standard across the board. That is at least true in the x86 world, where some CPUID leaves are complex, but you can check the main features of your CPU quite easily, and most importantly you can write an application that works on both Intel and AMD with about the same code base.
What is a good set of registers for this, and if I pick a given register, are there implications for the ASM syntax that I should use?
ARM is simpler than x86; give it some time with an open mind and you will see that.
Intel uses different foundries, design teams, and technologies, so there is no consistency with temperature either: at least every other family is a different design team, and they often change process technology/size every year or two.
Most ARM cores provide, at a minimum, registers that tell you everything from which processor core it is and what version, up to tons of registers describing which instructions are supported or not.
Your ARM is going to run cooler and/or faster than an x86, if you could compare apples to apples.
Unless something has changed, if you want to put ARM's name on your chip or associate it with your chip, you can't go in and muck with the logic. If you look in the TRM for a particular core you will see the strap options available: boot from 0x00000000 or 0xFFFF0000, start big- or little-endian, etc.
All ARM cores from ARMv4T (ARM7TDMI) to the present support Thumb; it is the only universal ARM instruction set. Some NEON-like instructions and such are available in some of the Cortex-M cores (Cortex-M4), as well as different levels of support for Thumb-2 extensions, and these cores target low power consumption: MIPS per watt, while keeping MIPS per MHz. The Cortex-M parts are microcontrollers, so they will have options to turn items off, or not turn them on, to help conserve power; but you can also implement that yourself in your on-chip peripherals.
The Cortex-M parts won't give you ARM instructions, only Thumb with Thumb-2 extensions. All of the TRMs (Technical Reference Manuals) for the various ARM cores are available at ARM's website (infocenter.arm.com), and they describe the features, strap options, AXI/AMBA choices or sizes, etc.
MIPS is your other primary choice for an SoC core, though I don't think your MIPS-to-watts will be as good. You can of course go with an open core as well, the OpenRISC or altor or mpx or Amber or others that are out there, but then performance, temperature, etc. (and floating point) are all on you.
Not sure what you mean by hardware monitoring, but you have JTAG and other typical debug options available. If it is temperature you are after, you have to work with your cell-library provider, see what is available for the target foundry/process, and then implement that peripheral and connect it to the ARM, or to the outside world, or both.
Bottom line: you need to do more research; the info you need is available from ARM for free, or at the cost of an email address.
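One practical note: the ARM ID registers (the CP15 coprocessor registers) are privileged, so you generally cannot read them from a user-space program. On Linux the usual workaround is to parse /proc/cpuinfo instead; here is a minimal sketch, assuming a Linux ARM kernel that prints a "Features" line:

    #!/usr/bin/env python3
    """Check CPU features on a Linux ARM system by parsing /proc/cpuinfo,
    instead of issuing privileged CP15 reads from user space.
    Assumes the kernel exposes a 'Features' line, as Linux does on ARM."""

    features = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.lower().startswith("features"):
                features.update(line.split(":", 1)[1].split())

    for wanted in ("thumb", "neon", "vfp"):
        print(f"{wanted}: {'yes' if wanted in features else 'no'}")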

What is the dominant factor in disk price? [closed]

What is the dominant factor in disk price: capacity or IOPS? I think the answer to this question should also answer this one, in which I asked why the cost of disk I/O per access is PricePerDiskDrive/AccessesPerSecondPerDisk.
The factor dominating the price is the market segment: home disks are the cheapest, server disks the most expensive.
It depends on several factors. As stated in the previous answer, there is the segment: home or business.
Then there is the architecture:
SCSI (bus controller with high speeds)
SSD (flash)
SATA (regular drive)
SAS (serial attached scsi, backwards compatible with SATA)
SAS and SCSI disks mostly run at high rotational speeds, which makes them more expensive.
SATA disks for home use at normal speeds (5400 or 7200 rpm) are priced based on capacity and brand. If one company has the first 3 TB disk, it will be very expensive; when three companies have such disks, prices will decrease because of competition.
SSD is a technology that has become affordable, but it is still a lot more expensive than regular SATA (with platters). This is because there are no moving parts and it uses faster memory.
Also a very nice thing to remember:
The faster the drive, the more expensive it is; therefore it is normal that the better your IOPS, the more you pay.
Capacity has a price too, but it is linked to the drive's speed and to recent evolution in technology.
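To tie this back to the formula in the question, here is a minimal sketch computing cost per access as PricePerDiskDrive/AccessesPerSecondPerDisk; the prices and IOPS figures below are invented round numbers for illustration, not real market data:

    #!/usr/bin/env python3
    """Cost-per-access metric from the question:
    cost of one I/O per second = drive price / IOPS.
    Prices and IOPS below are invented, illustrative values."""

    drives = {
        # name: (price in dollars, random-access IOPS)
        "7200 rpm SATA HDD": (80.0, 100),
        "15k rpm SAS HDD":   (250.0, 200),
        "SATA SSD":          (120.0, 50_000),
    }

    for name, (price, iops) in drives.items():
        print(f"{name}: ${price / iops:.4f} per I/O-per-second")

Note how the SSD's huge IOPS makes its cost per access tiny even though its cost per gigabyte is higher.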

Why is the memory of USB devices always a power of 2? [closed]

Why is the memory capacity of USB devices always a power of 2?
Because all memory devices are essentially an array of bytes or words. As such, there is an address (index) and data, which are both binary numbers. So a 1-megabyte memory will have a 20-bit address "bus" and an 8-bit data bus. These buses are physically constructed with one electrical conductor per bit, so the 1 MB device will have 20 address pins and 8 data pins. In DRAM there may be multiplexing, where half of the address is sent on one clock cycle and the other half on another; this can reduce the number of physical pins and traces on a circuit board.
Making a 2 MB memory out of such chips is easy: you connect the address and data pins together and then use the "chip select" pin, driven by a 21st address bit, to determine which chip is being accessed. Partitioning memory in a non-power-of-2 scheme requires a LOT more circuitry and interconnection complexity to figure out which chip your data is in, and it means not using every bit combination of the address lines, all of which means less efficient use of circuitry.
Hope that helps.
Because flash memory chips are always manufactured with capacities in powers of two, as that doesn't waste address space. Since addressing is done in binary, n address bits give exactly 2^n addresses, so the maximum address is always a power of two minus 1.
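A small sketch of the address-line arithmetic behind both answers: n address lines select exactly 2**n locations, so any non-power-of-two capacity leaves some address patterns unused:

    #!/usr/bin/env python3
    """For a few capacities, show how many address lines are needed and
    how many addresses would go unused if the capacity isn't a power of 2."""
    import math

    for capacity in (1_048_576, 2_097_152, 1_500_000):  # bytes
        bits = math.ceil(math.log2(capacity))
        wasted = 2**bits - capacity
        print(f"{capacity:>9} bytes needs {bits} address lines; "
              f"{wasted} addresses would go unused")

The 1,500,000-byte device still needs 21 address lines but leaves almost 600,000 address combinations dead, which is exactly the inefficiency the answers describe.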
