Approximate cost to access various caches and main memory? - performance

Can anyone give me the approximate time (in nanoseconds) to access L1, L2 and L3 caches, as well as main memory on Intel i7 processors?
While this isn't specifically a programming question, knowing these kinds of speed details is necessary for some low-latency programming challenges.

Numbers everyone should know
0.5 ns - CPU L1 dCACHE reference
1 ns - speed-of-light (a photon) travel a 1 ft (30.5cm) distance
5 ns - CPU L1 iCACHE Branch mispredict
7 ns - CPU L2 CACHE reference
71 ns - CPU cross-QPI/NUMA best case on XEON E5-46*
100 ns - MUTEX lock/unlock
100 ns - own DDR MEMORY reference
135 ns - CPU cross-QPI/NUMA best case on XEON E7-*
202 ns - CPU cross-QPI/NUMA worst case on XEON E7-*
325 ns - CPU cross-QPI/NUMA worst case on XEON E5-46*
10,000 ns - Compress 1K bytes with Zippy PROCESS
20,000 ns - Send 2K bytes over 1 Gbps NETWORK
250,000 ns - Read 1 MB sequentially from MEMORY
500,000 ns - Round trip within a same DataCenter
10,000,000 ns - DISK seek
10,000,000 ns - Read 1 MB sequentially from NETWORK
30,000,000 ns - Read 1 MB sequentially from DISK
150,000,000 ns - Send a NETWORK packet CA -> Netherlands
| | | |
| | | ns|
| | us|
| ms|
From: Originally by Peter Norvig:
- http://norvig.com/21-days.html#answers
- http://surana.wordpress.com/2009/01/01/numbers-everyone-should-know/
- http://sites.google.com/site/io/building-scalable-web-applications-with-google-app-engine
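To get a feel for the relative magnitudes, here is a small illustrative Python sketch (my own, not part of the original list) that rescales a few of the figures above so that one L1 reference corresponds to one second of "human time":

# Rescale a few of the latency figures above so that an L1 reference (0.5 ns)
# "feels like" one second; the values are copied from the table, the scaling is just arithmetic.
latencies_ns = {
    "L1 cache reference":        0.5,
    "Branch mispredict":         5,
    "L2 cache reference":        7,
    "Main memory reference":     100,
    "Read 1 MB from memory":     250_000,
    "Disk seek":                 10_000_000,
    "Packet CA -> Netherlands":  150_000_000,
}

SCALE = 1.0 / 0.5  # "human" seconds per nanosecond

for name, ns in latencies_ns.items():
    seconds = ns * SCALE
    if seconds < 3600:
        print(f"{name:26s} {ns:>14,.1f} ns  ->  {seconds:>10,.0f} s")
    else:
        print(f"{name:26s} {ns:>14,.1f} ns  ->  {seconds / 86400:>10,.1f} days")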

Here is a Performance Analysis Guide for the i7 and Xeon range of processors. I should stress that it has what you need and more (for example, see page 22 for some timings and cycle counts).
Additionally, this page has some details on clock cycles etc. The second link served the following numbers:
Core i7 Xeon 5500 Series Data Source Latency (approximate) [Pg. 22]
local L1 CACHE hit, ~4 cycles ( 2.1 - 1.2 ns )
local L2 CACHE hit, ~10 cycles ( 5.3 - 3.0 ns )
local L3 CACHE hit, line unshared ~40 cycles ( 21.4 - 12.0 ns )
local L3 CACHE hit, shared line in another core ~65 cycles ( 34.8 - 19.5 ns )
local L3 CACHE hit, modified in another core ~75 cycles ( 40.2 - 22.5 ns )
remote L3 CACHE (Ref: Fig.1 [Pg. 5]) ~100-300 cycles ( 160.7 - 30.0 ns )
local DRAM ~60 ns
remote DRAM ~100 ns
EDIT2:
The most important part is the note under the cited table, which says:
"NOTE: THESE VALUES ARE ROUGH APPROXIMATIONS. THEY DEPEND ON
CORE AND UNCORE FREQUENCIES, MEMORY SPEEDS, BIOS SETTINGS,
NUMBERS OF DIMMS, ETC,ETC..YOUR MILEAGE MAY VARY."
EDIT: I should highlight that, beyond timing/cycle information, the above Intel document addresses many more (extremely useful) details of the i7 and Xeon range of processors, from a performance point of view.
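The cycle counts and the nanosecond ranges in the table above are tied together by the core clock frequency; the two frequencies in the sketch below (~1.9 and ~3.3 GHz, my own assumption) roughly bracket the quoted ranges:

# Convert cache-hit latencies from core cycles to nanoseconds for a given core clock.
# Cycle counts come from the table above; the two frequencies are illustrative only.
def cycles_to_ns(cycles, freq_ghz):
    return cycles / freq_ghz

for freq_ghz in (1.9, 3.3):
    print(f"@ {freq_ghz} GHz:")
    for name, cycles in (("L1 hit", 4), ("L2 hit", 10), ("L3 hit, unshared line", 40)):
        print(f"  {name:22s} ~{cycles:3d} cycles = {cycles_to_ns(cycles, freq_ghz):5.1f} ns")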

Cost to access various memories in a pretty page
See this page presenting the decrease in memory latency from 1990 to 2020.
Summary
Values have decreased, but have stabilized since 2005
1 ns L1 cache
3 ns Branch mispredict
4 ns L2 cache
17 ns Mutex lock/unlock
100 ns Main memory (RAM)
2 000 ns (2µs) 1KB Zippy-compress
Still some improvements, prediction for 2020
16 000 ns (16µs) SSD random read (olibre's note: should be less)
500 000 ns (½ms) Round trip in datacenter
2 000 000 ns (2ms) HDD random read (seek)
See also other sources
What every programmer should know about memory from Ulrich Drepper (2007)
Old, but still an excellent in-depth explanation of how memory hardware and software interact.
Full PDF (114 pages)
Comments on LWN about PDF version
Other versions
Seven posts on LWN + Comments
Part 1 - Introduction
Part 2 - Cache
Part 3 - Virtual Memory
Part 4 - NUMA support
Part 5 - What programmers can do
Part 6 - More things programmers can do
Part 7 - Memory performance tools
The post The Infinite Space Between Words on codinghorror.com, based on the book Systems Performance: Enterprise and the Cloud
Click each processor listed on http://www.7-cpu.com/ to see its L1/L2/L3/RAM/... latencies (e.g. Haswell i7-4770 has L1 = 1 ns, L2 = 3 ns, L3 = 10 ns, RAM = 67 ns, branch misprediction = 4 ns)
http://idarkside.org/posts/numbers-you-should-know/
See also
For further understanding, I recommend the excellent presentation on modern cache architectures (June 2014) by Gerhard Wellein, Hannes Hofmann and Dietmar Fey at University Erlangen-Nürnberg.
French-speaking readers may appreciate an article by SpaceFox that compares a processor to a developer, both waiting for the information they need before they can continue working.

Just for the sake of a 2020 review of the predictions made for 2025:
Over roughly the last 44 years of integrated-circuit technology, classical (non-quantum) processors have evolved, literally and physically, "Per Aspera ad Astra". The last decade has shown that the classical approach has come close to hurdles that have no achievable physical path forward.
Number of logical cores can and may grow, yet not more than O(n^2~3)
Frequency [MHz] has already hit a physics-based ceiling that is hard, if not impossible, to circumvent
Transistor Count can and may grow, yet less than O(n^2~3) (power, noise, "clock")
Power [W] can grow, yet problems with power distribution & heat dissipation will increase
Single Thread Perf may grow, having direct benefits from large cache footprints and faster, wider memory I/O, and indirect benefits from less frequent system-forced context switching, as more cores are available to split other threads/processes among
( Credits go to Leonardo Suriano & Karl Rupp )
2022: Still some improvements, prediction for 2025+
--------------------------------------------------------------------------------
0.001 ns light transfer in Gemmatimonas phototrophica bacterium
| | | | |
| | | | ps|
| | | ns|
| | us| reminding us what Richard FEYNMAN told us:
| ms| "There's a plenty of space
s| down there"
-----s.-ms.-us.-ns|----------------------------------------------------------
0.1 ns - NOP
0.3 ns - XOR, ADD, SUB
0.5 ns - CPU L1 dCACHE reference (1st introduced in late 80-ies )
0.9 ns - JMP SHORT
1 ns - speed-of-light (a photon) travel a 1 ft (30.5cm) distance -- will stay, throughout any foreseeable future :o)
?~~~~~~~~~~~ 1 ns - MUL ( i**2 = MUL i, i )~~~~~~~~~ doing this 1,000 x is 1 [us]; 1,000,000 x is 1 [ms]; 1,000,000,000 x is 1 [s] ~~~~~~~~~~~~~~~~~~~~~~~~~
3~4 ns - CPU L2 CACHE reference (2020/Q1)
5 ns - CPU L1 iCACHE Branch mispredict
7 ns - CPU L2 CACHE reference
10 ns - DIV
19 ns - CPU L3 CACHE reference (2020/Q1 considered slow on 28c Skylake)
71 ns - CPU cross-QPI/NUMA best case on XEON E5-46*
100 ns - MUTEX lock/unlock
100 ns - own DDR MEMORY reference
135 ns - CPU cross-QPI/NUMA best case on XEON E7-*
202 ns - CPU cross-QPI/NUMA worst case on XEON E7-*
325 ns - CPU cross-QPI/NUMA worst case on XEON E5-46*
|Q>~~~~~ 5,000 ns - QPU on-chip QUBO ( quantum annealer minimiser 1 Qop )
10,000 ns - Compress 1K bytes with a Zippy PROCESS
20,000 ns - Send 2K bytes over 1 Gbps NETWORK
250,000 ns - Read 1 MB sequentially from MEMORY
500,000 ns - Round trip within a same DataCenter
?~~~ 2,500,000 ns - Read 10 MB sequentially from MEMORY~~(about an empty python process to copy on spawn)~~~~ x ( 1 + nProcesses ) on spawned process instantiation(s), yet an empty python interpreter is indeed not a real-world, production-grade use-case, is it?
10,000,000 ns - DISK seek
10,000,000 ns - Read 1 MB sequentially from NETWORK
?~~ 25,000,000 ns - Read 100 MB sequentially from MEMORY~~(somewhat light python process to copy on spawn)~~~~ x ( 1 + nProcesses ) on spawned process instantiation(s)
30,000,000 ns - Read 1 MB sequentially from a DISK
?~~ 36,000,000 ns - Pickle.dump() SER a 10 MB object for IPC-transfer and remote DES in spawned process~~~~~~~~ x ( 2 ) for a single 10MB parameter-payload SER/DES + add an IPC-transport costs thereof or NETWORK-grade transport costs, if going into [distributed-computing] model Cluster ecosystem
150,000,000 ns - Send a NETWORK packet CA -> Netherlands
1s: | | |
. | | ns|
. | us|
. ms|
Just for the sake of a 2015 review of the predictions made for 2020:
Still some improvements, prediction for 2020 (Ref. olibre's answer below)
16 000 ns ( 16 µs) SSD random read (olibre's note: should be less)
500 000 ns ( ½ ms) Round trip in datacenter
2 000 000 ns ( 2 ms) HDD random read (seek)
1s: | | |
. | | ns|
. | us|
. ms|
In 2015 the following are available:
======================================
820 ns ( 0.8µs) random read from a SSD-DataPlane
1 200 ns ( 1.2µs) Round trip in datacenter
1 200 ns ( 1.2µs) random read from a HDD-DataPlane
1s: | | |
. | | ns|
. | us|
. ms|
Just for the sake of comparing the CPU and GPU latency landscapes:
It is not an easy task to compare even the simplest CPU / cache / DRAM lineups (even in a uniform-memory-access model). DRAM speed is one factor in determining unloaded latency and another in loaded latency (a saturated system); the latter is what enterprise applications will experience far more often than an idle, fully unloaded system.
          CAS latency ...... 5, 6, 7, 8, 9, .. 12, 15, 16
          Data Rate ......... 1066, 1333, .. 2800 .. 3300 MT/s

          First  word = ( ( CAS latency * 2 ) + ( 1 - 1 ) ) / Data Rate
          Fourth word = ( ( CAS latency * 2 ) + ( 4 - 1 ) ) / Data Rate
          Eighth word = ( ( CAS latency * 2 ) + ( 8 - 1 ) ) / Data Rate
                                                  ^------------------- 7x .. difference
So:
===
the resulting DDR3-side latencies range between
    3.03 ns ( fastest CAS / data-rate combination, first word )
and
   36.58 ns ( slowest combination, eighth word ), based on DDR3 HW facts.
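As a sanity check, here is a small Python sketch evaluating the formula above for the fastest and slowest DDR3 combinations listed (CAS latency in memory-clock cycles, data rate in MT/s); it reproduces the two endpoints:

# Evaluate the DDR3 word-latency formula above: latency = (CL*2 + (word-1)) transfers / data rate.
def ddr3_word_latency_ns(cas_latency, data_rate_mts, word_index):
    transfers = (cas_latency * 2) + (word_index - 1)   # transfers until the requested word arrives
    return transfers / (data_rate_mts * 1e6) * 1e9     # seconds -> nanoseconds

print(ddr3_word_latency_ns( 5, 3300, 1))   # fastest listed case, first word  -> ~3.03 ns
print(ddr3_word_latency_ns(16, 1066, 8))   # slowest listed case, eighth word -> ~36.6 ns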
GPU engines have received a lot of technical marketing, while deep internal dependencies are key to understanding both the real strengths and the real weaknesses these architectures experience in practice (typically much different from the expectations whistled up by aggressive marketing).
1 ns _________ LETS SETUP A TIME/DISTANCE SCALE FIRST:
° ^
|\ |a 1 ft-distance a photon travels in vacuum ( less in dark-fibre )
| \ |
| \ |
__|___\__v____________________________________________________
| |
|<-->| a 1 ns TimeDOMAIN "distance", before a photon arrived
| |
^ v
DATA | |DATA
RQST'd| |RECV'd ( DATA XFER/FETCH latency )
25 ns # 1147 MHz FERMI: GPU Streaming Multiprocessor REGISTER access
35 ns # 1147 MHz FERMI: GPU Streaming Multiprocessor L1-onHit-[--8kB]CACHE
70 ns # 1147 MHz FERMI: GPU Streaming Multiprocessor SHARED-MEM access
230 ns # 1147 MHz FERMI: GPU Streaming Multiprocessor texL1-onHit-[--5kB]CACHE
320 ns # 1147 MHz FERMI: GPU Streaming Multiprocessor texL2-onHit-[256kB]CACHE
350 ns
700 ns # 1147 MHz FERMI: GPU Streaming Multiprocessor GLOBAL-MEM access
- - - - -
Understanding the internals is thus much more important here than in other fields, where architectures are published and numerous benchmarks are freely available. Many thanks to the GPU micro-testers, who have spent their time and creativity to unearth the real schemes of work inside the black-box-tested GPU devices.
+====================| + 11-12 [usec] XFER-LATENCY-up HostToDevice ~~~ same as Intel X48 / nForce 790i
| |||||||||||||||||| + 10-11 [usec] XFER-LATENCY-down DeviceToHost
| |||||||||||||||||| ~ 5.5 GB/sec XFER-BW-up ~~~ same as DDR2/DDR3 throughput
| |||||||||||||||||| ~ 5.2 GB/sec XFER-BW-down #8192 KB TEST-LOAD ( immune to attempts to OverClock PCIe_BUS_CLK 100-105-110-115 [MHz] ) [D:4.9.3]
|
| Host-side
| cudaHostRegister( void *ptr, size_t size, unsigned int flags )
| | +-------------- cudaHostRegisterPortable -- marks memory as PINNED MEMORY for all CUDA Contexts, not just the one, current, when the allocation was performed
| ___HostAllocWriteCombined_MEM / cudaHostFree() +---------------- cudaHostRegisterMapped -- maps memory allocation into the CUDA address space ( the Device pointer can be obtained by a call to cudaHostGetDevicePointer( void **pDevice, void *pHost, unsigned int flags=0 ); )
| ___HostRegisterPORTABLE___MEM / cudaHostUnregister( void *ptr )
| ||||||||||||||||||
| ||||||||||||||||||
| | PCIe-2.0 ( 4x) | ~ 4 GB/s over 4-Lanes ( PORT #2 )
| | PCIe-2.0 ( 8x) | ~16 GB/s over 8-Lanes
| | PCIe-2.0 (16x) | ~32 GB/s over 16-Lanes ( mode 16x )
|
| + PCIe-3.0 25-port 97-lanes non-blocking SwitchFabric ... +over copper/fiber
| ~~~ The latest PCIe specification, Gen 3, runs at 8Gbps per serial lane, enabling a 48-lane switch to handle a whopping 96 GBytes/sec. of full duplex peer to peer traffic. [I:]
|
| ~810 [ns] + InRam-"Network" / many-to-many parallel CPU/Memory "message" passing with less than 810 ns latency any-to-any
|
| ||||||||||||||||||
| ||||||||||||||||||
+====================|
|.pci............HOST|
My apologies for the "bigger picture", but latency-masking also has cardinal limits imposed by on-chip smREG/L1/L2 capacities and hit/miss rates.
|.pci............GPU.|
| | FERMI [GPU-CLK] ~ 0.9 [ns] but THE I/O LATENCIES PAR -- ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| <800> warps ~~ 24000 + 3200 threads ~~ 27200 threads [!!]
| ^^^^^^^^|~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [!!]
| smREGs________________________________________ penalty +400 ~ +800 [GPU_CLKs] latency ( maskable by 400~800 WARPs ) on <Compile-time>-designed spillover(s) to locMEM__
| +350 ~ +700 [ns] #1147 MHz FERMI ^^^^^^^^
| | ^^^^^^^^
| +5 [ns] # 200 MHz FPGA. . . . . . Xilinx/Zync Z7020/FPGA massive-parallel streamline-computing mode ev. PicoBlazer softCPU
| | ^^^^^^^^
| ~ +20 [ns] #1147 MHz FERMI ^^^^^^^^
| SM-REGISTERs/thread: max 63 for CC-2.x -with only about +22 [GPU_CLKs] latency ( maskable by 22-WARPs ) to hide on [REGISTER DEPENDENCY] when arithmetic result is to be served from previous [INSTR] [G]:10.4, Page-46
| max 63 for CC-3.0 - about +11 [GPU_CLKs] latency ( maskable by 44-WARPs ) [B]:5.2.3, Page-73
| max 128 for CC-1.x PAR -- ||||||||~~~|
| max 255 for CC-3.5 PAR -- ||||||||||||||||||~~~~~~|
|
| smREGs___BW ANALYZE REAL USE-PATTERNs IN PTX-creation PHASE << -Xptxas -v || nvcc -maxrregcount ( w|w/o spillover(s) )
| with about 8.0 TB/s BW [C:Pg.46]
| 1.3 TB/s BW shaMEM___ 4B * 32banks * 15 SMs * half 1.4GHz = 1.3 TB/s only on FERMI
| 0.1 TB/s BW gloMEM___
| ________________________________________________________________________________________________________________________________________________________________________________________________________________________
+========| DEVICE:3 PERSISTENT gloMEM___
| _|______________________________________________________________________________________________________________________________________________________________________________________________________________________
+======| DEVICE:2 PERSISTENT gloMEM___
| _|______________________________________________________________________________________________________________________________________________________________________________________________________________________
+====| DEVICE:1 PERSISTENT gloMEM___
| _|______________________________________________________________________________________________________________________________________________________________________________________________________________________
+==| DEVICE:0 PERSISTENT gloMEM_____________________________________________________________________+440 [GPU_CLKs]_________________________________________________________________________|_GB|
! | |\ + |
o | texMEM___|_\___________________________________texMEM______________________+_______________________________________________________________________________________|_MB|
| |\ \ |\ + |\ |
| texL2cache_| \ \ .| \_ _ _ _ _ _ _ _texL2cache +370 [GPU_CLKs] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | \ 256_KB|
| | \ \ | \ + |\ ^ \ |
| | \ \ | \ + | \ ^ \ |
| | \ \ | \ + | \ ^ \ |
| texL1cache_| \ \ .| \_ _ _ _ _ _texL1cache +260 [GPU_CLKs] _ _ _ _ _ _ _ _ _ | \_ _ _ _ _^ \ 5_KB|
| | \ \ | \ + ^\ ^ \ ^\ \ |
| shaMEM + conL3cache_| \ \ | \ _ _ _ _ conL3cache +220 [GPU_CLKs] ^ \ ^ \ ^ \ \ 32_KB|
| | \ \ | \ ^\ + ^ \ ^ \ ^ \ \ |
| | \ \ | \ ^ \ + ^ \ ^ \ ^ \ \ |
| ______________________|__________\_\_______________________|__________\_____^__\________+__________________________________________\_________\_____\________________________________|
| +220 [GPU-CLKs]_| |_ _ _ ___|\ \ \_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \ _ _ _ _\_ _ _ _+220 [GPU_CLKs] on re-use at some +50 GPU_CLKs _IF_ a FETCH from yet-in-shaL2cache
| L2-on-re-use-only +80 [GPU-CLKs]_| 64 KB L2_|_ _ _ __|\\ \ \_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \ _ _ _ _\_ _ _ + 80 [GPU_CLKs] on re-use from L1-cached (HIT) _IF_ a FETCH from yet-in-shaL1cache
| L1-on-re-use-only +40 [GPU-CLKs]_| 8 KB L1_|_ _ _ _|\\\ \_\__________________________________\________\_____+ 40 [GPU_CLKs]_____________________________________________________________________________|
| L1-on-re-use-only + 8 [GPU-CLKs]_| 2 KB L1_|__________|\\\\__________\_\__________________________________\________\____+ 8 [GPU_CLKs]_________________________________________________________conL1cache 2_KB|
| on-chip|smREG +22 [GPU-CLKs]_| |t[0_______^:~~~~~~~~~~~~~~~~\:________]
|CC- MAX |_|_|_|_|_|_|_|_|_|_|_| |t[1_______^ :________]
|2.x 63 |_|_|_|_|_|_|_|_|_|_|_| |t[2_______^ :________]
|1.x 128 |_|_|_|_|_|_|_|_|_|_|_| |t[3_______^ :________]
|3.5 255 REGISTERs|_|_|_|_|_|_|_|_| |t[4_______^ :________]
| per|_|_|_|_|_|_|_|_|_|_|_| |t[5_______^ :________]
| Thread_|_|_|_|_|_|_|_|_|_| |t[6_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| |t[7_______^ 1stHalf-WARP :________]______________
| |_|_|_|_|_|_|_|_|_|_|_| |t[ 8_______^:~~~~~~~~~~~~~~~~~:________]
| |_|_|_|_|_|_|_|_|_|_|_| |t[ 9_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| |t[ A_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| |t[ B_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| |t[ C_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| |t[ D_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| |t[ E_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| W0..|t[ F_______^____________WARP__:________]_____________
| |_|_|_|_|_|_|_|_|_|_|_| ..............
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[0_______^:~~~~~~~~~~~~~~~\:________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[1_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[2_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[3_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[4_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[5_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[6_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[7_______^ 1stHalf-WARP :________]______________
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[ 8_______^:~~~~~~~~~~~~~~~~:________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[ 9_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[ A_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[ B_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[ C_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[ D_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ............|t[ E_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| W1..............|t[ F_______^___________WARP__:________]_____________
| |_|_|_|_|_|_|_|_|_|_|_| ....................................................
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[0_______^:~~~~~~~~~~~~~~~\:________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[1_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[2_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[3_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[4_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[5_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[6_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[7_______^ 1stHalf-WARP :________]______________
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[ 8_______^:~~~~~~~~~~~~~~~~:________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[ 9_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[ A_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[ B_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[ C_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[ D_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_| ...................................................|t[ E_______^ :________]
| |_|_|_|_|_|_|_|_|_|_|_|tBlock Wn....................................................|t[ F_______^___________WARP__:________]_____________
|
| ________________ °°°°°°°°°°°°°°°°°°°°°°°°°°~~~~~~~~~~°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
| / \ CC-2.0|||||||||||||||||||||||||| ~masked ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| / \ 1.hW ^|^|^|^|^|^|^|^|^|^|^|^|^| <wait>-s ^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|
| / \ 2.hW |^|^|^|^|^|^|^|^|^|^|^|^|^ |^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^|^
|_______________/ \______I|I|I|I|I|I|I|I|I|I|I|I|I|~~~~~~~~~~I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|I|
|~~~~~~~~~~~~~~/ SM:0.warpScheduler /~~~~~~~I~I~I~I~I~I~I~I~I~I~I~I~I~~~~~~~~~~~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I~I
| \ | //
| \ RR-mode //
| \ GREEDY-mode //
| \________________//
| \______________/SM:0__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:1__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:2__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:3__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:4__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:5__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:6__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:7__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:8__________________________________________________________________________________
| | |t[ F_______^___________WARP__:________]_______
| ..|SM:9__________________________________________________________________________________
| ..|SM:A |t[ F_______^___________WARP__:________]_______
| ..|SM:B |t[ F_______^___________WARP__:________]_______
| ..|SM:C |t[ F_______^___________WARP__:________]_______
| ..|SM:D |t[ F_______^___________WARP__:________]_______
| |_______________________________________________________________________________________
The bottom line?
Any low-latency-motivated design has to reverse-engineer the "I/O hydraulics" (as 0/1-XFERs are by nature incompressible), and the resulting latencies rule the performance envelope of any GPGPU solution, be it computationally intensive (read: where processing costs forgive poor-latency XFERs a bit more) or not (read: where, perhaps to someone's surprise, CPUs are faster in end-to-end processing than GPU fabrics [citations available]).
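As a back-of-the-envelope sketch of that point, take the host-to-device figures quoted above (~11 µs transfer latency, ~5.5 GB/s bandwidth; both used here as assumptions, other cards and buses will differ) and see how long the fixed latency dominates the end-to-end transfer cost:

# End-to-end PCIe transfer time ~ fixed setup latency + payload / bandwidth
# (the ~11 us and ~5.5 GB/s figures are the ones quoted above, used as assumptions).
LATENCY_S = 11e-6        # ~11 us HostToDevice XFER latency
BANDWIDTH = 5.5e9        # ~5.5 GB/s XFER bandwidth

def xfer_time_s(payload_bytes):
    return LATENCY_S + payload_bytes / BANDWIDTH

for size in (4 * 1024, 64 * 1024, 1024**2, 64 * 1024**2):
    t = xfer_time_s(size)
    share = LATENCY_S / t * 100
    print(f"{size / 1024:10,.0f} KB -> {t * 1e6:10.1f} us  (fixed latency = {share:4.1f} % of it)")

For small payloads the fixed latency is almost the whole cost, which is exactly why the "I/O hydraulics" dominate latency-sensitive GPGPU designs.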

Look at this "staircase" plot, perfectly illustrating different access times (in terms of clock ticks). Notice the red CPU having an additional "step", probably because it has an L4 cache (while the others don't).
Taken from this Extremetech article.
In computer science this is called "I/O complexity".

Related

USRP - daughterboard installation issue

I'm trying to receive a signal with a center frequency of 2e8 Hz and a bandwidth slightly smaller than 2e5 Hz with my USRP X310.
I use the rx_sampletofile.cpp example from UHD 3.10.1 as shown:
./rx_sampletofile --file test.bin --duration 1 --rate 4e8 --nsamps 4e8 --freq 2e8 --type float --bw 8e7 --skip-lo
When I look at the terminal, everything is executed without errors or warnings, but I see that the actual RX frequency isn't changed after all ...
Setting RX Rate: 20.000000 Msps...
Actual RX Rate: 20.000000 Msps...
Setting RX Freq: 200.000000 MHz...
Setting RX LO Offset: 0.000000 MHz...
Actual RX Freq: 0.000000 MHz...
Setting RX Bandwidth: 0.250000 MHz...
Actual RX Bandwidth: 0.250000 MHz...
I tried changing RX Freq to a lower frequency and also setting --lo-offset, but it always stays at RX Freq = 0 Hz, so if you have any idea I'll take it.
Thanks
It turns out that my issue is actually related to the daughterboards, because the drivers don't detect them correctly.
Daughterboard Issue :
_____________________________________________________
| | /
| | | RX Dboard: A
| | | ID: Unknown (0x0095)
| | | Serial: 31F94F3
| | | _____________________________________________________
| | | /
| | | | RX Frontend: 0
| | | | Name: Unknown (0x0095) - 0
| | | | Antennas:
| | | | Sensors:
| | | | Freq range: 0.000 to 0.000 MHz
| | | | Gain Elements: None
| | | | Bandwidth range: 0.0 to 0.0 step 0.0 Hz
| | | | Connection Type: IQ
| | | | Uses LO offset: No
| | | _____________________________________________________
| | | /
| | | | RX Codec: A
| | | | Name: ads62p48
| | | | Gain range digital: 0.0 to 6.0 step 0.5 dB
| | _____________________________________________________
| | /
| | | RX Dboard: B
| | | _____________________________________________________
| | | /
| | | | RX Frontend: 0
| | | | Name: Unknown (0xffff) - 0
| | | | Antennas:
| | | | Sensors:
| | | | Freq range: 0.000 to 0.000 MHz
| | | | Gain Elements: None
| | | | Bandwidth range: 0.0 to 0.0 step 0.0 Hz
| | | | Connection Type: IQ
| | | | Uses LO offset: No
| | | _____________________________________________________
| | | /
| | | | RX Codec: B
| | | | Name: ads62p48
| | | | Gain range digital: 0.0 to 6.0 step 0.5 dB
| | _____________________________________________________
Your UHD is too old for your hardware revision of the TwinRX daughterboard.
The only solution is to use a more modern version of UHD. This will also require you to load a more modern version of the FPGA image.

TensorFlow: Multi-GPU training cannot make all GPUs run at the same time

I have a machine with 3x GTX 1080 GPUs. Below is the code for the training:
dynamic_learning_rate = tf.placeholder(tf.float32, shape=[])
model_version = tf.constant(1, tf.int32)

with tf.device('/cpu:0'):
    with tf.name_scope('Input'):
        # Input images and labels.
        batch_images,\
        batch_input_vectors,\
        batch_one_hot_labels,\
        batch_file_paths,\
        batch_labels = self.get_batch()

    grads = []
    pred = []
    cost = []

    # Define optimizer
    optimizer = tf.train.MomentumOptimizer(learning_rate=dynamic_learning_rate / self.batch_size,
                                           momentum=0.9,
                                           use_nesterov=True)

    split_input_image = tf.split(batch_images, self.num_gpus)
    split_input_vector = tf.split(batch_input_vectors, self.num_gpus)
    split_input_one_hot_label = tf.split(batch_one_hot_labels, self.num_gpus)

    for i in range(self.num_gpus):
        with tf.device(tf.DeviceSpec(device_type="GPU", device_index=i)):
            with tf.variable_scope(tf.get_variable_scope(), reuse=i > 0):
                with tf.name_scope('Model'):
                    # Construct model
                    with tf.variable_scope("inference"):
                        tower_pred = self.model(split_input_image[i], split_input_vector[i], is_training=True)
                        pred.append(tower_pred)

                with tf.name_scope('Loss'):
                    # Define loss and optimizer
                    softmax_cross_entropy_cost = tf.reduce_mean(
                        tf.nn.softmax_cross_entropy_with_logits(logits=tower_pred,
                                                                labels=split_input_one_hot_label[i]))
                    cost.append(softmax_cross_entropy_cost)

    # Concat variables
    pred = tf.concat(pred, 0)
    cost = tf.reduce_mean(cost)

    # L2 regularization
    trainable_vars = tf.trainable_variables()
    l2_regularization = tf.add_n(
        [tf.nn.l2_loss(v) for v in trainable_vars if any(x in v.name for x in ['weights', 'biases'])])
    for v in trainable_vars:
        if any(x in v.name for x in ['weights', 'biases']):
            print(v.name + ' - included for L2 regularization!')
        else:
            print(v.name)

    cost = cost + self.l2_regularization_strength * l2_regularization

    with tf.name_scope('Accuracy'):
        # Evaluate model
        correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(batch_one_hot_labels, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
        prediction = tf.nn.softmax(pred, name='softmax')

    # Creates a variable to hold the global_step.
    global_step = tf.Variable(0, trainable=False, name='global_step')

    # Minimization
    update = optimizer.minimize(cost, global_step=global_step, colocate_gradients_with_ops=True)
After I run the training:
Fri Nov 10 12:28:00 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90 Driver Version: 384.90 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:03:00.0 Off | N/A |
| 42% 65C P2 62W / 198W | 7993MiB / 8114MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 1080 Off | 00000000:04:00.0 Off | N/A |
| 33% 53C P2 150W / 198W | 7886MiB / 8114MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 1080 Off | 00000000:05:00.0 On | N/A |
| 26% 54C P2 170W / 198W | 7883MiB / 8108MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 23228 C python 7982MiB |
| 1 23228 C python 7875MiB |
| 2 4793 G /usr/lib/xorg/Xorg 40MiB |
| 2 23228 C python 7831MiB |
+-----------------------------------------------------------------------------+
Fri Nov 10 12:28:36 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90 Driver Version: 384.90 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:03:00.0 Off | N/A |
| 42% 59C P2 54W / 198W | 7993MiB / 8114MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 1080 Off | 00000000:04:00.0 Off | N/A |
| 33% 57C P2 154W / 198W | 7886MiB / 8114MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 1080 Off | 00000000:05:00.0 On | N/A |
| 27% 55C P2 155W / 198W | 7883MiB / 8108MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 23228 C python 7982MiB |
| 1 23228 C python 7875MiB |
| 2 4793 G /usr/lib/xorg/Xorg 40MiB |
| 2 23228 C python 7831MiB |
+-----------------------------------------------------------------------------+
You can see that whenever the first GPU is running, the other two GPUs are idle, and vice versa. They alternate about every 0.5 seconds.
For a single GPU, the training speed is around 650 [images/second]; with all 3 GPUs I get only 1050 [images/second].
Any idea of the problem?
You need to make sure that all the trainable variables are on the controller device (usually the CPU) and all the other worker devices (usually GPUs) are using the variables from the CPU in parallel.
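One common way to do that in TF1-style graph code (a sketch only, not the asker's code; the helper name assign_to_device and the op-type list are my own) is to wrap each tower in a device function that pins variable ops to the CPU and places everything else on that tower's GPU:

import tensorflow as tf

# Op types that should live on the parameter-server device (the CPU here).
PS_OPS = ('Variable', 'VariableV2', 'VarHandleOp')

def assign_to_device(worker_device, ps_device='/cpu:0'):
    # Device function: variable ops go to ps_device, all other ops to worker_device.
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        return ps_device if node_def.op in PS_OPS else worker_device
    return _assign

# Usage inside the tower loop, replacing the plain GPU tf.device(...) above:
# for i in range(self.num_gpus):
#     with tf.device(assign_to_device('/gpu:%d' % i, '/cpu:0')):
#         ... build the tower as before ...

That way every GPU reads the same CPU-resident variables instead of waiting on another GPU, which is one common cause of towers running one after another.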

Does Amazon GPU Instance Get Exclusive Access to the GPU?

I am running Ubuntu 16.04 on an EC2 p2.xlarge shared instance.
The p2.xlarge instance gives my instance access to a single GPU (half of an NVIDIA K80 card?).
How is this GPU shared with other Amazon EC2 instances on the same physical machine?
I was under the impression that 100% of the GPU was allocated to my instance, but this is clearly not the case. When my instance is running nothing on the GPU:
$ nvidia-smi
Tue Feb 21 00:11:16 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39 Driver Version: 375.39 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 0000:00:1E.0 Off | 0 |
| N/A 39C P0 55W / 149W | 0MiB / 11439MiB | 63% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
A moment later:
| 0 Tesla K80 Off | 0000:00:1E.0 Off | 0 |
| N/A 40C P0 57W / 149W | 0MiB / 11439MiB | 97% Default |
And the next moment:
| 0 Tesla K80 Off | 0000:00:1E.0 Off | 0 |
| N/A 40C P0 56W / 149W | 0MiB / 11439MiB | 100% Default |
And pretty much stays there...
| 0 Tesla K80 Off | 0000:00:1E.0 Off | 0 |
| N/A 41C P0 56W / 149W | 0MiB / 11439MiB | 100% Default |
| 0 Tesla K80 Off | 0000:00:1E.0 Off | 0 |
| N/A 41C P0 56W / 149W | 0MiB / 11439MiB | 99% Default |
What are the rules for GPU allocation for GPU instances?

How can a write after a barrier be visible before a write preceding the barrier?

In the memory barrier documentation of the Linux kernel (Documentation/memory-barriers.txt), there are examples showing that a write after a memory barrier can be visible to other CPUs before a write preceding the memory barrier. How can this happen? Why is the write barrier not sufficient to order these writes?
In particular the following:
843 CPU 1 CPU 2
844 ======================= =======================
845 { B = 7; X = 9; Y = 8; C = &Y }
846 STORE A = 1
847 STORE B = 2
848 <write barrier>
849 STORE C = &B LOAD X
850 STORE D = 4 LOAD C (gets &B)
851 LOAD *C (reads B)
852
853 Without intervention, CPU 2 may perceive the events on CPU 1 in some
854 effectively random order, despite the write barrier issued by CPU 1:
855
856 +-------+ : : : :
857 | | +------+ +-------+ | Sequence of update
858 | |------>| B=2 |----- --->| Y->8 | | of perception on
859 | | : +------+ \ +-------+ | CPU 2
860 | CPU 1 | : | A=1 | \ --->| C->&Y | V
861 | | +------+ | +-------+
862 | | wwwwwwwwwwwwwwww | : :
863 | | +------+ | : :
864 | | : | C=&B |--- | : : +-------+
865 | | : +------+ \ | +-------+ | |
866 | |------>| D=4 | ----------->| C->&B |------>| |
867 | | +------+ | +-------+ | |
868 +-------+ : : | : : | |
869 | : : | |
870 | : : | CPU 2 |
871 | +-------+ | |
872 Apparently incorrect ---> | | B->7 |------>| |
873 perception of B (!) | +-------+ | |
874 | : : | |
875 | +-------+ | |
876 The load of X holds ---> \ | X->9 |------>| |
877 up the maintenance \ +-------+ | |
878 of coherence of B ----->| B->2 | +-------+
879 +-------+
880 : :
881
882
883 In the above example, CPU 2 perceives that B is 7, despite the load of *C
884 (which would be B) coming after the LOAD of C.
The write barrier does order the writes correctly.
As the text that follows the diagram explains, the problem is that CPU 2 can effectively read *C before it reads C, because it does not use any kind of read (or data dependency) barrier.
A better article to read about memory barriers is http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.07.23a.pdf

Direct-mapped instruction cache vs. fully associative instruction cache using LRU replacement

For caches of small size, a direct-mapped instruction cache can sometimes outperform a fully associative instruction cache using LRU replacement.
Could anyone explain how this would be possible with an example access pattern?
This can happen for caches of small size. Let's compare caches of size 2.
In my example, the directly-mapped "DM" cache will use row A for odd addresses, and row B for even addresses.
The LRU cache will use the least recently used row to store values on a miss.
The access pattern I suggest is 13243142 (repeated as many times as one wants).
Here's a breakdown of how both caching algorithms will behave:
H - hits
M - misses
----- time ------>>>>>
Accessed: 1 | 3 | 2 | 4 | 3 | 1 | 4 | 2
\ \ \ \ \ \ \ \
LRU A ? | ? | 3 | 3 | 4 | 4 | 1 | 1 | 2 |
B ? | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
M M M M M M M M
DM A ? | 1 | 3 | 3 | 3 | 3 | 1 | 1 | 1 |
B ? | ? | ? | 2 | 4 | 4 | 4 | 4 | 2 |
M M M M H M H M
That gives 8 misses for the LRU, and 6 for directly-mapped. Let's see what happens if this pattern gets repeated forever:
----- time ------>>>>>
Accessed: 1 | 3 | 2 | 4 | 3 | 1 | 4 | 2
\ \ \ \ \ \ \ \
LRU A | 2 | 3 | 3 | 4 | 4 | 1 | 1 | 2 |
B | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
M M M M M M M M
DM A | 1 | 3 | 3 | 3 | 3 | 1 | 1 | 1 |
B | 2 | 2 | 2 | 4 | 4 | 4 | 4 | 2 |
H M H M H M H M
So the direct-mapped cache will hit 50% of the time, which outperforms the 0% hit rate of LRU.
It works this way because:
Any address repeated in this pattern has not been accessed during the previous 2 accesses (and both of those accesses were to different addresses), so the LRU cache will always miss.
The DM cache will sometimes miss, but the pattern is designed so that it reuses whatever was stored the last time the corresponding row was used.
Therefore one can build similar patterns for larger cache sizes, but the larger the cache, the longer such a pattern would need to be. This corresponds to the intuition that larger caches would be harder to exploit this way. A small simulation sketch of both caches follows below.
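If you want to verify the counts, here is a small simulation sketch (my own) of the two 2-entry caches on this access pattern, using the odd/even row mapping from the example:

from collections import OrderedDict

pattern = [1, 3, 2, 4, 3, 1, 4, 2] * 4      # the pattern above, repeated a few times

# Direct-mapped, 2 rows: odd addresses map to one row, even addresses to the other.
dm = {0: None, 1: None}
dm_hits = 0
for addr in pattern:
    row = addr % 2
    if dm[row] == addr:
        dm_hits += 1
    else:
        dm[row] = addr                       # miss: the row is overwritten

# Fully associative, 2 entries, LRU replacement.
lru = OrderedDict()
lru_hits = 0
for addr in pattern:
    if addr in lru:
        lru_hits += 1
        lru.move_to_end(addr)                # mark as most recently used
    else:
        if len(lru) == 2:
            lru.popitem(last=False)          # evict the least recently used entry
        lru[addr] = True

print(f"DM  hits: {dm_hits}/{len(pattern)}")   # -> 14/32, i.e. ~50 % in steady state
print(f"LRU hits: {lru_hits}/{len(pattern)}")  # -> 0/32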
