I'm trying to learn this basic thing about processors that should be taught in every CS department of every university. Yet I can't find it on the net (Google doesn't help) and I can't find it in my class materials either.
Do you know any good resource on how addressing modes work on a physical level? I'm particularly interested in Intel processors.
You might want to take a look at the book "Modern Operating Systems" by Tanenbaum.
If you are interested in the x86 architecture, the Intel manuals might help (but they go really deep):
http://www.intel.com/products/processor/manuals/
Start on the Wikipedia Virtual Memory page for a bit of background, then follow up with specific pages such as the one on the MMU to satisfy your curiosity.
You will normally go over all of the above concepts (and more, such as pipelined and superscalar architectures, caches, etc.) in detail in any decent Computer Architecture course, typically taught by the Faculty of (Electrical or Computer) Engineering.
This page might help. I did a search for HC12 addressing modes since that's what we learnt with, and it is MUCH better to learn on a simple processor rather than jumping into the deep end with something like an Intel processor. The basic concepts should be similar for any processor though.
http://spx.arizona.edu/ECE372/Supporting%20Documents/lecture/HCS12%20Addressing%20Modes%20and%20Subroutines.pdf
I wouldn't imagine you'd need to know any of the more complicated ones in an introductory course. We only really used the basic ones, then had to explain a few of the others in our exam.
You should be able to see what's going on at a physical level from that, provided you understand the assembly code examples. The inherent-addressing instruction inca, for example, is going to use a set of logic gates within the processor (http://en.wikipedia.org/wiki/Adder_%28electronics%29) in order to increment register A by one. That's all well and good, but trying to understand the physical layer of anything more complicated than that is just going to give you headaches. You really don't need to know it, which is the whole point of using a microprocessor in the first place.
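If it helps to make the adder idea concrete, here is a purely illustrative C++ sketch (nothing like how real hardware is described, and the function name is made up) that increments an 8-bit value by chaining half-adders built from XOR and AND gates:

```cpp
#include <cstdint>
#include <cstdio>

// Increment an 8-bit "register A" using only gate-level operations:
// each bit position is a half-adder (sum = bit XOR carry, carry-out = bit AND carry).
uint8_t increment_via_gates(uint8_t a) {
    uint8_t result = 0;
    uint8_t carry = 1;                       // adding 1 means the initial carry-in is 1
    for (int bit = 0; bit < 8; ++bit) {
        uint8_t abit = (a >> bit) & 1;
        uint8_t sum  = abit ^ carry;         // XOR gate
        carry        = abit & carry;         // AND gate produces the carry-out
        result |= static_cast<uint8_t>(sum << bit);
    }
    return result;                           // wraps to 0 on overflow, just like inca
}

int main() {
    std::printf("%u\n", increment_via_gates(0x41)); // prints 66
}
```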
Related
This is actually a two-part question:
For people who want to squeeze out every clock cycle, there is talk of pipelines, cache locality, etc.
I have seen these low-level performance techniques mentioned here and there, but I have not seen a good introduction to the subject from start to finish. Any resource recommendations? (Google gave me definitions and papers, whereas I'd really appreciate worked examples, tutorials, and real-life hands-on material.)
How does one actually measure this kind of thing? As in, with a profiler of some sort? I know we can always change the code, see the improvement, and theorize in retrospect; I am just wondering if there are established tools for the job.
(I know algorithm optimization is where the orders of magnitude are. I am interested in the metal here.)
The chorus of replies is, "Don't optimize prematurely." As you mention, you will get a lot more performance out of a better design than a better loop, and your maintainers will appreciate it, as well.
That said, to answer your question:
Learn assembly. Lots and lots of assembly. Don't MUL by a power of two when you can shift. Learn the weird uses of xor to swap and clear registers (there's a small sketch below the links). For specific references,
http://www.mark.masmcode.com/ and http://www.agner.org/optimize/
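As a quick C++ illustration of the shift and xor tricks (illustrative only; modern compilers usually apply these strength reductions for you, so doing it by hand matters mostly when you are writing the assembly yourself):

```cpp
#include <cstdint>

uint32_t times_eight(uint32_t x) {
    return x << 3;          // same result as x * 8; historically cheaper than MUL
}

void xor_swap(uint32_t &a, uint32_t &b) {
    a ^= b;                 // swap without a temporary --
    b ^= a;                 // only safe when a and b are distinct objects
    a ^= b;
}

uint32_t cleared() {
    uint32_t r = 0xDEADBEEF;
    r ^= r;                 // the classic "xor reg, reg" way to zero a register
    return r;               // always 0
}
```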
Yes, you need to time your code. On *nix, it can be as easy as time { commands ; } but you'll probably want to use a full-featured profiler. GNU gprof is open source: http://www.cs.utah.edu/dept/old/texinfo/as/gprof.html
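If you want to time a region from inside the program itself rather than the whole process, a minimal sketch using std::chrono (assuming C++11 or later) looks like this:

```cpp
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();

    volatile long sum = 0;                        // volatile so the work isn't optimized away
    for (long i = 0; i < 10000000; ++i) sum += i;

    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
        clock::now() - start);
    std::printf("took %lld us (sum=%ld)\n",
                static_cast<long long>(us.count()), (long)sum);
}
```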
If this really is your thing, go for it, have fun, and remember, lots and lots of bit-level math. And your maintainers will hate you ;)
EDIT/REWRITE:
If it is books you need, Michael Abrash did a good job in this area: Zen of Assembly Language, a number of magazine articles, the big black book of graphics programming, etc. Much of what he was tuning for is no longer a problem; the problems have changed. What you will get out of this is an idea of the kinds of things that can cause bottlenecks and the kinds of ways to solve them. Most important is to time everything, and to understand how your timing measurements work so that you are not fooling yourself by measuring incorrectly. Time the different solutions and try crazy, weird ones; you may find an optimization you were not aware of and didn't realize was there until you exposed it.
I have only just started reading it, but See MIPS Run (early/first edition) looks good so far (note that ARM has since overtaken MIPS as the leader in the processor market, so the MIPS and RISC hype is a bit dated). There are a number of textbooks, old and new, to be had about MIPS. MIPS was designed for performance (at the cost of the software engineer, in some ways).
The bottlenecks today fall into the categories of the processor itself and the I/O around it and what is connected to that I/O. The insides of the processor chips themselves (for higher-end systems) run much faster than the I/O can handle, so you can only tune so far before you have to go off chip and wait forever. Getting from the train to your destination half a minute faster, when the train ride itself was 3 hours, is not necessarily a worthwhile optimization.
It is all about learning the hardware; you can probably stay within the ones-and-zeros world and not have to get into the actual electronics. But without really knowing the interfaces and internals you really cannot do much performance tuning. You might re-arrange or change a few instructions and get a little boost, but to make something several hundred times faster you need more than that. Learning a lot of different instruction sets (assembly languages) helps you get into the processors. I would recommend simulating HDL designs, for example the processors at OpenCores, to get a feel for how some folks do their designs and to get a solid handle on how to really squeeze clocks out of a task. Processor knowledge is big; memory interfaces are a huge deal and need to be learned, as do media (flash, hard disks, etc.), displays and graphics, networking, and all the types of interfaces between all of those things. Understanding at the clock level, or as close to it as you can get, is what it takes.
Intel and AMD provide optimization manuals for x86 and x86-64.
http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html/
http://developer.amd.com/documentation/guides/pages/default.aspx
Another excellent resource is Agner Fog's site.
http://www.agner.org/optimize/
Some of the key points (in no particular order):
Alignment; memory, loop/function labels/addresses
Cache; non-temporal hints, page and cache misses
Branches; branch prediction and avoiding branching with compare&move op-codes
Vectorization; using SSE and AVX instructions
Op-codes; avoiding slow running op-codes, taking advantage of op-code fusion
Throughput / pipeline; re-ordering or interleaving op-codes that perform separate tasks, avoiding partial stalls and saturating the processor's ALUs and FPUs
Loop unrolling; performing multiple iterations for a single "loop comparison, branch"
Synchronization; using atomic op-codes (or the LOCK prefix) to avoid high-level synchronization constructs (see the sketch just after this list)
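As a minimal sketch of that last point, assuming C++11 std::atomic (whose fetch_add typically compiles down to a LOCK-prefixed instruction on x86) compared with a mutex-protected counter:

```cpp
#include <atomic>
#include <mutex>

std::atomic<long> atomic_counter{0};
long plain_counter = 0;
std::mutex counter_mutex;

void bump_atomic() {
    // On x86 this typically compiles to a single LOCK-prefixed add/xadd.
    atomic_counter.fetch_add(1, std::memory_order_relaxed);
}

void bump_with_mutex() {
    // Also correct, but goes through a much heavier synchronization path.
    std::lock_guard<std::mutex> lock(counter_mutex);
    ++plain_counter;
}
```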
Yes, measure, and yes, know all those techniques.
Experienced people will tell you "don't optimize prematurely", which I translate as simply "don't guess".
They will also say "use a profiler to find the bottleneck", but I have a problem with that. I hear lots of stories of people using profilers and either liking them a lot or being confused by their output.
SO is full of them.
What I don't hear a lot of is success stories, with speedup factors achieved.
The method I use is very simple, and I've tried to give lots of examples, including this case.
I'd suggest Optimizing Subroutines in Assembly Language: An Optimization Guide for x86 Platforms.
It's quite heavy stuff though ;)
I am trying to hack an old Unix kernel. I just want to implement the MMU and TLB in software. Can someone tell me what are the best data structures and algorithms to use in building one? I saw lots of people using splay trees because it's easy to implement LRU with them. Is there a better data structure? What is the most efficient way of translating virtual to physical addresses in software? Assume it's the x86 architecture and the translation is any basic page table translation.
You mention efficiency. Is that the goal you're engineering towards? If you're not constrained to any particular goal, just try to get it working. I'd do a single level page table if you can, either direct or fully associative. It sounds like you're past this though.
Most efficient is going to depend on size-speed tradeoffs and what kind of locality you expect. Do you have any critical apps profiled or is this just messing around to try out some implementations? Inverted page tables are used on some newer architectures. I would take that as an indication that someone spending a lot of time working on this thinks it's a good way to go.
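To make the single-level suggestion above concrete, here is a rough C++ sketch of a flat page table with a small direct-mapped software TLB in front of it (the page size, table size, and layout are placeholder choices for illustration, not taken from any real kernel):

```cpp
#include <cstdint>

constexpr uint32_t PAGE_SHIFT = 12;                 // 4 KiB pages (placeholder)
constexpr uint32_t PAGE_MASK  = (1u << PAGE_SHIFT) - 1;
constexpr uint32_t NUM_PAGES  = 1u << 20;           // flat table covering a 32-bit space
constexpr uint32_t TLB_SLOTS  = 64;                 // direct-mapped software TLB

struct TlbEntry { uint32_t vpn; uint32_t pfn; bool valid; };

static uint32_t page_table[NUM_PAGES];              // vpn -> pfn, 0 means "not mapped"
static TlbEntry tlb[TLB_SLOTS];

// Translate a virtual address; returns false to signal a page fault.
bool translate(uint32_t vaddr, uint32_t &paddr) {
    const uint32_t vpn = vaddr >> PAGE_SHIFT;

    TlbEntry &slot = tlb[vpn % TLB_SLOTS];
    if (slot.valid && slot.vpn == vpn) {            // TLB hit: no table walk needed
        paddr = (slot.pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
        return true;
    }

    const uint32_t pfn = page_table[vpn];           // TLB miss: walk the (flat) table
    if (pfn == 0)
        return false;                               // unmapped -> page fault

    slot = {vpn, pfn, true};                        // refill the direct-mapped slot
    paddr = (pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
    return true;
}
```

Note that a flat table like this wastes memory on sparse address spaces, which is why real x86 paging is multi-level and why structures like inverted page tables or the splay trees you mention come up for the software side.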
I wondered if any of you have knowledge of the internal workings of Windows (kernel, interrupts, etc.) and if you've found that you've become a better developer as a result?
Do you find that the more knowledge the better is a good motto to have as a developer?
I find myself studying a lot of things, thinking that with more understanding I'll be a better developer. Of course practice and experience also come into play.
This is a no-brainer - absolutely (assuming you're a developer primarily on the Windows platform, of course). A working knowledge of how the engine works will make a lot of common programming tasks (debugging, performance work, etc.) a lot easier.
Windows Internals is the standard reference.
I believe it is valuable to understand how things work underneath: CLR/.NET to C++, native to ASM, ASM to CPU architecture, building registers and ops from logic gates, logic gates from MOSFETs, transistors from quantum physics, and the latter from the respective mathematical apparatus (group theory, etc.).
Understanding the low level makes you not only think differently but also feel differently - like you are in control of things, standing on the shoulders of giants.
More knowledge is always better, and having knowledge at many levels is a lot more valuable than just knowing whatever layer of abstraction you are working at.
A good rule of thumb is that you should have a good knowledge of the layer below the layer where you are working. So, for example, if you write a lot of .NET code, you should know how the CLR works. If you write a lot of web apps, you should understand HTTP. If you are writing code that uses HTTP directly, then you should understand TCP/IP. If you are implementing a TCP/IP stack, then you need to understand how Ethernet works.
Knowledge of Windows internals is really helpful if you are writing native Win32 code, or if OS performance issues are critical to what you are doing. At higher levels of abstraction, it may be less helpful, but it never hurts.
I don't think that one requires special or secret knowledge of internals, such as that which may be extended to members of the Windows team or those with source access, but I absolutely contend that understanding internals helps you become a better developer.
Take threading, for instance: if you are going to build an application that uses threading in even a moderate way, understanding how Windows works, how threading works, and how memory and processes work are all key to being able to do a good job with that code.
I agree to a point with your edict, but I would not agree that experience/practice/knowledge are mutually exclusive. The net-net of experience is that you have knowledge gained from that experience. There is also a wisdom component to experience and practice, but those are usually intangible situational elements that you apply in the future to avoid mistakes. Bottom line: knowledge is a precipitate of experience.
Think of it this way: of the people you know with 30+ years of experience in IT, take the top two. Now go into that memory bank and think of the people you know in the industry who are super smart, who know so much about so many things, and pick the top two of those. You now have your final four - if you had to pick one to start a project with, who would it be? Invariably we pick the super smart guy.
Yes, understanding Windows internals helped me to become a better programmer. It also taught me a lot of bad practices, bad ideas, and poor design concepts.
I highly suggest studying OS X or Linux internals as an alternative. It'll take less time, make more sense, and be much more productive.
Read code. Read lots of code. Read lots of good code. jQuery, Django, AIR framework source, Linux kernel, compilers.
Try to learn programming languages that introduce you to new approaches, like Lisp, Ruby, Python, or JavaScript. OOP is good, but .NET and Java seem to take the brainwash approach to it and elevate it to some kind of religious level, instead of it just being a good tool in your toolbox.
If you don't understand the code you are reading, it likely means you are on the right track, and learning new techniques.
I'd suggest getting a Mac simply because you'll find yourself wanting to make your UIs simpler and easier. It's really important to have a good environment if you want to become a great programmer. Surround yourself with engineers better than yourself (if you can), work with frameworks and languages that take the 'engineer' approach vs. the 'experimenter' approach, and... use an operating system that contains code better than yours.
I'd also recommend the book "Coders at Work".
It depends. Many programmers who understand the internals of a system begin writing optimised code to exploit that knowledge. This has three very serious side-effects:
1.) It's harder for others without that knowledge to extend or support the code.
2.) System internals may change without notice, whereas interfaces are usually versioned and changes discussed publicly.
3.) Interfaces are generally consistent across platform revisions and hardware; internals do not have this consistency.
In short, there's a lot of broken, unsupportable code out there that's borked because it relies on an internal process that the vendor changed without notice.
The father of the C language said that you don't need to learn all the features of a language to write great code: the better you understand the problem, the better you write the code. Having knowledge is always better.
In my quest for getting some basics down before I start going into programming I am looking for essential knowledge about how the computer works down at the core level.
I have a theory that actually understanding what, for instance, a stack overflow is (let alone a stack), instead of relying on my sporadic knowledge about computer systems, will help me in the longer term.
Are there any books or sites that take you through how processors are structured, give a holistic overview, and somehow relate that to the digital logic that's good to know about?
Am I making sense?
Yes, you should read some chapters of
John L. Hennessy & David A. Patterson, "Computer Architecture: A Quantitative Approach"
It covers microprocessor history and theory (starting with RISC architectures - MIPS), pipelining, memory, storage, etc.
David Patterson is a Professor of Computer Science in the EECS Department at UC Berkeley: http://www.eecs.berkeley.edu/~pattrsn/
Hope it helps.
Tanenbaum's Structured Computer Organization is a good book about how computers work. You might find it hard to get through the book, but that's mostly due to the subject, not the author.
However, I'm not sure I would recommend taking this approach. Understanding how the computer works can certainly be useful, but if you don't really have any programming knowledge, you can't really put your knowledge to good use - and you probably don't need that knowledge yet anyway. You would be better off learning about topics like object-oriented programming and data structures to learn about program design, because unless you're looking at doing embedded programming on very limited systems, you'll find those skills far more useful than knowledge of a computer's inner workings.
In my opinion, 20 years ago it was possible to understand the whole spectrum from BASIC all the way through operating system, hardware, down to the transistor or even quantum level. I don't know that it's possible for one person to understand that whole spectrum with today's technology. (Years ago, everyone serviced their own car. Today it's too hard.)
Some of the "layers" that you might be interested in:
http://en.wikipedia.org/wiki/Boolean_logic (this will be helpful for programming)
http://en.wikipedia.org/wiki/Flip-flop_%28electronics%29
http://en.wikipedia.org/wiki/Finite-state_machine
http://en.wikipedia.org/wiki/Static_random_access_memory
http://en.wikipedia.org/wiki/Bus_%28computing%29
http://en.wikipedia.org/wiki/Microprocessor
http://en.wikipedia.org/wiki/Computer_architecture
It's pretty simple really - the CPU loads instructions and executes them. Most of those instructions revolve around loading values into registers or memory locations and then manipulating those values. Certain memory ranges are set aside for communicating with the peripherals that are attached to the machine, such as the screen or hard drive.
Back in the days of the Apple ][ and Commodore 64 you could put a value directly into a memory location and that would directly change a pixel on the screen - those days are long gone; it is abstracted away from you (the programmer) by several layers of code, such as drivers and the operating system.
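For what it's worth, that old "poke a memory location" style of memory-mapped I/O looks roughly like this in C++ (the frame buffer address here is made up for illustration; on a modern OS you would only ever write through a region a driver has mapped for you):

```cpp
#include <cstdint>

// Hypothetical memory-mapped frame buffer base address -- illustrative only.
constexpr std::uintptr_t FRAMEBUFFER_BASE = 0x000A0000;

void plot_pixel(unsigned offset, std::uint8_t color) {
    // 'volatile' tells the compiler the store is a visible side effect
    // (video memory / a device register) and must not be optimized away.
    volatile std::uint8_t *fb =
        reinterpret_cast<volatile std::uint8_t *>(FRAMEBUFFER_BASE);
    fb[offset] = color;
}
```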
You can learn about this sort of stuff, or assembly language (which I am a huge fan of), or AND/NAND gates at the hardware level, but knowing this sort of stuff is not going to help you code up a web application in ASP.NET MVC, or write a quick and dirty Python or PowerShell script.
There are lots of resources out there sprinkled around the net that will give you insight into how the CPU and the rest of the hardware works, but if you want to get down and dirty I honestly think you should buy one of those older machines off eBay or somewhere and learn its particular flavour of assembly language (I understand there are also a lot of programmable PIC controllers out there that might be good to learn on). Picking up an older machine is going to eliminate the software abstractions and make things way easier to learn. You learn way better when you get instant gratification, like making sprites move around a screen or generating sounds by directly toggling the speaker (or using a PIC controller to control a small robot). With those older machines, the schematics for an Apple ][ motherboard fit onto a roughly A2-size sheet of paper that was folded into the back of one of the Apple manuals - I would hate to imagine what they look like these days.
While I agree with the previous answers insofar as it is incredibly difficult to understand the entire process, we can at least break it down into categories, from lowest (closest to electrons) to highest (closest to what you actually see).
Lowest
Solid State Device Physics (How transistors work physically)
Circuit Theory (How transistors are combined to create logic gates)
Digital Logic (How logic gates are put together to create digital functions or digital structures i.e. multiplexers, full adders, etc.)
Hardware Organization (How the data path is laid out in the CPU, the components of a von Neumann machine -> memory, processor, Arithmetic Logic Unit, fetch/decode/execute)
Microinstructions (Bit level programming)
Assembly (Programming with mnemonics, but directly specifying registers; it takes forever to program even simple things)
Interpreted/Compiled Languages (Programming languages that get compiled or interpreted to assembly; the operating system may be in one of these)
Operating System (Process scheduling, hardware interfaces, abstracts lower levels)
Higher level languages (these kind of appear twice; it depends on the language. Java is done at a very high level, but C goes straight to assembly, and the C compiler is probably written in C)
User Interfaces/Applications/Gui (Last step, making it look pretty)
You can find out a lot about each of these. I'm only somewhat expert in the digital logic side of things. If you want a thorough tutorial on digital logic from the ground up, go to the electrical engineering menu of my website:
affablyevil.wordpress.com
I'm teaching the class, and adding online lessons as I go.
I'm working on a project where we need more performance. Over time we've continued to evolve the design to work more in parallel (both threaded and distributed). The latest step has been to move part of it onto a new machine with 16 cores. I'm finding that we need to rethink how we do things to scale to that many cores in a shared-memory model. For example, the standard memory allocator isn't good enough.
What resources would people recommend?
So far I've found Sutter's column in Dr. Dobb's to be a good start.
I just got The Art of Multiprocessor Programming and The O'Reilly book on Intel Threading Building Blocks
A couple of other books that are going to be helpful are:
Synchronization Algorithms and Concurrent Programming
Patterns for Parallel Programming
Communicating Sequential Processes by C. A. R. Hoare (a classic, free PDF at that link)
Also, consider relying less on sharing state between concurrent processes. You'll scale much, much better if you can avoid it because you'll be able to parcel out independent units of work without having to do as much synchronization between them.
Even if you need to share some state, see if you can partition the shared state from the actual processing. That will let you do as much of the processing in parallel, independently from the integration of the completed units of work back into the shared state. Obviously this doesn't work if you have dependencies among units of work, but it's worth investigating instead of just assuming that the state is always going to be shared.
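Here's a minimal C++ sketch of that idea (the parallel_sum helper is made up, just to illustrate the shape, and it assumes workers > 0): each worker accumulates into private state, and the shared result is only assembled once, after all the workers have joined.

```cpp
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Each worker sums its own slice into a private slot; the shared result is
// only put together after every worker has joined.
long parallel_sum(const std::vector<int> &data, unsigned workers) {
    std::vector<long> partial(workers, 0);
    std::vector<std::thread> threads;
    const std::size_t chunk = data.size() / workers;

    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = (w + 1 == workers) ? data.size() : begin + chunk;
        threads.emplace_back([&data, &partial, w, begin, end] {
            long local = 0;                      // no shared state inside the hot loop
            for (std::size_t i = begin; i < end; ++i)
                local += data[i];
            partial[w] = local;                  // one write per worker, to its own slot
        });
    }
    for (auto &t : threads)
        t.join();
    return std::accumulate(partial.begin(), partial.end(), 0L);
}
```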
You might want to check out Google's Performance Tools. They've released the version of malloc they use for multi-threaded applications. It also includes a nice set of profiling tools.
Jeffrey Richter is into threading a lot. He has a few chapters on threading in his books and check out his blog:
http://www.wintellect.com/cs/blogs/jeffreyr/default.aspx.
As Monty Python would say, "and now for something completely different" - you could try a language/environment that doesn't use threads, but processes and messaging (no shared state). One of the most mature ones is Erlang (and this excellent and fun book: http://www.pragprog.com/titles/jaerlang/programming-erlang). It may not be exactly relevant to your circumstances, but you can still learn a lot of ideas that you may be able to apply in other tools.
For other environments:
.NET has F# (to learn functional programming).
The JVM has Scala (which has actors, very much like Erlang, and is a functional hybrid language). There is also the "fork join" framework from Doug Lea for Java, which does a lot of the hard work for you.
The allocator in FreeBSD recently got an update for FreeBSD 7. The new one is called jemalloc and is apparently much more scalable with respect to multiple threads.
You didn't mention which platform you are using, so perhaps this allocator is available to you. (I believe Firefox 3 uses jemalloc, even on Windows, so ports must exist somewhere.)
Take a look at Hoard if you are doing a lot of memory allocation.
Roll your own lock-free list. A good resource is here - it's in C#, but the ideas are portable. Once you get used to how they work you start seeing other places where they can be used, and not just in lists.
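Not the C# resource from the link, just a minimal C++ illustration of the compare-and-swap loop those lists are built on (push only; a production structure also has to handle pop, the ABA problem, and memory reclamation):

```cpp
#include <atomic>

// Push-only Treiber stack: the core CAS retry loop behind most lock-free lists.
template <typename T>
class LockFreeStack {
    struct Node { T value; Node *next; };
    std::atomic<Node *> head{nullptr};

public:
    void push(const T &value) {
        Node *node = new Node{value, head.load(std::memory_order_relaxed)};
        // Retry until head is swung from node->next to node in one atomic step.
        // On failure, compare_exchange_weak reloads the current head into node->next.
        while (!head.compare_exchange_weak(node->next, node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
        }
    }
};
```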
I will have to check out Hoard, Google Perftools and jemalloc sometime. For now we are using scalable_malloc from Intel Threading Building Blocks and it performs well enough.
For better or worse, we're using C++ on Windows, though much of our code will compile with gcc just fine. Unless there's a compelling reason to move to Red Hat (the main Linux distro we use), I doubt it's worth the headache/political trouble to move.
I would love to use Erlang, but there's way too much here to redo now. If we think about the requirements around the development of Erlang in a telco setting, they are very similar to our world (electronic trading). Armstrong's book is on my to-read stack :)
In my testing to scale out from 4 cores to 16 cores I've learned to appreciate the cost of any locking/contention in the parallel portion of the code. Luckily we have a large portion that scales with the data, but even that didn't work at first because of an extra lock and the memory allocator.
I maintain a concurrency link blog that may be of ongoing interest:
http://concurrency.tumblr.com