Can printf be hardware accelerated? - fpga

Printf is a very slow function to execute; there are variations of it that are less flexible but faster. Including this function in a program can also take quite a bit of memory. Has anyone tried to find a way to "hardware accelerate" printf so the processor can spend less time on it?

Related

Why avoid subshells?

I've seen a lot of answers and comments on Stack Overflow that mention doing something to avoid a subshell. In some cases, a functional reason for this is given (most often, the potential need to read a variable outside the subshell that was assigned inside it), but in other cases, the avoidance seems to be viewed as an end in itself. For example:
union of two columns of a tsv file, suggesting { ... ; } | ... rather than ( ... ) | ..., so there's a subshell either way.
unhide hidden files in unix with sed and mv commands
Linux bash script to copy files, explicitly stating "the goal is just to avoid a subshell"
Why is this? Is it for style/elegance/beauty? For performance (avoiding a fork)? For preventing likely bugs? Something else?
There are a few things going on.
First, forking a subshell might be unnoticeable when it happens only once, but if you do it in a loop, it adds up to a measurable performance impact. The performance impact is also greater on platforms such as Windows, where forking is not as cheap as it is on modern Unix-likes.
Second, forking a subshell means you have more than one context, and information is lost in switching between them -- if you change your code to set a variable in a subshell, that variable is lost when the subshell exits. Thus, the more your code has subshells in it, the more careful you have to be when modifying it later to be sure that any state changes you make will actually persist.
See BashFAQ #24 for some examples of surprising behavior caused by subshells.
Sometimes examples are helpful.
f='fred';y=0;time for ((i=0;i<1000;i++));do if [[ -n "$( grep 're' <<< $f )" ]];then ((y++));fi;done;echo $y
real 0m3.878s
user 0m0.794s
sys 0m2.346s
1000
f='fred';y=0;time for ((i=0;i<1000;i++));do if [[ -z "${f/*re*/}" ]];then ((y++));fi;done;echo $y
real 0m0.041s
user 0m0.027s
sys 0m0.001s
1000
f='fred';y=0;time for ((i=0;i<1000;i++));do if grep -q 're' <<< $f ;then ((y++));fi;done >/dev/null;echo $y
real 0m2.709s
user 0m0.661s
sys 0m1.731s
1000
As you can see, in this case, the difference between using grep in a subshell and parameter expansion to do the same basic test is close to 100x in overall time.
Following the question further, and taking into account the comments below (which don't quite show what they set out to show), I checked the following code:
https://unix.stackexchange.com/questions/284268/what-is-the-overhead-of-using-subshells
time for((i=0;i<10000;i++)); do echo "$(echo hello)"; done >/dev/null
real 0m12.375s
user 0m1.048s
sys 0m2.822s
time for((i=0;i<10000;i++)); do echo hello; done >/dev/null
real 0m0.174s
user 0m0.165s
sys 0m0.004s
This is actually far far worse than I expected. Almost two orders of magnitude slower in fact in overall time, and almost THREE orders of magnitude slower in sys call time, which is absolutely incredible.
https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html
Note that the point of demonstrating this is to show that a testing method which is quite easy to fall into the habit of using (a subshelled grep, sed, or gawk, or even a bash builtin like echo), and which is a bad habit I tend to fall into when hacking fast, carries a significant performance hit, and it's probably worth the time to avoid it when bash builtins can handle the job natively.
By carefully reviewing a large program's use of subshells, and replacing them with other methods when possible, I was able to cut about 10% of the overall execution time in a just-completed set of optimizations (not the first, and not the last, time I have done this; it has already been optimized several times, so gaining another 10% is actually quite significant).
So it's worth being aware of.
Because I was curious, I wanted to confirm what 'time' is telling us here:
https://en.wikipedia.org/wiki/Time_(Unix)
The total CPU time is the combination of the amount of time the CPU or CPUs spent performing some action for a program and the amount of time they spent performing system calls for the kernel on the program's behalf. When a program loops through an array, it is accumulating user CPU time. Conversely, when a program executes a system call such as exec or fork, it is accumulating system CPU time.
As you can see, particularly in the echo loop test, the cost of the forks is very high in terms of system calls to the kernel; those forks really add up (700x!!! more time spent on sys calls).
I'm in an ongoing process of resolving some of these issues, so these questions are actually quite relevant to me and to the global community of users who like the program in question. That is, this is not an arcane academic point for me; it's real-world, with real impacts.
Well, here's my interpretation of why this is important: it's answer #2!
No performance gain is too small, even when it's only about avoiding one subshell… Call me Mr Obvious, but the concept behind that thinking is the same one that's behind avoiding useless use of <insert tool here>, like cat|grep, sort|uniq, or even cat|sort|uniq, etc.
That concept is the Unix philosophy, which ESR summed up well by a reference to KISS: Keep It Simple, Stupid!
What I mean is that when you write a script, you never know how it may end up being used, so every little byte or cycle you can spare matters: if your script ends up chewing through billions of lines of input, it will be better off by that many forks/bytes/…
I think the general idea is it makes sense to avoid creating an extra shell process unless otherwise required.
However, there are too many situations where either can be used, and where one makes more sense than the other, to say that one way is better overall. It seems to me to be purely situational.

Where to learn about low-level, hard-core performance stuffs?

This is actually a 2 part question:
For people who want to squeeze every clock cycle, there's talk of pipelines, cache locality, etc.
I have seen these low-level performance techniques mentioned here and there, but I have not seen a good introduction to the subject from start to finish. Any resource recommendations? (Google gave me definitions and papers, whereas I'd really appreciate worked examples/tutorials and real-life, hands-on material.)
How does one actually measure this kind of thing? Like, with a profiler of some sort? I know we can always change the code, see the improvement, and theorize in retrospect; I am just wondering if there are established tools for the job.
(I know algorithm optimization is where the orders of magnitude are. I am interested in the metal here.)
The chorus of replies is, "Don't optimize prematurely." As you mention, you will get a lot more performance out of a better design than a better loop, and your maintainers will appreciate it, as well.
That said, to answer your question:
Learn assembly. Lots and lots of assembly. Don't MUL by a power of two when you can shift. Learn the weird uses of xor to swap and clear registers. For specific references,
http://www.mark.masmcode.com/ and http://www.agner.org/optimize/
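As a tiny, hedged C++ illustration of the shift and xor tricks (a sketch only; any modern compiler already performs these strength reductions at -O2, so the value is mainly in recognizing them when reading disassembly):

#include <cstdint>
#include <iostream>

// Multiplying by a power of two is the same as a left shift; compilers
// emit identical code for both, so this is about reading asm, not hand-tuning.
uint32_t mul8(uint32_t x)   { return x * 8u; }
uint32_t shift3(uint32_t x) { return x << 3; }

int main() {
    uint32_t v = 42;
    std::cout << mul8(v) << ' ' << shift3(v) << '\n';  // prints "336 336"
    v ^= v;  // the assembly idiom "xor reg, reg" zeroes a register; here it just zeroes v
    std::cout << v << '\n';                            // prints "0"
    return 0;
}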
Yes, you need to time your code. On *nix, it can be as easy as time { commands ; } but you'll probably want to use a full-featured profiler. GNU gprof is open source: http://www.cs.utah.edu/dept/old/texinfo/as/gprof.html
If this really is your thing, go for it, have fun, and remember, lots and lots of bit-level math. And your maintainers will hate you ;)
EDIT/REWRITE:
If it is books you need, Michael Abrash did a good job in this area: Zen of Assembly Language, a number of magazine articles, the Graphics Programming Black Book, etc. Much of what he was tuning for is no longer a problem; the problems have changed. What you will get out of this is an idea of the kinds of things that can cause bottlenecks and the kinds of ways to solve them. Most important is to time everything, and to understand how your timing measurements work so that you are not fooling yourself by measuring incorrectly. Time the different solutions and try crazy, weird ones; you may find an optimization that you were not aware of and didn't realize until you exposed it.
I have only just started reading it, but See MIPS Run (early/first edition) looks good so far (note that ARM has since overtaken MIPS as the leader in the processor market, so the MIPS and RISC hype is a bit dated). There are a number of textbooks, old and new, to be had about MIPS, which was designed for performance (at the cost of the software engineer in some ways).
The bottlenecks today fall into the categories of the processor itself and the I/O around it and what is connected to that I/O. The insides of the processor chips themselves (for higher-end systems) run much faster than the I/O can handle, so you can only tune so far before you have to go off chip and wait forever. Getting from the train to your destination half a minute faster when the train ride was 3 hours is not necessarily a worthwhile optimization.
It is all about learning the hardware; you can probably stay within the ones-and-zeros world and not have to get into the actual electronics. But without really knowing the interfaces and internals, you cannot do much performance tuning. You might re-arrange or change a few instructions and get a little boost, but to make something several hundred times faster you need more than that. Learning a lot of different instruction sets (assembly languages) helps you get into the processors. I would recommend simulating HDL designs, for example the processors at OpenCores, to get a feel for how some folks do their designs and to get a solid handle on how to really squeeze clocks out of a task. Processor knowledge is big; memory interfaces are a huge deal and need to be learned, as do media (flash, hard disks, etc.), displays and graphics, networking, and all the types of interfaces between all of those things. Understanding at the clock level, or as close to it as you can get, is what it takes.
Intel and AMD provide optimization manuals for x86 and x86-64.
http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html/
http://developer.amd.com/documentation/guides/pages/default.aspx
Another excellent resource is Agner Fog's optimization site.
http://www.agner.org/optimize/
Some of the key points (in no particular order):
Alignment; memory, loop/function labels/addresses
Cache; non-temporal hints, page and cache misses
Branches; branch prediction and avoiding branching with compare&move op-codes
Vectorization; using SSE and AVX instructions
Op-codes; avoiding slow running op-codes, taking advantage of op-code fusion
Throughput / pipeline; re-ordering or interleaving op-codes that perform separate tasks, avoiding partial stalls and saturating the processor's ALUs and FPUs
Loop unrolling; performing multiple iterations for a single "loop comparison, branch" (see the sketch after this list)
Synchronization; using atomic op-codes (or the LOCK prefix) to avoid high-level synchronization constructs
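To make the last two items concrete, here is a hedged C++ sketch of loop unrolling with independent accumulators (compilers will often do this themselves with the right flags, so measure before and after):

#include <cstddef>
#include <cstdint>

// Straightforward loop: one add per element, plus a compare and branch per element.
uint64_t sum_simple(const uint32_t* a, std::size_t n) {
    uint64_t s = 0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    return s;
}

// Unrolled by four with separate accumulators: the compare/branch cost is paid
// once per four elements, and the independent adds can overlap in the pipeline.
uint64_t sum_unrolled(const uint32_t* a, std::size_t n) {
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];   // handle the leftover elements
    return s0 + s1 + s2 + s3;
}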
Yes, measure, and yes, know all those techniques.
Experienced people will tell you "don't optimize prematurely", which I translate as simply "don't guess".
They will also say "use a profiler to find the bottleneck", but I have a problem with that. I hear lots of stories of people using profilers and either liking them a lot or being confused by their output.
SO is full of them.
What I don't hear a lot of is success stories, with speedup factors achieved.
The method I use is very simple, and I've tried to give lots of examples, including this case.
I'd suggest Optimizing subroutines in assembly language: An optimization guide for x86 platforms.
It's quite heavy stuff though ;)

How small should an algorithm be to be stored in cache? (Need a clue)

I'm writing a chess move generator. I have the opportunity to replace while/for loops with many if statements, and I was wondering whether adding those ~3000 lines would improve performance as the theory suggests, or just make the algorithm too big to be stored in the CPU cache.
I know it depends on the cache size (I have an AMD Phenom 8650 Triple-Core at 2.3 GHz), but I really have no idea.
First of all, you probably have a few MB of cache, so I doubt your code would fill the entire cache.
Besides that, the CPU is busy doing lots of things beside your code, so I doubt the entire cache will be used just for your code.
Besides that, the cost of moving data between RAM (you probably have a few GB of it) and the cache is pretty negligible.
So, yea, removing the loop (if it's a constant length loop) and replacing it with explicit lines should give you an improvement.
The amount or percentage of the improvement really depends more on the language and compiler than the hardware in this case.
Note that there are languages and situations where this might even take longer (interpreted languages, for example).
Disclaimer:
This type of optimization isn't used much, mainly because it doesn't improve much (usually).
Try looking at other places (or others ways) to improve.
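If you want a concrete number for your own machine rather than a guess, glibc on Linux exposes the cache sizes through sysconf; the _SC_LEVEL* names below are a glibc extension and may simply report 0 elsewhere, so treat this as a sketch:

#include <unistd.h>
#include <iostream>

int main() {
    // Each call returns the cache size in bytes, or 0/-1 if unknown.
    std::cout << "L1 instruction cache: " << sysconf(_SC_LEVEL1_ICACHE_SIZE) << '\n';
    std::cout << "L1 data cache:        " << sysconf(_SC_LEVEL1_DCACHE_SIZE) << '\n';
    std::cout << "L2 cache:             " << sysconf(_SC_LEVEL2_CACHE_SIZE)  << '\n';
    std::cout << "L3 cache:             " << sysconf(_SC_LEVEL3_CACHE_SIZE)  << '\n';
    return 0;
}

The number that matters for "is my generated code too big" is the instruction cache, and ~3000 lines of simple branches will usually compile to far less code than even the L2 can hold, which is why profiling beats speculating here.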
It shouldn't make any difference except for making your code larger and giving it a larger memory footprint. For example, if the last if is the one that gets used, then it would be similar to the last iteration of a while loop. However, you can still use something such as break, or terminate the loop when a condition is met, which is similar to having multiple if statements.
Make the code faster by optimizing other parts and profiling.

Most hazardous performance bottleneck misconceptions

The guys who wrote Bespin (a cloud-based, canvas-based code editor [and more]) recently spoke about how they re-factored and optimized a portion of the Bespin code because of a misconception that JavaScript was slow. It turned out that when all was said and done, their optimization produced no significant improvements.
I'm sure many of us go out of our way to write "optimized" code based on misconceptions similar to that of the Bespin team.
What are some common performance bottleneck misconceptions developers commonly subscribe to?
In no particular order:
"Ready, Fire, Aim" - thinking you know what needs to be optimized without proving it (i.e. guessing) and then acting on that, and since it doesn't help much, therefore assuming the code must have been optimal to begin with.
"Penny Wise, Pound Foolish" - thinking that optimization is all about compiler optimization, fussing about ++i vs. i++ while mountains of time are being spent needlessly in overblown designs, especially of data structures and databases.
"Swat Flies With a Bazooka" - being so enamored of the fanciest ideas heard in classrooms that they are just used for everything, regardless of scale.
"Fuzzy Thinking about Performance" - throwing around terms like "hotspot" and "bottleneck" and "profiler" and "measure" as if these things were well understood and / or relevant. (I bet I get clobbered for that!) OK, one at a time:
hotspot - What's the definition? I have one: it is a region of physical addresses where the PC register is found a significant fraction of time. It is the kind of thing PC samplers are good at finding. Many performance problems exhibit hotspots, but only in the simplest programs is the problem in the same place as the hotspot is.
bottleneck - A catch-all term used for performance problems, it implies a limited channel constraining how fast work can be accomplished. The unstated assumption is that the work is necessary. In my decades of performance tuning, I have in fact found a few problems like that - very few. Nearly all are of a very different nature. Rather than taking the shortest route from point A to B, little detours are taken, in the form of function calls which take little code, but not little time. Then those detours take further nested detours, sometimes 30 levels deep. The more the detours are nested, the more likely it is that some of them are less than necessary - wasteful, in fact - and it nearly always arises from galloping generality - unquestioning over-indulgence in "abstraction".
profiler - a universal good thing, right? All you gotta do is get a profiler and do some profiling, right? Ever think about how easy it is to fool a profiler into telling you a lot of nothing, when your goal is to find out what you need to fix to get better performance? Somewhere in your call tree, tuck a little file I/O, or a little call to some system routine, or have your evil twin do it without your knowledge. At some point, that will be your problem, and most profilers will miss it completely because the only inefficiency they contemplate is algorithmic inefficiency. Or, not all your routines will be little, and they may not call another routine in a small number of places, so your call graph says there's a link between the two routines, but which call? Or suppose you can figure out that some big percent of time is spent in a path A calls B calls C. You can look at that and think there's not much you can do about it, when, if you could also look at the data being passed in those calls, you could see whether it's necessary. Here's a fun project - pick your favorite profiler, and then see how many ways you could fool it. That is, find ways to make the program take longer without the profiler being able to tell what you did, because if you can do it intentionally, you can also do it without intending to.
measure - (i.e. measure time) that's what profilers have done for decades, and they take pride in the accuracy and precision with which they measure. But measure what time? and why with accuracy? Remember, the goal is to precisely locate performance problems, such that you could fruitfully optimize them to gain a speedup. When you get that speedup, it is what it is, regardless of how precisely you estimated it beforehand. If that precision of measurement is bought at the expense of precision of location, then you've just bought apples when what you needed was oranges.
Here's a list of myths about performance.
And this is what happens when one optimizes without a valid profile in hand. All you are doing without a profile is guessing and probably wasting time and money. I could list a bunch of other misconceptions, but many come down to the fact that if the code in question isn't a top resource consumer, it is probably fine as is. Kinda like unrolling a for loop that is doing disk I/O...
If I convert the whole code base over to [Insert xxx latest technology here], it'll be much faster.
Relational databases are slow.
I'm smarter than the optimizer.
This should be optimized.
Java is slow
And, unrelated:
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
-jwz
Optimizing the WRONG part of the code (people, use your profiler!).
The optimizer in my compiler is smart, so I don't have to help it.
Java is fast (LOL)
Relational databases are fast (ROTFL LOL LMAO)
"This has to be as fast as possible."
If you don't have a performance problem, you don't need to worry about optimizing performance (beyond just paying attention to using good algorithms).
This misconception also manifests in attempts to optimize performance for every single aspect of your program. I see this most often with people trying to shave every last millisecond off of a low-volume web application's execution time while failing to take into account that network latency will take orders of magnitude longer than their code's execution time, making any reduction in execution time irrelevant anyhow.
My rules of optimization.
Don't optimize
Don't optimize now.
Profile to identify the problem.
Optimize the component that is taking at least 80% of the time.
Find an optimization that is 10 times faster.
My best optimization has been reducing a report from three days to nine minutes.
In my career I have met three people who had been tasked with producing a faster sort on the VAX than the native sort. They invariably had been able to produce sorts that took only three times longer.
The rules are simple:
Try to use standard library functions first.
Try to use brute-force and ignorance second.
Prove you've got a problem before trying to do any optimization.

Optimization! - What is it? How is it done?

It's common to hear about "highly optimized code", or some developer needing to optimize theirs, and whatnot. However, as a self-taught, new programmer I've never really understood what exactly people mean when talking about such things.
Care to explain the general idea of it? Also, recommend some reading materials and really whatever you feel like saying on the matter. Feel free to rant and preach.
Optimize is a term we use lazily to mean "make something better in a certain way". We rarely "optimize" something - more, we just improve it until it meets our expectations.
Optimizations are changes we make in the hopes to optimize some part of the program. A fully optimized program usually means that the developer threw readability out the window and has recoded the algorithm in non-obvious ways to minimize "wall time". (It's not a requirement that "optimized code" be hard to read, it's just a trend.)
One can optimize for:
Memory consumption - Make a program or algorithm's runtime size smaller.
CPU consumption - Make the algorithm computationally less intensive.
Wall time - Do whatever it takes to make something faster
Readability - Instead of making your app better for the computer, you can make it easier for humans to read it.
Some common (and overly generalized) techniques to optimize code include:
Change the algorithm to improve performance characteristics. If you have an algorithm that takes O(n^2) time or space, try to replace that algorithm with one that takes O(n * log n).
To relieve memory consumption, go through the code and look for wasted memory. For example, if you have a string intensive app you can switch to using Substring References (where a reference contains a pointer to the string, plus indices to define its bounds) instead of allocating and copying memory from the original string.
To relieve CPU consumption, cache as many intermediate results as you can. For example, if you need to calculate the standard deviation of a set of data, save that single numerical result instead of looping through the set each time you need to know the std dev (a small sketch of this follows below).
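A minimal C++ sketch of that caching idea (the class name and layout here are purely illustrative): compute the standard deviation once, remember it, and invalidate the cached value only when the data changes.

#include <cmath>
#include <vector>

// Illustrative only: caches the standard deviation so repeated queries
// don't rescan the data; the cache is invalidated whenever the data changes.
class Samples {
public:
    void add(double x) { data_.push_back(x); dirty_ = true; }

    double stddev() const {
        if (dirty_) {                      // recompute only when needed
            if (data_.empty()) return 0.0;
            double mean = 0.0;
            for (double x : data_) mean += x;
            mean /= data_.size();
            double var = 0.0;
            for (double x : data_) var += (x - mean) * (x - mean);
            cached_ = std::sqrt(var / data_.size());
            dirty_ = false;
        }
        return cached_;
    }

private:
    std::vector<double> data_;
    mutable double cached_ = 0.0;
    mutable bool dirty_ = true;
};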
I'll mostly rant with no practical advice.
Measure First. Optimization should be done in places where it matters. Highly optimized code is often difficult to maintain and a source of problems. In places where the code does not slow down execution anyway, I always prefer maintainability to optimizations. Familiarize yourself with profiling, both intrusive (instrumented) and non-intrusive (low-overhead statistical). Learn to read a profiled stack, understand where the inclusive/exclusive time is spent, why certain patterns show up, and how to identify the trouble spots.
You can't fix what you cannot measure. Have your program report, through some performance infrastructure, the things it does and the times they take. I come from a Win32 background, so I'm used to performance counters, and I'm extremely generous at sprinkling them all over my code. I even automated the code that generates them.
And finally some words about optimizations. Most discussion about optimization I see focuses on stuff any compiler will optimize for you for free. In my experience, the greatest source of gains for "highly optimized code" lies completely elsewhere: memory access. On modern architectures the CPU is idling most of the time, waiting for memory to be served into its pipelines. Between L1 and L2 cache misses, TLB misses, NUMA cross-node accesses, and even page faults that must fetch a page from disk, the memory access pattern of a modern application is the single most important optimization one can make. I'm exaggerating slightly; of course there will be counter-example workloads that will not benefit from these memory-access-locality techniques. But most applications will. To be specific, what these techniques mean is simple: cluster your data in memory so that a single CPU can work on a tight memory range containing everything it needs, with no expensive referencing of memory outside your cache lines or your current page. In practice this can mean something as simple as accessing an array by rows rather than by columns.
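To make that last sentence concrete, here is a hedged C++ sketch of the classic row-major versus column-major traversal; on typical hardware the row-order loop is several times faster for large matrices because it walks memory sequentially, one cache line after another:

#include <cstddef>
#include <vector>

constexpr std::size_t ROWS = 4096, COLS = 4096;

// Cache-friendly: consecutive iterations touch adjacent addresses.
double sum_by_rows(const std::vector<double>& m) {
    double s = 0.0;
    for (std::size_t r = 0; r < ROWS; ++r)
        for (std::size_t c = 0; c < COLS; ++c)
            s += m[r * COLS + c];
    return s;
}

// Cache-hostile: consecutive iterations jump COLS * sizeof(double) bytes,
// so most accesses miss the cache (and stress the TLB as well).
double sum_by_columns(const std::vector<double>& m) {
    double s = 0.0;
    for (std::size_t c = 0; c < COLS; ++c)
        for (std::size_t r = 0; r < ROWS; ++r)
            s += m[r * COLS + c];
    return s;
}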
I would recommend you read up on the AlphaSort paper presented at the VLDB conference in 1995. This paper presented how cache-sensitive algorithms designed specifically for modern CPU architectures can blow the previous benchmarks out of the water:
We argue that modern architectures require algorithm designers to re-examine their use of the memory hierarchy. AlphaSort uses clustered data structures to get good cache locality...
The general idea is that once the compiler has parsed your source into a tree, before generating code from it, it performs an additional step (optimization) where, based on certain heuristics, it collapses branches together, deletes branches that aren't used, or adds extra nodes for temporary values that are used multiple times.
Think of stuff like this piece of code:
a=(b+c)*3-(b+c)
which gets translated into
          -
        /   \
       *     +
      / \   / \
     +   3 b   c
    / \
   b   c
To the optimizer it would be obvious that the two + subtrees, each with descendants b and c, are identical, so they are merged into a temp variable, t, and the tree is rewritten:
       -
      / \
     *   t
    / \
   t   3
Now an even better optimizer would see that, since t is an integer, the tree could be further simplified to:
     *
    / \
   t   2
and the intermediate code that you'd run your code generation step on would finally be
int t = b + c;
a = t * 2;
with t marked as a register variable, which is exactly what would be written in assembly.
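If you want to watch a real compiler do this (a small experiment, not part of the original explanation), feed it the expression from above and look at the assembly it emits, e.g. with g++ -O2 -S; the (b + c) subexpression is typically computed once and the arithmetic collapsed, much as the tree rewrite describes:

// cse_demo.cpp - inspect the output of: g++ -O2 -S cse_demo.cpp
int f(int b, int c) {
    // Written with the duplicated subexpression on purpose; the optimizer
    // is expected to evaluate (b + c) only once and fold *3 - (b + c) to *2.
    return (b + c) * 3 - (b + c);
}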
One final note: you can optimize for more than just run-time speed. You can also optimize for memory consumption, which often pulls in the opposite direction. Where unrolling loops and creating temporary copies can help speed up your code, they also use more memory, so it's a trade-off depending on what your goal is.
Here is an example of some optimization (fixing a poorly made decision) that I did recently. It's very basic, but I hope it illustrates that good gains can be made even from simple changes, and that 'optimization' isn't magic; it's just about making the best decisions to accomplish the task at hand.
In an application I was working on there were several LinkedList data structures that were being used to hold various instances of foo.
When the application was in use, it was very frequently checking to see whether the LinkedList contained object X. As the number of X's started to grow, I noticed that the application was performing more slowly than it should have been.
I ran a profiler and realized that each 'myList.Contains(x)' call was O(N), because the list has to iterate through each item it contains until it reaches the end or finds a match. This was definitely not efficient.
So what did I do to optimize this code? I switched most of the LinkedList data structures to HashSets, which can do a '.Contains(X)' call in O(1), which is much better.
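The original code was presumably C# (LinkedList/HashSet), but the same trade-off exists in most languages; here is a rough C++ analogue in which the membership test moves from a linear scan of a std::list to an average O(1) lookup in a std::unordered_set:

#include <algorithm>
#include <list>
#include <string>
#include <unordered_set>

// Before: O(N) membership test, scanning the list until a match or the end.
bool contains_list(const std::list<std::string>& items, const std::string& x) {
    return std::find(items.begin(), items.end(), x) != items.end();
}

// After: average O(1) membership test via hashing.
bool contains_set(const std::unordered_set<std::string>& items, const std::string& x) {
    return items.count(x) != 0;
}

The switch isn't free if you also rely on insertion order or cheap splicing, which is exactly why profiling before changing the data structure mattered.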
This is a good question.
Usually the best practice is 1) just write the code to do what you need it to do, 2) then deal with performance, but only if it's an issue. If the program is "fast enough" it's not an issue.
If the program is not fast enough (like it makes you wait) then try some performance tuning. Performance tuning is not like programming. In programming, you think first and then do something. In performance tuning, thinking first is a mistake, because that is guessing.
Don't guess what to fix; diagnose what the program is doing.
Everybody knows that, but mostly they do it anyway.
It is natural to say "Could be the problem is X, Y, or Z" but only the novice acts on guesses. The pro says "but I'm probably wrong".
There are different ways to diagnose performance problems.
The simplest is just to single-step through the program at the assembly-language level, and don't take any shortcuts. That way, if the program is doing unnecessary things, then you are doing the same things, and it will become painfully obvious.
Another is to get a profiling tool, and as others say, measure, measure, measure.
Personally I don't care for measuring. I think it's a fuzzy microscope for the purpose of pinpointing performance problems. I prefer this method, and this is an example of its use.
Good luck.
ADDED: I think you will find, if you go through this exercise a few times, you will learn what coding practices tend to result in performance problems, and you will instinctively avoid them. (This is subtly different from "premature optimization", which is assuming at the beginning that you must be concerned about performance. In fact, you will probably learn, if you don't already know, that premature concern about performance can well cause the very problem it seeks to avoid.)
Optimizing a program means: make it run faster
The only way of making the program faster is making it do less:
find an algorithm that uses fewer operations (e.g. N log N instead of N^2)
avoid slow components of your machine (keep objects in cache instead of in main memory, or in main memory instead of on disk); reducing memory consumption nearly always helps!
Further rules:
In looking for optimization opportunities, adhere to the 80-20-rule: 20% of typical program code accounts for 80% of execution time.
Measure the time before and after every attempted optimization; often enough, optimizations don't actually help.
Only optimize after the program runs correctly!
Also, there are ways to make a program appear to be faster:
separate GUI event processing from back-end tasks; prioritize user-visible changes over back-end calculation to keep the front end "snappy"
give the user something to read while performing long operations (ever noticed the slideshows displayed by installers?)
However, as a self-taught, new programmer I've never really understood what exactly people mean when talking about such things.
Let me share a secret with you: nobody does. There are certain areas where we know mathematically what is and isn't slow. But for the most part, performance is too complicated to be able to understand. If you speed up one part of your code, there's a good possibility you're slowing down another.
Therefore, when anyone tells you that one method is faster than another, there's a good possibility they're just guessing, unless one of three things is true:
They have data
They're choosing an algorithm that they know is faster mathematically.
They're choosing a data structure that they know is faster mathematically.
Optimization means trying to improve computer programs for such things as speed. The question is very broad, because optimization can involve compilers improving programs for speed, or human beings doing the same.
I suggest you read a bit of theory first (from books, or Google for lecture slides):
Data structures and algorithms - what the O() notation is, how to calculate it, and what data structures and algorithms can be used to lower the O-complexity
Book: Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest
Compilers and assembly - how code is translated to machine instructions
Computer architecture - how the CPU, RAM, Cache, Branch predictions, out of order execution ... work
Operating systems - kernel mode, user mode, scheduling processes/threads, mutexes, semaphores, message queues
After reading a bit of each, you should have a basic grasp of all the different aspects of optimization.
Note: I wiki-ed this so people can add book recommendations.
I am going with the idea that optimizing code means getting the same results in less time. And "fully optimized" only means they ran out of ideas to make it faster. I throw large buckets of scorn on claims of "fully optimized" code! There's no such thing.
So you want to make your application/program/module run faster? First thing to do (as mentioned earlier) is measure, also known as profiling. Do not guess where to optimize. You are not that smart and you will be wrong. My guesses are wrong all the time, and large portions of my year are spent profiling and optimizing. So get the computer to do it for you. For the PC, VTune is a great profiler. I think VS2008 has a built-in profiler, but I haven't looked into it. Otherwise, measure functions and large pieces of code with performance counters. You'll find sample code for using performance counters on MSDN.
So where are your cycles going? You are probably waiting for data coming from main memory. Go read up on L1 & L2 caches. Understanding how the cache works is half the battle. Hint: Use tight, compact structures that will fit more into a cache-line.
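A hedged illustration of that hint (the struct names are made up): field order changes how much padding the compiler inserts, and therefore how many records fit in a 64-byte cache line.

#include <cstdint>
#include <iostream>

// Poorly ordered fields: padding after 'flag' and after 'id' inflates the struct.
struct Loose {
    bool     flag;   // 1 byte + 7 bytes padding so 'value' is 8-byte aligned
    double   value;  // 8 bytes
    uint32_t id;     // 4 bytes + 4 bytes tail padding
};

// Same fields, largest first: no internal padding, only a small tail.
struct Tight {
    double   value;  // 8 bytes
    uint32_t id;     // 4 bytes
    bool     flag;   // 1 byte + 3 bytes tail padding
};

int main() {
    // Typically prints "24 16" on a 64-bit target, so four Tight records fit
    // in one 64-byte cache line versus only two complete Loose ones.
    std::cout << sizeof(Loose) << ' ' << sizeof(Tight) << '\n';
    return 0;
}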
Optimization is lots of fun. And it's never ending too :)
A great book on optimization is Write Great Code: Understanding the Machine by Randall Hyde.
Make sure your application produces correct results before you start optimizing it.

Resources