When I recently asked about the uses of Ruby, someone told me it was good for prototyping. I basically know what that means: quickly get the very base of your app up and working, see if there are conceptual problems, and then add the rest.
Am I right with how I understand prototyping?
What would be a concrete example of prototyping a Snake game in Ruby or any other language?
Yep, prototyping serves as a proof of concept, to ensure that what you want to build is feasible. Things that might be left out of a prototype include exception handling, logging, etc.
A common mistake is teams switching from prototype to real code on the fly, i.e. just continuing to build on the so-called "prototype" until it quietly becomes the real code.
For many clients, describing what an application will do, or enumerating a set of requirements, is not enough for them to fully grasp how it will work. This leads to the infamous mid-project changes and scope creep. One way to mitigate this is to create a throwaway version that lets them see a "working" example of how the real application will operate.
It can often function as a proof-of-concept as well, but I think client communication is the more useful purpose of a prototype. In particular, you may want to do a prototype using a different technology -- Ruby/Rails, say, or pure Javascript -- than the final working application will use. If so, there's still proof-of-concept value in terms of the algorithms you're using, or the ways you may have to connect to other systems, but again, the actual code will all be thrown away.
So the part of your description I would disagree with is "add the rest" -- I'd throw the prototype out and start over.
Yes, that's a good basic description of prototyping. It's just getting the foundations working so that you know it can be done, and that it suits your needs.
An example of a Snake game prototype would be having a snake that you can move up, down, left and right; something for it to eat; maybe one block in the middle of the board to maneuver around; and the snake growing when it eats something. So you wouldn't have a splash screen, or keep track of high scores, or have different levels. Just the basics of the game.
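The question mentions Ruby, but the shape of such a prototype is the same in any language. Here is a minimal, hypothetical sketch in C++ of roughly that feature set (a snake on a grid, one obstacle block, food, growth), with canned moves standing in for keyboard input and no scores, levels, or error handling:

```cpp
// Hypothetical prototype sketch: a snake on a small grid, one obstacle,
// food that makes the snake grow. No splash screen, scores, levels,
// real-time input, or error handling -- just enough to prove the core idea.
#include <deque>
#include <iostream>
#include <utility>

using Cell = std::pair<int, int>;                 // (x, y) on the grid

int main()
{
    const int W = 10, H = 8;
    std::deque<Cell> snake = {{2, 2}, {1, 2}};    // head is at the front
    Cell food = {6, 2};
    Cell wall = {4, 4};                           // one block to maneuver around
    Cell dir  = {1, 0};                           // moving right

    // A canned list of moves stands in for keyboard input in this prototype.
    const Cell moves[] = {{1,0},{1,0},{1,0},{1,0},{0,1},{0,1},{1,0}};

    for (Cell d : moves) {
        dir = d;
        Cell head = {snake.front().first + dir.first,
                     snake.front().second + dir.second};

        // Core rules: hitting the wall block, the border, or yourself ends the game.
        bool dead = head == wall ||
                    head.first < 0 || head.first >= W ||
                    head.second < 0 || head.second >= H;
        for (Cell c : snake) dead = dead || c == head;
        if (dead) { std::cout << "game over\n"; return 0; }

        snake.push_front(head);
        if (head == food)
            food = {1, 6};                        // grow: keep the tail, move the food
        else
            snake.pop_back();                     // normal move: drop the tail

        std::cout << "snake length " << snake.size()
                  << ", head at (" << head.first << "," << head.second << ")\n";
    }
}
```

Everything else (rendering, real input, menus, persistence) is exactly the sort of thing a prototype leaves out.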
Is "Good for prototyping" a polite way of saying "It's difficult to maintain"?
I have been asked to investigate porting Wii games and some (Sony) PSOne games to OpenGL ES (can you guess what platform?).
I have never undertaken a game port like this before (and will be hiring someone to do it) but I'd like to understand the process.
Does the Wii use OpenGL? If not what does it use and how easy is it to port to OpenGL / OpenGL ES?
Are there any resources/books/blogs that will help me in understanding the process?
Will my company have to become an official Wii developer? If so where do I start that process?
Porting from the Wii or the PSOne is a complex and involved task that can be broken down into multiple separate engineering efforts working in parallel to produce a working end product. The best possible thing you can do before moving to the target hardware is to compartmentalize all of the non-portable code while ensuring that the game continues to run as expected. When you commit to moving to the new platform, your effort switches to reimplementing the non-portable compartmentalized parts.
So, to answer your question, yes, you will need to become or work with a Sony and Nintendo licensed developer in order to take this approach. In the case of Sony, I don't even know if they offer a PSOne development program anymore which presents issues. Your Sony account rep can help clarify.
The major subsystems that are likely to be the focus of your porting effort are:
Rendering: Graphics code contains fundamental assumptions about the hardware it is being run on in order to perform optimally. API-level compatibility is superficial compatibility and does not get you as much as you may hope it does. Plan on finding the entry point to the renderer, determining what data you need to render a scene, and rewriting all the render code from there for your target hardware.
Game Saving: Game state serialization and archival will need to be separated out. Older games often fwrite() structs with #pragma packed fields. Is that still going to work for you?
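As a hedged illustration of the pattern being warned about (the struct name and fields below are invented), an old save routine might look like this; it silently bakes the original compiler's layout, packing, and byte order into the save file:

```cpp
// Illustrative only: the struct name and fields are invented for this sketch.
// Writing a packed struct straight to disk ties the save format to the
// original compiler's layout, alignment, and the console's endianness.
#include <cstdint>
#include <cstdio>

#pragma pack(push, 1)
struct SaveGame {
    uint32_t checksum;
    uint16_t level;
    uint8_t  lives;
    char     playerName[16];
};
#pragma pack(pop)

bool SaveToDisk(const SaveGame& s, const char* path)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    // One fwrite of the raw bytes: fast, but only portable to machines that
    // happen to share this exact layout and byte order.
    bool ok = std::fwrite(&s, sizeof(s), 1, f) == 1;
    std::fclose(f);
    return ok;
}

int main()
{
    SaveGame s = {0, 3, 2, "PLAYER"};
    return SaveToDisk(s, "save.bin") ? 0 : 1;
}
```

A port usually replaces this with explicit field-by-field serialization so the on-disk format no longer depends on compiler or CPU details.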
Networking: Wii games write to high-level services that are unavailable on your target hardware. At the low level, sockets are still sockets. What network services do your Wii games rely on?
Controls: Given where you are coming from and where you are going, anything short of a full redesign or reimagining of the input scheme will result in poor reviews of the software.
Memory Management: Console games often make fundamental assumptions about the rate at which the system software returns memory from the heap, how much fragmentation it will cause, and the duration the game needs to operate under these conditions. These memory management assumptions are obsolete on the new platform. It is wise to write your own memory manager that provides a cushion from the operating system. Also, console games compiled for release are stripped of most error handling and don't gracefully handle running out of memory -- just a heads up.
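A minimal sketch of that "cushion" idea, with invented names: reserve one big block up front and carve allocations out of it, so the game's allocation behavior no longer depends on the host OS heap:

```cpp
// Hypothetical sketch of a tiny bump/arena allocator used as a cushion
// between the game and the OS heap: one big block is reserved up front,
// allocations are carved out of it, and everything is released at once
// (e.g. at the end of a level), which sidesteps fragmentation concerns.
#include <cstddef>
#include <cstdlib>
#include <new>

class Arena {
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), used_(0)
    {
        if (!base_) throw std::bad_alloc();
    }
    ~Arena() { std::free(base_); }

    void* Allocate(std::size_t bytes, std::size_t align = alignof(std::max_align_t))
    {
        std::size_t p = (used_ + align - 1) & ~(align - 1);   // align the offset
        if (p + bytes > size_) return nullptr;                // out of arena space
        used_ = p + bytes;
        return base_ + p;
    }

    void Reset() { used_ = 0; }   // "free" everything in one go

private:
    char*       base_;
    std::size_t size_;
    std::size_t used_;
};

int main()
{
    Arena frameArena(1 << 20);                    // 1 MiB cushion, reserved once
    void* a = frameArena.Allocate(256);
    void* b = frameArena.Allocate(1024);
    (void)a; (void)b;
    frameArena.Reset();                           // end of frame/level: wipe it all
}
```

Real console allocators are far more elaborate, but even this shape makes fragmentation and out-of-memory behavior something the game controls rather than the OS.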
Content: Your bottleneck will be system memory. Can you fit the necessary assets into memory? With textures, you can reduce mip levels where necessary, and with graphics hardware timing, you can pull in the far clipping plane. With assets resident in memory, you may need a technical artist to go through and reduce the face density of your models, or an animation programmer to implement a more size-friendly animation codec. This is very game specific.
You also run into the standard set of problems with things like bit compatibility (though the Wii and PSOne are both 32-bit), compiler idiosyncrasies, build script incompatibilities and proprietary compiler extensions.
Games are relatively challenging to test. A good rule of thumb is you want to have enough testers on your team to run through the game in a maximum of two days, covering all major aspects of play. In games that take a long time to beat (RPGs with 30+ hours of gameplay), your testing team needs to be quite large to offer full coverage. Because you are just doing a port, you can come up with a testing plan that maximizes coverage of your new code without having a testing team punch every wall in your game to make sure it (still) has clipping. The game shipped once.
Becoming a licensed developer requires you to apply. The turnaround time, from experience, is not good. Generally speaking, priority is given to studios with shipped titles and organized offices with reasonably good security and the ability to buy the (relatively) expensive development kits. You may be better off working with a licensed developer if you do not meet these criteria.
Console and game development is challenging for people already experienced in it. There is no book that covers it all. My recommendation is to attempt to recruit an expert who has experience shipping titles in a position of systems or engine programmer. What types of programmers and skillsets exist in games is a whole different question for Stack, though.
Games consoles don't use OpenGL but their own custom libraries. The main reason is that they are pretty slow and have little RAM, so you need to squeeze out every drop of performance you can get. And that means custom code. Usually, you get a framework with the development kit that gets you started, and then you build your code from that. Eventually, you'll start replacing parts of the development kit with your own special code to get all the speed and special effects you need.
There is a reason why PSOne games are so ugly on the PS3 despite the fact that the developers have access to the sources: the revenue just doesn't justify touching the code.
Which is one reason why game development is so expensive: Every game is (more or less) a completely new product. Sometimes, game companies can reuse a bit of code from the last version but more often than not, they have to develop everything again. They also don't talk much with each other.
In recent years, kits have become more complex and powerful and you can get complete game engines (with all kinds of effects and 3D support) but each engine is a completely different kind of beast, so you can't even copy code from engine A to B.
Today, media content (video, audio and render sequences) is so expensive that the actual game engine is often a minor detail, so this isn't going to change any time soon.
Net result: If you want to port a game, write an emulator for the hardware (which is usually pretty simple and allows you to run all kinds of games).
[EDIT] To develop software for the Wii, see here: http://www.warioworld.com/
For a Wii emulator, see http://wiiemulator.net/
I ported a couple of games, when I was a new game programmer, from working with one version of our engine to a newer version (where backwards compatibility was neither ignored nor pursued). Even copying (and possibly renaming) the files and placing them in a home in the new project was a bit of work. Following that, the procedure was:
recompile
fix many of the hundreds of errors [in many places, with the same error occurring over and over again], and
"wire up" calls from the new game engine to the appropriate calls in the old code
"wire up" function calls from the old code into the new game engine
deal with other oddities (e.g. in the old game engine, the 2d game would "swizzle" textures itself; in the new version, the engine did it (on specific platforms))
and, while I don't recall this clearly, it was probably mixed in with a bunch of #ifdef-ing out portions of code so the thing would actually compile, and possibly creating function stubs to be filled in later (something like the sketch below).
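For what it's worth, a made-up example of that #ifdef-and-stub step (the macro, types, and calls are all invented here) might look like:

```cpp
// Invented names, purely to illustrate the "#ifdef it out and stub it" step.
struct Texture { int id; };

#if defined(TARGET_OLD_ENGINE)
// Original platform-specific path, compiled only on the old engine.
void UploadTexture(const Texture& t) { old_engine_upload(t.id); }   // hypothetical legacy call
#else
// Stub so the rest of the game compiles and links on the new platform;
// to be filled in once the new renderer exists.
void UploadTexture(const Texture&) { /* TODO: new engine path */ }
#endif

int main()
{
    Texture t{42};
    UploadTexture(t);   // safe no-op until the real implementation lands
}
```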
As I recall, it was three or four days until I had something that compiled. (But, it did help when we ported other games from the old version to the new one!)
The magnitude of the task will come down to what the code you are getting is like. If it has generic 3D calls that you can intercept -- add a thunking layer to -- then you are in business. It depends on the level of abstraction in the code. If it is well-behaved and has things like "RenderModel" and "RenderWorld" calls, you can replace those functions, and even the structures that they work with. If drawing is occurring all over the place, and calls are more like "Draw Polygon" and "Draw Line" or "Draw using this highly optimised data structure", then you are likely in for a long slog.
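A hedged sketch of such a thunking layer, with invented names: the old code keeps calling RenderModel/RenderWorld, but those entry points now translate the legacy data into calls on the new platform's renderer:

```cpp
// Hypothetical sketch of a thunking layer: the old game still calls
// RenderModel()/RenderWorld(), but those names are now thin wrappers that
// translate the old data into calls on the new platform's renderer.
#include <cstddef>
#include <cstdio>
#include <vector>

struct OldModel { std::vector<float> vertices; int textureId; };  // legacy structure

// Interface of the new renderer (invented for this sketch).
class NewRenderer {
public:
    void DrawMesh(const float* verts, std::size_t count, int texture)
    {
        std::printf("drawing %zu floats with texture %d\n", count, texture);
    }
};

static NewRenderer g_renderer;   // the port's replacement back end

// --- Thunks: same signatures the old game code expects ---------------------
void RenderModel(const OldModel& m)
{
    // Translate the legacy structure into whatever the new renderer needs.
    g_renderer.DrawMesh(m.vertices.data(), m.vertices.size(), m.textureId);
}

void RenderWorld(const std::vector<OldModel>& world)
{
    for (const OldModel& m : world)
        RenderModel(m);
}

int main()
{
    std::vector<OldModel> world = {{{0.f, 0.f, 0.f, 1.f, 1.f, 1.f}, 7}};
    RenderWorld(world);   // old-style call path, new rendering underneath
}
```

If the drawing in the old code is scattered across low-level "draw polygon" calls instead, there is no clean seam like this to hook, which is the long slog mentioned above.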
You shouldn't need a Wii dev kit. Sometimes it is nice to verify that the code you are given does indeed compile in the original environment (and matches the shipping code!), but sometimes you can just take it on faith and make it work in its new environment.
Lastly, I don't think the Wii uses OpenGL, and I really don't know where to point you for further help.
What you may want to do is start by designing the architecture of the game and writing up a detailed specification of what the new game will be like.
Once you have this, since you will be rewriting the code, you may find that some of the business logic that doesn't deal with the console can be ported over. But anything dealing with I/O, user interaction, or graphics/sound will be rewritten, so you might as well do that from scratch.
A specification is very important, to make certain that you know how the current game is working so that the new port will give the same user experience, if that is what is desired.
You may even want to keep the same bugs if they are part of the experience: if I know that on the Wii I can jump down and bounce off a wall to land safely, it may be bothersome if I can't do that in the new version.
Well, porting a PS1 game to an iPhone would be quite a task; they work in very different ways. I'm sure it's doable, but it will be a LOT of work to replace all the fixed-point maths and the lack of Z-buffer-based rendering with something suited to a real graphics chip.
Wii would be a lot easier. The Wii API is very similar to OpenGL. However, the Wii has some very nice fixed-function features that just are not available on any other GL-based platform. Should be doable, though ...
I'm not really sure I can say anything more than that. Have signed far too many NDAs over the years to be 100% sure of what I can and cannot say ;)
Still if you want to hire someone to do some porting work and are prepared to supply the required hardware then I might be free ;)
All you Stackoverflowers,
I was wondering why GUI code is responsible for sucking away many, many CPU cycles. In principle, the graphical rendering is far less complex than Doom (although most corporate GUIs will introduce lots of window dressing). The event handling layer also seems to be a heavy cost; however, it seems that a well-written implementation should switch between contexts efficiently on modern processors with plenty of memory/cache.
If anybody has run a profiler on their big GUI application, or a common API itself, I'm interested in where the bottlenecks lie.
Possible explanations (that I imagine) may be:
High levels of abstraction between hardware and application interface
Lots of levels of indirection to the correct code to execute
Low priority (compared to other processes)
Misbehaving applications flooding API with calls
Excessive object orientation?
Outright poor design choices in the API (not just individual issues, but the design philosophy)
Some GUI frameworks are much better than others, so I'd like to hear varied perspectives. For example, the Unix/X11 system is much different than Windows and even than WinForms.
Edit: Now a community wiki - go for it. I have one more thing to add -- I'm an algorithms guy in school and would be interested to know whether there are inefficient algorithms in GUI code and what they are. Then again, it's probably just the implementation overhead.
I've no idea generally, but I'd like to add another item to your list - font rendering and calculations. Finding vector glyphs in a font and converting them to bitmap representations with anti-aliasing is no small task. And often it needs to be done twice - first to calculate the width/height of the text for positioning, and then actually drawing the text at the right coordinates.
Also, most drawing code today relies on clipping mechanisms to update just a part of the GUI. So, if just one part needs to be redrawn, the code actually redraws the whole window behind the scenes, and then takes just the needed part to actually update.
Added:
In the comments I found this:
I'm also very interested in this. It can't be that the gui is rendered using only the cpu because if you don't have proper drivers for your gfx-card, desktop graphics render incredibly slow. If you have gfx-drivers however desktop-gfx go kinda fast but never as fast as a directx/opengl app.
Here's the deal as I understand it: every graphics card out there today supports a generic interface for drawing. I'm not sure if it's called "VESA" or "SVGA", or if those are just old names from the past. Anyway, this interface involves doing everything through interrupts. For every pixel there is an interrupt call. Or something like that. The proper VGA driver, however, is able to take advantage of DMA and other enhancements that make the whole process WAY less CPU-intensive.
Added 2: Ah, and for OpenGL/DirectX - that's another feature of today's graphics cards. They are optimized for 3D operations in exclusive mode. That's where the speed comes from. The normal GUI just utilizes basic 2D drawing procedures, so it gets to send the contents of the whole screen every time it wants an update. 3D applications, however, send a bunch of textures and triangle definitions to the VRAM (video RAM) and then just reuse them for drawing. They just say something like "take the triangle set #38 with the texture set #25 and draw them". All these things are cached in the VRAM, so this is again way faster.
I'm not sure, but I would suspect that the modern 3D-accelerated GUIs (Vista Aero, compiz on Linux, etc.) also might take advantage of this. They could send common bitmaps to the VGA up front and then just reuse them directly from the VRAM. Any application-drawn surfaces however would still need to be sent directly every time for updates.
Added 3: More ideas. :) The modern GUIs for Windows, Linux, etc. are widget-oriented (that's control-oriented for Windows speakers). The problem with this is that each widget has its own drawing code and associated drawing surface (more or less). When the window needs to get redrawn, it calls the drawing code for all its child widgets, which in turn call the drawing code for their child widgets, and so on. Every widget redraws its whole surface, even though some of it is obscured by other widgets. With the above-mentioned clipping techniques, some of this drawn information is immediately discarded to reduce flickering and other artifacts. But it's still lots of manual drawing code that includes bitmap blitting, stretching, skewing, drawing lines, text, flood-filling, etc. And all this gets translated into a series of putpixel calls that get filtered through clipping filters/masks and other stuff. Ah, yes, and alpha blending has also become popular today for nice effects, which means even more work. So... yes, you could say this is because of lots of abstraction and indirection. But... could you really do it any better? I don't think so. Only 3D techniques might help, because they take advantage of the GPU for alpha calculations and clipping.
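To make that redraw recursion concrete, here is a small hypothetical sketch (no real toolkit; the classes and names are invented) of a widget tree where every widget whose rectangle touches the dirty region runs its drawing code and then asks its children to do the same:

```cpp
// Invented mini widget tree to illustrate the redraw recursion described above:
// every widget whose rectangle touches the dirty region redraws itself and
// then asks its children to do the same.
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

struct Rect {
    int x, y, w, h;
    bool Intersects(const Rect& o) const
    {
        return x < o.x + o.w && o.x < x + w &&
               y < o.y + o.h && o.y < y + h;
    }
};

class Widget {
public:
    Widget(std::string name, Rect r) : name_(std::move(name)), rect_(r) {}

    void AddChild(std::unique_ptr<Widget> child)
    {
        children_.push_back(std::move(child));
    }

    void Paint(const Rect& dirty) const
    {
        if (!rect_.Intersects(dirty))
            return;                                   // clipped out entirely
        std::printf("painting %s\n", name_.c_str());  // redraws its own surface, even if partly obscured
        for (const auto& c : children_)
            c->Paint(dirty);                          // recurse into child widgets
    }

private:
    std::string name_;
    Rect rect_;
    std::vector<std::unique_ptr<Widget>> children_;
};

int main()
{
    Widget window("window", {0, 0, 800, 600});
    auto toolbar = std::make_unique<Widget>("toolbar", Rect{0, 0, 800, 40});
    auto canvas  = std::make_unique<Widget>("canvas",  Rect{0, 40, 800, 560});
    window.AddChild(std::move(toolbar));
    window.AddChild(std::move(canvas));

    // Only a small area changed, yet the window and every intersecting
    // child still run their drawing code for it.
    window.Paint({10, 50, 100, 100});
}
```

In this toy version the toolbar is skipped entirely when the dirty rectangle misses it, which is exactly the clipping optimization described above; everything that does intersect still redraws its whole surface.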
Let's begin by saying that writing libraries is much harder than writing stand-alone code. The requirement that your abstraction be reusable in as many contexts as possible, including contexts which you haven't thought of yet, makes the task challenging even for experienced programmers.
Amongst libraries, writing a GUI toolkit library is a famously difficult problem. This is because the programs which use GUI libraries range over a very wide variety of domains with very different needs. Mr Why and Martin DeMollo discussed the requirements placed on GUI libraries a little while ago.
Writing GUI widgets themselves is difficult because computer users are very sensitive to minute details of the behavior of the interface. Non-native widgets never feel right, do they? In order to get non-native widgets right -- in order to get any widget right, in fact -- you need to spend an inordinate amount of time tweaking the details of the behavior.
So, GUIs are slow because of the inefficiencies introduced by the abstraction mechanisms used to create highly reusable components, added to the shortage of time available to optimize the code once so much time has been spent just getting the behavior right.
Uhm, that's quite a lot.
The simplest but probably most obvious answer is that the programmers behind these GUI apps are really bad programmers. You can go a long way writing code which does the most bizarre things and it will be faster, but few people seem to care how to do this, or they deem it an expensive, non-profitable waste of time.
To set things straight: off-loading computations to the GPU won't necessarily fix any problems. The GPU is just like the CPU except it's less general-purpose and more of a data-parallel processor. It can do graphics computations exceptionally well. Whatever graphics API/OS and driver combination you have doesn't really matter that much... well, OK, with Vista as an example, they changed the desktop composition engine. The new engine is far better at compositing only that which has changed, and since the number one bottleneck for GUI apps is redrawing, that is a neat optimization strategy. The idea is to virtualize your computational needs and only update the smallest change every time.
Win32 sends WM_PAINT messages to windows when they need to be redrawn; this can be a result of windows occluding each other. However, it's up to the window itself to figure out what actually changed. More often than not, nothing did change, or the change was trivial enough that it could have been performed on top of whatever topmost surface you had.
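For illustration, here is a minimal, self-contained Win32 sketch (window class name and caption invented, error handling omitted): BeginPaint reports the invalid rectangle in ps.rcPaint, and a careful window limits its drawing to that region instead of repainting everything:

```cpp
// Minimal Win32 sketch: the WM_PAINT handler uses the invalid rectangle
// reported by BeginPaint (ps.rcPaint) to repaint only the dirty region.
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);      // clips drawing to the invalid region
        // ps.rcPaint is the smallest rectangle covering everything marked dirty;
        // a careful window only redraws what intersects it.
        FillRect(dc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));
        EndPaint(hwnd, &ps);                 // validates the region so WM_PAINT stops
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE, LPSTR, int show)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.lpszClassName = TEXT("PaintDemo");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("PaintDemo"), TEXT("WM_PAINT demo"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             400, 300, NULL, NULL, inst, NULL);
    ShowWindow(hwnd, show);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```

Nothing stops a window from ignoring ps.rcPaint and repainting everything anyway, which is exactly the wasteful behavior described above.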
This kind of graphics handling doesn't necessarily exist today. I would say that people have refrained from writing really efficient, virtualizing rendering solutions because the benefit/cost ratio is rather bad.
Something Windows Presentation Foundation (WPF) does, which I think is far superior to most other GUI APIs, is that it splits layout updates and rendering updates into two separate passes. And while WPF is managed code, the rendering engine is not. What happens with rendering is that the managed WPF layer builds a command queue (this is what DirectX and OpenGL do) which is then handed off to the native rendering engine. What's a bit more elegant here is that WPF will then try to retain any computation which didn't change the visual state. A trick, if you will, where you avoid costly rendering calls for things that don't have to be rendered (virtualization).
In contrast to WM_PAINT, which tells a Win32 window to repaint itself, a WPF app checks which parts of the window require repainting and repaints only the smallest change.
Now, WPF is not supreme; it's a solid effort from Microsoft, but it's not the holy grail yet... the code which runs the pipeline could still be improved, and the memory footprint of any managed app is still more than I would want. But I hope this is the kind of answer you are looking for.
WPF is able to do some things asynchronously rather decently, which is a huge deal if you want to make a really responsive, low-latency/low-CPU UI. Asynchronous operation is about more than off-loading work to a different thread.
To summarize: a slow and expensive GUI means too much repainting, and the kind of repainting which is very expensive, i.e. repainting the entire surface area.
It does to some degree depend on the language. You might have noticed that Java and RealBasic applications are a fair bit slower than their C-based (C++, C#, Objective-C) counterparts.
However, GUI applications are much more complex than command-line apps. A terminal window only needs to draw a simple window that doesn't support buttons.
There are also multiple loops for extra inputs and features.
I think that you can find some interesting thoughts on this topic in "Window System Design: If I had it to do over again in 2002" by James Gosling (the Java guy, also known for his work on pre-X11 windowing systems). Available online here[pdf].
The article focuses on the positive side (how to make it fast), not on the negative side (what's making it slow), but it is still a good read on the topic.
Speed and learnability do not directly fight each other, but it seems easy enough to design a GUI that lacks either (or both) of them. GUI designers seem to prefer 'easy to learn' most of the time, even when 'fast to apply' would be wiser.
There are only a few UI concepts or programs that are weighted towards maximizing the peak efficiency of whatever you are doing with the program. Most of them haven't become common.
Normal people prefer gedit instead of vim. For normal people there are already good-enough GUIs because there was tons of research on them two decades ago.
I'd like to get some advice on designing UIs that make the tradeoff in favor of 'fast to apply' rather than 'easy to learn'.
We have a product in our lineup that has won numerous awards based largely on its ability to provide more power with an easier interface than any of our competitors. I designed the interface a few years after holding a position in one of Bell Labs' human interface research groups so I had a pretty clear idea of what constituted "success" when I approached it. I have four pieces of design advice for creating easy but powerful interfaces.
First, select a metaphor that makes sense in their environment and do your best to stick to it. This doesn't have to be a physical metaphor although that can help if working with people who are not tech savvy. This was popular in the early days of Windows but its value remains. We used a "folder and page" metaphor that permitted us to organize a wide range of tasks while not crimping power users' style.
Second, offer a consistent layout relationship between data display and tasks. In our interface, each "page" displays a set of action buttons in the exact same position and, wherever possible, uses the same actual buttons. Thus, once one page is learned, the user has a head start on learning the rest. One of these buttons, always placed in a distinctive position, is a "Help" button...which brings me to point #3. The more general rule is: find ways of leveraging learning in one area to assist in learning others.
Third, offer context-sensitive help and make sure that it addresses the user's primary question (which is usually "what do I do now"?) How often have you seen technical help that simply shows you the Inheritance tree, constructor syntax and an alphabetical list of methods? That isn't help, it is abuse. We focused all of our help on walking people through sample tasks. In particularly tough areas, we also offered multimedia tutorials.
Fourth, offer users the ability to customize the interface. Our users often had no use for specific "pages" (analysis types) in their work. Thus, we made it very simple to turn them off so that the user would see an interface that was no more complicated than it had to be. Our app was usually installed by a power user and then used by multiple staff members so this was more of a win for us because we could usually count on the power user to understand what to shut off. However, I think it is good advice in general.
Good luck!
Autocad has a console mode. As you do things using the mouse and toolbars, the text equivalent of those commands is written to the console. You can type commands directly in there. This provides a great way to learn the power-user names for commands (they are very short, like unix commands), which greatly aids the process of moving from beginner to productive power-user. Generally speaking, one primary focus has to be on minimising movement between the mouse and keyboard, so put lots of functionality into the mouse, or into the keyboard, because when you have to move your hands between them, there is a real delay in trying to find the right place to put them.
Beyond avoiding an angry fruit salad, just try to make it as intuitive as possible. Typically, programs with a very frustrating UI share one common problem: the developers didn't define a clear scope of what the program would actually do before marrying themselves to a UI design.
It's not so much a question of 'easy'; some people jump right into the UI and begin writing stuff to back the interface, rather than writing the core of a planned program and then planning an interface to use it.
This goes for web apps, desktop apps .. or even command line programs. A good design means writing the user interface after (and only after) you are sure that 'scope creep' is no longer a possibility.
Sure, you need some interface to test your program, but be prepared to trash it and do something better prior to releasing the program. Otherwise, there's a good chance that the UI is only going to make sense to you.
Rant (or, Stuff I think you should keep in mind):
Speed and learnability do directly fight each other. A menu item tells you what it does so that you don't have to remember. But it's much slower than a keyboard shortcut that you have to memorize to benefit from. The general technique for resolving this conflict seems to be allowing more than one way of doing things. While one way of doing something usually cannot be both fast and easy to learn, you can often provide two ways to accomplish the same task: one that's fast, and one that's obvious.
There are different kinds of people. The learning gap is a result of interest, motivation, intellectual capacity, etc. There is a class of person that will never bother to even learn which menu provides the action they want, and they'll scrub the menubar every time. There is also a (minority) class of person that thinks vim (or emacs) is the best thing since sliced bread. Most people probably fall somewhere in between these extremes.
My answer to the actual question:
I think you are asking how to strive for a fast UI. Your question wasn't particularly clear (to me).
First of all, be consistent. This helps both speed and learnability. Self consistency is the most important, but consistency with your environment may also be important.
For real speed, require as little attention and motion as possible. Keyboard shortcuts are fast because experienced users know where they are (they don't have to look), and their hands are already on the keyboard. Especially avoid forcing the user to change their position in front of the computer (e.g., moving one hand between the mouse and keyboard).
The keyboard is almost always faster than the mouse.
Customization (especially the ability to write custom scripts) will let power users make the interface work the way that is fastest for their specific habits.
Make it possible to get by without the most powerful features. All you need to know in order to survive in vim is "i, ESC, :wq, :q!". With that, you can use vi about the same way a lot of people use notepad. but once you start learning "h,j,k,l,w,b,e,d,c" (and so on) you get much more efficient. So there is a steep learning curve, but you can get by until you surmount it.
Keep in mind that if you focus on interface efficiency, you will probably limit your user base. Vim is popular among programmers, but lots of programmers use other tools, and it's virtually unknown among non-programmers. Most people want easy, not fast. Some want a balance. A very few just want fast.
I would like to point you towards Kathy Sierra's old blog for thoughts on 'easy to learn' and 'fast to apply' — I don't necessarily agree there needs to be a tradeoff between the two.
Three posts to get you started:
How much control should users have? This post ponders on whether 'fast to apply' is the ideal we should strive for.
The hi-res user experience talks about what you say about "normal people" vs. others. It's not so much that there are different kinds of people, but there are different levels of learning/expertise/involvement. Some are satisfied with less, some need more. How you get from less to more is arguably pretty much the same for everyone.
Finally, Featuritis vs. the Happy User Peak talks about the scope creep pointed out by #tinkertim.
Have you seen Gimp shortcuts?
Use nice visual controls and show their keyboard shortcuts when hovering over a control; that will help users learn the fast mode. If your software copies some behavior from other programs, copy the shortcut mappings from them as well (such as Copy/Paste/New Tab/Close Window/etc.), but allow them to be dynamically re-mapped, as in Gimp. For repeated operations you could implement an action recorder. But it depends on the type of software.
The main thing to be careful of is putting UI elements where they are most commonly located for other applications in that environment. For example, if you're going to make use of a menu system, people are accustomed to it being along the top of the window by default in a desktop application. If you're in a web browser, a menu system on a webpage seems out of place because it's not a consistent feature. If you're going to have an options/preferences configuration window, people are used to finding it under the Tools menu, occasionally under the Edit menu. The main thing with keeping a UI "easy to learn" is that your UI elements shouldn't break the mold too much compared to how they're used in other applications.
If you haven't had the opportunity to see Mark Miller's presentation on The Science of Great User Experience, I'd recommend you watch the DNR TV episodes Part 1 and Part 2.
While writing my own UI, I've understood a couple of things myself.
I imitated vim, but at the same time realized why it's so fast to use for text editing. It is because it acknowledges one thing: people prefer doing one thing at a time (inserting text, navigating around, selecting text), but they may switch between tasks often.
This means that you can pack different activities into different modes if you keep the mode switching schemes simple. It gives space for more commands. The user also gets better chances at learning the full interface because they are sensibly grouped already.
Vim is practically stuffed full of commands; every letter on the keyboard does something in vim, depending on the mode. Still, I can remember most of them. And it's all because of the modes.
I know a bunch of projects that sneer at mode-dependent behavior. The main argument is the uncertainty about which mode you are in. In vim I'm never uncertain about which mode I am in. Therefore I'd say the interface design is a failure if a trained user fails to recognize which mode the interface is operating in at the moment.
I'm a computer science student designing a project, and I've started wondering what good examples there are of software, or even hardware, that toe the line between being feature-rich and usable for regular users and being too intimidating for new users. Also, could anyone recommend any good tips/books for designing good-quality applications that are feature-rich but not "bloated"?
"Make everything as simple as possible, but not simpler." - Albert Einstein
"Perfection is reached not when there is nothing left to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry
I am not trying to be flippant but these quotes really are the best advice. Simplicity of design should be your goal. Not that achieving simplicity is easy! On the contrary, it is quite difficult but it is possible.
Try thinking about things a bit differently. Rather than
How many things can I add before this becomes bloated?
try
What are the fewest number of features and elements I can include while still providing a superior experience for my users?
Here's a good set of slides from a presentation on the topic: Rescue Princess 2.0.
The first order of business should just be keeping the application easy to use. Beyond that, all I can say is, beware of writing features for an imaginary user: make sure someone actually needs it before you start coding.
As a direct answer to your question: pretty much any Microsoft product. I'm showing my bias here, but Microsoft has a strong tendency to keep their codebase, and add features on top of features until the original functionality of the app is nearly lost beneath mounds of accreted crud.
Look at MS Word, for example; while you can still just open it up and start typing, god forbid you want to renumber a section of your document while leaving the rest alone. Heaven forbid you want to generate a Table of Contents that includes references to an Appendix. This sort of stuff is de rigueur for word processors, and Word supports it; it just supports it in a way that you cannot get it done without a manual, several cups of coffee, and bandages to stop the bleeding from banging your head on the desk.
Microsoft isn't alone in doing this; this tends to happen all the time, with all sorts of products; but they are among the worst offenders I've found.
1: What do your users need and want, and
2: Which features will you have time to implement?
Your question is pretty general. Which features constitute bloat? That kind of depends on whether you're writing an antivirus scanner, an OS or a word processor.
There is no clear barrier between "good" and "too much".
However, it depends on what you want to do.
If you're developing an SDK, I recommend splitting your implementation into several small libraries (rather than one big library; SDL, for example, is split into the SDL core, SDL_Mixer, SDL_Image, etc.).
If you're developing an application, keep a module-based system and a plug-in mechanism.
That way, new features can be added more easily and bloat can be more easily detected.
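As a hedged sketch of what such a plug-in mechanism can boil down to (all names below are invented), the core defines a small interface and a registry, and optional features live behind it rather than inside the core:

```cpp
// Invented example of a minimal plug-in seam: the core only knows the
// Plugin interface and the registry; optional features register themselves
// and can be shipped, omitted, or removed without touching the core.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

class Plugin {
public:
    virtual ~Plugin() = default;
    virtual std::string Name() const = 0;
    virtual void Run() = 0;
};

class PluginRegistry {
public:
    void Add(std::unique_ptr<Plugin> p) { plugins_.push_back(std::move(p)); }

    void RunAll() const
    {
        for (const auto& p : plugins_) {
            std::cout << "running feature: " << p->Name() << "\n";
            p->Run();
        }
    }

private:
    std::vector<std::unique_ptr<Plugin>> plugins_;
};

// One optional feature, kept outside the core application logic.
class SpellChecker : public Plugin {
public:
    std::string Name() const override { return "spell checker"; }
    void Run() override { std::cout << "  (checking spelling...)\n"; }
};

int main()
{
    PluginRegistry registry;
    registry.Add(std::make_unique<SpellChecker>());   // leave this out: no bloat
    registry.RunAll();
}
```

Whether a given feature ends up feeling like "great" or "bloat" then becomes a packaging decision rather than a rewrite.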
You may get to a point where you'll add new features that some will consider "great" and others "bloat". Otherwise, your application may reach a point where some will call it "feature-poor" and others "just enough".
This isn't an exact quote, but the idea was something like this:
A piece of software is perfect not when there is nothing more to add, but when there is nothing more to remove.
In essence, the simpler and more to-the-point a piece of software is, the better.
To get examples of good software design, take a look at programs that are popular today. Google applications would be a nice place to look. Skype perhaps. Heh, even StackOverflow. :)
If you want intimidating, go to the world of CAD. Check out Blender, for example. That's a free 3D design package. A good tool, I'm told, but the UI has so many buttons/panels/menus/etc. that it makes baby bunnies cry. Unfortunately I cannot say whether this would be a good example of a "bad" UI. 3D design is a very complex process and all those tools are probably in the right place. But it's definitely intimidating. :)
Bad UI design can often be found in proprietary software that comes with proprietary hardware. Unfortunately I cannot give you any examples off the top of my head.
I always tend to design my projects so that they're just skeletons which are as extensible as possible. The limiting factors are performance, complexity, and third-party limitations.
This way you can add additional features after finishing the basic structure. A user can also add the features they need.
This probably does not work very well for GUI applications, which should offer good usability without much configuration, but I'm sticking with this approach for the libs I develop. (They're used by other coders who like to have a highly modifiable piece of software.)
It's not very hard to develop an application/lib which is bloated with features. But it is hard to develop an app which can easily be extended by other developers/users to match their own needs.
Develop a wide-ranging plug-in system so you add and take out stuff at any time. Problem solved. If only that was as easy as writing spaghetti code. ;)