Choosing a strategy for BI module - reporting

The company I work for produces a content management system (CMS) with various add-ons for publishing, e-commerce, online printing, etc. We are now in the process of adding a "reporting module", and I need to investigate which strategy to follow. The "reporting module" is otherwise known as Business Intelligence, or BI.
The module is supposed to be able to track item downloads and executed searches and produce various reports from them. Actually, it is not that important what kind of data is being churned, as in the long term we might want to be able to push in whatever we think is needed and get a report out of it.
Roughly speaking, we have two options.
Option 1 is to write a solution based on Apache Solr (specifically, using https://issues.apache.org/jira/browse/SOLR-236). Pros of this approach:
free / open source / good quality
we use Solr/Lucene elsewhere so we know the domain quite well
total flexibility over what is being indexed as we could take incoming data (in XML format), push it through XSLT and feed it to Solr
total flexibility of how to show search results. Similar to step above, we could have custom XSLT search template and show results back in any format we think is necessary
our frontend developers are proficient in XSLT so fitting this mechanism for a different customer should be relatively easy
Solr offers realtime / full text / faceted search, which are absolutely necessary for us (see the sketch after this list). A quick prototype (based on Solr, 1M records) was able to deliver search results in 55ms. Our estimated maximum is about 1bn rows (not a lot for a typical BI app) and if worst comes to worst, we can always look at SolrCloud, etc.
there are companies doing very similar things using Solr (Honeycomb Lexicon, for example)
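To make the Solr option more concrete, here is a minimal SolrJ sketch of indexing one tracking event and running a faceted query over it. The core URL and the field names (type, item_id, timestamp) are invented for illustration; in our setup the fields would come out of the XSLT step, and only the SolrJ calls themselves are standard API (SolrJ 6+ style; older versions use a slightly different client constructor).

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrInputDocument;

    public class ReportingIndexSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical core URL - adjust to whatever Solr setup you actually run.
            HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/reporting").build();

            // Index one tracking event; these fields would be produced by the XSLT step.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "download-42");
            doc.addField("type", "item_download");
            doc.addField("item_id", "item-1001");
            doc.addField("timestamp", "2011-01-15T10:30:00Z");
            solr.add(doc);
            solr.commit();

            // Faceted query: how many downloads per item?
            SolrQuery q = new SolrQuery("type:item_download");
            q.setRows(0);            // only the facet counts are needed
            q.setFacet(true);
            q.addFacetField("item_id");
            QueryResponse rsp = solr.query(q);
            System.out.println(rsp.getFacetField("item_id").getValues());

            solr.close();
        }
    }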
Cons of this approach:
SOLR-236 might or might not be stable; moreover, it's not yet clear when/if it will be included in an official release
there would possibly be some stuff we'd have to write to get some BI-specific features working. This sounds a bit like reinventing the wheel
the biggest problem is that we don't know what we might need in the future (such as integration with some piece of BI software, export to Excel, etc.)
Option 2 is to do an integration with some free or commercial piece of BI software. So far I have looked at Wabit and will have a look at QlikView, possibly others. Pros of this approach:
no need to reinvent the wheel, software is (hopefully) tried and tested
would save us time we could spend solving problems we specialize in
Cons:
as we are a Java shop and our solution is cross-platform, we'd have to eliminate a lot of options which are in the market
I am not sure how flexible BI software can be. It would take time to go through some BI offerings to see if they can do flexible indexing, real time / full text search, fully customizable results, etc.
I was told that open source BI offerings are not mature enough, whereas commercial BIs (SAP, others) cost a fortune: their licenses start from tens of thousands of pounds/dollars. While I am not against a commercial choice per se, it will add to the overall price, which can easily become just too big
not sure how well BI is made to work with schema-less data
I am definitely not the best candidate to find the most appropriate integration option on the market (mainly because of my absence of knowledge in the BI area); however, a decision needs to be made fast.
Has anybody been in a similar situation and could advise on which route to take, or even better - advise on possible pros/cons of the option #2? The biggest problem here is that I don't know what I don't know ;)

I have spent some time playing with both QlikView and Wabit, and, I have to say, I am quite disappointed.
I had an expectation that the whole BI industry actually has some science under it but from what I found this is just a mere buzzword. This MSDN article was actually an eye opener. The whole business of BI consists of taking data from well-normalized schemas (they call it OLTP), putting it into less-normalized schemas (OLAP, snowflake- or star-type) and creating indices for every aspect you want (industry jargon for this is data cube). The rest is just some scripting to get the pretty graphs.
OK, I know I am oversimplifying things here. I know I might have missed many different aspects (nice reports? export to Excel? predictions?), but from a computer science point of view I simply cannot see anything beyond a database index here.
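To make that concrete: a "cube" boils down to aggregates pre-grouped along the dimensions of a denormalized fact table. A toy Java illustration (the fact fields and numbers are invented):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class ToyCubeSketch {

        /** One row of a denormalized "fact table": an item download on a given day in a given country. */
        record DownloadFact(String day, String country, String itemId) {}

        public static void main(String[] args) {
            List<DownloadFact> facts = List.of(
                    new DownloadFact("2011-01-01", "UK", "item-1"),
                    new DownloadFact("2011-01-01", "UK", "item-2"),
                    new DownloadFact("2011-01-01", "DE", "item-1"),
                    new DownloadFact("2011-01-02", "UK", "item-1"));

            // The "cube": download counts pre-grouped along the (day, country) dimensions.
            Map<String, Map<String, Long>> cube = facts.stream()
                    .collect(Collectors.groupingBy(DownloadFact::day,
                             Collectors.groupingBy(DownloadFact::country, Collectors.counting())));

            System.out.println(cube); // e.g. {2011-01-01={UK=2, DE=1}, 2011-01-02={UK=1}} (map order varies)
        }
    }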
I was told that some BI tools support compression. Lucene supports that, too. I was told that some BI tools are capable of keeping all index in the memory. For that there is a Lucene cache.
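For reference, keeping a whole Lucene index in memory is essentially a one-liner via RAMDirectory. A minimal sketch against a recent Lucene API (constructor signatures differ slightly between versions):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.RAMDirectory;

    public class InMemoryIndexSketch {
        public static void main(String[] args) throws Exception {
            RAMDirectory dir = new RAMDirectory();   // the whole index lives in RAM

            IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
            Document doc = new Document();
            doc.add(new TextField("search_terms", "annual sales report", Field.Store.YES));
            writer.addDocument(doc);
            writer.close();

            IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
            TopDocs hits = searcher.search(new TermQuery(new Term("search_terms", "report")), 10);
            System.out.println("hits: " + hits.totalHits);
        }
    }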
Speaking of the two candidates (Wabit and QlikView) - the first is simply immature (I got dozens of exceptions when trying to step outside of what was suggested in their demo) whereas the other only works under Windows (not very nice, but I could live with that) and the integration would likely require me to write some VBScript (yuck!). I had to spend a couple of hours on the QlikView forums just to get a simple date range control working, and failed because the Personal Edition I had did not support the downloadable demo projects available on their site. Don't get me wrong, they're both good tools for what they have been built for, but I simply don't see any point in doing an integration with them as I wouldn't gain much.
To address the (arguable) immaturity of Solr, I will define an abstract API so I can move all the data to a database which supports full text queries if anything goes wrong. And if worst comes to worst, I can always write stuff on top of Solr/Lucene if I need to.
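A minimal sketch of what that abstraction could look like (all names invented): the CMS only ever talks to the interface, so a Solr-backed implementation can later be swapped for a full-text-capable database without touching the callers.

    import java.util.List;
    import java.util.Map;

    /** Hypothetical abstraction over whatever engine stores the tracking data. */
    public interface ReportingBackend {

        /** Push one tracking event (download, search, ...) as a flat set of fields. */
        void index(Map<String, Object> event);

        /** Full text / faceted query; returns matching events as field maps. */
        List<Map<String, Object>> query(String fullTextQuery, String facetField, int maxRows);
    }

    /** One implementation delegates to Solr; another could sit on an RDBMS with full-text indexes. */
    class SolrReportingBackend implements ReportingBackend {
        @Override
        public void index(Map<String, Object> event) {
            // build a SolrInputDocument from the map and add it via SolrJ
            throw new UnsupportedOperationException("sketch only");
        }

        @Override
        public List<Map<String, Object>> query(String fullTextQuery, String facetField, int maxRows) {
            // translate to a SolrQuery, run it, map the results back to plain maps
            throw new UnsupportedOperationException("sketch only");
        }
    }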

If you're truly in a scenario where you're not sure what you don't know, I think it's best to explore an open-source tool and evaluate its usefulness before diving into your own implementation. It could very well be that using the open-source solution will help you further crystallise your own understanding and required features.
I have worked previously with an open-source solution called Pentaho. I seriously felt that I understood a whole lot more by learning to use Pentaho's features for my own ends. Of course, as is the case when working with most open-source solutions, Pentaho seemed a bit intimidating at first, but I managed to get a good grip on it in a month's time. We also worked with the Kettle ETL tool and Mondrian cubes - which I think most of the serious BI tools these days build on top of.
Earlier, all these components were independent, but of late I believe Pentaho has taken ownership of all these projects.
But once you're confident about what you need and what you don't, I'd suggest building some basic reporting tool of your own on top of a Mondrian implementation. Customising a sophisticated open-source tool can indeed be a big issue. Besides, there are licenses to be wary of. I believe Pentaho is GPL, though you might want to check on that.

First you should make clear what your reports should show. Which reporting features do you need? Which output formats do you want? Do you want to show it in the browser (HTML), as PDF, or with an interactive viewer (Java/Flash)? Where are the data (database, Java, etc.)? Do you need ad-hoc reporting or only some hard-coded reports? These are only some of the questions.
Without answers to these questions it is difficult to give a real recommendation, but my general recommendation would be i-net Clear Reports (used to be called i-net Crystal-Clear). It is a Java tool. It is a commercial tool, but the cost is lower than SAP and co.

Related

Scalability of open source XML Databases

We are looking to develop a reporting application that reports on data stored in a large number of XML files: ~3,000,000 files ranging in size from 7KB to 5MB (each file conforms to the same schema). I'm guessing that there will be around 200GB of XML. I'm looking at a number of open source XML databases (Sedna, BaseX and eXist-db) and I'm not sure how well these systems will scale; I read a comparison of these three databases here, which is where my concerns about scalability originated.
Some details regarding what we want to do: we won't be changing the data in any of the XML files, and new files will be added daily. Since we are concerned with reporting, query performance is important to us, and the time it takes to add and index new files isn't a high priority for us.
I’m wondering if anyone has experience using these systems at similar scales? I’ve looked at the BaseX statistics page and see some fairly large XML instances but no mention of performance.
We don’t require an open source product and the MarkLogic system looks like it can fit the bill nicely, but I’m curious what’s been done with open source products.
I think it is impossible to answer your question with either a yes or no. It is really impossible to state anything about performance from the little details that you have given.
Performance is typically based on the queries that you want to perform and the distribution of your data. Not to mention, what you consider to be "acceptable".
In the paper you referenced, it is interesting to note that they state they could not get the new range indexes in the eXist 2.2 preview to work. Certainly without those, they would have seen much worse performance. Also, at the end they state that they will select Sedna as they can overcome the problems with Sedna; it was not clear to me why that was, i.e. do they have C++ devs that can work with Sedna but no Java devs that could work with eXist or BaseX? Finally, the version of Java they used for testing eXist and BaseX is rather old; the next release of eXist (3.0) will only support Java 8 and newer.
I would be surprised if you could not store 200GB of data into BaseX, eXist or Sedna, but without knowing your data and the sort of queries you want to execute, I cannot comment on query performance.
I think you would be best to do a small trial of either one or all, in a manner not dissimilar to that linked article.
Just want to share my experience on this topic. My experience is limited to much smaller data sets - roughly 50k documents of about 1GB total size. We use Sedna XML DB for this purpose. We do not change documents but rather overwrite existing documents when changes occur, and we have a lot of read-only XQueries including big reports.
In short, my opinion is that Sedna won't work for you unless you find a way to replicate it to another server to be used for reading. I have experienced major performance issues related to collection locks under a rather moderate load on the database when performing some long-lasting reporting XQueries. As far as I know, Sedna does not offer replication capabilities, but you can probably adopt some solution on top of Sedna. For example, quick googling revealed some research in this area. You can try asking on the Sedna mailing list. Among other disadvantages are the lack of XQuery 3.0 support and seemingly frozen further development. However, the support is still quite active on the mailing list.
I also have some experience with eXist-db, but I use it more as an XML processing and pipelining platform than as XML storage. Still, it looks a bit more promising in relation to scaling. Although I have not used its replication capabilities, they are mentioned in the docs. I suggest you try searching/asking on the mailing list as well.

Bleeding edge vs field tested technology. How will you strike a balance [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
I have been pondering about this for some time. How do you pick a technology (I am not talking about Java vs .NET vs PHP) when you are planning a new project or maintaining an existing project in an organization?
Arguments for picking the latest technology
It might overcome some of the limitations of the existing technology (think NoSQL vs RDBMS when it comes to scalability). Sometimes the latest technology is backward compatible and you only gain the new features without breaking the old functionality
It will give a better user experience (maybe HTML5 for video, just a thought)
Will cut down development time/cost and make maintenance of the code base relatively easy
Arguments for picking field tested technology/against picking a bleeding edge technology
It has not stood the test of time. There can be unforeseen problems. Convoluted solutions might lead to more problems during the maintenance phase and the application might become a white elephant
Standards might not yet be in place. Standards might change and significant rework might be needed to make the project adhere to standards. Choosing the field tested technology will save these efforts
The new technology might not be supported by the organization. Supporting a new (or for that matter a different technology) would require additional resources
It might be hard to get qualified resources with bleeding edge technology
From a developer's perspective, I do not see a reason not to get your hands dirty with some new technology (in your spare time), but you might be limited to open source / freeware / developer editions
From an organization's perspective, it looks like a double-edged sword. Sit too long on a "field tested" technology and good people might move away (not to mention that there will always be people who prefer familiar technology and refuse to update their knowledge). Try an unconventional approach and you risk overrunning the budget/time, not to mention the unforeseen risks
TL;DR
Bottom line. When do you consider a technology mature enough so that it can be adopted by an organization?
Most likely you work with a team of people and this should be taken into consideration as well. Some ways to test/evaluate technology maturity:
Is your team and management receptive to using new technology at all? This might be your biggest barrier. If you get the sense that they aren't receptive, you can try big formal presentations to convince them... or just go try it out (see below).
Do some Googling around for problems people have with it. If you don't find much, then that lack of answers is exactly what you'll run into when you have problems of your own.
Find some new small project with very low risk (e.g. something just you or a couple people would use), apply new technology in skunkworks fashion to see how it pans out.
Try to find the most mature of the immature. For instance, if you're thinking about a NoSQL type of data store: all of the NoSQL stuff is immature when you compare it against an RDBMS like Oracle that has been around for decades, so look at the most mature of those solutions, one that has support organizations, either professional or via support groups.
The easiest project to start with is to re-write an existing piece of software. You already have your requirements: make it just like that. Just pick a small piece of the software to re-write in the new technology, preferably something you can hammer at with unit/load testing to see how it stands up. I'm not advocating re-writing an entire application to prove it out, but a small, measurable chunk.
A few rules of thumb.
Only use one "new" technology at a time. The more new things you use, the greater chance of there being a serious problem.
Make sure there is an advantage to using it. If that cool, new technology does not give you some advantage, why are you using it?
Plan for the learning curve. There will be aspects of the new technology that you do not know about. You, and your team, will have to spend more time learning about them than you think you will.
If possible, try the new technology on a small, less important project first. Your company's accounting system is not the best place to experiment.
Have a backup plan. New technologies don't always turn out to be worth it. Know when you are in a "coffin corner" and it is time to bail out.
There's a difference between "field tested" and "out-of-date." Developers (myself included) generally prefer bleeding edge stuff. To some extent, you have to keep your development staff happy and interested in their jobs.
But I've never had a customer unhappy with field tested technology. They are generally unaware or unconcerned about the technology that is used in producing a product. Their number one priority is how it works in their daily interactions with it.
When starting a new project, two questions come to mind in evaluating if I should move to a new platform:
1) What benefits do I get from going to the new platform. If it offers me a dramatically reduced development time or significant performance increases for the users, I will consider a semi-bleeding edge technology.
2) What risks are associated with the new platform. Is it likely that there are some scenarios that I will encounter that aren't quite worked out in the new platform? Is it likely that support for this new platform will fizzle out and I'll be left holding the bag on supporting a deprecated environment? Are there support channels in place that I can use if I get stuck at a critical juncture of my project?
Like everything, it's a cost/benefit analysis. Generally speaking, while I always learn and train on new technologies, I won't build something for a client using a technology (environment, library, server platform, etc) that hasn't been widely adopted by a large number of developers for at least 6-12 months.
That depends on the context. Each organisation has to make its own decisions. The classic literature on this topic is Crossing the Chasm by Geoffrey A. Moore.
If the company/community developing the product is known for good products then I'm very happy to put a safe bet on their new products.
For example I would be quite happy to develop on Rails 3 or Ruby 1.9, as I am quite sure they will be fine when finalized.
However I wouldn't write much code in superNewLang until I was convinced that they had a great, well supported product, or they had a feature I couldn't live without.
I will tend to get the most trusted product that suits all my needs.
You got to ask yourself only one question... do I feel lucky?
Where's the money?
Do you get to profit big and fast enough even if tech X is a flop ?
If not, does new tech bring higher perf for a long time?
Like 64-bit CPU, Shader Model 4, heavy multi-threading
Do you see a lot of ideological trumpeting around it?
"paradigm shift" blurbs etc. - wait 2-8 yr till it cools off and gets replaced :-)
Is it bulky and requires 2x of everything just to run?
let your enemy pay for it first :-)
Can you just get some basic education and a trial project without risking anything?
might as well try unless it looks like a 400 lb lady who doesn't sing :-)
There is no general answer to such a question - please go to #1

Where can I find sample algorithms for analyzing historical stock prices?

Can anyone direct me in the right direction?
Basically, I'm trying to analyze stock prices and see if I can spot any patterns. I'm using PHP and MySQL to do this. Where can I find sample algorithms like the ones used in MetaStock or thinkorswim? I know they are closed source, but are there any tutorials available for beginners?
Thank you,
P.S. I don't even know what to search for in google :(
A basic, educational algorithm to start with is a dual-crossover moving average. Simply chart fast (say, 5-day) and slow (say, 10-day) moving averages of a stock's closing price, and you have a weak predictor of when to buy long (fast line goes above slow) and sell short (slow line goes above the fast). After getting this working, you could implement exponential smoothing (see previously linked wiki article).
That would be a decent start. Take a look at other technical analysis techniques, but do keep in mind that this is quite a perilous method of trading.
Update: As for actually implementing this? You're a PHP programmer, so here is a charting library for PHP. This is the one I used a few years ago for this very project, and it worked out swimmingly. Maybe someone else can recommend a better one. If you need a free source of data, take a look at Yahoo! Finance's historical data. They dispense CSV files containing daily opening prices, closing prices, trading volume, etc. of virtually every indexed corporation.
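To show the logic of the crossover rule itself, here is a rough sketch, written in Java for illustration (the same loop ports directly to PHP); the closing prices and window lengths are made up.

    import java.util.ArrayList;
    import java.util.List;

    public class DualCrossoverSketch {

        /** Simple moving average of the window ending at index i (inclusive). */
        static double sma(double[] closes, int i, int window) {
            double sum = 0;
            for (int j = i - window + 1; j <= i; j++) {
                sum += closes[j];
            }
            return sum / window;
        }

        public static void main(String[] args) {
            // Made-up closing prices; in practice read them from your MySQL table or a CSV feed.
            double[] closes = {10.0, 10.2, 10.1, 10.4, 10.8, 10.6, 10.9, 11.2, 11.0, 10.7,
                               10.5, 10.3, 10.6, 10.9, 11.1, 11.4, 11.3, 11.6, 11.8, 11.5};
            int fast = 5, slow = 10;

            List<String> signals = new ArrayList<>();
            for (int i = slow; i < closes.length; i++) {
                boolean fastAboveNow  = sma(closes, i, fast)     > sma(closes, i, slow);
                boolean fastAboveThen = sma(closes, i - 1, fast) > sma(closes, i - 1, slow);
                if (fastAboveNow && !fastAboveThen) {
                    signals.add("day " + i + ": BUY (fast crossed above slow)");
                } else if (!fastAboveNow && fastAboveThen) {
                    signals.add("day " + i + ": SELL (fast crossed below slow)");
                }
            }
            signals.forEach(System.out::println);
        }
    }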
Check out the algorithms at Investopedia, and FM Labs has formulas for a lot of technical analysis indicators.
First you will need a solid math background: statistics in general, correlation analysis, linear algebra... If you really want to push it, check out dimensional transposition. Then you will need a solid basis in Data Mining. Associations can be useful if you want to link strict numerical data with news headlines and other events.
One thing is for sure: you will most likely not find pre-digested algorithms out there that will make you rich...
I know someone who is trying just that... He is somewhat successful (meaning he is not losing money and is making a bit) and making his own algorithms... I should mention he has a doctorate in Actuarial science.
Here are a few more links... hope they help out a bit
http://mathworld.wolfram.com/ActuarialScience.html
http://www.actuary.com/actuarial-science/
http://www.actuary.ca/
Best of luck to you
Save yourself time and use programs like NinjaTrader and Wealth-Lab. Both of them are great technical analysis platforms and accept C# as a programming language for defining your trading rules. Every possible technical indicator you can imagine is already included and if you need something more advanced you can always write your own indicator. You would also need a lot of data in order for your analysis to be statistically significant. For US stocks and ETFs, visit www.Kibot.com. We have good experience using their data.
Here's a pattern for ya
http://ddshankar.files.wordpress.com/2008/02/image001.jpg
I'd start with a good introduction to time series analysis and go from there. If you're interested in finding patterns then the interesting term is "1D pattern matching". But for that you need nice features, so google for "feature extraction in time series". Remember GIGO (garbage in, garbage out), so make sure you have error-free stock price data for a sufficiently long time period before you start.
May I suggest that you do a little reading with respect to the Kalman filter? Wikipedia is a pretty good place to start:
http://en.wikipedia.org/wiki/Kalman_filter/
This should give you a little background on the problem of estimating and predicting the variables of some system (the stock market in this case).
But the stock market is not very well behaved so you may want to familiarize yourself with non linear extensions to the KF. Yes, the wikipedia entry has sections on the extended KF and the unscented KF, but here is an introduction that is just a little more in-depth:
http://cslu.cse.ogi.edu/nsel/ukf/
I suppose if anyone had ever tried this before then it would have been all over the news and very well known. So you may very well be on to something.
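To get a feel for what the (linear) KF actually does, here is a bare-bones one-dimensional version run over a noisy price series; the noise variances are made up, the process model is the trivial "price stays the same" one, and a real market model would need the non-linear extensions mentioned above.

    public class ScalarKalmanSketch {
        public static void main(String[] args) {
            double[] observed = {100.0, 100.8, 99.7, 101.2, 102.0, 101.5, 103.1, 102.6};

            double q = 0.01;   // process noise variance (assumed)
            double r = 1.0;    // measurement noise variance (assumed)

            double x = observed[0];  // state estimate
            double p = 1.0;          // estimate variance

            for (double z : observed) {
                // Predict: with the "price stays the same" model, only the uncertainty grows.
                p = p + q;

                // Update: blend prediction and measurement according to the Kalman gain.
                double k = p / (p + r);
                x = x + k * (z - x);
                p = (1 - k) * p;

                System.out.printf("measured %.2f -> filtered %.2f%n", z, x);
            }
        }
    }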
Use TradeStation
It is a platform that lets you write software to analyze historical stock data. You can even write programs that would trade the stock, and you can back-test your program on historical data or run it in real time throughout the day.

How to assemble a project with software products and your own code

Let's say you have a specific project at hand, it can be divided into parts, and you are not completely sure about all the difficulties that will arise.
Time is of the essence.
How do you decide whether a part should use a software product or your own code? (Considering that some tools are awesome but will require much time to learn.)
How do you choose the right software product?
How much time (as a percentage) should this stage of choosing the right product, if any, take, and how much time to evaluate a single product?
Is there a way back? Is it OK to change your mind after putting effort into a product and finding it not suitable?
I would love to hear any rules of thumb about those.
Changing your decisions is like changing your blueprint for a house while it's already being built.
It will entirely depend on what you have spent in time and money to that point.
Some considerations:
0) Understand the problem in clear and simple terms before beginning. Know what's critical to its success and then use that list to see if any software, language, or tool will aid it, at what cost, and whether the cost outweighs the benefit.
1) Use a crammer's schedule. Build it in the order of what you would build if you only had 1 day or 1 week and no more to work on it. It's amazing how much doesn't matter anymore when you have to do 50% of the features at 100% of the quality. Focus on value, value, value. Read something like 37signals' book Getting Real for more on this.
2) Do not re-invent the wheel. It always seems easier to build something from scratch. Only consider that if you are doing a fraction of the implementation and it's truly simpler - simple enough that you don't abstract until you forget what you were building. If you can build it faster, better and cheaper in the same amount of time, do it.
3) Know the features of your tools, and the benefits any tools need to give your solution. You should be familiar with or at least aware of many of the tools out there that you may or may not integrate.
4) Pick a language that is used to solve a lot of problems. Chances are you will find many great libraries and tools to build your software that will save your time. If you need something that delivers, can run, and you can lean on the smarts of others, use something established, or a language that can access .NET or Java easily if need be.
For each part of your software you recognize as a software component/package:
How do you decide whether a part should use software product or your own code?
(considering, that some tools are awesome, but will require much time to learn)
Ask yourself whether the component you are considering is a part of your product's main business core.
If not, then it is usually better to use an existing solution and not spend too much time on it.
If it is, then make sure there is no existing product that is better than what you are planning. If there is, consider purchasing licenses for it instead of developing your own product.
Search online for similar components (commercial, open source and even articles/demo-source-code).
Do any of them implement all of your requirements from the components?
How much do they cost, would it cost you more to develop and maintain a similar component?
What are the license conditions? - Are they OK for your product?
If the component includes a user interface, is it pleasant to look at and easy to use?
If you answered yes to all the above then do not develop the component yourself.
If not:
Is the component open source or published in an article / demo code? If so, is it robust, and could you take the code and improve it, or use it as an example to help you write code that is more suitable for your requirements? If so, write your own code, reusing that code as part of your own component so it is not developed from scratch.
If your answer to the above is no, then you'll have to develop your own (or you're searching in the wrong places).
How do you choose the right software product?
See answers to 1.
How much time (as a percentage) should this stage of choosing the right product, if any, take, and how much time to evaluate a single product?
Clear an entire day: search for existing components, read about them (features, prices, reviews) and download + install up to 5 of them.
Clear another day to evaluate 2-3 products: compare demos/examples, look at the code, and write 2 small examples using each (the same example, different product).
If you choose more than 3, clear another day and test the others.
Is there a way back? Is it OK to change your mind after putting effort into a product and finding it not suitable?
Always design your software so that every component is replaceable.
This guarantees that there is always "a way back".
(Use interfaces & adapter design pattern, divide to many assemblies, connect all components as loosely as possible (using events, binding, as etc.) - loose coupling.
Even if you implement something yourself make sure there is a way back - sometime you may use the wrong technology/design and have to replace a component with a new one you develop/purchase.
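A tiny Java illustration of the "always replaceable" point (all names invented): the rest of the system codes against a small interface, and each third-party product or home-grown implementation gets its own adapter behind it.

    /** The only thing the rest of the application ever sees. */
    interface ReportExporter {
        byte[] export(String reportName);
    }

    /** Adapter around some hypothetical commercial library. */
    class CommercialToolExporter implements ReportExporter {
        @Override
        public byte[] export(String reportName) {
            // call the vendor's API here; if the vendor is dropped, only this class goes away
            return new byte[0];
        }
    }

    /** Home-grown fallback implementation behind the same interface. */
    class HomeGrownExporter implements ReportExporter {
        @Override
        public byte[] export(String reportName) {
            // minimal CSV/PDF generation of our own
            return new byte[0];
        }
    }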
Other rules of thumb:
Consider which application-wide technologies to use before considering each component.
Writing in assembly would take the longest, in C less, in C++ even less, in more modern languages such as C#, Java, Delphi even less.
Which has more off-the-shelf components that are relevant to you? What does your team have experience in?
If you are using .NET (C#), then WPF could help you lower the coupling between GUI and business logic and make a better looking GUI; however, it takes time to learn how to use it (a 5-day minimum course is very much recommended).
As in any art, the difficulty is composing a good solution from a very large space of possible solutions. There are as many ways to go about this as there are developers.
I'd normally spend some time understanding the problem and stating it as clearly and succinctly as possible, preferably in written form. The problem description should be completely abstracted away from any possible solutions. Next I'd normally list the constraints that will need to be applied to the solution (time, budget, legal, political, performance, usability, skill availability within the team and so on).
Then the theory goes that you need to look on the market for something that solves the problem and meets the constraints at the same time. In practice, the process is not that straightforward: you try to identify market categories that are likely to be useful, then research them, see what is available, and continuously try to reduce the gap between the constraints and the capabilities as much as possible, often by going back and revisiting and re-negotiating the constraints.
A few generic tips:
During the research keep coming back to the original problem.
There is always more than one solution; try to extend the breadth of the search space (concentrating on very different ways of solving the problem) before going deeper.
Be clear on the number of options worth researching, and the amount of time worth spending on each of them, before making a decision whether to investigate further.
It's seldom worth finding an optimal solution, especially when the technological landscape keeps changing very rapidly. Look for a solution that is good enough: "The Paradox of Choice - Why More is Less".
It's rarely worth turning to users for help (unless they are software experts) in choosing between several options. If you've got a number of options all looking equally attractive, that means you need to go back and understand the original problem better; it's likely you've missed a requirement or two.
Some further notes on using third-party components (refers to GUI components, but easy to apply to other software areas as well).
And even more notes on scoping, composing and researching for a project.
How do you decide whether a part should use software product or your own code? (considering, that some tools are awesome, but will require much time to learn)
Ask yourself two questions.
1) Is it a mature product? If yes, then
2) How long would it take to create the functionality it provides on your own? If that time multiplied by your hourly rate is greater than the cost of the product, then use the product.
How do you choose the right software product?
Consult your network of other developers. Have they used it, did they run into problems. Consult the interweb. Create a prototype using the product. Does it work well? Any major bugs?
How much time (as a percentage) should this stage of choosing the right product, if any, take, and how much time to evaluate a single product?
It depends on the size of the project, and how critical the product is to its success. Most of the time, you are going to be able to get a high-level view of the product in a very short amount of time.
It may take just a few minutes of using it before you say, nope - not ready for prime time. If it makes it past that, a day or two of experimentation may tell you that it passes muster for your project.
If it's a huge project with many developers, then you probably want to spend more time doing a prototype application with it to be sure it's worth investing all that time in.
Is there a way back? Is it OK to change your mind after putting effort into a product and finding it not suitable?
If you find it's not working out, there's nothing wrong with going back. In fact you probably have to. Ideally you will find this out early. Not at the 11th hour. Again, this is the purpose of prototyping.
There are already some really good answers here, so I won't repeat them. However, there is one point you should definitely consider, and though I would have thought it's obvious, I haven't seen it mentioned here yet:
The personnel you have available to implement the solution, their core competency, and their general level of competence.
Who you have to implement this (assuming it's a team, and not just yourself - but relevant even if it's just you, too...) can have a HUGE effect on the outcome. If you don't have experienced programmers to help you develop this, you're better off looking for some OTS product to do the work for you... Or, if the programmers you do have are not likely to succeed, you still might want to find a solution with lower overall project risk.

Feature bloat - how much is too much?

I'm a computer science student designing a project, and I've started wondering what are good examples of software, or even hardware, that are toeing the line between being feature rich, with good usable features for regular users, and being too intimidating for new users. Also, could anyone recommend any good tips/books for designing good quality applications that are feature rich but not "bloated"?
"Make everything as simple as possible, but not simpler." - Albert Einstein
"Perfection is reached not when there is nothing left to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry
I am not trying to be flippant but these quotes really are the best advice. Simplicity of design should be your goal. Not that achieving simplicity is easy! On the contrary, it is quite difficult but it is possible.
Try thinking about things a bit differently. Rather than
How many things can I add before this becomes bloated?
try
What is the fewest number of features and elements I can include while still providing a superior experience for my users?
Here's a good set of slides from a presentation on the topic: Rescue Princess 2.0.
The first order of business should just be keeping the application easy to use. Beyond that, all I can say is, beware of writing features for an imaginary user: make sure someone actually needs it before you start coding.
As a direct answer to your question: pretty much any Microsoft product. I'm showing my bias here, but Microsoft has a strong tendency to keep their codebase, and add features on top of features until the original functionality of the app is nearly lost beneath mounds of accreted crud.
Look at MS Word, for example; while you can still just open it up and start typing, god forbid you want to renumber a section of your document while leaving the rest alone. Heaven forbid you want to generate a Table of Contents that includes references to an Appendix. This sort of stuff is de rigueur for word processors, and Word supports it; it just supports it in a way that you cannot get it done without a manual, several cups of coffee, and bandages to stop the bleeding from banging your head on the desk.
Microsoft isn't alone in doing this; it tends to happen all the time, with all sorts of products, but they are among the worst offenders I've found.
1: What do your users need, and want, and
2: Which features will you have time to implement?
Your question is pretty general. Which features constitute bloat? That kind of depends on whether you're writing an antivirus scanner, an OS or a word processor.
There is no clear barrier between "good" and "too much".
However, it depends on what you want to do.
If you're developing an SDK, I recommend splitting your implementation into several small libraries (rather than just one big library; SDL, for instance, is split into the SDL core, SDL_Mixer, SDL_Image, etc.)
If you're developing an application, keep a module-based system and a plug-in mechanism.
That way, new features can be added more easily and bloat can be more easily detected.
You may get to a point where you'll add new features that some will consider "great" and others "bloat". Otherwise, your application may reach a point where some will call it "feature-poor" and others will call it "just enough".
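In Java, one lightweight way to get such a plug-in mechanism (the feature interface below is invented) is the standard java.util.ServiceLoader: each optional feature ships in its own jar and is discovered at runtime, so leaving a jar out simply leaves the feature out.

    import java.util.ServiceLoader;

    /** Contract every optional feature implements (hypothetical; real providers must be public with a no-arg constructor). */
    interface FeaturePlugin {
        String name();
        void activate();
    }

    class PluginHost {
        public static void main(String[] args) {
            // Finds every implementation declared in META-INF/services/FeaturePlugin on the classpath.
            ServiceLoader<FeaturePlugin> plugins = ServiceLoader.load(FeaturePlugin.class);
            for (FeaturePlugin plugin : plugins) {
                System.out.println("activating " + plugin.name());
                plugin.activate();
            }
        }
    }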
This isn't an exact quote, but the idea was something like this:
A piece of software is perfect not when there is nothing more to add, but when there is nothing more to remove.
In essence, the simpler and more to-the-point a piece of software is, the better.
To get examples of good software design, take a look at programs that are popular today. Google applications would be a nice place to look. Skype perhaps. Heh, even StackOverflow. :)
If you want intimidating, go to the world of CAD. Check out, for example, Blender. That's free 3D design software. A good tool, I'm told, but the UI has so many buttons/panels/menus/etc. that it makes baby bunnies cry. Unfortunately I cannot say whether this would be a good example of a "bad" UI. 3D design is a very complex process and all those tools are probably in the right place. But it's definitely intimidating. :)
Bad UI design can often be found in proprietary software that comes with proprietary hardware. Unfortunately I cannot give you any examples off the top of my head.
I always tend to design my projects in a way that they're just skeletons which are as extensible as possible. Limiting factors are performance, complexity and third-party limitations.
This way you can add additional features after finishing the basic structure. A user could also add the features he needs.
This probably does not work very well for GUI applications, which should have good usability without much configuration, but I'm sticking with this approach for the libs I develop. (They're used by other coders who like to have a highly modifiable piece of software.)
It's not very hard to develop an application/lib which is bloated with features. But it is hard to develop an app which can be easily extended by other developers/users to match their own needs.
Develop a wide-ranging plug-in system so you can add and take out stuff at any time. Problem solved. If only that was as easy as writing spaghetti code. ;)
