Component with shouldComponentUpdate vs stateless Component. Performance?

I know that stateless components are a lot more comfortable to use (in specific scenarios), but since you cannot use shouldComponentUpdate, doesn't that mean the component will re-render on every props change? My question is whether it's better (performance-wise) to use a class component with a smart shouldComponentUpdate than to use a stateless component.
Are stateless components a better solution as far as performance goes?
Consider this silly example:
import React from 'react';

const Hello = (props) => (
  <div>
    <div>{props.hello}</div>
    <div>{props.bye}</div>
  </div>
);
vs
import React, { Component } from 'react';

class Hello extends Component {
  shouldComponentUpdate(nextProps) {
    return nextProps.hello !== this.props.hello;
  }

  render() {
    const { hello, bye } = this.props;
    return (
      <div>
        <div>{hello}</div>
        <div>{bye}</div>
      </div>
    );
  }
}
Let's assume both components have two props, and we only want to track one of them and update when it changes (which is a common use case). Would it be better to use a stateless functional component or a class component?
UPDATE
After doing some research, I agree with the marked answer. Although performance is better in a class component (with shouldComponentUpdate), the improvement seems negligible for simple components. So my take is this:
For complex components, use a class extending PureComponent or Component (depending on whether you are going to implement your own shouldComponentUpdate)
For simple components, use functional components, even if it means the re-render runs
Try to cut off updates at the closest possible class-based component, to keep the tree as static as possible (if needed)

I think you should read Stateless functional components and shouldComponentUpdate #5677
For complex components, defining shouldComponentUpdate (eg. pure render) will generally exceed the performance benefits of stateless components. The sentences in the docs are hinting at some future optimizations that we have planned, whereby we won't allocate an internal instance for stateless functional components (we will just call the function). We also might not keep holding the props, etc. Tiny optimizations. We don't talk about the details in the docs because the optimizations aren't actually implemented yet (stateless components open the doors to these optimizations).
https://github.com/facebook/react/issues/5677#issuecomment-165125151
There are currently no special optimizations done for functions, although we might add such optimizations in the future. But for now, they perform exactly as classes.
https://github.com/facebook/react/issues/5677#issuecomment-241190513
I also recommend checking https://medium.com/missive-app/45-faster-react-functional-components-now-3509a668e69f
To measure the change, I created this benchmark, the results are quite staggering! From just converting a class-based component to a functional one, we get a little 6% speedup. But by calling it as a simple function instead of mounting, we get a ~45% total speed improvement.
To answer the question: it depends.
If you have complex/heavy components, you probably should implement shouldComponentUpdate. Otherwise, going with regular functions should be fine. I don't think that implementing shouldComponentUpdate for components like your Hello will make a big difference; it's probably not worth the time to implement.
You should also consider extending from PureComponent rather than Component.
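For reference, a minimal sketch of a PureComponent version of the Hello example above (assuming React 15.3+, where PureComponent is available; its shallow comparison covers all props, so here a change to either hello or bye triggers a re-render):

import React, { PureComponent } from 'react';

// PureComponent ships with a shouldComponentUpdate that does a shallow
// comparison of props and state, so no manual check is needed.
class Hello extends PureComponent {
  render() {
    const { hello, bye } = this.props;
    return (
      <div>
        <div>{hello}</div>
        <div>{bye}</div>
      </div>
    );
  }
}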

Related

In React ES6, is there any benefit to using an imported component that contains only one element tag with a class?

In my application, I have many instances of:
<div className="node">
  {/* various types of other elements */}
</div>
Is there any benefit, such as better performance, to creating a component that contains this markup and importing and using it rather than just writing the markup itself in other components?
I'd expect the performance of putting it in a separate component to be a bit worse, due to an extra render call. I'd also expect the difference to be unnoticeable. There's also some overhead to additional modules, at least in Webpack.
The biggest advantage is the advantage that you get any time you pull common code into a function or class:
You give a name to the functionality, to make your code more self-documenting.
You reduce duplication in your code and introduce a single place to make future changes. This is minor now, because the only duplication is the node className, but you may find that you need to make changes in the future, and having a separate component then could help.
Since you aren't actually doing anything class-related in the repeated markup, you could just define a stateless functional component:
const Node = () => (
  <div className="node">
    {/* various other elements */}
  </div>
);
and then use <Node/> wherever you need it.
This should be the most efficient.

How to reduce renders using redux + normalizr

I have an app using React + Redux + Normalizr, I would like to know the best practices to reduce the number of renders when something changes on entities.
Right now, if I change just one entity inside entities, it re-renders all the components, not only the components that need that specific entity.
There are several things you can do to minimize the number of renders in your app, and make updates faster.
Let's look at them from your Redux store down to your React components.
In your Redux store, you can use immutable data structures (for example, Immutable.js). This will make all subsequent optimizations faster, as you'll be able to compare changes by only checking for previous/next state slice equality rather than recursively comparing all props.
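For instance, a minimal sketch with plain objects rather than Immutable.js (the action type and state shape here are assumptions), showing how reducers that always return new references make change detection cheap:

// A reducer that returns a new object for the changed slice, so that
// change detection downstream is a cheap reference comparison.
function todosReducer(state = {}, action) {
  switch (action.type) {
    case 'TODO_RENAMED':
      return {
        ...state,
        [action.id]: { ...state[action.id], title: action.title },
      };
    default:
      return state; // an untouched slice keeps its reference
  }
}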
In your containers, that is in your top-level components where you inject redux state as props, ask only for the state slices you need, and use the pure option (I assume you're using react-redux) to make sure your container will be re-rendered only if the state slices returned by your mapStateToProps functions have changed.
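As a sketch (the TodoDetails component and the state shape are assumptions, not part of the question):

import { connect } from 'react-redux';
import TodoDetails from './TodoDetails';

// connect is pure by default: when mapStateToProps returns props that
// are shallow-equal to the previous ones, the wrapped component is not
// re-rendered.
const mapStateToProps = (state) => ({
  todo: state.entities.todos[state.selectedTodoId],
});

export default connect(mapStateToProps)(TodoDetails);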
If you need to compute derived data, that is if you inject in your containers data computed from various state slices, use a memoized function to make sure the computation is not triggered again if the input doesn't change, and to keep object equality with the value the previous call returned. Reselect is a very good library to do that.
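A memoized Reselect selector might look like this (again, the state shape is assumed):

import { createSelector } from 'reselect';

// Input selectors read raw slices from the normalized state.
const getTodoEntities = (state) => state.entities.todos;
const getVisibleIds = (state) => state.visibleTodoIds;

// The result function only re-runs when one of the input slices changes
// by reference; otherwise the previously returned array is reused,
// preserving object equality for the purity checks downstream.
export const getVisibleTodos = createSelector(
  [getTodoEntities, getVisibleIds],
  (todos, ids) => ids.map((id) => todos[id])
);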
In your dumb components, use the shouldComponentUpdate lifecycle method to avoid a re-render when the incoming props haven't changed. If you do not want to implement this manually, you can use React's PureRenderMixin to check all props for you, or, for example, the pure function from the Recompose library if you need more control. A good use case at this level is rendering a list of items: if your item component implements shouldComponentUpdate, only the modified items will be re-rendered. But this shouldn't become a fix-all habit: good component separation is often preferable, since it makes props flow only to the components that need them.
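For example, wrapping a dumb list-item component with Recompose's pure (a sketch; TodoItem is a hypothetical component):

import React from 'react';
import { pure } from 'recompose';

// pure() adds a shallow-compare shouldComponentUpdate, so only the
// items whose todo prop actually changed are re-rendered.
const TodoItem = ({ todo }) => <li>{todo.title}</li>;

export default pure(TodoItem);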
As far as Normalizr is concerned, there is nothing more specific to be done.
If in some case (it should be rare) you detect performance problems that are directly related to React's rendering cycles of components, then you should implement the shouldComponentUpdate() method in the involved components (details can be found in React's docs here).
Change-detection in shouldComponentUpdate() will be particularly easy because Redux forces you to implement immutable state:
shouldComponentUpdate(nextProps, nextState) {
  return nextProps.dataObject !== this.props.dataObject;
  // true when dataObject has become a new object,
  // which happens if (and only if) its data has changed,
  // thanks to immutability
}

What are the negative impacts of extending classes in ActionScript 3?

In my game engine I use Box2D for physics. Box2D's naming conventions and poor commenting ruin the consistent and well-documented remainder of my engine, which is a little frustrating and presents poorly when you're using it.
I've considered making a set of wrapper classes for Box2D. That is, classes which extend each of the common Box2D objects and have their functions rewritten to follow the naming conventions of the rest of my engine, and to have them more clearly and consistently commented. I have even considered building on top of some of the classes and adding some bits and pieces (like getters for pixel-based measurements in the b2Vec2 class).
This is fine, but I am not 100% sure what the negative impacts would be, or to what degree they would affect my applications and games. I'm not sure whether the compiler alleviates some of my concerns, or whether I need to be careful when adding somewhat unnecessary classes for the sake of readability and consistency.
I have some suspicions:
More memory consumption to accommodate the extra level of class structure.
Performance impact when creating new objects due to initializing an extra level of members?
I am asking specifically about runtime impacts.
This is a pretty common problem when it comes to integrating third party libraries, especially libraries that are ports (as Box2DAS3 is), where they keep the coding and naming conventions of the parent language rather than fully integrating with the destination language (case in point: Box2DAS3 using getFoo() and setFoo() instead of a .foo getter/setter).
To answer your question quickly, no, there will be no significant performance impact with making wrapper classes; no more than you'll see in the class hierarchy in your own project. Sure, if you time a loop of 5 million iterations, you might see a millisecond or two of difference, but in normal usage, you won't notice it.
"More memory consumption to accommodate the extra level of class structure."
Like any language that has class inheritance, a vtable will be used behind the scenes, so you will have a small increase in memory/perf, but it's negligible.
"Performance impact when creating new objects due to initializing an extra level of members?"
No more than normal instantiation, so not something to worry about unless you're creating a huge amount of objects.
Performance-wise, you should generally have no problem (favour readability and usability over performance unless you actually have a problem with it), but I'd look at it more as an architectural problem. With that in mind, what I would consider to be the negative impacts of extending/modifying classes of an external library generally fall into 3 areas, depending on what you want to do:
Modify the library
Extend the classes
Composition with your own classes
Modify the library
As Box2DAS3 is open source, there's nothing stopping you jumping in and refactoring all the class/function names to your heart's content. I've seriously considered doing this at times.
Pros:
You can modify what you want - functions, classes, you name it
You can add any missing pieces that you need (e.g. pixel-meters conversion helpers)
You can fix any potential performance issues (I've noticed a few things that could be done better and faster if they were done in an "AS3" way)
Cons:
If you plan to keep your version up to date, you'll need to manually merge/convert any updates and changes. For popular libraries, or those that change a lot, this can be a huge pain
It's very time-consuming - aside from modifications, you'll need a good understanding of what's going on so you can make any changes without breaking functionality
If there are multiple people working with it, they can't rely as much on external documentation/examples, as the internals might have changed
Extend the classes
Here, you simply make your own wrapper classes, which extend the base Box2D classes. You can add properties and functions as you want, including implementing your own naming scheme which translates to the base class (e.g. MyBox2DClass.foo() could simply be a wrapper for Box2DClass.bar())
Pros:
You implement just the classes you need, and make just the changes necessary
Your wrapper classes can still be used in the base Box2D engine - i.e. you can pass a MyBox2DClass object to an internal method that takes a Box2DClass and you know it'll work
It's the least amount of work, out of all three methods
Cons:
Again, if you plan to keep your version up to date, you'll need to check that any changes don't break your classes. Normally not much of a problem, though
Can introduce confusion into the class, if you create your own functions that call their Box2D equivalents (e.g. "Should I use setPos() or SetPosition()?"). Even if you're working on your own, when you come back to your class in 6 months, you'll have forgotten
Your classes will lack coherence and consistency (e.g. some functions using your naming methodology (setPos()) while others use that of Box2D (SetPosition()))
You're stuck with Box2D; you can't change physics engines without a lot of dev, depending on how your classes are used throughout the project. This might not be such a big deal if you don't plan on switching
Composition with your own classes
You make your own classes, which internally hold a Box2D property (e.g. MyPhysicalClass will have a property b2Body). You're free to implement your own interface as you wish, and only what's necessary.
Pros:
Your classes are cleaner and fit in nicely with your engine. Only functions that you're interested in are exposed
You're not tied to the Box2D engine; if you want to switch to Nape, for example, you only need to modify your custom classes; the rest of your engine and games are oblivious. Other developers also don't need to learn the Box2D engine to be able to use it
While you're there, you can even implement multiple engines, and switch between them using a toggle or interfaces. Again, the rest of your engine and games are oblivious
Works nicely with component based engines - e.g. you can have a Box2DComponent that holds a b2Body property
Cons:
More work than just extending the classes, as you're essentially creating an intermediary layer between your engine and Box2D. Ideally, outside of your custom classes, there shouldn't be a reference to Box2D. The amount of work depends on what you need in your class
Extra level of indirection; normally it shouldn't be a problem, as Box2D will use your Box2D properties directly, but if your engine is calling your functions a lot, it's an extra step along the way, performance wise
Out of the three, I prefer to go with composition, as it gives the most flexibility and keeps the modular nature of your engine intact, i.e. you have your core engine classes, and you extend functionality with external libraries. The fact that you can switch out libraries with minimal effort is a huge plus as well. This is the technique that I've employed in my own engine, and I've also extended it to other types of libraries - e.g. Ads - I have my engine Ad class, that can integrate with Mochi, Kongregate, etc as needed - the rest of my game doesn't care what I'm using, which lets me keep my coding style and consistency throughout the engine, whilst still being flexible and modular.
----- Update 20/9/2013 -----
Big update time! So I went back to do some testing on size and speed. The class I used is too big to paste here, so you can download it at http://divillysausages.com/files/TestExtendClass.as
In it, I test a number of classes:
An Empty instance; a Class that just extends Object and implements an empty getPosition() function. This will be our benchmark
A b2Body instance
A Box2DExtends instance; a Class that extends b2Body and implements a function getPosition() that just returns GetPosition() (the b2Body function)
A Box2DExtendsOverrides instance; a Class that extends b2Body and overrides the GetPosition() function (it simply returns super.GetPosition())
A Box2DComposition instance; a Class that has a b2Body property and a getPosition() function that returns the b2Body's GetPosition()
A Box2DExtendsProperty instance; a Class that extends b2Body and adds a new Point property
A Box2DCompositionProperty instance; a Class that has both a b2Body property and a Point property
All tests were done in the standalone player, FP v11.7.700.224, Windows 7, on a not-great laptop.
Test1: Size
AS3 is a bit annoying in that if you call getSize(), it'll give you the size of the object itself, but any internal properties that are also Objects will just result in a 4 byte increase as they're only counting the pointer. I can see why they do this, it just makes it a bit awkward to get the right size.
Thus I turned to the flash.sampler package. If we sample the creation of our objects, and add up all the sizes in the NewObjectSample objects, we'll get the full size of our object (NOTE: if you want to see what's created and the size, comment in the log calls in the test file).
Empty's size is 56 // extends Object
b2Body's size is 568
Box2DExtends's size is 568 // extends b2Body
Box2DExtendsOverrides's size is 568 // extends b2Body
Box2DComposition's size is 588 // has b2Body property
Box2DExtendsProperty's size is 604 // extends b2Body and adds Point property
Box2DCompositionProperty's size is 624 // has b2Body and Point properties
These sizes are all in bytes. Some points worth noting:
The base Object size is 40 bytes, so just the class and nothing else is 16 bytes.
Adding methods doesn't increase the size of the object (they're implemented on a class basis anyway), while properties obviously do
Just extending the class didn't add anything to it
The extra 20 bytes for Box2DComposition come from 16 for the class and 4 for the pointer to the b2Body property
For Box2DExtendsProperty etc, you have 16 for the Point class itself, 4 for the pointer to the Point property, and 8 for each of the x and y property Numbers = 36 bytes difference between that and Box2DExtends
So obviously the difference in size depends on the properties that you add, but all in all, pretty negligible.
Test 2: Creation Speed
For this, I simply used getTimer(), with a loop of 10000, itself looped 10 (so 100k) times to get the average. System.gc() was called between each set to minimise time due to garbage collection.
Empty's time for creation is 3.9ms (av.)
b2Body's time for creation is 65.5ms (av.)
Box2DExtends's time for creation is 69.9ms (av.)
Box2DExtendsOverrides's time for creation is 68.8ms (av.)
Box2DComposition's time for creation is 72.6ms (av.)
Box2DExtendsProperty's time for creation is 76.5ms (av.)
Box2DCompositionProperty's time for creation is 77.2ms (av.)
There's not a whole pile to note here. The extending/composition classes take slightly longer, but it's like 0.000007ms (this is the creation time for 100,000 objects), so it's not really worth considering.
Test 3: Call Speed
For this, I used getTimer() again, with a loop of 1000000, itself looped 10 (so 10m) times to get the average. System.gc() was called between each set to minimise time due to garbage collection. All the objects had their getPosition()/GetPosition() functions called, to see the difference between overriding and redirecting.
Empty's time for getPosition() is 83.4ms (av.) // empty
b2Body's time for GetPosition() is 88.3ms (av.) // normal
Box2DExtends's time for getPosition() is 158.7ms (av.) // getPosition() calls GetPosition()
Box2DExtendsOverrides's time for GetPosition() is 161ms (av.) // override calls super.GetPosition()
Box2DComposition's time for getPosition() is 160.3ms (av.) // calls this.body.GetPosition()
Box2DExtendsProperty's time for GetPosition() is 89ms (av.) // implicit super (i.e. not overridden)
Box2DCompositionProperty's time for getPosition() is 155.2ms (av.) // calls this.body.GetPosition()
This one surprised me a bit, with the difference between the times being ~2x (though that's still only 0.000007ms per call). The delay seems entirely down to the class inheritance - e.g. Box2DExtendsOverrides simply calls super.GetPosition(), yet is twice as slow as Box2DExtendsProperty, which inherits GetPosition() from its base class.
I guess it has to do with the overhead of function lookups and calling, though I took a look at the generated bytecode using swfdump in the FlexSDK, and they're identical, so either it's lying to me (or doesn't include it), or there's something I'm missing :) While the steps might be the same, the time between them probably isn't (e.g. in memory, it's jumping to your class vtable, then jumping to the base class vtable, etc)
The bytecode for var v:b2Vec2 = b2Body.GetPosition() is simply:
getlocal 4
callproperty :GetPosition (0)
coerce Box2D.Common.Math:b2Vec2
setlocal3
whilst var v:b2Vec2 = Box2DExtends.getPosition() (getPosition() returns GetPosition()) is:
getlocal 5
callproperty :getPosition (0)
coerce Box2D.Common.Math:b2Vec2
setlocal3
For the second example, it doesn't show the call to GetPosition(), so I'm not sure how they're resolving that. The test file is available for download if someone wants to take a crack at explaining it.
Some points to keep in mind:
GetPosition() doesn't really do anything; it's essentially a getter disguised as a function, which is one reason why the "extra class step penalty" appears so big
This was on a loop of 10m, which you're unlikely to be doing in your game. The per-call penalty isn't really worth worrying about
Even if you do worry about the penalty, remember that this is the interface between your code and Box2D; the Box2D internals will be unaffected by this, only the calls to your interface
All-in-all, I'd expect the same results from extending one of my own classes, so I wouldn't really worry about it. Implement the architecture that works the best for your solution.
I know this answer will not qualify for the bounty, as I am way too lazy to write benchmarks, but having worked on the Flash code base I can maybe give some hints:
The AVM2 runs a dynamic language, so the compiler will not optimize anything in this case.
Wrapping a call as a sub class call will have a cost. However that cost will be constant time and small.
Object creation cost will also at most be affected by a constant amount of time and memory. Also the time and amount will probably be insignificant compared to the base cost.
But, as with many things the devil is in the details. I never used box2d, but if it does any kind of object pooling things might not work well anymore. In general games should try to run without object allocations at play time. So be very careful not to add functions that allocate objects just to be prettier.
function addvectors(a:vec,b:vec,dest:vec):void
Might be ugly but is much faster than
function addvectors(a:vec,b:vec):vec
(I hope I got my AS3 syntax right...). Even more useful and more ugly might be
function addvectors(a:Vector.<vec>, b:Vector.<vec>, dest:Vector.<vec>, offset:int, count:int):void
So my answer is: if you are only wrapping for readability, go for it. It's a small but constant cost. But be very, very careful about changing how functions work.
I don't know if there is a big impact on instantiation time, but I will answer your question differently: what are your other options? Do they seem like they'll do better?
There is a beautiful benchmark made by Jackson Dunstan about function performance: http://jacksondunstan.com/articles/1820
To sum it up:
closures are expensive
static is slow: http://jacksondunstan.com/articles/1713
overriding and calling a function inside a subclass does not seem to have a big impact
So, if you don't want to use inheritance, maybe you'll have to replace it with static calls, and that is bad for performance.
Personally, I'd extend those classes and add eager instantiation of all the objects I'll need at runtime: if it is big, make a beautiful loading screen...
Also, take a look at post bytecode optimizations such as apparat: http://code.google.com/p/apparat/
I don't think that extending will impact the performance a lot. Yes, there is some cost, but it's not that high as long as you don't use composition, i.e. instead of extending Box2D classes directly, creating an instance of the class and working with it inside your own class. For example, this
public class Child extends b2Body {
    public function Child() {
        // do some stuff here
    }
}
instead of this
public class Child {
    private var _body:b2Body;

    public function Child() {
        ...
        _body = _world.CreateBody(...);
        ...
    }
}
I guess you know that the fewer objects you create, the better. As long as you keep the number of created instances the same, you will have the same performance.
From another point of view:
a) adding one more layer of abstraction may change Box2D a lot. If you work in a team, this may be an issue, because the other developers will have to learn your naming
b) be careful about the Middle Man code smell. Usually, when you start wrapping already existing functionality, you end up with classes which are just delegators.
Some great answers here but I'm going to throw my two cents in.
There are two different concepts you have to recognize: when you extend a class and when you implement a class.
Here is an example of extending MovieClip
public class TrickedOutClip extends MovieClip {
    private var rims = 'extra large';

    public function TrickedOutClip() {
        super();
    }
}
Here is an example of implementing MovieClip
public class pimpMyClip {
    private var rims = 'extra large';
    private var pimpedMovieClip:MovieClip;

    public function pimpMyClip() {
        pimpedMovieClip = new MovieClip();
        pimpedMovieClip.bling = rims;
    }

    public function getPimpedClip():MovieClip {
        return pimpedMovieClip;
    }
}
I think you probably do not want to extend these Box2D classes but implement them. Here's a rough outline:
public class MyBox2DHelper {
    private var box2d = new Box2D(...);

    public function MyBox2DHelper(stage) {
    }

    public function makeBox2DDoSomeTrickyThing(varA:String, varB:Number) {
        // write your custom code here
    }

    public function makeBox2DDoSomethingElse(varC:MovieClip) {
        // write your custom code here
    }
}
Good luck.

breezejs angularjs ToDo SPA template in Visual Studio

I have a question about design. I've just been through the code of the ToDo template of Visual Studio for building a SPA with BreezejS and AngularJS.
There is a todo.model.js file that does various initialization. One interesting thing is that it extends the TodoList entity with an additional function (addToDo).
What is the advantage of doing so, over having the addToDo function in the todo.controller instead and adding it to the $scope?
You could make a good case for moving all of the TodoList level persistence operations out of the TodoList and into some other component. The controller is a potential candidate.
The primary reason that these operations are in the TodoList is ... because that's where the authors of the original ASP.NET template put them!
One of the "community template" design goals was for all of the "TodoList" apps to be as similar to each other as possible. By holding the design constant we made it easier for readers to compare the effect of the different frameworks: Knockout, Breeze, Backbone, Ember. Had any of them relocated these operations, you would not know if that change was imposed by the target framework or was simply the the implementer's preference. We wanted to take our ego out of it and let you concentrate on the technologies involved.
Don't treat these templates as gospel. In some respects they are unrealistic; I can't imagine saving every time a single property of a single object changes.
Learn from them. Regard them with healthy skepticism. Keep asking questions like this one. Take what makes sense to you. Discard the rest.
I believe this is just letting the entity handle its own save/delete functions for items in the list. The controller seems to be handling only adding of new lists. I'm not sure there's any advantage other than keeping the controller clean.

Which one do you prefer for Searching/Reporting DataTable or DTO or Domain Class?

The project I am currently working on requires a lot of searching/filtering pages. For example, I have a complex search page to get Issues by date, category, unit, ...
The Issue domain class is complex and contains lots of value objects and child objects.
I am wondering how people deal with searching/filtering/reporting in the UI. As far as I know, I have 3 options, but none of them makes me happy.
1.) Send parameters to the Repository/DAO to get a DataTable and bind the DataTable to UI controls, for example an ASP.NET GridView:
DataTable dataTable = issueReportRepository.FindBy(specs);
...
grid.DataSource = dataTable;
grid.DataBind();
In this option I can simply bypass the domain layer and query the database for the given specs, and I don't have to get a fully constructed, complex domain object: no need for value objects, child objects, etc. I get the data to be displayed in the UI directly from the database in a DataTable and show it in the UI.
But if I have to show a calculated field in the UI, like a method return value, I have to do it in the database, because I don't have the full domain object. I have to duplicate logic, and I get DataTable problems like no IntelliSense, etc.
2.) Send parameters to the Repository/DAO to get DTOs and bind the DTOs to UI controls:
IList<IssueDTO> issueDTOs = issueReportRepository.FindBy(specs);
...
grid.DataSource = issueDTOs;
grid.DataBind();
This option is the same as the one above, but I have to create anemic DTO classes for every search page. Also, for different Issue search pages I have to show different parts of the Issue object: IssueSearchDTO, CompanyIssueDTO, MyIssueDTO, ...
3.) Send parameters to the real Repository class to get fully constructed domain objects:
IList<Issue> issues = issueRepository.FindBy(specs);
//Bind to grid...
I like Domain-Driven Design and patterns. There is no DTO or duplicated logic in this option, but I have to create lots of child objects and value objects that will not be shown in the UI. It also requires lots of joins to get the full domain object, with a performance cost for needless child objects and value objects.
I don't use any ORM tool. Maybe I could implement lazy loading by hand for this version, but it seems a bit overkill.
Which one do you prefer? Or am I doing it wrong? Are there any suggestions or a better way to do this?
I have a few suggestions, but of course the overall answer is "it depends".
First, you should be using an ORM tool or you should have a very good reason not to be doing so.
Second, implementing lazy loading by hand is relatively simple, so in the event that you're not going to use an ORM tool, you can simply create properties on your objects that look something like this:
private Foo _foo;
public Foo Foo
{
    get
    {
        if (_foo == null)
        {
            _foo = _repository.Get(id);
        }
        return _foo;
    }
}
Third, performance is something that should be considered initially but should not drive you away from an elegant design. I would argue that you should use (3) initially and only deviate from it if its performance is insufficient. This results in writing the least amount of code and having the least duplication in your design.
If performance suffers you can address it easily in the UI layer using Caching and/or in your Domain layer using Lazy Loading. If these both fail to provide acceptable performance, then you can fall back to a DTO approach where you only pass back a lightweight collection of value objects needed.
This is a great question and I wanted to provide my answer as well. I think the technically best answer is to go with option #3. It provides the ability to best describe and organize the data along with scalability for future enhancements to reporting/searching requests.
However, while this might be the overall best option, there is a huge cost IMO vs. the other two options: the additional design time for all the classes and relationships needed to support the reporting needs (again, under the premise that no ORM tool is being used).
I struggle with this in a lot of my applications as well, and the reality is that #2 is the best compromise between time and design. Now, if you were asking about your business objects and all their needs, there is no question that a fully laid out and properly designed model is important, and there is no substitute. However, when it comes to reporting and searching, this to me is a different animal. #2 provides strongly typed data in the anemic classes and is not as primitive as hardcoded values in DataSets like #1, and it still greatly reduces the amount of time needed to complete the design compared to #3.
Ideally I would love to extend my object model to encompass all reporting needs, but sometimes the effort required to do this is so extensive, that creating a separate set of classes just for reporting needs is an easier but still viable option. I actually asked almost this identical question a few years back and was also told that creating another set of classes (essentially DTOs) for reporting needs was not a bad option.
So to wrap it up, #3 is technically the best option, but #2 is probably the most realistic and viable option when considering time and quality together for complex reporting and searching needs.
