Is LINQ faster or just more convenient?

Which of these scenarios would be faster?
Scenario 1:
foreach (var file in directory.GetFiles())
{
    if (file.Extension.ToLower() != ".txt" &&
        file.Extension.ToLower() != ".bin")
        continue;

    // Do something cool.
}
Scenario 2:
var files = from file in directory.GetFiles()
            where file.Extension.ToLower() == ".txt" ||
                  file.Extension.ToLower() == ".bin"
            select file;

foreach (var file in files)
{
    // Do something cool.
}
I know that they are logically the same because of delayed execution, but which would be the faster? And why?

Faster isn't usually the issue per se, especially in a scenario like this where there is not going to be a meaningful performance difference (and in general, if the code is not a bottleneck it just doesn't matter). The issue is which is more readable and more clearly expresses the intent of the code.
I think the second block of code more clearly expresses the intent. It reads as "query a collection of files for those with a certain property" and then "for each of those files, do something." It declares what is happening rather than how it happens. Separating the what from the mechanism is what makes the second block clearer, and it is where LINQ really shines: use LINQ to declare the what, and let LINQ supply the mechanism, instead of muddling the two together as imperative code tends to.
Is LINQ faster or just more convenient?
So, to answer the question in your title: LINQ usually does not materially hinder performance, but it makes code clearer by letting the coder declare what they want done instead of spelling out how to do it. At the end of the day, we don't care about the how; we care about the what.
I know that they are logically the same because of delayed execution, but which would be the faster?
Probably the imperative version, because there is a tiny amount of overhead in using LINQ. But if you really must know which is faster, use a profiler, and be sure to test on real-world data.
And why?
Because LINQ adds a little bit of overhead. But the trade-off is significantly clearer and more maintainable code, which is a huge win compared to the usually irrelevant performance loss.
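If you do want a number, a minimal sketch along these lines would do, timing each scenario with System.Diagnostics.Stopwatch (the directory path is a placeholder, and a single run on a warm cache is only a rough indicator; repeat the measurement on your real data):
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class LoopVsLinq
{
    static void Main()
    {
        var directory = new DirectoryInfo(@"C:\SomeFolder"); // placeholder path

        var sw = Stopwatch.StartNew();
        foreach (var file in directory.GetFiles())
        {
            if (file.Extension.ToLower() != ".txt" &&
                file.Extension.ToLower() != ".bin")
                continue;
            // Do something cool.
        }
        sw.Stop();
        Console.WriteLine("Imperative: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        var files = from file in directory.GetFiles()
                    where file.Extension.ToLower() == ".txt" ||
                          file.Extension.ToLower() == ".bin"
                    select file;
        foreach (var file in files)
        {
            // Do something cool.
        }
        sw.Stop();
        Console.WriteLine("LINQ:       " + sw.ElapsedMilliseconds + " ms");
    }
}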

It would be faster to do a GetFiles("*.txt") and a GetFiles("*.bin") if the directory contains lots of files or is on a network drive, because the pattern is then applied by the file system rather than by your code.
Compared to that, the extra overhead of LINQ is just noise.
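For illustration, that might look like the following (a sketch; Concat comes from System.Linq and simply chains the two result sets, so the matching happens in the file system rather than in your loop):
var files = directory.GetFiles("*.txt")
                     .Concat(directory.GetFiles("*.bin"));
foreach (var file in files)
{
    // Do something cool.
}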

Linq isn't faster and it's not really about convenience. Rather, Linq pulls the higher-order functions Fold, Map, and Filter into .NET (with different names). These functions are valuable because they allow us to DRY-up our code. Every time you set up an iteration with a secondary collection or result, you open yourself up to a bug. Linq allows you to focus on what happens inside the iteration and feel fairly confident that the iteration mechanics are bug-free.
This doesn't mean that Linq is strictly slower than manual iteration. As others have mentioned, you'll have to benchmark case-by-case.
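For the curious, the rough correspondence (assuming numbers is some IEnumerable&lt;int&gt;) is: Where is Filter, Select is Map, and Aggregate is Fold:
var evens   = numbers.Where(n => n % 2 == 0);            // Filter
var squares = numbers.Select(n => n * n);                // Map
var sum     = numbers.Aggregate(0, (acc, n) => acc + n); // Fold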

I wrote an article on Code Project that benchmarks LINQ against stored procedures, as well as compiled LINQ.
Please take a look:
http://www.codeproject.com/KB/cs/linqsql2.aspx
I understand you are looking at local file parsing; the article will give you some idea of what is involved and what LINQ is doing behind the scenes.

Would you abstract your LINQ queries into extension methods?

On my current project we set ourselves goals for the code metrics "Maintainability Index" and "Cyclomatic Complexity": the Maintainability Index should be 60 or higher, and the Cyclomatic Complexity 25 or less. We know that a Maintainability Index of 60 or higher is a pretty high bar.
We also use a lot of linq to filter/group/select entities, and I found that these linq queries don't score that high on the Maintainability Index.
Abstracting these queries into extension methods gives me a higher Maintainability Index, which is good. But in most cases the extension methods are no longer generic, because I use them with my own types instead of generic types.
For example, the following linq query vs. extension method:
Linq query
List.Where(m => m.BeginTime >= selectionFrom && m.EndTime <= selectionTo)
Extension method:
public static IEnumerable<MyType> FilterBy(this IEnumerable<MyType> source, DateTime selectionFrom, DateTime selectionTo)
{
    return (IEnumerable<MyType>)source.Where(m => m.BeginTime >= selectionFrom && m.EndTime <= selectionTo);
}
List.FilterBy(selectionFrom, selectionTo);
The extension method gives me a Maintainability Index improvement of 6 points, and a nice fluent syntax.
On the other hand, I have to add a static class, and it's not generic.
Any ideas on which approach you would favor? Or perhaps different ideas on how to refactor linq queries to improve the Maintainability Index?
You shouldn't add classes just for the sake of metrics. Metrics are meant to make your code better, but following rules blindly, even the best rules, may in fact harm your code.
I don't think it's a good idea to chase particular Maintainability and Complexity scores. I believe they are useful for evaluating old code, i.e. when you inherit a project and need to estimate its complexity, but it's absurd to extract a method just because you haven't scored enough points.
Only refactor if such refactoring adds value to the code. Such value is a complex human metric inexpressible in numbers, and estimating it is exactly what programming experience is about—finding balance between optimization vs readability vs clean API vs cool code vs simple code vs fast shipping vs generalization vs specification, etc.
This is the only metric you should follow but it's not always the metric everyone agrees upon...
As for your example, if the same linq query is used over and over, it makes perfect sense to create an EnumerableExtensions class in an Extensions folder and extract it there. However, if it is used once or twice, or is subject to change, the verbose query is much better.
I also don't understand why you say "not generic" with a somewhat negative connotation. You don't need generics everywhere! In fact, when writing extension methods, you should pick the most specific types you can, so as not to pollute other classes' method sets. If you want your helper to work only with IEnumerable<MyType>, there is absolutely no shame in declaring an extension method for exactly that IEnumerable<MyType>. By the way, there's a redundant cast in your example. Get rid of it.
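(For reference, the cast-free version of the example above is simply:)
public static IEnumerable<MyType> FilterBy(this IEnumerable<MyType> source, DateTime selectionFrom, DateTime selectionTo)
{
    return source.Where(m => m.BeginTime >= selectionFrom && m.EndTime <= selectionTo);
}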
And don't forget, tools are stupid. So are we, humans.
My advice to you would be ... don't be a slave to your metrics! They are machine generated and only intended to be used as guidance. They are never going to be a replacement for a skilled experienced programmer.
Which do you think is right for your application?
I for one agree with the extension method strategy. I've used it without a problem in a handful of real-world apps.
To me, it is not only about the metrics, but also about the re-usability of the code. See the following pseudo-examples:
var x = _repository.Customers().WhichAreGoldCustomers();
var y = _repository.Customers().WhichAreBehindInPayments();
Having those two extension methods accomplishes your goal for metrics, and it also provides "one place for the definition of what it is to be a gold customer." You don't have different queries being created in different places by different developers when they need to work with "gold customers."
Additionally, they are composable:
var z = _repository.Customers().WhichAreGoldCustomers().WhichAreBehindInPayments();
IMHO this is a winning approach.
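The definitions behind those calls would be one-liners along these lines (a sketch; it assumes Customers() returns IQueryable<Customer>, and IsGold and OverdueBalance are invented properties standing in for whatever your domain model uses):
public static class CustomerQueryExtensions
{
    // One place defines what "gold customer" means.
    public static IQueryable<Customer> WhichAreGoldCustomers(this IQueryable<Customer> source)
    {
        return source.Where(c => c.IsGold);
    }

    // One place defines what "behind in payments" means.
    public static IQueryable<Customer> WhichAreBehindInPayments(this IQueryable<Customer> source)
    {
        return source.Where(c => c.OverdueBalance > 0);
    }
}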
The only problem we've faced is that there is a ReSharper bug that sometimes the Intellisense for the extension methods goes crazy. You type ".Whic" and it lets you pick the extension method you want, but when you "tab" on it, it puts something completely different into the code, not the extension method that you selected. I've considered switching from ReSharper for this, but... nah :)
NO: in this case I would ignore the cyclomatic complexity - what you had originally was better.
Ask yourself what is more explanatory. This:
List.Where(m => m.BeginTime >= selectionFrom && m.EndTime <= selectionTo)
or this:
List.FilterBy(selectionFrom, selectionTo);
The first clearly expresses what you want, whereas the second does not. The only way to know what "FilterBy" means is to go into the source code and look at its implementation.
Abstracting query fragments into extension methods makes sense with more complex scenarios, where it's not easy to judge at a glance what the query fragment is doing.
I have used this technique in places, for example a class Payment has a corresponding class PaymentLinqExtensions which provides domain specific extensions for Payments.
In the example you give, I'd choose a more descriptive method name. There is also the question of whether the range is inclusive or exclusive. Otherwise it looks OK.
If you have multiple objects in your system for which the concept of having a date is common then consider an interface, maybe IHaveADate (or something better :-)
public static IQueryable<T> WithinDateRange<T>(this IQueryable<T> source, DateTime from, DateTime to) where T : IHaveADate
(IQueryable is interesting; I don't think IEnumerable can be cast to it, which is a shame. If you're working with database queries, it can allow your logic to appear in the final SQL sent to the server, which is good. There is the potential gotcha with all LINQ that your code is not executed when you expect it to be.)
If date ranges are an important concept in your application, and you need to be consistent about whether the range starts at midnight on the end of "EndDate" or midnight at the start of it, then a DateRange class may be useful. Then
public static IQueryable<T> WithinDateRange<T>(this IQueryable<T> source, DateRange range) where T : IHaveADate
You could also, if you feel like it, provide
public static IEnumerable<T> WithinDateRange<T>(this IEnumerable<T> source, DateRange range, Func<T, DateTime> getDate)
but this to me feels more like something that belongs to DateRange. I don't know how much it would be used, though your situation may vary. I've found that getting too generic can make things hard to understand, and LINQ can be hard to debug.
var filtered = myThingCollection.WithinDateRange(myDateRange, x => x.Date)
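Putting those pieces together, a minimal sketch might look like this (using the IEnumerable form; the single Date property on IHaveADate and the inclusive bounds are assumptions, so pick your own convention):
using System;
using System.Collections.Generic;
using System.Linq;

public interface IHaveADate
{
    DateTime Date { get; }
}

public class DateRange
{
    public DateTime From { get; set; }
    public DateTime To { get; set; }
}

public static class DateRangeExtensions
{
    public static IEnumerable<T> WithinDateRange<T>(this IEnumerable<T> source, DateRange range)
        where T : IHaveADate
    {
        // Inclusive on both ends; the convention lives in exactly one place.
        return source.Where(x => x.Date >= range.From && x.Date <= range.To);
    }
}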

Is checking Perl function arguments worth it?

There's a lot of buzz about MooseX::Method::Signatures and even before that, modules such as Params::Validate that are designed to type check every argument to methods or functions. I'm considering using the former for all my future Perl code, both personal and at my place of work. But I'm not sure if it's worth the effort.
I'm thinking of all the Perl code I've seen (and written) before that performs no such checking. I very rarely see a module do this:
my ($a, $b) = @_;
defined $a or croak '$a must be defined!';
!ref $a or croak '$a must be a scalar!';
...
@_ == 2 or croak "Too many arguments!";
Perhaps because it's simply too much work without some kind of helper module, but perhaps because in practice we don't send excess arguments to functions, and we don't send arrayrefs to methods that expect scalars - or if we do, we have use warnings; and we quickly hear about it - a duck typing approach.
So is Perl type checking worth the performance hit, or are its strengths predominantly shown in compiled, strongly typed languages such as C or Java?
I'm interested in answers from anyone who has experience writing Perl that uses these modules and has seen benefits (or not) from their use; if your company/project has any policies relating to type checking; and any problems with type checking and performance.
UPDATE: I read an interesting article on the subject recently, called Strong Testing vs. Strong Typing. Ignoring the slight Python bias, it essentially states that type checking can be suffocating in some instances, and even if your program passes the type checks, it's no guarantee of correctness - proper tests are the only way to be sure.
If it's important for you to check that an argument is exactly what you need, it's worth it. Performance only matters when you already have correct functioning. It doesn't matter how fast you can get a wrong answer or a core dump. :)
Now, that sounds like a stupid thing to say, but consider some cases where it isn't. Do I really care what's in @_ here?
sub looks_like_a_number { $_[0] !~ /\D/ }
sub is_a_dog { eval { $_[0]->DOES( 'Dog' ) } }
In those two examples, if the argument isn't what you expect, you are still going to get the right answer because the invalid arguments won't pass the tests. Some people see that as ugly, and I can see their point, but I also think the alternative is ugly. Who wins?
However, there are going to be times when you need guard conditions because your situation isn't so simple: the next thing you pass your data to might expect it to be within certain ranges or of certain types, and might not fail elegantly.
When I think about guard conditions, I think through what could happen if the inputs are bad and how much I care about the failure. I have to judge that by the demands of each situation. I know that sucks as an answer, but I tend to like it better than a bondage-and-discipline approach where you have to go through all the mess even when it doesn't matter.
I dread Params::Validate because its code is often longer than my subroutine. The Moose stuff is very attractive, but you have to realize that it's a way for you to declare what you want and you still get what you could build by hand (you just don't have to see it or do it). The biggest thing I hate about Perl is the lack of optional method signatures, and that's one of the most attractive features in Perl 6 as well as Moose.
I basically concur with brian. How much you need to worry about your method's inputs depends heavily on how much you are concerned that a) someone will input bad data, and b) bad data will corrupt the purpose of the method. I would also add that there is a difference between external and internal methods. You need to be more diligent about public methods because you're making a promise to consumers of your class; conversely you can be less diligent about internal methods as you have greater (theoretical) control over the code that accesses it, and have only yourself to blame if things go wrong.
MooseX::Method::Signatures is an elegant solution to adding a simple declarative way to explain the parameters of a method. Method::Signatures::Simple and Params::Validate are nice but lack one of the features I find most appealing about Moose: the Type system. I have used MooseX::Declare and by extension MooseX::Method::Signatures for several projects and I find that the bar to writing the extra checks is so minimal it's almost seductive.
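As a taste of how low that bar is, here is a minimal sketch (assuming MooseX::Declare, which pulls in MooseX::Method::Signatures, is installed; the BankAccount class is invented for illustration):
use MooseX::Declare;

class BankAccount {
    has balance => (is => 'rw', isa => 'Num', default => 0);

    # The signature both documents and enforces the argument type.
    method deposit (Num $amount where { $_ > 0 }) {
        $self->balance($self->balance + $amount);
    }
}

BankAccount->new->deposit('lots of money'); # dies with a type-constraint error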
Yes, it's worth it: defensive programming is one of those things that is always worth it.
The counterargument I've seen presented to this is that checking parameters on every single function call is redundant and a waste of CPU time. This argument's supporters favor a model in which all incoming data is rigorously checked when it first enters the system, but internal methods have no parameter checks because they should only be called by code which will pass them data which has already passed the checks at the system's border, so it is assumed to still be valid.
In theory, I really like the sound of that, but I can also see how easily it can fall like a house of cards if someone uses the system (or the system needs to grow to allow use) in a way that was unforeseen when the initial validation border is established. All it takes is one external call to an internal function and all bets are off.
In practice, I'm using Moose at the moment and Moose doesn't really give you the option to bypass validation at the attribute level, plus MooseX::Declare handles and validates method parameters with less fuss than unrolling @_ by hand, so it's pretty much a moot point.
I want to mention two points here: the first is tests, the second is the performance question.
1) Tests
You mentioned that tests can do a lot, and that tests are the only way to be sure your code is correct. In general I would say this is absolutely right. But tests only solve one problem.
If you write a module, two different people deal with it: you as the developer, and the user who just uses your module. Tests help with the first, making sure your module is correct and does the right thing, but they don't help the user who just uses your module.
For the latter, I have one example. I had written a module using Moose and some other stuff, and my code always ended in a segmentation fault. I began to debug and search for the problem, and spent around 4 hours finding the error. In the end the problem was that I had used Moose with the Array trait: I called the "map" function and didn't provide a subroutine reference, just a string or something else.
Sure, this was an absolutely stupid error of mine, but I spent a long time debugging it. A check that the argument is a subref would have cost the developer 10 seconds, and would have saved me, and probably others, a lot more time.
I also know of other examples. I had written a REST client for an interface, completely OOP with Moose. You always get objects back, and you can change their attributes, but of course it doesn't call the REST API for every change you make. Instead you change your values and at the end call an update() method that transfers the data.
Then I had a user who wrote:
$obj->update({ foo => 'bar' });
Sure, he got an error back that update() didn't work. But of course it didn't work: the update() method doesn't accept a hashref; it only synchronizes the current state of the object with the online service. The correct code would be:
$obj->foo('bar');
$obj->update();
The first call "works" because I never checked the arguments, and I don't throw an error if someone passes more arguments than I expect. The method just starts like this:
sub update {
    my ( $self ) = @_;
    ...
}
Sure, all my tests pass 100%. But handling these errors that are not really errors costs me time too, and it probably costs the users a lot more time.
So in the end: yes, tests are the only correct way to ensure that your code works. But that doesn't mean type checking is meaningless. Type checking is there to help everyone who isn't a developer of your module use it correctly, and it saves you and others time spent finding dumb errors.
2) Performance
The short version: you don't care about performance until you have to care.
That means performance is always fast enough until your module really works too slowly; only then do you need further investigation. And for that investigation you should use a profiler like Devel::NYTProf to see what is actually slow.
I would say that in 99% of cases the slowness is not your type checking, it is your algorithm: you do a lot of computation, call functions too often, and so on. Often it helps to take a completely different approach: use a better algorithm, add caching, or something else, and the performance hit turns out not to be your type checking at all. And even if the checking is the performance hit, then just remove it where it matters.
There is no reason to drop type checking where performance doesn't matter. Do you think type checking matters in a case like the one above, where I wrote a REST client? 99% of the performance cost there is the number of requests that go to the web service and the time each request takes. Dropping type checking or MooseX::Declare would probably speed up absolutely nothing.
And even if you do see a performance disadvantage, sometimes it is acceptable, because the speed doesn't matter or because something else gives you greater value. DBIx::Class is slower than pure SQL with DBI, but DBIx::Class gives you a lot in return.
Params::Validate works great, but of course checking args slows things down. Tests are mandatory (at least in the code I write).
Yes it's absolutely worth it, because it will help during development, maintenance, debugging, etc.
If a developer accidentally sends the wrong parameters to a method, a useful error message will be generated, instead of the error being propagated down to somewhere else.
I'm using Moose extensively for a fairly large OO project I'm working on. Moose's strict type checking has saved my bacon on a few occasions, most importantly by helping me avoid situations where "undef" values were incorrectly being passed to a method. In those instances alone it saved me hours of debugging time.
The performance hit is definitely there, but it can be managed. Two hours of using NYTProf helped me find a few Moose attributes that I was grinding too hard; I refactored my code and got a 4x performance improvement.
Use type checking. Defensive coding is worth it.
Patrick.
Sometimes. I generally do it whenever I'm passing options via a hash or hashref. In these cases it's very easy to misremember or misspell an option name, and checking with Params::Check can save a lot of troubleshooting time.
For example:
use Carp qw(croak);
use Params::Check qw(check);

sub revise {
    my ($file, $options) = @_;
    my $tmpl = {
        test_mode       => { allow => [0,1],     'default' => 0 },
        verbosity       => { allow => qr/^\d+$/, 'default' => 1 },
        force_update    => { allow => [0,1],     'default' => 0 },
        required_fields => { 'default' => [] },
        create_backup   => { allow => [0,1],     'default' => 1 },
    };
    my $args = check($tmpl, $options, 1)
        or croak "Could not parse arguments: " . Params::Check::last_error();
    ...
}
Prior to adding these checks, I'd forget whether the names used underscores or hyphens, pass require_backup instead of create_backup, etc. And this is for code I wrote myself--if other people are going to use it, you should definitely do some sort of idiot-proofing. Params::Check makes it fairly easy to do type checking, allowed value checking, default values, required options, storing option values to other variables and more.

How do you build your LINQ queries?

It seems there are two ways to build queries -- either using query expressions:
IEnumerable<Customer> result =
    from customer in customers
    where customer.FirstName == "Donna"
    select customer;
or using extension methods:
IEnumerable<Customer> result =
    customers.Where(customer => customer.FirstName == "Donna");
Which do you use and why? Which do you think will be more popular in the long-run?
Only a limited number of operations are available in the query-expression syntax; for example, Take() and First() are only available as extension methods.
I personally prefer the expression syntax when all the required operations are available, since I find it easier to read than lambdas; if not, I fall back to extension methods.
Take a look at this answer:
Linq Extension methods vs Linq syntax
I use the method syntax (almost) exclusively, because the query syntax has more limitations. For maintainability reasons, I find it preferable to use the method syntax right away, rather than maybe converting it later, or using a mix of both syntaxes.
It might be a little harder to read at first, but once you get used to it, it feels fairly natural.
I only use the method syntax. This is because I find it a lot faster to write, and I write a ton of linq. I also like it because it is more terse. If working on a team, it's probably best to come to a consensus on the preferred style, as mixing the two styles is hard to read.
Microsoft recommends the query syntax. "In general, we recommend query syntax because it is usually simpler and more readable". http://msdn.microsoft.com/en-us/library/bb397947.aspx
It depends on which you and your team find more readable, and I would choose this on a case by case basis. There are some queries that read better in syntax form and there are some that read better in method form. And of course, there is that broad middle ground where you can't say one way or the other, or some prefer it this way and others that way.
Keep in mind that you can mix both forms together where it might make it more readable.
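For example, a query expression can be parenthesized and chained with operators that have no query-syntax form, such as Take():
IEnumerable<Customer> result = (from customer in customers
                                where customer.FirstName == "Donna"
                                select customer).Take(5);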
I see no reason to suspect that either form will disappear in the future.

Is there an application for displaying some kind of query plan for a Linq to object query?

I'm looking for an application to display what a linq expression would do, in particular regarding multiple accesses to the same list in a query.
Better yet, the tool would tell me whether the linq query is good.
I used the expression tree visualizer in the past to at least help decode what is inside an expression tree. It aids in figuring out the parts of the tree and how each part is related.
Well, to begin with, I could easily foresee a tool that would pick a query apart, detect that the Where clause is the standard runtime implementation, and thus not examine that method but "know" what its execution plan would be, and could thereby piece together a plan for the whole query.
Right up until the point where you introduce a custom Linq provider, where the only way to figure out what it will be doing would be to read the code.
So I daresay there is no such tool, and making one would be very hard.
It would be fun to try though, at least for the standard classes; it would make a handy debugging visualizer for Visual Studio.
What about making the tool yourself?! ;)
Take a look at expression trees; I believe they might be useful.
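If you want to experiment yourself, here is a tiny sketch of inspecting an expression tree, which is the raw material any such tool would have to walk:
using System;
using System.Linq.Expressions;

class ExpressionPeek
{
    static void Main()
    {
        // Assigning a lambda to Expression<...> captures it as a tree, not a delegate.
        Expression<Func<int, bool>> isEven = n => n % 2 == 0;

        Console.WriteLine(isEven);               // n => ((n % 2) == 0)
        Console.WriteLine(isEven.Body);          // ((n % 2) == 0)
        Console.WriteLine(isEven.Body.NodeType); // Equal
    }
}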

Drawbacks of LINQ

What are the drawbacks of LINQ in general?
Can be hard to understand when you first start out with it
Deferred execution can separate errors from their causes (in terms of time)
Out-of-process LINQ (e.g. LINQ to SQL) will always be a somewhat leaky abstraction - you need to know what works and what doesn't, essentially
I still love LINQ massively though :)
EDIT: Having written this short list, I remembered that I've got an answer to a very similar question...
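The deferred-execution point is easy to demonstrate with a toy example; the exception surfaces far from the line that caused it:
using System;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 0, 4 };

        // No exception here: the query is only defined, not executed.
        var query = numbers.Select(n => 10 / n);

        // The DivideByZeroException is thrown here, during enumeration.
        foreach (var value in query)
            Console.WriteLine(value);
    }
}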
The biggest pain with LINQ is that (with database backends) you can't use it over a repository interface without it being a leaky abstraction.
LINQ is fantastic within a layer (especially the DAL etc), but since different providers support different things, you can't rely on Expression<Func<...>> or IQueryable<T> features working the same for different implementations.
As examples, between LINQ-to-SQL and Entity Framework:
EF doesn't support Single()
EF will error if you Skip/Take/First without an explicit OrderBy
EF doesn't support UDFs
etc. The LINQ provider for ADO.NET Data Services supports different combinations. This makes mocking and other abstractions unsafe.
But: for in-memory (LINQ-to-Objects), or in a single layer/implementation... fantastic.
Some more thoughts here: Pragmatic LINQ.
Like any abstraction in programming, it is vulnerable to a misunderstanding: "If I just understand this abstraction, I don't need to understand what's happening under the covers."
The truth is, if you do understand what's happening under the covers, you'll get much better value out of the abstraction, because you'll understand where it ceases to be applicable, so you'll be able to apply it with greater confidence of success where it is appropriate.
This is true of all abstractions, and applies to Linq in bucketfuls. To understand Linq to Objects, the best thing to do is to learn how to write Select, Where, Aggregate, etc. in C# with yield return. And then figure out how yield return replaces a lot of hand-written code by writing it all with classes. Then you'll be able to use it with an appreciation of the effort it is saving you, and it will no longer seem like magic, so you'll understand the limitations.
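For instance, the heart of Where in LINQ to Objects boils down to a few lines once you know yield return (a simplified sketch, ignoring the argument validation the real implementation does):
using System;
using System.Collections.Generic;

static class MyEnumerable
{
    // Essentially what Enumerable.Where does: lazily yield matching items.
    public static IEnumerable<T> MyWhere<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (var item in source)
            if (predicate(item))
                yield return item;
    }
}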
Same for the variants of Linq where the predicates are captured as expressions and transported off to another environment to be executed. You have to understand how it works in order to safely use it.
So the number 1 drawback of Linq is: the simple examples look deceptively short and simple. The problem is, how did the author of the sample know what to write? Because they knew how to write it all out in long form, and they knew how pieces of Linq could be used as abbreviations, and so they arrived at the nice short version.
As I say, not really specific to Linq, but highly relevant to it anyway.
Anonymous types. A proper ORM should always return objects of 'your' type (a partial class, with the possibility of adding your own methods, overriding, etc.). There are dozens of tutorials and examples of complex queries using linq, but none of them care to explain the advantage of returning a 'bag of properties' (return new { .........}). How am I supposed to work with an anonymous type? Wrap it in another class again?
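The complaint, sketched (Name and LastOrder are invented properties):
using System;
using System.Collections.Generic;
using System.Linq;

class Customer
{
    public string Name { get; set; }
    public DateTime LastOrder { get; set; }
}

class AnonymousTypeDemo
{
    // Fine inside a single method: the compiler infers the anonymous type.
    static void Report(IEnumerable<Customer> customers)
    {
        var summaries = customers.Select(c => new { c.Name, c.LastOrder });
        foreach (var s in summaries)
            Console.WriteLine(s.Name + ": " + s.LastOrder);
    }

    // But the anonymous type cannot be named in a method signature, so
    // returning the projection forces you to declare a DTO class after all.
}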
Actually I can't think of any drawbacks. It makes programming life a lot simpler, because a lot of things can be written in a more compact yet still readable way.
But having said this, I must also agree with Jon that you should have some idea what you're doing (but that holds for all technological advances).
The only drawback it has is its performance; see this article.
