Why is this LINQ so slow?

Can anyone please explain why the third query below is orders of magnitude slower than the others when it oughtn't to take any longer than doing the first two in sequence?
var data = Enumerable.Range(0, 10000).Select(x => new { Index = x, Value = x + " is the magic number"}).ToList();
var test1 = data.Select(x => new { Original = x, Match = data.Single(y => y.Value == x.Value) }).Take(1).Dump();
var test2 = data.Select(x => new { Original = x, Match = data.Single(z => z.Index == x.Index) }).Take(1).Dump();
var test3 = data.Select(x => new { Original = x, Match = data.Single(z => z.Index == data.Single(y => y.Value == x.Value).Index) }).Take(1).Dump();
EDIT: I've added a .ToList() to the original data generation because I don't want any repeated generation of the data clouding the issue.
I'm just trying to understand why this code is so slow, by the way, not looking for a faster alternative, unless it sheds some light on the matter. I would have thought that if LINQ is lazily evaluated and I'm only looking for the first item (Take(1)) then test3's:
data.Select(x => new { Original = x, Match = data.Single(z => z.Index == data.Single(y => y.Value == x.Value).Index) }).Take(1);
could reduce to:
data.Select(x => new { Original = x, Match = data.Single(z => z.Index == 0) }).Take(1)
in O(N) as the first item in data is successfully matched after one full scan of the data by the inner Single(), leaving one more sweep of the data by the remaining Single(). So still all O(N).
It's evidently being processed in a more long-winded way, but I don't really understand how or why.
Test3 takes a couple of seconds to run by the way, so I think we can safely assume that if your answer features the number 10^16 you've made a mistake somewhere along the line.

The first two "tests" have identical structure, and both are quadratic if fully enumerated. The third adds another entire level of slowness.
The first two LINQ statements are quadratic in nature when fully enumerated. Since your "Match" element potentially requires iterating through the entire "data" sequence to find its match, the time per element grows as you progress through the range: the 10000th element forces the engine to iterate through all 10000 elements of the original sequence, making the whole query an O(N^2) operation.
"test3" takes this to an entirely new level of pain, because the inner data.Single(y => ...) sits inside the predicate of the outer Single, so it is re-evaluated for every z the outer Single examines - not computed once and reused, as the reduction in the question assumes.
That makes each Match an O(N^2) operation rather than O(N), and the full query O(N^3). Even with Take(1), producing the first element still costs one complete outer Single, i.e. one O(N^2) pass (about 10^8 predicate evaluations for N = 10^4) - which is exactly the couple of seconds you're seeing.
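To see the re-evaluation directly, here is a minimal sketch (N reduced to 1,000 and a counter added for illustration) that counts how often the inner predicate runs for just the first outer element:
int innerScans = 0;
var data = Enumerable.Range(0, 1000)
    .Select(x => new { Index = x, Value = x + " is the magic number" })
    .ToList();
var first = data
    .Select(x => new {
        Original = x,
        Match = data.Single(z => z.Index ==
            data.Single(y => { innerScans++; return y.Value == x.Value; }).Index)
    })
    .Take(1)
    .ToList();
Console.WriteLine(innerScans); // 1,000,000: the inner Single rescans all N items for each of the N values of z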

Fixed.
var data = Enumerable.Range(0, 10000)
.Select(x => new { Index = x, Value = x + " is the magic number"})
.ToList();
var forward = data.ToLookup(x => x.Index);
var backward = data.ToLookup(x => x.Value);
var test1 = data.Select(x => new {
        Original = x,
        Match = backward[x.Value].Single()
    }).Take(1).Dump();
var test2 = data.Select(x => new {
        Original = x,
        Match = forward[x.Index].Single()
    }).Take(1).Dump();
var test3 = data.Select(x => new {
        Original = x,
        Match = forward[backward[x.Value].Single().Index].Single()
    }).Take(1).Dump();
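Since Index and Value are both unique in this data, plain dictionaries would work just as well and make the O(1) lookups explicit (a sketch reusing the same data variable):
var byIndex = data.ToDictionary(x => x.Index);
var byValue = data.ToDictionary(x => x.Value);
var test3b = data.Select(x => new {
        Original = x,
        Match = byIndex[byValue[x.Value].Index]
    }).Take(1).Dump();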
In the original code, fully enumerated:
data.ToList() generates 10,000 instances (10^4).
data.Select( data.Single() ).ToList() scans data once per element: 100,000,000 predicate evaluations (10^8).
data.Select( data.Single( data.Single() ) ).ToList() re-runs the inner Single for every element the outer Single examines: 1,000,000,000,000 predicate evaluations (10^12), not 10^16. With Take(1) only the first element is produced, which still costs one O(N^2) pass - about 10^8 evaluations.
Single and First are different: Single throws if more than one matching element is encountered, so it must fully enumerate its source to check for duplicates even after it has found a match, while First can stop at the first match.
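A quick sketch of that difference, counting predicate evaluations:
int evals = 0;
var seq = Enumerable.Range(0, 100).ToList();
seq.First(n => { evals++; return n == 0; });
Console.WriteLine(evals); // 1: First stops at the first match
evals = 0;
seq.Single(n => { evals++; return n == 0; });
Console.WriteLine(evals); // 100: Single keeps scanning to rule out a second match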

Related

LINQ: Improving performance of "query to find all dictionaries from list of dictionaries where given key has at least one value from list of values"

I tried searching for existing questions but could not find anything, so apologies if this is a duplicate question.
I have the following piece of code. It runs in a loop for different values of key and listOfValues (listOfDict does not change and is built only once; key and listOfValues vary with each iteration). The code currently works, but the profiler shows that 50% of the execution time is spent in this LINQ query. Can I improve performance, perhaps using a different LINQ construct?
// List of dictionary that allows multiple values against one key.
List<Dictionary<string, List<string>>> listOfDict = BuildListOfDict();
// Following code & LINQ query runs in a loop.
List<string> listOfValues = BuildListOfValues();
string key = GetKey();
// LINQ query to find all dictionaries from listOfDict
// where given key has at least one value from listOfValues.
List<Dictionary<string, List<string>>> result = listOfDict
    .Where(dict => dict[key]
        .Any(lhs => listOfValues.Any(rhs => lhs == rhs)))
    .ToList();
Using a HashSet will perform significantly better. You can create a HashSet<string> like so:
IEnumerable<string> strings = ...;
var hashSet = new HashSet<string>(strings);
I assume you can change your methods to return HashSets and make them run like this:
List<Dictionary<string, HashSet<string>>> listOfDict = BuildListOfDict();
HashSet<string> listOfValues = BuildListOfValues();
string key = GetKey();
List<Dictionary<string, HashSet<string>>> result = listOfDict
    .Where(dict => listOfValues.Overlaps(dict[key]))
    .ToList();
Here HashSet's instance method Overlaps is used. HashSet is optimized for set operations like this. In a test with one dictionary of 200 elements, this ran in 3% of the time of your original method.
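For reference, Overlaps returns true as soon as the set and the given sequence share at least one element:
var set = new HashSet<string> { "a", "b", "c" };
Console.WriteLine(set.Overlaps(new[] { "x", "c" })); // True: "c" is in both
Console.WriteLine(set.Overlaps(new[] { "x", "y" })); // False: nothing in common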
UPDATED: Per @GertArnold, switched from Any/Contains to HashSet.Overlaps for a slight performance improvement.
Depending on whether listOfValues or the average value list for a key is longer, you can either convert listOfValues to a HashSet<string> or build your list of dictionaries with a HashSet<string> for each value:
// optimize testing against listOfValues
var valHS = listOfValues.ToHashSet();
var result2 = listOfDict
    .Where(dict => valHS.Overlaps(dict[key]))
    .ToList();
// change structure to optimize query
var listOfDict2 = listOfDict
    .Select(dict => dict.ToDictionary(kvp => kvp.Key, kvp => kvp.Value.ToHashSet()))
    .ToList();
var result3 = listOfDict2
    .Where(dict => dict[key].Overlaps(listOfValues))
    .ToList();
Note: if the query is repeated with differing listOfValues, it probably makes more sense to build the HashSets in the dictionaries once, rather than computing a new HashSet from each listOfValues.
@LasseVågsætherKarlsen's suggestion in the comments to invert the structure intrigued me, so with a further refinement to handle the multiple keys, I created an index structure and tested lookups. With my test harness this is about twice as fast as using a HashSet for one of the List<string>s and four times faster than the original method:
var listOfKeys = listOfDict.First().Select(d => d.Key);
var lookup = listOfKeys.ToDictionary(
    k => k,
    k => listOfDict
        .SelectMany(d => d[k].Select(v => (v, d)))
        .ToLookup(vd => vd.v, vd => vd.d));
Now to filter for a particular key and list of values:
var result4 = listOfValues.SelectMany(v => lookup[key][v]).Distinct().ToList();
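To make the shape of the index concrete, here is a tiny self-contained sketch with invented sample data; it also shows why the Distinct above is needed (one dictionary can match several values):
var listOfDict = new List<Dictionary<string, List<string>>>
{
    new Dictionary<string, List<string>> { ["k1"] = new List<string> { "a", "b" } },
    new Dictionary<string, List<string>> { ["k1"] = new List<string> { "b", "c" } },
};
var listOfKeys = listOfDict.First().Select(d => d.Key);
var lookup = listOfKeys.ToDictionary(
    k => k,
    k => listOfDict
        .SelectMany(d => d[k].Select(v => (v, d)))
        .ToLookup(vd => vd.v, vd => vd.d));
Console.WriteLine(lookup["k1"]["b"].Count()); // 2: "b" occurs in both dictionaries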

Using Linq, How can I properly append multiple Where clauses so that they appear in the appropriate order?

The order I would like the end results to appear in is exact matches first, given an input string, followed by other matches that are Contains for a given field. I tried to approach this in a very rudimentary way, as shown in this example:
var raw = Model.SearchResults.Where(m => m.EffectiveDateTime != null).OrderBy(m => m.EffectiveDateTime).ToList();
var exact = raw.Where(m => m.IssueNumber.ToLower() == Model.SearchText.ToLower());
var contains = raw.Where(m => m.IssueNumber.ToLower().Contains(Model.SearchText.ToLower()));
var list = exact.Union(contains);
This approach seems like a really bad way to do this. In fact, the Union portion seems to effectively crash my application. Is there an opposite of Intersect which would give me the remaining results outside the exact matches, which I could then append to a final list so that the order would be exact matches, followed by StartsWith matches, followed finally by Contains matches, in that descending order?
To answer your original question, you can use a temporary expression to classify the match types, then order by the match type and other criteria, and it will translate to SQL as well:
var st = Model.SearchText.ToLower();
var list = Model.SearchResults
    .Where(m => m.EffectiveDateTime != null)
    .Select(m => new {
        m,
        im = m.IssueNumber.ToLower()
    })
    .Select(mim => new {
        mim.m,
        Rank = mim.im == st ? 1 : mim.im.StartsWith(st) ? 2 : mim.im.Contains(st) ? 3 : 4
    })
    .Where(mr => mr.Rank < 4)
    .OrderBy(mr => mr.Rank)
    .ThenBy(mr => mr.m.EffectiveDateTime)
    .Select(mr => mr.m)
    .ToList();
I did the double Select to emulate let in fluent syntax; here is the equivalent in query syntax, which I think is a bit clearer than the lambda syntax in this case:
var list2 = (from m in Model.SearchResults
             where m.EffectiveDateTime != null
             let im = m.IssueNumber.ToLower()
             let Rank = im == st ? 1 : im.StartsWith(st) ? 2 : im.Contains(st) ? 3 : 4
             where Rank < 4
             orderby Rank, m.EffectiveDateTime
             select m)
            .ToList();
Also, if you run the whole query in the database, the ToLower is likely unnecessary, as the default SQL collation is probably case-insensitive anyway.
Actually, I went back to the drawing board and figured it out. This is a little bit better for me and returns the results I needed.
var list = Model.SearchResults
    .Where(e => e.A.ToLower().Contains(Model.SearchText.ToLower()))
    .GroupBy(d => new { d.A, d.B, d.C })
    .OrderBy(x => x.Key.A)
    .ThenBy(x => x.Key.B)
    .ThenBy(x => x.Key.C)
    .Select(x => new
    {
        A = x.Key.A,
        B = x.Key.B,
        C = x.Key.C
    })
    .ToList();

Merging inputs in distributed application

INTRODUCTION
I have to write a distributed application which counts the maximum number of unique values for 3 records. I have no experience in this area and don't know the frameworks at all. My input could look as follows:
u1: u2,u3,u4,u5,u6
u2: u1,u4,u6,u7,u8
u3: u1,u4,u5,u9
u4: u1,u2,u3,u6
...
Then the beginning of the results should be:
(u1,u2,u3), u4,u5,u6,u7,u8,u9 => count=6
(u1,u2,u4), u3,u5,u6,u7,u8 => count=5
(u1,u3,u4), u2,u5,u6,u9 => count=4
(u2,u3,u4), u1,u5,u6,u7,u8,u9 => count=6
...
So my approach is to first merge each pair of records, and then merge each merged pair with each remaining single record.
QUESTION
Can I perform an operation like this (merging more than one input row at the same time) in frameworks like Hadoop/Spark? Or maybe my approach is incorrect and I should do this a different way?
Any advice will be appreciated.
Can I perform an operation like this (merging more than one input row at the same time) in frameworks like Hadoop/Spark?
Yes, you can.
Or maybe my approach is incorrect and I should do this a different way?
It depends on the size of the data. If your data is small, it's faster and easier to do it locally. If your data is huge, at least hundreds of GBs, the common strategy is to save the data to HDFS (a distributed file system) and do the analysis with MapReduce/Spark.
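If the data does fit in memory, a local sketch in C#/LINQ (matching the rest of this page; the input format is taken from the question, the file name is hypothetical) could look like this:
// Parse lines like "u1: u2,u3,u4,u5,u6" into key -> values.
var records = File.ReadLines("input.txt")
    .Select(line => line.Split(':'))
    .ToDictionary(
        p => p[0].Trim(),
        p => p[1].Split(',').Select(v => v.Trim()).ToArray());
var keys = records.Keys.OrderBy(k => k).ToList();
for (int i = 0; i < keys.Count; i++)
    for (int j = i + 1; j < keys.Count; j++)
        for (int k = j + 1; k < keys.Count; k++)
        {
            var triplet = new[] { keys[i], keys[j], keys[k] };
            // Union of the three value lists, minus the triplet's own keys,
            // mirroring the expected output in the question.
            var unique = new HashSet<string>(triplet.SelectMany(t => records[t]));
            unique.ExceptWith(triplet);
            Console.WriteLine($"({string.Join(",", triplet)}) => count={unique.Count}");
        }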
An example Spark application written in Scala:
import java.util

import org.apache.spark.{SparkConf, SparkContext}

object MyCounter {
  val sparkConf = new SparkConf().setAppName("My Counter")
  val sc = new SparkContext(sparkConf)

  def main(args: Array[String]) {
    val inputFile = sc.textFile("hdfs:///inputfile.txt")
    // get "u1" from "u1: u2,u3,u4,u5,u6"
    val keys = inputFile.map(line => line.substring(0, 2))
    // build the "(u1,u2,u3)" triplets of distinct keys
    val triplets = keys.cartesian(keys).cartesian(keys)
      .map(z => (z._1._1, z._1._2, z._2))
      .filter(z => !z._1.equals(z._2) && !z._1.equals(z._3) && !z._2.equals(z._3))
    // If you have a small number of (u1,u2,u3) triplets, it's better to prepare them locally.
    val res = triplets.cartesian(inputFile)
      .filter(z => {
        // a triplet (u1,u2,u3) only matches lines starting with u1, u2 or u3,
        // for example "u1: u2,u3,u4,u5,u6"
        z._2.startsWith(z._1._1) || z._2.startsWith(z._1._2) || z._2.startsWith(z._1._3)
      })
      .reduceByKey((a, b) => a + b) // merge the three matching lines per triplet
      .map(z => {
        val line = z._2
        val values = line.split(",")
        // count unique values using a set
        val set = new util.HashSet[String]()
        for (value <- values) {
          set.add(value)
        }
        "key=" + z._1 + ", count=" + set.size() // the result from one mapper is a string
      })
      .collect()
    for (line <- res) {
      println(line)
    }
  }
}
The code is untested and not efficient; it admits some optimizations (for example, removing unnecessary map/reduce steps). You could write the same program in Python or Java, or implement the same logic on Hadoop MapReduce.

Linq Select into New Object Performance

I am new to LINQ, using C#. I got a big surprise when I executed the following:
var scores = objects.Select( i => new { Obj = i, // 'object' is a C# keyword and can't be an anonymous-type member name
                                        score1 = i.algorithm1(),
                                        score2 = i.algorithm2(),
                                        score3 = i.algorithm3() } );
double avg2 = scores.Average( i => i.score2 );                   // algorithm1() is called for every object
double cutoff2 = avg2 + scores.Select( i => i.score2 ).StdDev(); // algorithm1() is called for every object
double avg3 = scores.Average( i => i.score3 );                   // algorithm1() is called for every object
double cutoff3 = avg3 + scores.Select( i => i.score3 ).StdDev(); // algorithm1() is called for every object
foreach( var s in scores.Where( i => i.score2 > cutoff2 | i.score3 > cutoff3 ).OrderBy( i => i.score1 )) // algorithm1() is called for every object
{
    Debug.Log(String.Format("{0} {1} {2} {3}\n", s.Obj, s.score1, s.score2/avg2, s.score3/avg3));
}
The attributes in my new objects store the function calls rather than the values. Each time I try to access an attribute, the original function is called. I assume this is a huge waste of time? How can I avoid this?
Yes, you've discovered that LINQ uses deferred execution. This is a normal part of LINQ, and very handy indeed for building up queries without actually executing anything until you need to - which in turn is great for pipelines of multiple operations over potentially huge data sources which can be streamed.
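A minimal sketch of what that means in practice (counter added for illustration):
int calls = 0;
var query = Enumerable.Range(0, 3).Select(i => { calls++; return i * i; });
Console.WriteLine(calls);    // 0: nothing has run yet
var sum1 = query.Sum();      // enumerates: calls == 3
var sum2 = query.Sum();      // enumerates again: calls == 6
var cached = query.ToList(); // materialize once (calls == 9)...
var sum3 = cached.Sum();     // ...re-reading the list doesn't re-run the projection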
For more details about how LINQ to Objects works internally, you might want to read my Edulinq blog series - it's basically a reimplementation of the whole of LINQ to Objects, one method at a time. Hopefully by the end of that you'll have a much clearer idea of what to expect.
If you want to materialize the query, you just need to call ToList or ToArray to build an in-memory copy of the results:
var scores = objects.Select( i => new { Obj = i, // 'object' is a C# keyword, so the member is named Obj
                                        score1 = i.algorithm1(),
                                        score2 = i.algorithm2(),
                                        score3 = i.algorithm3() } ).ToList();

How to request with random row linq

I am slow today.
There is a request:
"Take a random child and put it into another garden."
I changed the code, but I get an error on the last line: "Does not contain a definition…and no extension method".
var query = db.Child.Where(x => x.Garden != null);
int count = query.Count();
int index = new Random().Next(count);
var ch = db.Child.OrderBy(x => query.Skip(index).FirstOrDefault());
ch.Garden_Id = "1";
What am I doing wrong?
It's hard to tell what you're doing wrong, because you didn't say why the results you're getting do not satisfy you.
But I can see two possible mistakes.
You're counting items with the x.Garden != null condition, but taking from all children.
Take returns IEnumerable<T> even when you specify that it should return only one item; you should probably use First instead.
I think your last line should be something like:
var ch = db.Child.Where(x => x.Garden != null).Skip(index).First();
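Putting it together with the question's own variables - a hedged sketch assuming Entity Framework, where Skip on an IQueryable generally needs a preceding OrderBy (Id is an assumed key column, SaveChanges an assumed persistence call):
var query = db.Child.Where(x => x.Garden != null);
int count = query.Count();
int index = new Random().Next(count);
var ch = query.OrderBy(x => x.Id).Skip(index).First(); // a single Child entity, not a sequence
ch.Garden_Id = "1";
db.SaveChanges(); // assumed: persist the change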
