Based on this question & answer: "How to retrieve unique count of a field using Kibana + Elastic Search".
I have been able to collect the individual counts for each unique IP address from our Apache logs. What I actually want, however, is to display the number of distinct IP addresses, i.e. how many unique visitors we had.
I think I need to use the terms_stats facet to do this, but I don't know what to set as the "value_field".
This is not possible with the current version of Kibana. What I did to achieve it was to create a custom histogram panel.
To create the custom histogram panel, copy the existing histogram panel and modify config.js and module.js, changing all the path references to point to the new panel.
Then override the doSearch function to use the counts query described at http://www.elasticsearch.org/blog/count-elasticsearch/ and update the results-parsing logic.
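For the count mode, the parsing code below expects a date_histogram aggregation named monthly with a cardinality sub-aggregation named visitor_count. A minimal sketch of the request body doSearch would have to send (the @timestamp and clientip field names are assumptions here; substitute whatever your mapping uses):
{
  "size": 0,
  "aggs": {
    "monthly": {
      "date_histogram": { "field": "@timestamp", "interval": "month" },
      "aggs": {
        "visitor_count": { "cardinality": { "field": "clientip" } }
      }
    }
  }
}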
Look for this function in the minified panel source:
b.get_data = function(a, j, k) {
return b.populate_modal(n), p = n.doSearch(), p.then(function(c) {
if (b.panelMeta.loading = !1, 0 === j && (b.legend = [], b.hits = 0, a = [], b.annotations = [], k = b.query_id = (new Date).getTime()), d.isUndefined(c.error)) {
if (b.query_id === k) {
var i, n, p, q = 0;
o = JSON.parse("[{\"query\":\"*\",\"alias\":\"\",\"color\":\"#7EB26D\",\"id\":0,\"pin\":false,\"type\":\"lucene\",\"enable\":true,\"parent\" : 0}]");
d.each(o, function(e) {
//alert(JSON.stringify(c));
//var f = c.aggregations.monthly.buckets[e.id];
if (d.isUndefined(a[q]) || 0 === j) {
var h = {interval: m,start_date: l && l.from,end_date: l && l.to,fill_style: b.panel.derivative ? "null" : b.panel.zerofill ? "minimal" : "no"};
i = new g.ZeroFilled(h), n = 0, p = {}
} else
i = a[q].time_series, n = a[q].hits, p = a[q].counters;
d.each(c.aggregations.monthly.buckets, function(a) {
var c;
n += a.visitor_count.value, b.hits += a.visitor_count.value, p[a.key] = (p[a.key] || 0) + a.visitor_count.value, "count" === b.panel.mode ? c = (i._data[a.key] || 0) + a.visitor_count.value : "mean" === b.panel.mode ? c = ((i._data[a.key] || 0) * (p[a.key] - a.visitor_count.value) + a.mean * a.visitor_count.value) / p[a.key] : "min" === b.panel.mode ? c = d.isUndefined(i._data[a.key]) ? a.min : i._data[a.key] < a.min ? i._data[a.key] : a.min : "max" === b.panel.mode ? c = d.isUndefined(i._data[a.key]) ? a.max : i._data[a.key] > a.max ? i._data[a.key] : a.max : "total" === b.panel.mode && (c = (i._data[a.key] || 0) + a.total), i.addValue(a.key, c)
}), b.legend[q] = {query: e,hits: n}, a[q] = {info: e,time_series: i,hits: n,counters: p}, q++
}), b.panel.annotate.enable && (b.annotations = b.annotations.concat(d.map(c.hits.hits, function(a) {
var c = d.omit(a, "_source", "sort", "_score"), g = d.extend(e.flatten_json(a._source), c);
return {min: a.sort[1],max: a.sort[1],eventType: "annotation",title: null,description: "<small><i class='icon-tag icon-flip-vertical'></i> " + g[b.panel.annotate.field] + "</small><br>" + f(a.sort[1]).format("YYYY-MM-DD HH:mm:ss"),score: a.sort[0]}
})), b.annotations = d.sortBy(b.annotations, function(a) {
return a.score * ("desc" === b.panel.annotate.sort[1] ? -1 : 1)
}), b.annotations = b.annotations.slice(0, b.panel.annotate.size))
}
} else
b.panel.error = b.parse_error(c.error);
b.$emit("render", a), j < h.indices.length - 1 && b.get_data(a, j + 1, k)
    })
}
I was reading the Spark correlation algorithm source code and, while going through it, I couldn't understand this particular piece of code.
It is from the file org/apache/spark/mllib/linalg/BLAS.scala:
// SPR = symmetric packed rank-1 update (as in BLAS): adds alpha * v * v^T
// to the upper-triangular matrix U, stored column-by-column in packed form.
def spr(alpha: Double, v: Vector, U: Array[Double]): Unit = {
  val n = v.size
  v match {
    case DenseVector(values) =>
      // Dense case: delegate directly to the native BLAS routine.
      NativeBLAS.dspr("U", n, alpha, values, 1, U)
    case SparseVector(size, indices, values) =>
      val nnz = indices.length
      var colStartIdx = 0
      var prevCol = 0
      var col = 0
      var j = 0
      var i = 0
      var av = 0.0
      while (j < nnz) {
        col = indices(j)
        // Skip empty columns: column c starts at offset c*(c+1)/2 in the
        // packed layout, and this increment equals
        // col*(col+1)/2 - prevCol*(prevCol+1)/2.
        colStartIdx += (col - prevCol) * (col + prevCol + 1) / 2
        av = alpha * values(j)
        i = 0
        // Update entries (indices(i), col) for all stored rows with
        // indices(i) <= col; since indices is sorted, that is i <= j.
        while (i <= j) {
          U(colStartIdx + indices(i)) += av * values(i)
          i += 1
        }
        j += 1
        prevCol = col
      }
  }
}
I do not know Scala, and that could be the reason I cannot understand it. Can someone explain what is happening here?
It is being called from RowMatrix.scala:
def computeGramianMatrix(): Matrix = {
  val n = numCols().toInt
  checkNumColumns(n)
  // Computes n*(n+1)/2, avoiding overflow in the multiplication.
  // This succeeds when n <= 65535, which is checked above
  val nt = if (n % 2 == 0) ((n / 2) * (n + 1)) else (n * ((n + 1) / 2))
  // Compute the upper triangular part of the gram matrix.
  val GU = rows.treeAggregate(new BDV[Double](nt))(
    seqOp = (U, v) => {
      BLAS.spr(1.0, v, U.data)
      U
    }, combOp = (U1, U2) => U1 += U2)
  RowMatrix.triuToFull(n, GU.data)
}
The correlation is defined here:
https://en.wikipedia.org/wiki/Pearson_correlation_coefficient
The final goal is to understand the Spark correlation algorithm.
Update 1: Relevant paper: https://stanford.edu/~rezab/papers/linalg.pdf
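For readers tracing the code: the link between the Gramian and the correlation matrix is the standard one (ordinary sample-statistics identities, not Spark-specific notation). With $A$ the $m \times n$ row matrix and $\mu$ the vector of column means,
$$G = A^\top A, \qquad C_{ij} = \frac{G_{ij} - m\,\mu_i \mu_j}{m - 1}, \qquad \operatorname{corr}_{ij} = \frac{C_{ij}}{\sqrt{C_{ii}\,C_{jj}}}.$$
This is essentially the route Spark takes: the Gramian feeds the covariance computation, and the covariance matrix is then rescaled by the square roots of its diagonal entries to give the Pearson correlation.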
I am working on an extrusion function to create a mesh given a 2D texture and the thickness of it.
Example:
I have achieved finding the outline of the texture by simply looking for the pixels either near the edge or near transparent ones. It works great even for concave (donut-shaped) shapes but now I am left with an array of outline pixels.
Here is the result:
The problem is that the values are ordered from top-left to bottom-right, which makes them unsuitable for building an actual 3D outline.
My current idea is the following:
Step 1.
From index [0], look at the right-hand side for the nearest contiguous point different from the starting point.
If found, move it into another array.
If nothing is found, look below. Continue until the starting point has been reached.
Step 2.
Pick another pixel, if any, from the pixels remained in the array.
Repeat from Step1.
This, in my head, would work, but it seems quite inefficient. Researching, I found the Moore-Neighbor tracing algorithm, but I couldn't find an example of it working with concave shapes.
Any thoughts?
In the end, I managed to find my own answer, so I want to share it here:
After finding the outline of a given image (using the alpha value of each pixel), the pixels are ordered in rows, which is good for drawing them but bad for constructing a mesh.
So, the next step is to find contiguous lines. This is done by first checking whether the found pixel has any neighbors, giving priority to the top/left/right/bottom ones (otherwise the corners would be skipped).
Keep going until no pixels are left in the original array.
Here is the actual implementation (for Babylon.js but the idea works with any other engine):
Playground: https://www.babylonjs-playground.com/#9GPMUY#11
var GetTextureOutline = function (data, keepOutline, keepOtherPixels) {
var not_outline = [];
var pixels_list = [];
for (var j = 0; j < data.length; j = j + 4) {
var alpha = data[j + 3];
var current_alpha_index = j + 3;
// Not Invisible
if (alpha != 0) {
var top_alpha = data[current_alpha_index - (canvasWidth * 4)];
var bottom_alpha = data[current_alpha_index + (canvasWidth * 4)];
var left_alpha = data[current_alpha_index - 4];
var right_alpha = data[current_alpha_index + 4];
if ((top_alpha === undefined || top_alpha == 0) ||
(bottom_alpha === undefined || bottom_alpha == 0) ||
(left_alpha === undefined || left_alpha == 0) ||
(right_alpha === undefined || right_alpha == 0)) {
pixels_list.push({
x: (j / 4) % canvasWidth,
y: parseInt((j / 4) / canvasWidth),
color: new BABYLON.Color3(data[j] / 255, data[j + 1] / 255, data[j + 2] / 255),
alpha: data[j + 3] / 255
});
if (!keepOutline) {
data[j] = 255;
data[j + 1] = 0;
data[j + 2] = 255;
}
} else if (!keepOtherPixels) {
not_outline.push(j);
}
}
}
// Remove not-outline pixels
for (var i = 0; i < not_outline.length; i++) {
if (!keepOtherPixels) {
data[not_outline[i]] = 0;
data[not_outline[i] + 1] = 0;
data[not_outline[i] + 2] = 0;
data[not_outline[i] + 3] = 0;
}
}
return pixels_list;
}
var ExtractLinesFromPixelsList = function (pixelsList, sortPixels) {
if (sortPixels) {
// Sort pixelsList
function sortY(a, b) {
if (a.y == b.y) return a.x - b.x;
return a.y - b.y;
}
pixelsList.sort(sortY);
}
var lines = [];
var line = [];
var pixelAdded = true;
var skipDiagonals = true;
line.push(pixelsList[0]);
pixelsList.splice(0, 1);
var countPixels = 0;
while (pixelsList.length != 0) {
if (!pixelAdded && !skipDiagonals) {
lines.push(line);
line = [];
line.push(pixelsList[0]);
pixelsList.splice(0, 1);
} else if (!pixelAdded) {
skipDiagonals = false;
}
pixelAdded = false;
for (var i = 0; i < pixelsList.length; i++) {
if ((skipDiagonals && (
line[line.length - 1].x + 1 == pixelsList[i].x && line[line.length - 1].y == pixelsList[i].y ||
line[line.length - 1].x - 1 == pixelsList[i].x && line[line.length - 1].y == pixelsList[i].y ||
line[line.length - 1].x == pixelsList[i].x && line[line.length - 1].y + 1 == pixelsList[i].y ||
line[line.length - 1].x == pixelsList[i].x && line[line.length - 1].y - 1 == pixelsList[i].y)) || (!skipDiagonals && (
line[line.length - 1].x + 1 == pixelsList[i].x && line[line.length - 1].y + 1 == pixelsList[i].y ||
line[line.length - 1].x + 1 == pixelsList[i].x && line[line.length - 1].y - 1 == pixelsList[i].y ||
line[line.length - 1].x - 1 == pixelsList[i].x && line[line.length - 1].y + 1 == pixelsList[i].y ||
line[line.length - 1].x - 1 == pixelsList[i].x && line[line.length - 1].y - 1 == pixelsList[i].y
))) {
line.push(pixelsList[i]);
pixelsList.splice(i, 1);
i--;
pixelAdded = true;
skipDiagonals = true;
}
}
}
lines.push(line);
return lines;
}
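For context, a typical call sequence looks like this (a sketch only; the canvas setup and the textureImage, canvasWidth and canvasHeight names are assumptions, not part of the original code — note that GetTextureOutline reads canvasWidth from the enclosing scope):
// Draw the texture on a canvas and grab its RGBA bytes.
var canvas = document.createElement("canvas");
canvas.width = canvasWidth;
canvas.height = canvasHeight;
var ctx = canvas.getContext("2d");
ctx.drawImage(textureImage, 0, 0);
var data = ctx.getImageData(0, 0, canvasWidth, canvasHeight).data;
// Keep only the outline pixels, then order them into contiguous lines.
var outlinePixels = GetTextureOutline(data, false, false);
var lines = ExtractLinesFromPixelsList(outlinePixels, true);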
Algorithm: Loop over the pixels, checking each pixel only once and skipping empty cells, and store each border pixel in a list (there won't be duplicates).
The isEmpty implementation depends on how transparency works in your case (e.g. a certain color may be considered transparent); below is the case where we have an alpha channel.
threshold is the alpha level that represents the minimum visibility for a cell to be considered non-empty.
isBorder checks whether any of the Moore neighbors (the 8 surrounding cells) is empty; in that case it is a border cell, otherwise it is not, because it is surrounded by filled cells.
isEmpty(x,y): image[x,y].alpha <= threshold
isBorder(x,y)
: if isEmpty(x , y-1): return true
: if isEmpty(x , y+1): return true
: if isEmpty(x-1, y ): return true
: if isEmpty(x+1, y ): return true
: if isEmpty(x-1, y-1): return true
: if isEmpty(x-1, y+1): return true
: if isEmpty(x+1, y-1): return true
: if isEmpty(x+1, y+1): return true
: otherwise: return false
getBorderCellList()
: l = empty-list
: for x in 0..image.width
: : for y in 0..image.height
: : : if !isEmpty(x,y)
: : : : if isBorder(x,y)
: : : : : l.add(x,y)
: return l
Optimization: You could optimize this with a precomputed boolean array e[image.width][image.height] where e[x,y] = 1 if image[x,y] is empty, then use it directly in the check, like isBorder(x,y): e[x-1,y] | e[x+1,y] | .. | e[x+1,y+1].
init()
: for x in 0..image.width
: : for y in 0..image.height
: : : e[x,y] = isEmpty(x,y)
isEmpty(x,y): image[x,y].alpha <= threshold
isBorder(x,y): e[x-1,y] | e[x+1,y] | .. | e[x+1,y+1]
getBorderCellList()
: l = empty-list
: for x in 0..image.width
: : for y in 0..image.height
: : : if not e[x,y]
: : : : if isBorder(x,y)
: : : : : l.add(x,y)
: return l
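A direct JavaScript translation of the optimized version might look like this (a sketch: image is assumed to expose width, height and an alphaAt(x, y) accessor, and out-of-range lookups count as empty, matching the undefined checks in the earlier code):
function getBorderCellList(image, threshold) {
    var w = image.width, h = image.height, x, y;
    // Precompute emptiness: e[x][y] = true if the cell is empty.
    var e = [];
    for (x = 0; x < w; x++) {
        e[x] = [];
        for (y = 0; y < h; y++) {
            e[x][y] = image.alphaAt(x, y) <= threshold;
        }
    }
    // Out-of-bounds neighbors are treated as empty.
    function isEmpty(px, py) {
        return px < 0 || py < 0 || px >= w || py >= h || e[px][py];
    }
    function isBorder(px, py) {
        return isEmpty(px, py - 1) || isEmpty(px, py + 1) ||
               isEmpty(px - 1, py) || isEmpty(px + 1, py) ||
               isEmpty(px - 1, py - 1) || isEmpty(px - 1, py + 1) ||
               isEmpty(px + 1, py - 1) || isEmpty(px + 1, py + 1);
    }
    var list = [];
    for (x = 0; x < w; x++) {
        for (y = 0; y < h; y++) {
            if (!e[x][y] && isBorder(x, y)) {
                list.push({ x: x, y: y });
            }
        }
    }
    return list;
}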
How can I override the moveTo and moveByPx methods of OpenLayers.Map to eliminate "movestart" event triggering for any action except zooming?
map = new OpenLayers.Map("map");
OpenLayers.Map.prototype.moveByPx = function (a, b) {
var c = this.size.w / 2,
d = this.size.h / 2,
e = c + a,
f = d + b,
g = this.baseLayer.wrapDateLine,
h = 0,
k = 0;
this.restrictedExtent && (h = c, k = d, g = !1);
a = g || e <= this.maxPx.x - h && e >= this.minPx.x + h ? Math.round(a) : 0;
b = f <= this.maxPx.y - k && f >= this.minPx.y + k ? Math.round(b) : 0;
if (a || b) {
this.dragging || (this.dragging = !0);
this.center = null;
a && (this.layerContainerOriginPx.x -= a, this.minPx.x -= a, this.maxPx.x -= a);
b && (this.layerContainerOriginPx.y -= b, this.minPx.y -= b, this.maxPx.y -= b);
this.applyTransform();
d = 0;
for (e = this.layers.length; d < e; ++d)
c = this.layers[d], c.visibility && (c === this.baseLayer || c.inRange) && (c.moveByPx(a, b), c.events.triggerEvent("move"));
this.events.triggerEvent("move")
}
}
map.events.register("movestart", map, function (e) {
My Code...
});
Given a Map of objects and designated proportions (let's say they add up to 100 to make it easy):
val ss : Map[String,Double] = Map("A"->42, "B"->32, "C"->26)
How can I generate a sequence such that for a subset of size n there are ~42% "A"s, ~32% "B"s and ~26% "C"s? (Obviously, small n will have larger errors).
(Work language is Scala, but I'm just asking for the algorithm.)
UPDATE: I resisted a random approach since, for instance, there's a ~16% chance that the sequence would start with AA, a ~11% chance it would start with BB, and very low odds that, for n precisely equal to the sum of the proportions, the distribution would be perfect. So, following @MvG's answer, I implemented it as follows:
/**
Returns the key whose achieved proportions are most below desired proportions
*/
def next[T](proportions : Map[T, Double], achievedToDate : Map[T,Double]) : T = {
val proportionsSum = proportions.values.sum
val desiredPercentages = proportions.mapValues(v => v / proportionsSum)
//Initially no achieved percentages, so avoid / 0
val toDateTotal = if(achievedToDate.values.sum == 0.0){
1
}else{
achievedToDate.values.sum
}
val achievedPercentages = achievedToDate.mapValues(v => v / toDateTotal)
val gaps = achievedPercentages.map{ case (k, v) =>
val gap = desiredPercentages(k) - v
(k -> gap)
}
val maxUnder = gaps.values.toList.sortWith(_ > _).head
//println("Max gap is " + maxUnder)
val gapsForMaxUnder = gaps.mapValues{v => Math.abs(v - maxUnder) < 1e-9 } // Scala's Double has no Epsilon member; compare with a small tolerance instead
val keysByHasMaxUnder = gapsForMaxUnder.map(_.swap)
keysByHasMaxUnder(true)
}
/**
Stream of most-fair next element
*/
def proportionalStream[T](proportions : Map[T, Double], toDate : Map[T, Double]) : Stream[T] = {
val nextS = next(proportions, toDate)
val tailToDate = toDate + (nextS -> (toDate(nextS) + 1.0))
Stream.cons(
nextS,
proportionalStream(proportions, tailToDate)
)
}
When used, e.g.:
val ss : Map[String,Double] = Map("A"->42, "B"->32, "C"->26)
val none : Map[String,Double] = ss.mapValues(_ => 0.0)
val mySequence = (proportionalStream(ss, none) take 100).toList
println("Desired : " + ss)
println("Achieved : " + mySequence.groupBy(identity).mapValues(_.size))
mySequence.map(s => print(s))
println
produces:
Desired : Map(A -> 42.0, B -> 32.0, C -> 26.0)
Achieved : Map(C -> 26, A -> 42, B -> 32)
ABCABCABACBACABACBABACABCABACBACABABCABACABCABACBA
CABABCABACBACABACBABACABCABACBACABABCABACABCABACBA
For a deterministic approach, the most obvious solution would probably be this:
Keep track of the number of occurrences of each item in the sequence so far.
For the next item, choose that item for which the difference between intended and actual count (or proportion, if you prefer that) is maximal, but only if the intended count (resp. proportion) is greater than the actual one.
If there is a tie, break it in an arbitrary but deterministic way, e.g. choosing the alphabetically lowest item.
This approach would ensure an optimal adherence to the prescribed ratio for every prefix of the infinite sequence generated in this way.
Quick & dirty Python proof of concept (don't expect any of the variable "names" to make any sense):
import sys

p = [0.42, 0.32, 0.26]
c = [0, 0, 0]
a = ['A', 'B', 'C']
n = 0
while n < 70*5:
    n += 1
    x = 0
    s = n*p[0] - c[0]
    for i in [1, 2]:
        si = n*p[i] - c[i]
        if si > s:
            x = i
            s = si
    sys.stdout.write(a[x])
    if n % 70 == 0:
        sys.stdout.write('\n')
    c[x] += 1
Generates
ABCABCABACABACBABCAABCABACBACABACBABCABACABACBACBAABCABCABACABACBABCAB
ACABACBACABACBABCABACABACBACBAABCABCABACABACBABCAABCABACBACABACBABCABA
CABACBACBAABCABCABACABACBABCABACABACBACBAACBABCABACABACBACBAABCABCABAC
ABACBABCABACABACBACBAACBABCABACABACBACBAABCABCABACABACBABCABACABACBACB
AACBABCABACABACBACBAABCABCABACABACBABCAABCABACBACBAACBABCABACABACBACBA
For every item of the sequence, compute a (pseudo-)random number r equidistributed between 0 (inclusive) and 100 (exclusive).
If 0 ≤ r < 42, take A
If 42 ≤ r < (42+32), take B
If (42+32) ≤ r < (42+32+26)=100, take C
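A sketch of that lookup (JavaScript here purely for illustration, since the question is language-agnostic; proportions is assumed to sum to 100, as in the question's example):
// e.g. pick({ A: 42, B: 32, C: 26 })
function pick(proportions) {
    var r = Math.random() * 100; // equidistributed in [0, 100)
    var cumulative = 0;
    for (var key in proportions) {
        cumulative += proportions[key];
        if (r < cumulative) return key;
    }
}
// Build a sequence of n samples.
function sample(proportions, n) {
    var seq = [];
    for (var i = 0; i < n; i++) seq.push(pick(proportions));
    return seq;
}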
The number of each entry in your subset is going to be the same as in your map, but with a scaling factor applied.
The scaling factor is n/100.
So if n was 50, you would have { Ax21, Bx16, Cx13 }.
Randomize the order to your liking.
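A sketch of the scaling step (JavaScript for illustration; how you round is up to you):
// Scaling factor is n/100.
function scaledCounts(proportions, n) {
    var counts = {};
    for (var key in proportions) {
        counts[key] = Math.round(proportions[key] * n / 100);
    }
    return counts;
}
// scaledCounts({ A: 42, B: 32, C: 26 }, 50) -> { A: 21, B: 16, C: 13 }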
The simplest "deterministic" [in terms of the number of elements of each category] solution [IMO] is: add the elements in a predefined order, and then shuffle the resulting list.
First, add map(x)/100 * n elements for each element x [choose how you handle the integer arithmetic to avoid being off by one element], and then shuffle the resulting list.
Shuffling a list is simple with the Fisher-Yates shuffle, which is implemented in most languages: for example, Java has Collections.shuffle() and C++ has std::random_shuffle().
In Java, it is as simple as:
int N = 107;
List<String> res = new ArrayList<String>();
for (Entry<String,Integer> e : map.entrySet()) { //map is predefined Map<String,Integer> for frequencies
for (int i = 0; i < Math.round(e.getValue()/100.0 * N); i++) {
res.add(e.getKey());
}
}
Collections.shuffle(res);
This is nondeterministic, but gives a distribution of values close to MvG's. It suffers from the problem that it could give AAA right at the start. I post it here for completeness' sake given how it proves my dissent with MvG was misplaced (and I don't expect any upvotes).
Now, if someone has an idea for an expand function that is deterministic and won't just duplicate MvG's method (rendering the calc function useless), I'm all ears!
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>ErikE's answer</title>
</head>
<body>
<div id="output"></div>
<script type="text/javascript">
if (!Array.prototype.each) {
Array.prototype.each = function(callback) {
var i, l = this.length;
for (i = 0; i < l; i += 1) {
callback(i, this[i]);
}
};
}
if (!Array.prototype.sum) {
Array.prototype.sum = function() {
var sum = 0;
this.each(function(i, val) {
sum += val;
});
return sum;
};
}
function expand(counts) {
var
result = "",
charlist = [],
ch, // was an undeclared global named "char" (a reserved word in some engines)
l,
index;
counts.each(function(i, val) {
ch = String.fromCharCode(i + 65); // 65 = "A"
for ( ; val > 0; val -= 1) {
charlist.push(ch);
}
});
l = charlist.length;
for ( ; l > 0; l -= 1) {
index = Math.floor(Math.random() * l);
result += charlist[index];
charlist.splice(index, 1);
}
return result;
}
function calc(n, proportions) {
var percents = [],
counts = [],
errors = [],
fnmap = [],
errorSum,
adjust, // declared here; it was an implicit global in the original
worstIndex;
fnmap[1] = "min";
fnmap[-1] = "max";
proportions.each(function(i, val) {
percents[i] = val / proportions.sum() * n;
counts[i] = Math.round(percents[i]);
errors[i] = counts[i] - percents[i];
});
errorSum = counts.sum() - n;
while (errorSum != 0) {
adjust = errorSum < 0 ? 1 : -1;
worstIndex = errors.indexOf(Math[fnmap[adjust]].apply(0, errors));
counts[worstIndex] += adjust;
errors[worstIndex] = counts[worstIndex] - percents[worstIndex];
errorSum += adjust;
}
return expand(counts);
}
document.body.onload = function() {
document.getElementById('output').innerHTML = calc(99, [25.1, 24.9, 25.9, 24.1]);
};
</script>
</body>
</html>
I am trying to create a very simple distribution chart, and I want to display the counts of test score percentages in their corresponding 10's ranges.
I thought about just doing the grouping on Math.Round((d.Percentage/10 - 0.5), 0) * 10, which should give me the 10's value... but I wasn't sure of the best way to do this, given that I would probably have missing ranges, and all ranges need to appear even if the count is zero. I also thought about doing an outer join on the ranges array, but since I'm fairly new to LINQ, for the sake of time I opted for the code below. I would, however, like to know what a better way might be.
Also note: as I tend to work with larger teams with varying experience levels, I'm not all that crazy about ultra-compact code unless it remains very readable to the average developer.
Any suggestions?
public IEnumerable<TestDistribution> GetDistribution()
{
var distribution = new List<TestDistribution>();
var ranges = new int[] { 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110 };
var labels = new string[] { "0%'s", "10%'s", "20%'s", "30%'s", "40%'s", "50%'s", "60%'s", "70%'s", "80%'s", "90%'s", "100%'s", ">110% "};
for (var n = 0; n < ranges.Count(); n++)
{
var count = 0;
var min = ranges[n];
var max = (n == ranges.Count() - 1) ? decimal.MaxValue : ranges[n+1];
count = (from d in Results
where d.Percentage>= min
&& d.Percentage<max
select d)
.Count();
distribution.Add(new TestDistribution() { Label = labels[n], Frequency = count });
}
return distribution;
}
// ranges and labels in a list of pairs
var rangesWithLabels = ranges.Zip(labels, (r,l) => new {Range = r, Label = l});
// create a list of intervals (i.e. 0-10, 10-20, ..., 110-max value)
var rangeMinMax = ranges.Zip(ranges.Skip(1), (min, max) => new {Min = min, Max = max})
    .Union(new[] {new {Min = ranges.Last(), Max = Int32.MaxValue}});
// the grouping is made by the lower bound of the interval found for each Percentage
var resultsDistribution = from c in Results
group c by
rangeMinMax.FirstOrDefault(r=> r.Min <= c.Percentage && c.Percentage < r.Max).Min into g
select new {Percentage = g.Key, Frequency = g.Count() };
// left join between the labels and the results with frequencies
var distributionWithLabels =
from l in rangesWithLabels
join r in resultsDistribution on l.Range equals r.Percentage
into rd
from r in rd.DefaultIfEmpty()
select new TestDistribution{
Label = l.Label,
Frequency = r != null ? r.Frequency : 0
};
distribution = distributionWithLabels.ToList();
Another solution, if the ranges and labels can be created in another way:
var ranges = Enumerable.Range(0, 11) // 0..10 gives the 0%'s through 100%'s buckets; the original Range(0, 10) dropped the 100%'s bucket
    .Select(c => new {
        Min = c * 10,
        Max = (c + 1) * 10,
        Label = (c * 10) + "%'s"})
    .Union(new[] { new {
        Min = 110,
        Max = Int32.MaxValue,
        Label = ">110% "
    }});
var resultsDistribution = from c in Results
group c by ranges.FirstOrDefault(r=> r.Min <= c.Percentage && c.Percentage < r.Max).Min
into g
select new {Percentage = g.Key, Frequency = g.Count() };
var distributionWithLabels =
from l in ranges
join r in resultsDistribution on l.Min equals r.Percentage
into rd
from r in rd.DefaultIfEmpty()
select new TestDistribution{
Label = l.Label,
Frequency = r != null ? r.Frequency : 0
};
This works:
public IEnumerable<TestDistribution> GetDistribution()
{
var range = 12;
return Enumerable.Range(0, range).Select(
n => new TestDistribution
{
Label = string.Format("{1}{0}%'s", n*10, n==range-1 ? ">" : ""),
Frequency =
Results.Count(
d =>
d.Percentage >= n*10
&& d.Percentage < ((n == range - 1) ? decimal.MaxValue : (n+1)*10))
});
}