Take this simple grouped column chart. The data values are passed in this format:
{
    "Group1": [group1_subvalue1, group1_subvalue2, ...],
    "Group2": [group2_subvalue1, group2_subvalue2, ...],
    ...
}
I'd like to sort each group individually, i.e. for each state, order the columns from tallest to shortest. But I'm unable to find a way to do it. In the given example, the array for each group must be in the same order (first people under 5 years old, then 5 to 13, and so on), so I'm not sure how to approach the problem.
To show how to sort each group individually, I took Mike Bostock's example and adjusted it accordingly.
You have to do the following things:
You sort the within-group categories (in the example, that's the d.ages array):
d.ages.sort(function(a, b) { return d3.ascending(a.value, b.value); });
x1 is your scale for the within-group categories. Instead of mapping the categories to the same relative position, you just map the index of your sorted array to the x position. d3.range() creates an array which serves as your domain.
x1.domain(d3.range(0, data[0].ages.length)).rangeRoundBands([0, x0.rangeBand()]);
When you draw, you use the index instead of the category:
.attr("x", function(d, i) { return x1(i); })
The full working example is here.
I have also added how to sort the groups.
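Putting the three steps together, a minimal sketch of the drawing code might look like this (it reuses the variable names from Bostock's grouped bar chart example, i.e. data, state, x0, x1, y, color and height, and the d3 v3 API used there):
data.forEach(function(d) {
    // 1. sort each group's inner values
    d.ages.sort(function(a, b) { return d3.ascending(a.value, b.value); });
});
// 2. the inner domain is the index range 0..n-1 instead of the category names
x1.domain(d3.range(0, data[0].ages.length)).rangeRoundBands([0, x0.rangeBand()]);
// 3. position each bar by its sorted index
state.selectAll("rect")
    .data(function(d) { return d.ages; })
  .enter().append("rect")
    .attr("width", x1.rangeBand())
    .attr("x", function(d, i) { return x1(i); })
    .attr("y", function(d) { return y(d.value); })
    .attr("height", function(d) { return height - y(d.value); })
    .style("fill", function(d) { return color(d.name); });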
I want to show the most recent 10 bins in a box plot.
If a filter is applied to the bar chart or line chart, the box plot should show the most recent 10 records according to those filters.
I made a dimension by date (ordinal), but I am unable to get the result.
I don't understand how to do it with a fake group; I am new to dc.js.
A picture of the scenario is attached. Let me know if anyone needs more detail to help me.
In the image, I tried a solution using a time scale.
You can do this with two fake groups, one to remove the empty box plots, and one to take the last N elements of the resulting data.
Removing empty box plots:
function remove_empty_array_bins(group) {
return {
all: function() {
return group.all().filter(d => d.value.length);
}
};
}
This just filters the bins, removing any where the .value array is of length 0.
Taking the last N elements:
function cap_group(group, N) {
return {
all: function() {
var all = group.all();
return all.slice(all.length - N);
}
};
}
This is essentially what the cap mixin does, except without creating a bin for "others" (which is somewhat tricky).
We fetch the data from the original group, see how long it is, and then slice that array from all.length - N to the end.
Chain these fake groups together when passing them to the chart:
chart
.group(cap_group(remove_empty_array_bins(closeGroup), 5))
I'm using 5 instead of 10 because I have a smaller data set to work with.
Demo fiddle.
This example uses a "real" time scale rather than ordinal dates. There are a few ways to do ordinal dates, but if your group is still sorted from low to high dates, this should still work.
If not, you'll have to edit your question to include an example of the code you are using to generate the ordinal date group.
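For reference, one common way to build an ordinal date key that stays sorted from low to high is to key the dimension on a formatted date string; this is only a sketch, and it assumes a d.date Date field that isn't shown in the question:
var dateDim = ndx.dimension(function(d) {
    // "YYYY-MM-DD" strings sort in the same order as the dates themselves
    return d.date.toISOString().slice(0, 10);
});
A group built on a dimension like this stays in date order, so the cap_group fake group above will still take the most recent N bins.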
I have one JSON example below:
{"Year":2018,"Month":1,"ApplicationName":"application1","ASI":12.0,"AEI":11.0},
{"Year":2018,"Month":2,"ApplicationName":"application2","ASI":24.0,"AEI":12.0}
I want to show a ring chart with two slices:
total of ASI
total of AEI
How can I get crossfilter to produce the two bins for the two columns?
The accessor used by reduceSum is a general function; you can put anything you want in there.
Thus,
group.reduceSum(function(d) { return d.ASI + d.AEI; });
will instruct the group to sum up all the ASIs and all the AEIs from the rows which fall into each bin.
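As a minimal sketch of how that accessor fits in (the crossfilter instance and the ApplicationName dimension are assumptions for illustration; they are not part of the original answer):
var ndx = crossfilter(data);
var appDim = ndx.dimension(function(d) { return d.ApplicationName; });
// each bin gets the sum of ASI + AEI over the rows that fall into it
var totalsGroup = appDim.group().reduceSum(function(d) { return d.ASI + d.AEI; });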
I have three row charts, and my code calculates and updates the percentages for each chart whenever a user first lands on the page or clicks a rectangular bar of a chart. This is how it calculates the percentages:
posChart:
% Position = unique StoreNumber counts per POSITION / unique StoreNumber counts for all POSITIONs
deptChart:
% Departments = POSITION counts per DEPARTMENT / POSITION counts for all DEPARTMENTs
stateChart:
% States = unique StoreNumber counts per STATE / unique StoreNumber counts for all STATEs
What I want is that when a user clicks a rectangular bar of a row chart such as “COUNTS BY STATE”, it should NOT update/recalculate the percentages for that chart (it should not affect its own percentages); however, percentages should be recalculated for the other two charts, i.e. “COUNTS BY DEPARTMENT” and “COUNTS BY POSITION”. The same scenario holds for the other charts as well. This is what I want:
If a user clicks a
“COUNTS BY DEPARTMENT” chart --> recalculate percentages for “COUNTS BY POSITION” and “COUNTS BY STATE” charts
“COUNTS BY POSITION” chart --> recalculate percentages for “COUNTS BY DEPARTMENT” and “COUNTS BY STATE” charts
Please Help!!
link:http://jsfiddle.net/mfi_login/z860sz69/
Thanks for the reply.
There is a problem with the solution you provided. I am looking for the global total for all filters, but I don't want those totals to change when the user clicks on the current chart's rectangular bar.
e.g.
If there are two different POSITIONs (Supervisor, Account Manager) with the same StoreNumber (3), then I want that StoreNumber to be counted as 1, not 2.
If we take the example of the Account Manager % calculation (COUNTS BY POSITION chart):
total unique StoreNumbers = 3
total Account Manager POSITIONs = 2
% = 2/3 = 66%
Is there a way to redraw the other two charts without touching the current one?
It seems to me that what you really want is to use the total of the chart's groups, not the overall total. If you use the overall total then all filters will be observed, but if you use the total for the current group, it will not observe any filters on the current chart.
This will have the effect you want - it's not about preventing any calculations, but about making sure each chart is affected only by the filters on the other charts.
So, instead of bin_counter, let's define sum_group and sum_group_xep:
function sum_group(group, acc) {
    // sum the values of the non-empty bins in the group, using the given accessor
    acc = acc || function(kv) { return kv.value; };
    return d3.sum(group.all().filter(function(kv) {
        return acc(kv) > 0;
    }), acc);
}
function sum_group_xep(group) {
    // same, but summing the exceptionCount field of each bin's value
    return sum_group(group, function(kv) {
        return kv.value.exceptionCount;
    });
}
And we'll use these for each chart, so e.g.:
posChart
    .valueAccessor(function (d) {
        // gets the total unique store numbers for selected filters
        var value = sum_group_xep(posGrp);
        var percent = value > 0 ? (d.value.exceptionCount / value) * 100 : 0;
        // this returns the x-axis percentages
        return percent;
    })
deptChart
    .valueAccessor(function (d) {
        var total = sum_group(deptGrp);
        var percent = d.value ? (d.value / total) * 100 : 0;
        return percent;
    })
stateChart
    .valueAccessor(function (d) {
        var value = sum_group_xep(stateGrp);
        return value > 0 ? (d.value.exceptionCount / value) * 100 : 0;
    })
... along with the other 6 places these are used. There's probably a better way to organize this without so much duplication of code, but I didn't really think about it!
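For example, one way to factor out the repetition might be a small helper that builds the percentage accessor from a group and a value accessor; this is just a sketch and is not part of the fiddle below:
function percent_of_group(group, acc) {
    acc = acc || function(kv) { return kv.value; };
    return function(d) {
        var total = sum_group(group, acc);
        return total > 0 ? (acc(d) / total) * 100 : 0;
    };
}
// e.g.
// posChart.valueAccessor(percent_of_group(posGrp, function(kv) { return kv.value.exceptionCount; }));
// deptChart.valueAccessor(percent_of_group(deptGrp));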
Fork of your fiddle: http://jsfiddle.net/gordonwoodhull/yggohcpv/8/
EDIT: Reductio might have better shortcuts for this, but I think the principle of dividing by the total of the values in the current chart's group, rather than using a groupAll which observes all filters, is the right start.
I'm stuck on a seemingly easy problem with dc.js and crossfilter.
I have a datatable on a dimension and it shows my data correctly. The problem is that I want to show the 'worst' data items but the datatable picks the 'best' items in the filters.
The following is a sketch of the current table code.
var alertnessDim = ndx.dimension(function(d) { return d.alertness1; });
dc.dataTable(".dc-data-table")
    .dimension(alertnessDim)
    .group(function (d) {
        return d.DATE.getYear();
    })
    .columns([
        function (d) { return d.DATE; },
        function (d) { return d['FLEET']; },
        function (d) { return d.alertness1; }
    ])
    .sortBy(function (d) {
        return d.alertness1;
    })
    .order(d3.ascending);
This connects to the crossfilter properly and it sorts the items in the correct order, but the 25 items it is showing are the ones with the highest alertness values, not the lowest.
Anyone have any ideas on how to solve this, preferably without creating another dimension?
You are right to be confused here. You would think this would be a supported use case but it is not, as far as I can tell. The data table uses dimension.top so it is always going to take the highest values.
So I don't think there is a way around using a special dimension with opposite ordering/keys. For the other charts you could use group.order in order (heh) to get it to return the lowest values. But the data table doesn't use a group because it's not reducing its values.
It's confusing that the data table also has an order parameter which doesn't help here.
Hope that is acceptable. Otherwise I think you'd have to poke around in the code. Pull Requests always welcome! (preferably with tests)
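For what it's worth, a sketch of that "opposite ordering" workaround might look like the following; worstAlertnessDim is a hypothetical extra dimension on the negated value, so that dimension.top() fetches the rows with the lowest alertness:
var worstAlertnessDim = ndx.dimension(function(d) { return -d.alertness1; });
dc.dataTable(".dc-data-table")
    .dimension(worstAlertnessDim)
    // group, columns, sortBy and order as in the question
    .sortBy(function(d) { return d.alertness1; })
    .order(d3.ascending);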
One quick way to achieve a descending sort on a specific column is to sort by its negative value:
.sortBy(function(d){ return -d.ALARM_OCCURRENCE; }); // show highest alarm count first
I have two datasets that have similar columns/dimensions but are grouped differently by row and contain different measures.
Ex:
Dataset 1
Year Category SubCategory Value01 Value02
2000 Cars Sport 10 11
2000 Cars Family 15 16
2000 Boats Sport 20 21
2000 Boats Family 25 26
...
Dataset 2
Year Category ValueA ValueB
2000 Cars 100 101
2000 Boats 200 201
...
Dataset 1 has its own crossfilter object, and Dataset 2 has a separate crossfilter object. I have multiple dc.js charts, some tied to Dataset 1, some to Dataset 2.
When a dc.js chart filters dataset 1 on a column/dimension that also exists in dataset 2, I want to apply that same filter to dataset 2. How can this be achieved?
I don't think there is any automatic way to do this in crossfilter or dc.js. But if you're willing to roll your own dimension wrapper, you could supply that instead of the original dimension objects and have that forward to all the underlying dimensions.
EDIT: based on @Aravind's fiddle below, here is a "dimension mirror" that works, at least for this simple example:
function mirror_dimension() {
    var dims = Array.prototype.slice.call(arguments, 0);
    // returns a function that forwards the named filter method to every underlying dimension
    function mirror(fname) {
        return function(v) {
            dims.forEach(function(dim) {
                dim[fname](v);
            });
        };
    }
    return {
        filter: mirror('filter'),
        filterExact: mirror('filterExact'),
        filterRange: mirror('filterRange'),
        filterFunction: mirror('filterFunction')
    };
}
It's a bit messy using this. For each dimension you want to mirror from crossfilter A to crossfilter B, you'll need to create a mirror dimension on crossfilter B, and vice versa:
// Creating the dimensions
subject_DA = CFA.dimension(function(d){ return d.Subject; });
name_DA = CFA.dimension(function(d){ return d.Name; });
// mirror dimensions to receive events from crossfilter B
mirror_subject_DA = CFA.dimension(function(d){ return d.Subject; });
mirror_name_DA = CFA.dimension(function(d){ return d.Name; });
subject_DB = CFB.dimension(function(d){ return d.Subject; });
name_DB = CFB.dimension(function(d){ return d.Name; });
// mirror dimensions to receive events from crossfilter A
mirror_subject_DB = CFB.dimension(function(d){ return d.Subject; });
mirror_name_DB = CFB.dimension(function(d){ return d.Name; });
Now you tie them together when passing them off to the charts:
// subject Chart
subjectAChart
.dimension(mirror_dimension(subject_DA, mirror_subject_DB))
// ...
// subject Chart
subjectBChart
.dimension(mirror_dimension(subject_DB, mirror_subject_DA))
// ...
nameAChart
.dimension(mirror_dimension(name_DA, mirror_name_DB))
// ...
nameBChart
.dimension(mirror_dimension(name_DB, mirror_name_DA))
// ...
Since all the charts are implicitly on the same chart group, the redraw events will automatically get propagated between them when they are filtered. And each filter action on one crossfilter will get applied to the mirror dimension on the other crossfilter.
Maybe not something I'd recommend doing, but as usual, it can be made to work.
Here's the fiddle: https://jsfiddle.net/gordonwoodhull/7dwn4y87/8/
@Gordon's suggestion is a good one.
I usually approach this differently, by combining the two tables into a single table (adding ValueA and ValueB to each row of Dataset 1) and then using custom groupings to aggregate ValueA and ValueB only once for each unique Year/Category combination. Each group needs to keep a map of the keys it has seen and a count for each of those keys, only aggregating ValueA or ValueB when it sees a new combination of keys. This does result in complicated grouping logic, but it lets you avoid coordinating between two Crossfilter objects.
Personally, I just find complex custom groupings easier to test and maintain than coordination logic, but that's not the case for everyone.
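As a rough sketch of that kind of grouping (the field names come from the example tables above, but the combined-row structure and variable names are assumptions, not code from the question):
var ndx = crossfilter(combinedRows);   // Dataset 1 rows with ValueA/ValueB copied onto each
var yearDim = ndx.dimension(function(d) { return d.Year; });
var yearGroup = yearDim.group().reduce(
    function add(p, v) {
        var key = v.Year + '|' + v.Category;
        p.seen[key] = (p.seen[key] || 0) + 1;
        if (p.seen[key] === 1) p.valueA += v.ValueA;   // aggregate ValueA once per Year/Category
        p.value01 += v.Value01;                        // row-level measures aggregate normally
        return p;
    },
    function remove(p, v) {
        var key = v.Year + '|' + v.Category;
        if (--p.seen[key] === 0) p.valueA -= v.ValueA;
        p.value01 -= v.Value01;
        return p;
    },
    function init() {
        return { seen: {}, valueA: 0, value01: 0 };
    });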