I am having a really hard time understanding d3.layout.stack() when the groups are not listed manually. In the example below, similar to what I've found in other questions, the groups are listed in an array as "Apple", etc., but as far as I understand this array has to be entered by hand. I am looking for a way to avoid manually typing "Apple", "Blueberry", etc.
var dataset = d3.layout.stack()(["Apple", "Blueberry", "Lettuce", "Orange"].map(function(fruit) {
    return data.map(function(d) {
        return {x: d.orchard, y: +d[fruit]};
    });
}));
I've tried inserting a line in my data object as below, called 'names':
[{names: ["Apple", "Blueberry", "Lettuce", "Orange"]},
 {Apple: 1.0, Orange: 2.0, Lettuce: 1.0, orchard: "小明", Blueberry: 1.0},
 {Apple: 1.0, Orange: 1.0, Lettuce: 1.0, orchard: "小陈", Blueberry: 1.0},
 {Apple: 1.0, Orange: 1.0, Lettuce: 1.0, orchard: "小虎", Blueberry: 1.0},
 {Orange: 1.0, Lettuce: 1.0, orchard: "小桃", Blueberry: 1.0, Apple: 1.0}]
Is there a way to code something similar to below?
var dataset = d3.layout.stack()([d3.keys(names)].map(function(fruit) {
Should I be focused more on inserting a unique list of names into my data object, or do so by parsing my data in my d3 code itself to accumulate a list of unique group names?
If the d3.keys logic makes sense, I am also wondering whether it can be applied to the context below, instead of enumerating each case:
legend.append("text")
.attr("x", width + 5)
.attr("y", 9)
.attr("dy", ".35em")
.style("text-anchor", "start")
.text(function(d, i) {
for(var j =0; j<4; j++){
switch (i) {
case j: return d3.keys[j]
// switch (i) {
//
// case 0: return "orange"
// case 1: return "apple"
// case 2: return "blueberry"
// case 3: return "lettuce"
}
}
});
I ended up just converting the entire graph to d3 v5. Below are some notes based on the many sources I looked at, mixed with my own work:
A better practice for a stacked bar chart is to use
.data(d3.stack().keys(keys)(data))
where
var keys = d3.keys(data[0]).filter(function(d) {
    return d != "orchard";
});
or in other words:
var keys = d3.keys(data[0]).filter(d => d != "orchard")
The d3.keys approach is useful for data that is pre-parsed in JavaScript. If instead you are working straight from a CSV, the column names are already available, so
var keys = csv.columns.slice(0);
is useful too, but the same philosophy for stacking applies (you still filter out the non-series column, "orchard" here).
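For example, a minimal d3 v5 sketch along those lines, assuming a hypothetical fruit.csv whose columns are orchard plus one column per fruit (in v5, d3.csv returns a promise and exposes the header row as data.columns):

d3.csv("fruit.csv").then(function(data) {
    // every column except "orchard" is a series to stack
    var keys = data.columns.filter(function(c) { return c !== "orchard"; });
    // one layer per key; d3.stack coerces the CSV's string values with a unary +
    var series = d3.stack().keys(keys)(data);
    console.log(series); // array of layers, each an array of [y0, y1] pairs
});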
Slight issue: if new categories arise later in the data, e.g. a new fruit "pineapple" appears in data[1] but not data[0], keys will not include pineapple, because it only looks at the data object of the first entry.
To avoid relying on any single entry like data[0], and instead "accumulate" the keys across data[0], data[1], etc. while maintaining the same filter:
var key = [];
for (var i = 0; i < data.length; i++) {
    var joinin = d3.keys(data[i]).filter(d => d != "orchard");
    // only add names we haven't already collected, so the list has no repeats
    key = key.concat(joinin.filter(k => key.indexOf(k) === -1));
    // console.log(key)
}
There's most likely a better way of writing that code, but the explanation is:
If you wrote something like this you'd get keys for only one set of data:
var key = d3.keys(data[2]).filter(function(d) {
    return d != "orchard";
});
If you wrote this you get the keys for each iteration of data:
var key = [];
for (var i = 0; i < data.length; i++) {
    key = d3.keys(data[i]).filter(d => d != "orchard"); // overwrites key on every pass
    console.log(key); // logs the keys of this one entry only
}
So the trick is to use a for loop to get each iteration of data but concat that into one singular list, which has no repeats.
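The same accumulation and de-duplication can be written more compactly; a sketch, assuming an environment with Array.prototype.flatMap (ES2019+):

var key = Array.from(new Set(
    data.flatMap(function(d) { return d3.keys(d); })
)).filter(d => d != "orchard");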
[EDIT] What if you wanted the value given to each key? Again, this is for a data structure like this, which is a little unconventional:
data = [
    { "Apple": 1, "orchard": "xx" },
    { "Apple": 2, "orchard": "xx" }
]
You can use the code below, where key will return ["Apple"] and key_values will return [1, 2]. The filter d > 0 keeps only positive numeric values, so it drops strings such as the orchard names ("xx"); it has the same effect as filtering out the orchard key (though note it would also drop values that are 0 or negative).
var key = [];
var key_values = [];
for (var i = 0; i < data.length; i++) {
    var key_value = d3.entries(data[i]).map(d => d.value).filter(d => d > 0);
    key_values = key_values.concat(key_value);
    var joinin = d3.keys(data[i]).filter(d => d != "orchard");
    key = key.concat(joinin.filter(k => key.indexOf(k) === -1)); // keep key free of repeats
}
(EDIT2) About the legend:
I just replaced my legend code with the following (I suspect d3 v5 can simplify this further):
var legend = svg.selectAll(".legend")
    .data(color.domain())
    .enter()
    .append("g")
    .attr("class", "legend")
    .attr("transform", function(d, i) {
        return "translate(1300," + i * 15 + ")";
    });
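Since that legend selection is bound to color.domain(), i.e. the keys array, the label for each entry is just the bound datum itself, so the switch statement from the question isn't needed; a sketch:

legend.append("text")
    .attr("x", width + 5)
    .attr("y", 9)
    .attr("dy", ".35em")
    .style("text-anchor", "start")
    .text(function(d) { return d; }); // d is the key name, e.g. "Apple"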
In my case the input variables are discrete, like "Apple" etc., not integers, so we use scaleOrdinal and just pass keys as the domain. You don't strictly need to write the .domain() call, though: an ordinal scale with no explicit domain builds its domain implicitly from the values it is asked to map, in the order it first sees them, which is why it works anyway.
var color = d3.scaleOrdinal()
    .domain(keys)
    // .range(whatever you want)
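A small illustration of that implicit-domain behaviour (range values chosen arbitrarily):

var color = d3.scaleOrdinal().range(["green", "blue", "orange", "purple"]);
color("Apple");     // "green": "Apple" is added to the domain on first use
color("Blueberry"); // "blue"
color.domain();     // ["Apple", "Blueberry"]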
I have the following google apps script. It functions to look in my PENDING-IP tab and check the status (column G). If the status is "Screened - Ready for Review," then it moves the entire row of data to the SCREENED - READY FOR REVIEW tab.
My problem is this: columns L, M, and N of the PENDING tab are checkboxes, but when the script pushes them to the SCREENED tab, they turn into plain TRUE or FALSE values. Is there a way to modify my script to push the data along with its validation so the checkboxes are retained? Thank you!
function screened() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName('PENDING-IP'); // source sheet
  var testrange = sheet.getRange('G:G'); // range to check
  var testvalue = testrange.getValues();
  var csh = ss.getSheetByName('SCREENED - READY FOR REVIEW'); // destination sheet
  var data = [];
  var j = [];
  // Condition check in G:G; if true, copy the same row to the data array
  for (var i = 0; i < testvalue.length; i++) {
    if (testvalue[i] == 'Screened - Ready for Review') {
      data.push.apply(data, sheet.getRange(i + 1, 1, 1, 25).getValues());
      // Copy matched row numbers to j
      j.push(i);
    }
  }
  // Copy data array to destination sheet
  csh.getRange(csh.getLastRow() + 1, 1, data.length, data[0].length).setValues(data);
  // Delete matched rows in the source sheet
  for (var i = 0; i < j.length; i++) {
    var k = j[i] + 1;
    sheet.deleteRow(k);
    // Adjust j to account for deleted rows
    if (!(i == j.length - 1)) {
      j[i + 1] = j[i + 1] - i - 1;
    }
  }
}
Try this
I haven't actually tested this except in a simpler situation
function screened() {
  var ss = SpreadsheetApp.getActive();
  var sheet = ss.getSheetByName('PENDING-IP');
  var testrange = sheet.getRange(1, 7, sheet.getLastRow(), 1);
  var testvalue = testrange.getValues();
  var csh = ss.getSheetByName('SCREENED - READY FOR REVIEW'); // destination sheet
  var data = [];
  var valid = [];
  var d = 0;
  for (var i = 0; i < testvalue.length; i++) {
    if (testvalue[i][0] == 'Screened - Ready for Review') {
      data.push(sheet.getRange(i + 1 - d, 1, 1, 25).getValues()[0]); // [0] so we push a single row, not a 2D array
      valid.push(sheet.getRange(i + 1 - d, 1, 1, 25).getDataValidations()[0]); // same width as the data rows
      sheet.deleteRow(i + 1 - d++); // delete the matching row, then bump the offset
    }
  }
  // write values and validations to the same target range
  var target = csh.getRange(csh.getLastRow() + 1, 1, data.length, data[0].length);
  target.setValues(data);
  target.setDataValidations(valid);
}
What I've found is that you can treat the validations the same way you treat the data, by replacing getValues() with getDataValidations() and setValues() with setDataValidations().
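As a minimal sketch of that idea (a hypothetical helper, not part of the script above), copying one range to a same-sized range while keeping checkboxes and other validation rules:

function copyWithValidations(srcRange, dstRange) {
  dstRange.setValues(srcRange.getValues());                   // plain values (TRUE/FALSE for checkboxes)
  dstRange.setDataValidations(srcRange.getDataValidations()); // re-applies the checkbox/validation rules
}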
My dataset is similar to this.
name,code,proc,subP,date,m,a,b,o,t
BW,1333,29,1,2015-12-29 02:30:00,10,0,0,0,10
BW,1333,29,1,2015-12-29 12:00:00,25,0,0,0,25
BW,1333,29,1,2015-12-30 12:00:00,26,0,0,0,26
BW,1333,29,2,2015-12-31 12:00:00,27,0,0,0,27
BW,1333,29,2,2016-01-01 12:00:00,26.1,4.9,1.8,0,32.8
BW,1333,29,2,2016-01-02 12:00:00,26.4,4.9,1.9,0,33.2
BW,1333,29,2,2016-01-03 12:00:00,26.2,4.9,1.9,0,33
...
NS,1212,11,1,2016-07-28 15:30:00,1.6,3.7,4.4,0,9.7
NS,1212,11,1,2016-07-29 12:00:00,17.4,2.3,0,0,19.7
NS,1212,11,1,2016-07-30 12:00:00,21,5,14.1,0,40.1
NS,1212,11,2,2016-07-31 11:12:00,18.1,3.5,6.1,0,27.7
NS,1212,11,2,2016-07-31 12:00:00,0.1,0.2,0.2,0,0.5
NS,1212,11,2,2016-08-01 12:00:00,0.1,2.7,2.6,0,5.4
I'm using a composite line chart to represent this data.
I'm using a selectMenu to filter the 'BW' and 'NS' records, but the composite chart remains unchanged when I use the selectMenu.
I also split the data into two separate files (one with the 'BW' records and the other with the 'NS' records) to try to implement the data-selection functionality provided in the dc.js Series Chart example. That didn't work either.
Here is a fiddle with one of the charts.
Ultimately, I would like to filter multiple charts (composite, series and bubble) by 'BW' and 'NS' records, with or without a select menu.
How can I implement this?
The problem isn't with your selectMenu or your composite chart, it's with the way you are reducing data.
The first parameter to group.reduce() is the add function, the second parameter is the remove function, and the final one is the initialize function.
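As a quick reference, here is a minimal sketch of a reduce that is equivalent to reduceSum on the m field (the group name sumM is just for illustration), showing which role each function plays:

// equivalent of dim.group().reduceSum(function(d) { return +d.m; })
var sumM = dim.group().reduce(
    function(p, v) { return p + +v.m; }, // add: a record enters the current filters
    function(p, v) { return p - +v.m; }, // remove: a record leaves the current filters
    function() { return 0; }             // initialize: starting value for each bin
);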
You usually don't want to set p directly from v as you are doing:
var grp = dim.group().reduce(
    function(p, v) {
        p.m = +v.m;
        p.a = +v.a;
        p.b = +v.b;
        return p;
    },
    function(p, v) {
        p.m = +v.m;
        p.a = +v.a;
        p.b = +v.b;
        return p;
    },
    function() { return {m: 0, a: 0, b: 0}; });
That will only work if every key is perfectly unique and no two items ever fall in the same bin.
And you almost never want to set p from v in the remove function. That will prevent anything from ever being removed.
Instead, add and subtract the values like so:
var grp = dim.group().reduce(
    function(p, v) {
        p.m = p.m + +v.m;
        p.a = p.a + +v.a;
        p.b = p.b + +v.b;
        return p;
    },
    function(p, v) {
        p.m = p.m - +v.m;
        p.a = p.a - +v.a;
        p.b = p.b - +v.b;
        return p;
    },
    function() { return {m: 0, a: 0, b: 0}; });
Here's a working fork of your fiddle.
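Since the reduced value is now an object {m, a, b}, each child line of the composite needs a value accessor that picks out one field; something along these lines (the composite chart variable name is assumed):

composite.compose([
    dc.lineChart(composite).group(grp, "m").valueAccessor(function(d) { return d.value.m; }),
    dc.lineChart(composite).group(grp, "a").valueAccessor(function(d) { return d.value.a; }),
    dc.lineChart(composite).group(grp, "b").valueAccessor(function(d) { return d.value.b; })
]);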
I would simply like to auto-populate the Point Segment column with all the segments linked to a specific street as soon as the street name is entered.
When a street name is entered into column C, column D should offer a dropdown containing only the point segments of that street.
While I realize this can simply be achieved by creating a filter in the Data tab, I am trying to create a form which does not allow this and therefore need to script it.
This is the Google sheet:
https://docs.google.com/spreadsheets/d/1QbTqPegE_GLj9V6x5uCNNXAoi0v12Pmaelhc7uaMknE/edit?usp=sharing
I have written this code; however, I am having trouble filtering by street.
function setDataValid_(range, sourceRange) {
  var rule = SpreadsheetApp.newDataValidation().requireValueInRange(sourceRange, true).build();
  range.setDataValidation(rule);
}

function onEdit() {
  var aSheet = SpreadsheetApp.getActiveSheet();
  var aCell = aSheet.getActiveCell();
  var aColumn = aCell.getColumn();
  if (aColumn == 3 && aSheet.getName() == 'Sheet1') {
    var range = aSheet.getRange(aCell.getRow(), aColumn + 1);
    var sourceRange = aSheet.getRange('Sheet1!B2:B5131');
    setDataValid_(range, sourceRange);
  }
}
Any help would be much appreciated.
You're close, but you should use requireValueInList instead of requireValueInRange. You were passing sourceRange, which is equal to all of the point segments.
To accomplish the filtering, you need to look at all of the street values. If the street value matches the selection, then save the adjacent point segment to a separate list. Once you've saved all those point segments, then pass it to requireValueInList. To do this, you need to take advantage of getValues() to get the range values as an array and loop through it.
I've made a few other modifications:
In onEdit, you should use an event object
Changed your variable names so they would be easier to understand
Added a check to make sure that the Street cell isn't blank (no need to trigger the action when you delete a value from the cell)
Added a function getPointSegments that does the filtering
Removed the setDataValid_ function as it made your code less readable, and in my opinion, wasn't worthy of being its own function
function onEdit(event) {
  var eventSheet = event.range.getSheet();
  var eventCell = event.range;
  var eventColumn = eventCell.getColumn();
  var eventValue = eventCell.getValue();
  if (eventColumn == 3 && eventSheet.getName() == "Sheet1" && eventValue != "") {
    var pointRange = eventSheet.getRange(eventCell.getRow(), eventColumn + 1);
    var pointSegments = getPointSegments_(eventSheet, eventValue);
    var rule = SpreadsheetApp.newDataValidation().requireValueInList(pointSegments, true).build();
    pointRange.setDataValidation(rule);
  }
}

function getPointSegments_(sheet, selectedStreet) {
  var streetsAndPoints = sheet.getRange("A2:B").getValues();
  var pointSegments = [];
  for (var i = 0; i < streetsAndPoints.length; i++) {
    var street = streetsAndPoints[i][0];
    var pointSegment = streetsAndPoints[i][1];
    if (street === selectedStreet)
      pointSegments.push(pointSegment);
  }
  return pointSegments;
}
Lastly, be sure that your data validations in the Street field look like this (and I would actually suggest "Reject input" on invalid data).
Okay, so I've seen this ticket and this question and have tried several examples already. Maybe I'm just dense, but I really haven't been able to crack this one.
I have a time series of events that has gaps in it. By default, dc.js connects a straight line over the gap (making it look like things are represented there when they really shouldn't be). For example, in this graph we have data as follows:
{"time":"2014-06-09T18:45:00.000Z","input":17755156,"output":250613233.333333},
{"time":"2014-06-09T18:46:00.000Z","input":18780286.6666667,"output":134619822.666667},
{"time":"2014-06-09T18:47:00.000Z","input":20074614.6666667,"output":203239834.666667},
{"time":"2014-06-09T18:48:00.000Z","input":22955373.3333333,"output":348996205.333333},
{"time":"2014-06-09T18:49:00.000Z","input":19119089.3333333,"output":562631022.666667},
{"time":"2014-06-09T18:50:00.000Z","input":15404272,"output":389916332},
{"time":"2014-06-09T18:51:00.000Z","input":null,"output":null},
{"time":"2014-06-09T21:25:20.000Z","input":5266038.66666667,"output":62598396},
{"time":"2014-06-09T21:26:20.000Z","input":6367678.66666667,"output":84494096},
{"time":"2014-06-09T21:27:20.000Z","input":5051610.66666667,"output":88812540},
{"time":"2014-06-09T21:28:20.000Z","input":5761069.33333333,"output":79098036},
{"time":"2014-06-09T21:29:20.000Z", "input":5110277.33333333,"output":45816729.3333333}
Even though there are only two actual groups of data, there's a line on that graph connecting them. How do I make dc.js line graphs draw 0 where there is no data at all? I've tried using .defined(function(d) { return !isNaN(d.x);}) and .defined(function(d) { return d.y != null; }) and such, but these only iterate over the data that is actually there, so they never see the gap.
It's tricky trying to preserve nulls when using crossfilter, because crossfilter is all about aggregation.
Remember that reduceSum will add any values it finds, starting from zero, and 0 + null === 0.
In your case, it looks like you're not actually aggregating, since your timestamps are unique, so you could do something like this:
var input = time.group().reduce(
    function(p, d) {
        if (d.input !== null)
            p += d.input;
        else
            p = null;
        return p;
    },
    function(p, d) {
        if (d.input !== null)
            p -= d.input;
        else
            p = null;
        return p;
    },
    function() { return 0; }
);
Yeah, that's a lot more complicated than reduceSum, and it may get even more complicated if more than one datum falls into a bucket. (Not sure what you'd want to do there - is it possible for a data point to be partly defined?)
With the reduction defined this way, null reduces to null and dc.js is able to find the gaps:
Fork of your fiddle (thanks!): http://jsfiddle.net/gordonwoodhull/omLko77k/3/
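With that reduction in place, the .defined() test the question was already attempting only has to look at the reduced value; a sketch (the chart variable name is assumed):

chart.defined(function(d) {
    return d.data.value !== null; // skip points whose reduced value is null, leaving a gap
});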
Edit: counting nulls
If you're doing a "real" reduction where there is more than one value in a bin, I think you'll need to count the number of non-null values as well as keeping a running sum.
When there are no non-null values, the sum should be null.
Reusing our code a bit better this time:
function null_counter(field) {
    return {
        add: function(p, d) {
            if (d[field] !== null) {
                p.nvalues++;
                p.sum += d[field];
            }
            return p;
        },
        remove: function(p, d) {
            if (d[field] !== null) {
                p.nvalues--;
                p.sum -= d[field];
                if (!p.nvalues)
                    p.sum = null;
            }
            return p;
        },
        init: function() {
            return {nvalues: 0, sum: null};
        }
    };
}
Applied like this (and getting the fields right this time):
var input_reducer = null_counter('input');
var input = time.group().reduce(
    input_reducer.add,
    input_reducer.remove,
    input_reducer.init
);
var output_reducer = null_counter('output');
var output = time.group().reduce(
    output_reducer.add,
    output_reducer.remove,
    output_reducer.init
);
Since we're reducing to an object with two values {nvalues, sum}, we need to make all our accessors a little more complicated:
.valueAccessor(function(kv) { return kv.value.sum; })
.defined(function(d) {
    return (d.data.value.sum !== null);
})
chart.stack(output, "Output bits",
    function(kv) { return kv.value.sum; });
Updated fork: http://jsfiddle.net/gordonwoodhull/omLko77k/9/
I am trying to build a choropleth that is not exactly a choropleth in dc.js. What I am trying to do is color the map based on the coloring condition, and ultimately this will interact with other charts and filters as well. My csv looks like this:
country,id,condition,value
AU,1,yes,19
US,2,no,23
US,2,no,30
US,2,no,4
IN,3,yes,14
SG,4,yes,2
NZ,5,no,6
NZ,5,no,20
and this is my approach so far, producing the count of occurrences.
var ndx = crossfilter(data);
var countryDimension = ndx.dimension(function(d) {
    return d.country;
});
var colors = d3.scale.ordinal().domain(['yes', 'no']).range(["green", "blue"]);

worldMap.width(mapWidth)
    .height(mapHeight)
    .dimension(countryDimension)
    .group(countryDimension.group())
    .projection(project)
    .colors(colors)
    .colorCalculator(function(d) {
        return d ? worldMap.colors()(d) : '#d8d8d8';
    })
    .overlayGeoJson(geoJson.features, "id", function(d) {
        return d.id;
    })
    .title(function(d) {
        return 'Country: ' + d.key + '\nCondition: ' + d.value;
    });
I am quite new to this amazing world of d3 and dc.js. Although I have been reading the documentation and forums, I cannot figure out how to draw the map so that countries with the condition 'yes' are colored green and countries with the condition 'no' are colored blue. So, pretty much, if I do console.log(d.value) it should return either 'yes' or 'no'. I don't understand what I have to do with my 'group'.
If every country has the same value for condition every time it is listed in the data, then in some sense the data is denormalized. That's fine, because crossfilter works best with a single array of data.
Of course it means that the choropleth won't respond to brushing on other charts, since the value is not affected by how many rows are currently filtered. But it will be able to filter other charts.
Count yesses
There are a couple of ways to do this. One is to count the number of yesses and set the value according to the count:
var yesnoGroup = countryDimension.group().reduceSum(function(d) {
    return d.condition === 'yes' ? 1 : 0;
});
worldMap.valueAccessor(function(kv) {
    return kv.value ? 'yes' : 'no';
});
Grab first value
However this would probably cause countries to turn blue when they are filtered out by the other charts. So you could also use a "grab first value and hold onto it" strategy like this:
var yesnoGroup = countryDimension.group().reduce(
    function(p, v) { // add
        return v.condition;
    },
    function(p, v) { // remove
        return p; // ignore remove event
    },
    function() { // initialize
        return null; // no value
    });
A little bit ugly and a weird way to use crossfilter, but that's just because crossfilter expects the data to have some effect on the reduced value, and it doesn't here.
EDIT: Three states
Based on the conversation below, I understand you're actually looking for three states: no, zero, and yes. (This makes more sense than the solutions above, but I'll leave those for posterity.) Here are two completely different ways to solve the no/zero/yes problem.
Both of these solutions use the following three-way color scale:
var colors = d3.scale.ordinal().domain(['no', 'zero', 'yes']).range(["blue", "grey", "green"])
No/zero/yes as negative/positive numbers
This is clever and simple: we'll just count each no as -1 and each yes as +1. If the sum is zero, we'll draw in grey. The only caveat here is if there are contradictions in the data, you could get a false zero. But that might be better than a false no or yes (?)
var nozeroyesGroup = countryDimension.group().reduceSum(function(d) {
    return d.condition === 'no' ? -1 : d.condition === 'yes' ? +1 : 0;
});
worldMap.valueAccessor(function(kv) {
    return kv.value < 0 ? 'no' : kv.value > 0 ? 'yes' : 'zero';
});
No/yes polarity
We could also remember a count and polarity separately. This is maybe safer but also maybe slower. (Not that you'd notice unless your data is huge.) It's a bit more complicated. Kind of a matter of preference.
var nozeroyesGroup = countryDimension.group().reduce(
    function(p, v) { // add
        if (p.polarity && p.polarity != v.condition)
            console.warn('inconsistent');
        p.polarity = v.condition;
        ++p.count;
        return p;
    },
    function(p, v) { // remove
        if (p.polarity != v.condition || p.count <= 0)
            console.warn('inconsistent');
        --p.count;
        return p;
    },
    function() { // initialize
        return {count: 0, polarity: null}; // no value
    });
worldMap.valueAccessor(function(kv) {
    return kv.value.count ? kv.value.polarity : 'zero';
});