I am developing a swimlane diagram using D3 v4. The diagram is a planning aid depicting Tasks carried out over time: Tasks run up the Y axis and time runs along the X axis.
Here is some example data to help understand my problem:
Tasks
[
{
"id": "2a606884-d6b9-4ad1-a5ff-5c816c43fef6",
"description": "Task 01",
"start": "2017-11-07T02:00:00.000Z",
"finish": "2017-11-07T08:00:00.000Z",
"label": "Task 01",
"taskTypeId": "0b936e39-49b9-4cc8-b5c5-b1f1338e9faf",
"taskTypeDescription": "Walk the dog"
},
{
"id": "6713025e-63e2-4ff3-8202-43e17c13431d",
"description": "Task 02",
"start": "2017-11-07T08:00:00.000Z",
"finish": "2017-11-07T12:00:00.000Z",
"label": "Task 02 02",
"taskTypeId": "9af060ba-5abf-4627-8462-7c21281ab487",
"taskTypeDescription": "Wash the car"
},
{
"id": "ff071aa5-e14b-4b32-bd51-7cf079f4c876",
"description": "Task 03",
"start": "2017-11-07T12:00:00.000Z",
"finish": "2017-11-07T14:00:00.000Z",
"label": "Task 03",
"taskTypeId": "8e6a8b11-0e23-4473-8795-ac74fc1efe07",
"taskTypeDescription": "Make the beds"
},
{
"id": "a84219e2-5da9-4119-915b-d84f35fda9d0",
"description": "Task 04",
"start": "2017-11-07T12:00:00.000Z",
"finish": "2017-11-07T14:00:00.000Z",
"label": "Task 04",
"taskTypeId": "a065dfe2-2c68-4467-84a5-1fce7c34513b",
"taskTypeDescription": "Wash up dishes"
}
]
New TaskTypes array nested by Area:
[
{
"key": "Outdoor",
"values": [
{
"id": "a97ad203-37e4-4fb8-8168-c3fdc1980d3d",
"description": "Walk the dog",
"areaId": "19952c5a-b762-4937-a613-6151c8cd9332",
"areaDescription": "Outdoor"
},
{
"id": "0b936e39-49b9-4cc8-b5c5-b1f1338e9faf",
"description": "Wash the car",
"areaId": "19952c5a-b762-4937-a613-6151c8cd9332",
"areaDescription": "Outdoor"
}
]
},
{
"key": "Indoor",
"values": [
{
"id": "8632bd18-8968-4185-95f0-f093f7fc9a02",
"description": "Make the beds",
"areaId": "87d8f755-ef60-4cfa-9a4a-c94cff9f8a22",
"areaDescription": "Indoor"
},
{
"id": "8e6a8b11-0e23-4473-8795-ac74fc1efe07",
"description": "Wash the dishes",
"areaId": "87d8f755-ef60-4cfa-9a4a-c94cff9f8a22",
"areaDescription": "Indoor"
}
]
}
]
So far my diagram lists all the TaskTypes up the Y axis using a simple one-dimensional array of TaskTypes. Each Task is then positioned along the X axis, within the row for its TaskType, according to the Task's start/finish. All good.
My yScale is currently like this:
this.yScale = d3
.scaleBand()
.domain(this.taskTypeDescriptions)
.rangeRound([0, this.chartHeight])
.padding(padding);
...where this.taskTypeDescriptions is a simple one-dimensional array of TaskType descriptions.
Now I have grouped TaskTypes by Area. In my example data above there are 2 Areas: Outdoor and Indoor. I want to visually group the rows of TaskType by their parent Area.
This looks like a grouped bar chart, except that every example of those I have seen repeats the same second tier of data for each primary group. In my scenario I have many discrete TaskTypes, none of them repeated, but I still want them grouped by their Area. Is this possible at all?
Any suggestions or thoughts very welcome. Thanks.
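One way to get the visual grouping without a true two-tier scale is to compute the row positions yourself, inserting an extra gap at each Area boundary. A minimal sketch in plain JS (no d3 required; the function and parameter names are illustrative, not part of any API):

```javascript
// Sketch: one band per TaskType, with an extra `groupPad` gap inserted
// between Areas so the rows appear visually grouped. `nested` mirrors the
// d3.nest()-style [{key, values}] array shown above.
function groupedBandPositions(nested, chartHeight, innerPad, groupPad) {
  const rows = [];
  nested.forEach(function (group, gi) {
    group.values.forEach(function (t) {
      rows.push({ taskType: t.description, group: gi });
    });
  });
  // One extra gap per boundary between consecutive Areas.
  const gaps = nested.length - 1;
  const step = (chartHeight - gaps * groupPad) / rows.length;
  const positions = {};
  rows.forEach(function (r, i) {
    positions[r.taskType] = i * step + r.group * groupPad;
  });
  return { positions: positions, bandwidth: step * (1 - innerPad) };
}
```

You could then look rows up with `positions[taskTypeDescription]` instead of the scaleBand, or equivalently keep your existing scaleBand over the flat TaskType list and just shift each row down by its Area index times `groupPad`.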
I would like to understand how to impose a Gaussian constraint with central value expected_yield and error expected_y_error on a normfactor modifier. I want to fit observed_data with a single sample MC_derived_sample. My goal is to extract the bu_y modifier such that the integral of MC_derived_sample scaled by bu_y is Gaussian-constrained to expected_yield +/- expected_y_error.
My present attempt employs the normsys modifier as follows:
spec = {
"channels": [
{
"name": "singlechannel",
"samples": [
{
"name": "constrained_template",
"data": MC_derived_sample*expected_yield, #expect normalisation around 1
"modifiers": [
{"name": "bu_y", "type": "normfactor", "data": None },
{"name": "bu_y_constr", "type": "normsys",
"data":
{"lo" : 1 - (expected_y_error/expected_yield),
"hi" : 1 + (expected_y_error/expected_yield)}
},
]
},
]
},
],
"observations": [
{
"name": "singlechannel",
"data": observed_data,
}
],
"measurements": [
{
"name": "sig_y_extraction",
"config": {
"poi": "bu_y",
"parameters": [
{"name":"bu_y", "bounds": [[(1 - (5*expected_y_error/expected_yield), 1+(5*expected_y_error/expected_yield)]], "inits":[1.]},
]
}
}
],
"version": "1.0.0"
}
My thinking is that normsys will introduce a Gaussian constraint about unity on the sample scaled by expected_yield.
Can you provide any feedback on whether this approach is correct, please?
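As a sanity check on the intent (this is not pyhf's internals): the combination above should behave like a per-bin Poisson likelihood times a Gaussian penalty keeping the scale factor near 1 with relative width expected_y_error/expected_yield. A toy sketch in plain Python, with all names illustrative:

```python
import math

# Toy negative log-likelihood (not pyhf internals; names illustrative):
# Poisson terms for each bin plus a Gaussian penalty that keeps the scale
# factor mu near 1 with relative width expected_y_error / expected_yield.
def toy_nll(mu, template, observed, expected_yield, expected_y_error):
    # template is assumed normalised so sum(template) == 1
    lam = [mu * t * expected_yield for t in template]  # predicted counts per bin
    poisson = sum(l - o * math.log(l) for l, o in zip(lam, observed))
    rel_width = expected_y_error / expected_yield
    constraint = 0.5 * ((mu - 1.0) / rel_width) ** 2
    return poisson + constraint
```

If the observed data match the scaled template exactly, this toy NLL is minimised at mu = 1, which is the behaviour the normsys constraint is meant to encode.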
In addition, suppose I wanted to include a staterror modifier for the Barlow-Beeston lite implementation, would this be the correct way of doing so?
"samples": [
{
"name": "constrained_template",
"data": MC_derived_sample*expected_yield, #expect normalisation around 1
"modifiers": [
{"name": "BB_lite_uncty", "type": "staterror", "data": np.sqrt(MC_derived_sample)*expected_yield }, #assume poisson error and scale by central value of constraint
{"name": "bu_y", "type": "normfactor", "data": None },
{"name": "bu_y_constr", "type": "normsys",
"data":
{"lo" : 1 - (expected_y_error/expected_yield),
"hi" : 1 + (expected_y_error/expected_yield)}
},
]
}
]
Thanks a lot in advance for your help,
Blaise
Good morning!
I am starting with the BotFramework Composer tool using the RespondingWithCardsSample template, and I am having problems passing a value from one card to another.
First, I edited the AdaptivecardJson template with the following basic code.
#adaptivecardjson
- ```
{
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"version": "1.0",
"type": "AdaptiveCard",
"body": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "Input.ChoiceSet",
"placeholder": "Adults",
"choices": [
{
"title": "1",
"value": "1"
},
{
"title": "2",
"value": "2"
},
{
"title": "3",
"value": "3"
},
{
"title": "4",
"value": "4"
}
],
"id": "InputAdultos"
}
]
}
]
}
],
"actions": [
{
"type": "Action.Submit",
"title": "Send"
}
]
}
```
This card simply contains a choice input for the number of adults and a Send button, and it is attached via the following template:
#AdaptiveCard
[Activity
Attachments = #{json(adaptivecardjson())}
]
Finally, I created another card which simply writes the number of adults received:
# HeroCardAdults(InputAdults)
[HeroCard
text = The number of adults is #{InputAdults}
]
But I don't understand how this works, and it gives me the following error:
common.lg: Error occurs when evaluating expression bfdactivity-028800 (): Error occurs when evaluating expression HeroCardAdults (): Specified argument was out of the range of valid values.
Parameter name: ‘inputadults’ does not match memory scopes: user, conversation, turn, settings, dialog, class, this
Has it happened to someone else?
Thanks!
Change your template to
# HeroCardAdults(InputAdults)
[HeroCard
text = The number of adults is {InputAdults}
]
or if you want to use memory scopes, set your value to dialog.InputAdults and use this template
# HeroCardAdults
[HeroCard
text = The number of adults is {dialog.InputAdults}
]
I am using stratify to build a d3 tree from a flat data structure. However, some fields come back undefined when I try to access them via d.data.fieldname.
Here is my data structure:
var flatData = [
{"name": "Data Encrypted", "parent": null, "category": "test", "score": null },
{"name": "Malware on Target", "parent": "Data Encrypted", "category": "test", "score": null },
{"name": "Malware executed", "parent": "Data Encrypted", "category": "test", "score": "1" },
{"name": "Files modified", "parent": "Data Encrypted", "category": "test", "score": "1" },
];
I am building the hierarchical data structure with this stratify command:
var treeData = d3.stratify()
.id(function(d) { return d.name; })
.parentId(function(d) { return d.parent; })
(flatData);
The d3 tree is displayed correctly, and I can expand/collapse nodes etc., and display the ID and Name of each node using d.data.id and d.data.name respectively. But if I try to use d.data.score or d.data.category to display data, I get 'undefined'.
Any information that can help me get past this issue would be greatly appreciated.
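For reference, d3.stratify keeps each original datum intact on the node's `data` property, so `score` and `category` should normally be reachable. The following plain-JS sketch (not d3's actual code; names illustrative) mimics that wrapping, which may help narrow down where the fields are being lost:

```javascript
// Plain-JS sketch of what stratify-style wrapping does: each node keeps
// the original object on `data`, so d.data.score / d.data.category
// survive. Not d3 internals; names are illustrative.
function wrapLikeStratify(flat) {
  const nodes = new Map(flat.map(function (d) {
    return [d.name, { id: d.name, data: d, children: [] }];
  }));
  let root = null;
  flat.forEach(function (d) {
    const node = nodes.get(d.name);
    if (d.parent === null) root = node;
    else nodes.get(d.parent).children.push(node);
  });
  return root;
}
```

If d.data.score is undefined in your update code, a common culprit is that the nodes being bound come from somewhere other than the stratify output (for example, a copied collapsible-tree example that rebuilds node objects), so it is worth logging d.data at the point of failure.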
While amcharts shows the India map correctly (showing the disputed regions as part of India) when displaying only India (http://jsfiddle.net/zxhseguw/5/)
"dataProvider": {
"map": "indiaLow",
"areas": [ {
"id": "IN-KA",
"value": 4447100,
}, {
"id": "IN-UP",
"value": 38763
}]
},
it shades India differently when rendering it on the world map (http://jsfiddle.net/zxhseguw/6/)
"dataProvider": {
"map": "worldLow",
"areas": [ {
"id": "IN",
"value": 4447100,
}, {
"id": "AU",
"value": 387633
}]
},
I wonder if there is a way to make it render India correctly, just like it's possible in Google Charts by setting origin='India'.
I'm assuming you're referring to the region around Kashmir, correct? Try using worldIndiaLow instead of worldLow, which includes more of that disputed area as part of India.
"dataProvider": {
"map": "worldIndiaLow",
"areas": [ {
"id": "IN",
"value": 4447100,
}, {
"id": "AU",
"value": 387633
}]
},
Updated fiddle
We currently have a problem designing a mapping (and the corresponding query) in Elasticsearch for a relational problem that we couldn't solve with our non-document-oriented SQL mindset.
We want to create a many-to-many relation between different Elasticsearch entries, so that we can edit an entry once and keep everything that uses it up to date.
To describe the problem, we'll use the following simple data model:
Broadcast      Content
------------   -----------
Id             Id
Day            Title
Contents []    Description
So we have two different types to index, broadcasts and contents.
A broadcast can have many contents, and a single content can also be part of several broadcasts (e.g. repetitions).
JSON like:
index/broadcasts
{
"Id": "B1",
"Day": "2014-10-15",
"Contents": [
"C1",
"C2"
]
}
{
"Id": "B2",
"Day": "2014-10-16",
"Contents": [
"C1",
"C3"
]
}
index/contents
{
"Id": "C1",
"Title": "Wake up",
"Description": "Morning show with Jeff Bridges"
}
{
"Id": "C2",
"Title": "Have a break!",
"Description": "Everything about Android"
}
{
"Id": "C3",
"Title": "Late Night Disaster",
"Description": "Comedy show"
}
Now we want to rename "Late Night Disaster" to something more precise and keep all references up to date.
How could we approach this? Are there further options in ES, like includes in RavenDB?
Nested objects and parent-child relations haven't helped us so far.
What about denormalizing? It seems difficult coming from the SQL mindset, but give it a try: even with millions of documents, Lucene indexing can help, and renaming becomes a batch job.
[
{
"Id": "B1",
"Day": "2014-10-15",
"Contents": [
{
"Title": "Wake up",
"Description": "Morning show with Jeff Bridges"
},
{
"Title": "Have a break!",
"Description": "Everything about Android"
}
]
},
{
"Id": "B2",
"Day": "2014-10-16",
"Contents": [
{
"Title": "Wake up",
"Description": "Morning show with Jeff Bridges"
},
{
"Title": "Late Night Disaster",
"Description": "Comedy show"
}
]
}
]
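With this denormalized layout, the rename is then a straightforward batch job over the broadcast documents. A plain-Python sketch of that job (function and variable names are illustrative; in recent Elasticsearch versions this would correspond to something like an update-by-query over the embedded Contents):

```python
# Sketch of the batch rename over denormalized broadcast documents:
# walk every broadcast and update each embedded content whose Title
# matches the old value. Names are illustrative.
def rename_content(broadcasts, old_title, new_title):
    changed = 0
    for broadcast in broadcasts:
        for content in broadcast["Contents"]:
            if content["Title"] == old_title:
                content["Title"] = new_title
                changed += 1
    return changed
```

The trade-off is classic denormalization: reads stay simple and fast, while writes that touch shared content fan out to every document embedding it.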