JSONata: Creating Array from Nested Array of Objects

I'm new to JSONata, so this is probably a fairly easy formatting problem that I just don't know yet. I need to take a large object and reduce it down to something more manageable. I've been able to do this with objects, but I'm running into a problem with a particular data structure. It's metadata from a list of images, where the keywords are in an array of objects, each with the structure {"name": "keyword"}. Like this:
{
"body": {
"files": [
{
"id": 101936854,
"title": "Taco Salad",
"keywords": [
{
"name": "background"
},
{
"name": "baked"
},
{
"name": "beef"
}
]
},
{
"id": 412961542,
"title": "Fiji",
"keywords": [
{
"name": "beach"
},
{
"name": "sea"
},
{
"name": "tree"
}
]
}
]
}
}
When I use the query
$zip(body.files.id,body.files.title,body.files.[keywords.name])
{
"id": $[0],
"title": $[1],
"keywords": $[2]
}
I get what I want, but only the first object, like so:
{
"id": 101936854,
"title": "Taco Salad",
"keywords": [
"background",
"baked",
"beef"
]
}
If I add the . to map over all the objects, I get back multiple objects, but only the first value of name from each array. Like so:
$zip(body.files.id,body.files.title,body.files.[keywords.name]).
{
"id": $[0],
"title": $[1],
"keywords": $[2]
}
Gets:
[
{
"id": 101936854,
"title": "Taco Salad",
"keywords": "background"
},
{
"id": 412961542,
"title": "Fiji",
"keywords": "beach"
}
]
I believe this is because the input objects in that array all have the same key of name. So, I need to somehow get all the values of name and put them in one array called keywords.

If I'm not mistaken, you want something like this:
body.files.{
"id": id,
"title": title,
"keywords": keywords.name
}
or, you can play with the transform operator:
(body ~> |files|{ "keywords": keywords.name }|).files
which produces this output for your example:
[
{
"id": 101936854,
"title": "Taco Salad",
"keywords": [
"background",
"baked",
"beef"
]
},
{
"id": 412961542,
"title": "Fiji",
"keywords": [
"beach",
"sea",
"tree"
]
}
]
See it live here https://stedi.link/ybzLADi and here https://stedi.link/TLk7Loq
P.S. If you do want to use $zip for this, I think you'd have to avoid arrays of arrays to prevent JSONata from flattening the results. One possible implementation:
$zip(
body.files.id,
body.files.title,
body.files.{"keywords": keywords.name}
).{
"id": $[0], "title": $[1], "keywords": $[2].keywords
}
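If you want to sanity-check these expressions outside the playground, here's a minimal Node.js sketch (assuming the jsonata npm package, v2+, where evaluate() returns a Promise):

const jsonata = require("jsonata"); // npm install jsonata

// the { "body": { "files": [...] } } object from the question
const data = { body: { files: [ /* ... */ ] } };

// same expression as the first suggestion above
const expression = jsonata(`
  body.files.{
    "id": id,
    "title": title,
    "keywords": keywords.name
  }
`);

// evaluate() is async in jsonata 2.x
expression.evaluate(data).then(result =>
  console.log(JSON.stringify(result, null, 2))
);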

Related

Delete existing Records if they are not in sent array Rails 5 API

I need help deleting records that exist in the DB but are not in the array sent in a request.
My Array:
[
{ "id": "509",
"name": "Motions move great",
"body": "",
"subtopics": [
{
"title": "Tywan",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportations Gracious",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportation part",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
},
{
"name": "Motions kkk",
"body": "",
"subtopics": [
{
"title": "Transportations",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
}
]
Below is my implementation. Where am I going wrong?
@topics = @course.topics.map { |m| m.id }
@delete = @topics
puts @delete
if Topic.where.not('id IN (?)', @topics).any?
  @topics.each do |topic|
    topic.destroy
  end
end
It's not clear to me where, in your code, you pick up the ids sent in the array you showed before... so I'm assuming something like this:
objects_sent = [
{ "id": "509",
"name": "Motions move great",
"body": "",
"subtopics": [
{
"title": "Tywan",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportations Gracious",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportation part",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
},
{
"name": "Motions kkk",
"body": "",
"subtopics": [
{
"title": "Transportations",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
}
]
Since you have your array like this, the only information you need to query the database is the ids (also assuming the ids in the array are the ids in the database, otherwise it wouldn't make sense). You can get them like this:
sent_ids = objects_sent.map { |o| o[:id] }.compact.map(&:to_i) # note the symbol keys; skip objects without an id
Also, it seems to me that, for the code you showed, you want to destroy them based on a specific course. There are two ways to do that. First, using the relationship (I prefer this one):
@course.topics.where.not(id: sent_ids).destroy_all
Or you can do the query directly on the Topic model, passing the course_id param:
Topic.where(course_id: @course.id).where.not(id: sent_ids).destroy_all
ActiveRecord is smart enough to build that query correctly either way. Give it a test and see which works better for you.
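Putting it together, a sketch of the whole flow inside an update action (assuming the array arrives as params[:topics], which is hypothetical here):

# assuming params[:topics] carries the array from the request
sent_ids = params[:topics].map { |t| t[:id] }.compact.map(&:to_i)

# destroy this course's topics that were not sent back
@course.topics.where.not(id: sent_ids).destroy_all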

Gaussian constraint in `normfactor`

I would like to understand how to impose a Gaussian constraint with central value expected_yield and error expected_y_error on a normfactor modifier. I want to fit observed_data with a single sample MC_derived_sample. My goal is to extract the bu_y modifier such that the integral of MC_derived_sample scaled by bu_y is Gaussian-constrained to expected_yield +/- expected_y_error.
My present attempt employs the normsys modifier as follows:
spec = {
"channels": [
{
"name": "singlechannel",
"samples": [
{
"name": "constrained_template",
"data": MC_derived_sample*expected_yield, #expect normalisation around 1
"modifiers": [
{"name": "bu_y", "type": "normfactor", "data": None },
{"name": "bu_y_constr", "type": "normsys",
"data":
{"lo" : 1 - (expected_y_error/expected_yield),
"hi" : 1 + (expected_y_error/expected_yield)}
},
]
},
]
},
],
"observations": [
{
"name": "singlechannel",
"data": observed_data,
}
],
"measurements": [
{
"name": "sig_y_extraction",
"config": {
"poi": "bu_y",
"parameters": [
{"name":"bu_y", "bounds": [[(1 - (5*expected_y_error/expected_yield), 1+(5*expected_y_error/expected_yield)]], "inits":[1.]},
]
}
}
],
"version": "1.0.0"
}
My thinking is that normsys will introduce a Gaussian constraint about unity on the sample scaled by expected_yield.
Can you please provide any feedback as to whether this approach is correct?
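For context, here is a minimal sketch of how I would feed this spec to pyhf for the fit (assuming pyhf >= 0.7 and the minuit backend for parameter uncertainties):

import pyhf

ws = pyhf.Workspace(spec)
model = ws.model()  # the POI ("bu_y") is taken from the measurement config
data = ws.data(model)

pyhf.set_backend("numpy", "minuit")  # minuit is needed for return_uncertainties
bestfit, errors = pyhf.infer.mle.fit(data, model, return_uncertainties=True).T
print(dict(zip(model.config.par_names, bestfit)))  # par_names is a property in pyhf >= 0.7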
In addition, suppose I wanted to include a staterror modifier for the Barlow-Beeston-lite implementation; would this be the correct way of doing so?
"samples": [
{
"name": "constrained_template",
"data": MC_derived_sample*expected_yield, #expect normalisation around 1
"modifiers": [
{"name": "BB_lite_uncty", "type": "staterror", "data": np.sqrt(MC_derived_sample)*expected_yield }, #assume poisson error and scale by central value of constraint
{"name": "bu_y", "type": "normfactor", "data": None },
{"name": "bu_y_constr", "type": "normsys",
"data":
{"lo" : 1 - (expected_y_error/expected_yield),
"hi" : 1 + (expected_y_error/expected_yield)}
},
]
}
Thanks a lot in advance for your help,
Blaise

Sending values between cards with BotFramework Composer

Good morning!
I am getting started with the Bot Framework Composer tool, using the RespondingWithCardsSample template, and I am having problems testing sending a value from one card to another.
On the one hand, I have edited the AdaptivecardJson card with the following basic code:
#adaptivecardjson
- ```
{
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"version": "1.0",
"type": "AdaptiveCard",
"body": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "Input.ChoiceSet",
"placeholder": "Adults",
"choices": [
{
"title": "1",
"value": "1"
},
{
"title": "2",
"value": "2"
},
{
"title": "3",
"value": "3"
},
{
"title": "4",
"value": "4"
}
],
"id": "InputAdultos"
}
]
}
]
}
],
"actions": [
{
"type": "Action.Submit",
"title": "Send"
}
]
}
```
This card simply contains a choice input indicating the number of adults and the send button, and it is attached by the following activity:
#AdaptiveCard
[Activity
Attachments = #{json(adaptivecardjson())}
]
Finally, I created another card which simply writes the number of adults received:
# HeroCardAdults(InputAdults)
[HeroCard
text = The number of adults is #{InputAdults}
]
But I just don't understand how it works, and it gives me the following error:
common.lg: Error occurs when evaluating expression bfdactivity-028800 (): Error occurs when evaluating expression HeroCardAdults (): Specified argument was out of the range of valid values.
Parameter name: ‘inputadults’ does not match memory scopes: user, conversation, turn, settings, dialog, class, this
Has it happened to someone else?
Thanks!
Change your template to
# HeroCardAdults(InputAdults)
[HeroCard
text = The number of adults is {InputAdults}
]
or if you want to use memory scopes, set your value to dialog.InputAdults and use this template
# HeroCardAdults
[HeroCard
text = The number of adults is {dialog.InputAdults}
]
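P.S. A note on where the submitted value lives (this is an assumption about how Action.Submit payloads normally flow, not something shown in the sample): the card's input values arrive on the incoming activity, so with the input id InputAdultos from the card above you could copy the value into dialog scope with a Set a Property action before calling the second template:

dialog.InputAdults = turn.activity.value.InputAdultos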

Combine JSON responses in NiFi

We are calling InvokeHTTP processors and getting a JSON response. Example:
{
"id": "h569gcjhcm",
"doi": {
"id": "10.17632/h569gcjhcm.1",
"status": "allocated",
"prefix": "10.17632"
},
"name": "Data for: Flooding of the Caspian Sea at the intensification of Northern Hemisphere Glaciations",
"description": "Supplementary data for the Jeirankechmez section in Azerbaijan.\n\n- Appendix A contains all paleomagnetic data and interpretations of the Jeirankechmez section. This .dir file can be imported into the paleomagnetism.org webportal under \"Interpretation Portal\", \"Advanced Options\", \"Import Application Save\". For further details on the use of paleomagnetism.org please refer to the article by Koymans et al. (2016) - https://doi.org/10.1016/j.cageo.2016.05.007.\n- Appendix B contains the magnetic susceptibility data for the analysed samples, including geographic coordinates and stratigraphic levels.\n- Appendix C contains the 40Ar/39Ar data for the three analysed volcanic ash layers. ",
"version": 1,
"publish_date": "2019-01-29T12:51:38.090Z",
"data_licence": {
"id": "01d9c749-3c4d-4431-9df3-620b2dcfe144",
"short_name": "CC BY 4.0",
"full_name": "Creative Commons Attribution 4.0 International",
"description": "This dataset is licensed under a Creative Commons Attribution 4.0 International licence.\n\nWhat does this mean?\nYou can share, copy and modify this dataset so long as you give appropriate credit, provide a link to the CC BY license, and indicate if changes were made, but you may not do so in a way that suggests the rights holder has endorsed you or your use of the dataset. Note that further permission may be required for any content within the dataset that is identified as belonging to a third party.",
"url": "http://creativecommons.org/licenses/by/4.0",
"category": "Creative"
},
"contributors": [
{
"first_name": "Christiaan",
"last_name": "van Baak"
},
{
"first_name": "Marius",
"last_name": "Stoica"
},
{
"first_name": "Arjen",
"last_name": "Grothe"
},
{
"first_name": "Gareth",
"last_name": "Davies"
},
{
"profile_id": "72970719-95c8-341b-80d2-afa9e7154baf",
"first_name": "Wout",
"last_name": "Krijgsman"
},
{
"profile_id": "3a4bfe2c-4098-3859-9b88-789fa993e05a",
"first_name": "Keith",
"last_name": "Richards"
},
{
"profile_id": "f1660f3c-ebbd-3289-8240-1f4ea7913df4",
"first_name": "Klaudia",
"last_name": "Kuiper"
},
{
"first_name": "Elmira",
"last_name": "Aliyeva"
}
],
"versions": [
{
"version": 1,
"publish_date": "2019-01-29T12:51:38.090Z",
"available": true
}
],
"files": [
{
"filename": "Appendix_A_Jeirankechmez_pmag_interpretations.dir",
"id": "f2f4cba7-2411-4737-a9b2-f094db30dca1",
"content_details": {
"id": "994bc865-5300-4d76-a373-e528ccd830e8",
"sha256_hash": "2427c4b077372760973ce8224694f2a2ee5383c7f022ad818164d847a20e27cc",
"sha1_hash": "73792dc6d6eb2c1de1e04926ba5d4420dd0aaece",
"content_type": "application/x-director",
"size": 917022,
"created_date": "2019-01-03T00:00:00.000Z"
"download_expiry_time": "2019-01-29T13:52:25.729Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
},
{
"filename": "Appendix_B_Sample_locations_susceptibility.xlsx",
"id": "64241bf0-5279-49e8-a505-be9075b910e1",
"content_details": {
"id": "af8809d0-8e63-4599-abaa-e7af9ad39959",
"sha256_hash": "0588f44a0cbd477aa2798323e57ce0b2d4a118e767c0b1ffdc9eb1017e4d23c2",
"sha1_hash": "02e89f6f197ebf495e1e2c3d1aab250efc7545e7",
"content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"size": 24770,
"created_date": "2019-01-03T00:00:00.000Z"
,
"download_expiry_time": "2019-01-29T13:52:25.732Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
},
{
"filename": "Appendix_C_ArAr_data.xlsx",
"id": "2e912027-ff3f-48ad-98b9-b643b59ba0e3",
"content_details": {
"id": "4960377c-060d-41f6-b7af-150617d8ebeb",
"sha256_hash": "235dc32c1e99f350ee5c99908a5f5d72d1aeeab02f78c2e0181d585bd1880fa6",
"sha1_hash": "6483156e4577948cac5d2679eee862c76faed1c9",
"content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"size": 18510,
"created_date": "2019-01-03T00:00:00.000Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
}
],
"articles": [
{
"id": "10.1016/j.gloplacha.2019.01.007",
"title": "Flooding of the Caspian Sea at the intensification of Northern Hemisphere Glaciations",
"doi": "10.1016/j.gloplacha.2019.01.007",
"journal": {
"issn": "0921-8181",
"name": "Global and Planetary Change",
"url": "http://www.sciencedirect.com/science/journal/09218181"
}
}
],
"categories": [
{
"id": "http://com/vocabulary/OmniScience/Concept-170590667",
"label": "Geology"
},
{
"id": "http://data.elsevier.com/vocabulary/OmniScience/Concept-473860195",
"label": "Strontium Isotope"
}
],
"institutions": [ ],
"metrics": {
},
"available": true,
"related_links": [ ]
}
I am using $.contributors.profile_id from the above JSON to call a new endpoint (InvokeHTTP): https://api.xxx.com/profile/$.profile_id
The JSON response for this:
"contributors": [
{
“profile_id”:”cedferfiherhforhforf”
"first_name": “xxx”,
"last_name": "van Baak”,
“other_ids”:[] ,
“Other info”: “deeded” }
I have to call this endpoint once per object in contributors (say there are 5 objects in contributors, so I have to call the endpoint 5 times) and combine those 5 responses together.
Then I have to merge the combined response back into the main response.
Just an example of a flow (a sketch of the processor settings follows below):
1. EvaluateJsonPath to extract "id" into an attribute, so you can join on it later
2. SplitJson to split your JSON on "contributors"
3. InvokeHTTP to call the endpoint for each split
4. MergeContent to merge the results back together by "id", using the fragment count that SplitJson writes
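A sketch of the relevant processor settings (the property values here are assumptions to adapt to your flow, not a drop-in config):

EvaluateJsonPath:  Destination = flowfile-attribute, dynamic property id = $.id
SplitJson:         JsonPath Expression = $.contributors
InvokeHTTP:        Remote URL = https://api.xxx.com/profile/${profile_id}
                   (assumes profile_id was first extracted to an attribute with EvaluateJsonPath)
MergeContent:      Merge Strategy = Defragment
                   (rejoins the splits using the fragment.identifier / fragment.count
                   attributes that SplitJson writes)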

Filter objects in geojson based on a specific key

I'm trying to edit a GeoJSON file to keep only the objects that have the key "name".
The filter works, but I can't find a way to keep the other objects, specifically the geometry, and redirect the whole thing to a new GeoJSON file. Is there a way to output the whole object after filtering on one of its child objects?
Here is an example of my data. The first object has the "name" property and the second doesn't:
{
"features": [
{
"type": "Feature",
"id": "way/24824633",
"properties": {
"#id": "way/24824633",
"highway": "tertiary",
"lit": "yes",
"maxspeed": "50",
"name": "Rue de Kleinbettingen",
"surface": "asphalt"
},
"geometry": {
"type": "LineString",
"coordinates": [
[
5.8997935,
49.6467825
],
[
5.8972561,
49.6467445
]
]
}
},
{
"type": "Feature",
"id": "way/474396855",
"properties": {
"#id": "way/474396855",
"highway": "path"
},
"geometry": {
"type": "LineString",
"coordinates": [
[
5.8020608,
49.6907648
],
[
5.8020695,
49.6906054
]
]
}
}
]
}
Here is what I tried, using jq
cat file.geojson | jq '.features[].properties | select(has("name"))'
The "geometry" is also a child of "features" but I can't find a way to make the selection directly from the "features" level. Is there some way to do that? Or a better path to the solution?
So, the required ouput is:
{
"type": "Feature",
"id": "way/24824633",
"properties": {
"#id": "way/24824633",
"highway": "tertiary",
"lit": "yes",
"maxspeed": "50",
"name": "Rue de Kleinbettingen",
"surface": "asphalt"
},
"geometry": {
"type": "LineString",
"coordinates": [
[
5.8997935,
49.6467825
],
[
5.8972561,
49.6467445
]
]
}
}
You can assign the filtered list back to .features:
jq '.features |= map(select(.properties|has("name")))'
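And to redirect the whole filtered document to a new file, for example:

jq '.features |= map(select(.properties | has("name")))' file.geojson > filtered.geojson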
