Server-side template is replaced with uploaded document - ruby

I'm trying to create an envelope for embedded signing with two server-side templates and a PDF document I generated, all in one request. One of the server-side templates has a DOB tab that should be filled in with the data specified in the request.
I've reviewed some other questions related to the DocuSign API and composite templates with documents (here and here) and used those examples to format my request.
Here is the request. NOTE: I'm using the multipart-post gem to create the request, so some of this is pseudo-HTTP with some multipart-post info; I'm not sure how to get a fully formed HTTP request out in order to post it here.
POST http://localhost/restapi/v2/accounts/2/envelopes
X-DocuSign-Authentication: [omitted]
-------------RubyMultipartPost
Content-Type: application/json
Content-Disposition: form-data;
name="post_body"
{
"compositeTemplates": [
{
"serverTemplates": [
{
"sequence": 1,
"templateId": "3093E017-2E18-4A30-A104-0201C601CE5F"
}
],
"inlineTemplates": [
{
"sequence": 1,
"recipients": {
"signers": [
{
"email": "jond#gmail.com",
"name": "Jon D. Doe",
"recipientId": "1",
"roleName": "Proposed Insured",
"clientUserId": "jond#gmail.com",
"tabs": {
"textTabs": [
],
"checkboxTabs": [
],
"fullNameTabs": [
],
"dateTabs": [
{
"tabLabel": "DOB",
"name": null,
"value": "1-21-1974",
"documentId": null,
"selected": null
}
]
}
}
]
}
}
],
"document": {
"documentId": "1",
"name": "test.pdf"
}
},
{
"serverTemplates": [
{
"sequence": 2,
"templateId": "63CA2E24-BBB8-499F-A884-021580DF54AF"
}
],
"inlineTemplates": [
{
"sequence": 2,
"recipients": {
"signers": [
{
"email": "jond#gmail.com",
"name": "Jon D. Doe",
"recipientId": "1",
"roleName": "Proposed Insured",
"clientUserId": "jond#gmail.com",
"tabs": {
"textTabs": [
],
"checkboxTabs": [
],
"fullNameTabs": [
],
"dateTabs": [
{
"tabLabel": "DOB",
"name": null,
"value": "1-21-1974",
"documentId": null,
"selected": null
}
]
}
}
]
}
}
]
}
],
"status": "sent"
}
-------------RubyMultipartPost
Content-Disposition: file;filename="test.pdf";documentid=1;name="file1"
Content-Length: 34967
Content-Type: application/pdf
Content-Transfer-Encoding: binary
[pdf bytes omitted]
The resulting embedded view only has two documents in it. The first is the test.pdf I uploaded in the request, and the second is the server-side template specified by templateId=63CA2E24-BBB8-499F-A884-021580DF54AF.
I'm not sure what I'm doing wrong; according to the other questions I referenced above, this should work.

Each compositeTemplate object can only supply files from a single source -- and if a compositeTemplate object specifies a document, then that's considered the document source for that compositeTemplate object. In that case, any documents that happen to be part of a serverTemplate within the same compositeTemplate object will be ignored. This seems to be what you're experiencing.
The request you posted only contains two compositeTemplate objects, so the resulting Envelope only contains documents from two document sources:
The document that's specified in the first compositeTemplate object.
The file(s) contained in the serverTemplate that's specified in the second compositeTemplate object.
If you want the Envelope to also contain the file(s) contained in serverTemplate 3093E017-2E18-4A30-A104-0201C601CE5F, you'll need to add a third compositeTemplate object to the request that specifies 3093E017-2E18-4A30-A104-0201C601CE5F as the server template, and uses the inlineTemplate info to provide necessary recipient and tab information (as you've done with the other compositeTemplate objects in the request you posted). [See below for full request example.]
Finally, note that the sequence properties within each compositeTemplate object specify the order in which the serverTemplates and inlineTemplates are applied within the scope of that specific compositeTemplate; they do nothing to influence the order of documents in the Envelope. It's the order of the compositeTemplate objects in the request JSON/XML itself that determines the sequence of documents in the resulting Envelope.
The following request will result in an Envelope that contains the following files (in this order):
Files from template 3093E017-2E18-4A30-A104-0201C601CE5F
The document that's specified by the request (documentId=1)
Files from template 63CA2E24-BBB8-499F-A884-021580DF54AF
Request body (JSON part):
{
"compositeTemplates": [
{
"serverTemplates": [
{
"sequence": 1,
"templateId": "3093E017-2E18-4A30-A104-0201C601CE5F"
}
],
"inlineTemplates": [
{
"sequence": 2,
"recipients": {
"signers": [
{
"email": "jond#gmail.com",
"name": "Jon D. Doe",
"recipientId": "1",
"roleName": "Proposed Insured",
"clientUserId": "jond#gmail.com",
"tabs": {
"dateTabs": [
{
"tabLabel": "DOB",
"name": null,
"value": "1-21-1974",
"documentId": null,
"selected": null
}
]
}
}
]
}
}
]
},
{
"serverTemplates": [
{
"sequence": 1,
"templateId": "3093E017-2E18-4A30-A104-0201C601CE5F"
}
],
"inlineTemplates": [
{
"sequence": 2,
"recipients": {
"signers": [
{
"email": "jond#gmail.com",
"name": "Jon D. Doe",
"recipientId": "1",
"roleName": "Proposed Insured",
"clientUserId": "jond#gmail.com",
"tabs": {
"dateTabs": [
{
"tabLabel": "DOB",
"name": null,
"value": "1-21-1974",
"documentId": null,
"selected": null
}
]
}
}
]
}
}
],
"document": {
"documentId": "1",
"name": "test.pdf"
}
},
{
"serverTemplates": [
{
"sequence": 1,
"templateId": "63CA2E24-BBB8-499F-A884-021580DF54AF"
}
],
"inlineTemplates": [
{
"sequence": 2,
"recipients": {
"signers": [
{
"email": "jond#gmail.com",
"name": "Jon D. Doe",
"recipientId": "1",
"roleName": "Proposed Insured",
"clientUserId": "jond#gmail.com",
"tabs": {
"dateTabs": [
{
"tabLabel": "DOB",
"name": null,
"value": "1-21-1974",
"documentId": null,
"selected": null
}
]
}
}
]
}
}
]
}
],
"status": "sent"
}
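Since you mentioned you're building the request with the multipart-post gem, here's a rough Ruby sketch of how the JSON part and the PDF part might be assembled. The demo host, variable names, and part filenames are illustrative, and DocuSign may need specific Content-Disposition parameters (such as documentid) on the file part, so adjust as required:
require 'net/http'
require 'stringio'
require 'net/http/post/multipart' # from the multipart-post gem

# post_body : the composite-template JSON string shown above (assumed already built)
# auth_json : your X-DocuSign-Authentication header value (placeholder)
uri = URI.parse('https://demo.docusign.net/restapi/v2/accounts/2/envelopes')

request = Net::HTTP::Post::Multipart.new(
  uri.request_uri,
  {
    # JSON envelope definition goes first, then the document bytes
    'post_body' => UploadIO.new(StringIO.new(post_body), 'application/json', 'post_body.json'),
    'file1'     => UploadIO.new(File.open('test.pdf'), 'application/pdf', 'test.pdf')
  },
  'X-DocuSign-Authentication' => auth_json
)

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

puts response.code
puts response.body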

Related

NiFi: ReplaceText alternatives to modify JSON

My NiFi application receives two somewhat different types of JSON.
The first looks like:
[
{
"campaign": {
"resourceName": "customers/8952771329/campaigns/11381694617",
"status": "ENABLED",
"name": "Saint_Spring_Active Minerals_oct-nov_2020_trueview_skip_5766500views",
"id": "11381694617"
},
"metrics": {
"interactionEventTypes": [
"VIDEO_VIEW"
],
"clicks": "6",
"videoQuartileP100Rate": 0.44493171079034244,
"videoQuartileP25Rate": 0.9747718298919024,
"videoQuartileP50Rate": 0.7339309987701469,
"videoQuartileP75Rate": 0.5337562301767105,
"videoViewRate": 0.4471109114825628,
"videoViews": "27872",
"viewThroughConversions": "0",
"contentBudgetLostImpressionShare": 0.0000013066088274492382,
"contentImpressionShare": 0.0999,
"contentRankLostImpressionShare": 0.9001,
"conversionsValue": 0,
"conversions": 0,
"costMicros": "9338700950",
"ctr": 0.00009624947864865732,
"currentModelAttributedConversions": 0,
"currentModelAttributedConversionsValue": 0,
"engagementRate": 0,
"engagements": "0",
},
"segments": {
"device": "CONNECTED_TV",
"date": "2020-12-20"
}
}
]
And the second:
[
{
"adGroup": {
"resourceName": "customers/5404177717/adGroups/110501283582",
"campaign": "customers/5404177717/campaigns/11628802542"
},
"metrics": {
"interactionEventTypes": [
"CLICK"
],
"clicks": "1",
"averageCpm": 95497428.02172929,
"gmailForwards": "0",
"gmailSaves": "0",
"gmailSecondaryClicks": "0",
"impressions": "4418",
"interactionRate": 0.00022634676324128565,
"interactions": "1"
},
"adGroupAd": {
"resourceName": "customers/5404177717/adGroupAds/110501283582~480227690139",
"status": "ENABLED",
"ad": {
"resourceName": "customers/5404177717/ads/480227690139",
"id": "480227690139",
"name": "20 sec perek"
},
"adGroup": "customers/5404177717/adGroups/110501283582"
},
"segments": {
"device": "DESKTOP",
"date": "2020-11-21"
}
}
]
I already have two tables in my database to save this data, and I keep an attribute table.name so that I don't have to build the same block twice when only the table name differs.
My next processor is FlattenJson. After that I'm using ReplaceText with the following search value (the replacement value is an empty string): (customers\\\/${client.customer.id}\\\/campaigns\\\/|customers\\\/${client.customer.id}\\\/adGroups\\\/).
Why? From this line: "adGroup": "customers/5404177717/adGroups/110501283582" I only need the last value, 110501283582, as ad_group_id. And from this line: "campaign": "customers/5404177717/campaigns/11628802542" I only need 11628802542. ${client.customer.id} can vary, so I'm using Expression Language there.
I also need to rename the JSON key adGroup to ad.group.id, and for that I'm using a second ReplaceText.
Can I do this faster, without the two ReplaceText processors?
Look at the following processors; I think using them could be an alternative (a rough Jolt sketch follows after the links):
JoltTransformJSON:
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.JoltTransformJSON/
UpdateRecord:
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.UpdateRecord/index.html
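For example, here is a rough, untested Jolt chain sketch for the second JSON type, assuming a Jolt version whose modify operation supports the split function; the output field names (ad_group_id, campaign_id) are illustrative:
[
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "*": {
        "adGroup": {
          "rnParts": "=split('/', @(1,resourceName))",
          "campaignParts": "=split('/', @(1,campaign))"
        }
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "*": {
        "adGroup": {
          "rnParts": { "3": "[&3].ad_group_id" },
          "campaignParts": { "3": "[&3].campaign_id" }
        },
        "*": "[&1].&"
      }
    }
  }
]
The first operation splits the resourceName and campaign strings on "/"; the second keeps only their last segment (index 3) as ad_group_id and campaign_id and passes the other top-level fields through unchanged. The other JSON type would need its own spec, and UpdateRecord with record paths is the record-oriented alternative.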

Compare two JSON arrays using two or more columns values in Dataweave 2.0

I had a task where I needed to compare and filter two JSON arrays based on matching values in one column of each array, so I used this answer to this question.
However, now I need to compare the two JSON arrays matching two, or even three, column values.
I already tried using one map inside another, but it isn't working.
The example data can be the same as in the answer I used. Compare db.code = file.code, db.name = file.nm and db.id = file.identity:
var db = [
{
"CODE": "A11",
"NAME": "Alpha",
"ID": "C10000"
},
{
"CODE": "B12",
"NAME": "Bravo",
"ID": "B20000"
},
{
"CODE": "C11",
"NAME": "Charlie",
"ID": "C30000"
},
{
"CODE": "D12",
"NAME": "Delta",
"ID": "D40000"
},
{
"CODE": "E12",
"NAME": "Echo",
"ID": "E50000"
}
]
var file = [
{
"IDENTITY": "D40000",
"NM": "Delta",
"CODE": "D12"
},
{
"IDENTITY": "C30000",
"NM": "Charlie",
"CODE": "C11"
}
]
See if this works for you:
%dw 2.0
output application/json
var file = [
{
"IDENTITY": "D40000",
"NM": "Delta",
"CODE": "D12"
},
{
"IDENTITY": "C30000",
"NM": "Charlie",
"CODE": "C11"
}
]
var db = [
{
"CODE": "A11",
"NAME": "Alpha",
"ID": "C10000"
},
{
"CODE": "B12",
"NAME": "Bravo",
"ID": "B20000"
},
{
"CODE": "C11",
"NAME": "Charlie",
"ID": "C30000"
},
{
"CODE": "D12",
"NAME": "Delta",
"ID": "D40000"
},
{
"CODE": "E12",
"NAME": "Echo",
"ID": "E50000"
}
]
---
file flatMap(v) -> (
db filter (v.IDENTITY == $.ID and v.NM == $.NAME and v.CODE == $.CODE)
)
Using flatMap instead of map flattens the result; otherwise you would get an array of arrays in the output. That is cleaner, unless you are expecting the possibility of multiple matches per file entry, in which case I'd stick with map.
You can compare objects in DW directly, so the solution you linked can be modified to the following:
%dw 2.0
import * from dw::core::Arrays
output application/json
var db = [
{
"CODE": "A11",
"NAME": "Alpha",
"ID": "C10000"
},
{
"CODE": "B12",
"NAME": "Bravo",
"ID": "B20000"
},
{
"CODE": "C11",
"NAME": "Charlie",
"ID": "C30000"
},
{
"CODE": "D12",
"NAME": "Delta",
"ID": "D40000"
},
{
"CODE": "E12",
"NAME": "Echo",
"ID": "E50000"
}
]
var file = [
{
"IDENTITY": "D40000",
"NM": "Delta",
"CODE": "D12"
},
{
"IDENTITY": "C30000",
"NM": "Charlie",
"CODE": "C11"
}
]
---
db partition (e) -> file contains {IDENTITY:e.ID,NM:e.NAME,CODE:e.CODE}
You can make use of filter directly, together with contains:
db filter(value) -> file contains {IDENTITY: value.ID, NM: value.NAME, CODE: value.CODE}
This filters the db array based on whether file contains the object {IDENTITY: value.ID, NM: value.NAME, CODE: value.CODE}. However, this will not work if the objects in the file array have other fields that you are not using for the comparison. In that case, you can change the filter condition to check whether an object exists in the file array (using the filter selector) for which the condition applies. You can use the expression below to check that:
db filter(value) -> file[?($.IDENTITY==value.ID and $.NM == value.NAME and $.CODE == value.CODE)] != null

Combine JSON responses in NiFi

We are calling InvokeHTTP processors and getting a JSON response. Example:
{
"id": "h569gcjhcm",
"doi": {
"id": "10.17632/h569gcjhcm.1",
"status": "allocated",
"prefix": "10.17632"
},
"name": "Data for: Flooding of the Caspian Sea at the intensification of Northern Hemisphere Glaciations",
"description": "Supplementary data for the Jeirankechmez section in Azerbaijan.\n\n- Appendix A contains all paleomagnetic data and interpretations of the Jeirankechmez section. This .dir file can be imported into the paleomagnetism.org webportal under \"Interpretation Portal\", \"Advanced Options\", \"Import Application Save\". For further details on the use of paleomagnetism.org please refer to the article by Koymans et al. (2016) - https://doi.org/10.1016/j.cageo.2016.05.007.\n- Appendix B contains the magnetic susceptibility data for the analysed samples, including geographic coordinates and stratigraphic levels.\n- Appendix C contains the 40Ar/39Ar data for the three analysed volcanic ash layers. ",
"version": 1,
"publish_date": "2019-01-29T12:51:38.090Z",
"data_licence": {
"id": "01d9c749-3c4d-4431-9df3-620b2dcfe144",
"short_name": "CC BY 4.0",
"full_name": "Creative Commons Attribution 4.0 International",
"description": "This dataset is licensed under a Creative Commons Attribution 4.0 International licence.\n\nWhat does this mean?\nYou can share, copy and modify this dataset so long as you give appropriate credit, provide a link to the CC BY license, and indicate if changes were made, but you may not do so in a way that suggests the rights holder has endorsed you or your use of the dataset. Note that further permission may be required for any content within the dataset that is identified as belonging to a third party.",
"url": "http://creativecommons.org/licenses/by/4.0",
"category": "Creative"
},
"contributors": [
{
"first_name": "Christiaan",
"last_name": "van Baak"
},
{
"first_name": "Marius",
"last_name": "Stoica"
},
{
"first_name": "Arjen",
"last_name": "Grothe"
},
{
"first_name": "Gareth",
"last_name": "Davies"
},
{
"profile_id": "72970719-95c8-341b-80d2-afa9e7154baf",
"first_name": "Wout",
"last_name": "Krijgsman"
},
{
"profile_id": "3a4bfe2c-4098-3859-9b88-789fa993e05a",
"first_name": "Keith",
"last_name": "Richards"
},
{
"profile_id": "f1660f3c-ebbd-3289-8240-1f4ea7913df4",
"first_name": "Klaudia",
"last_name": "Kuiper"
},
{
"first_name": "Elmira",
"last_name": "Aliyeva"
}
],
"versions": [
{
"version": 1,
"publish_date": "2019-01-29T12:51:38.090Z",
"available": true
}
],
"files": [
{
"filename": "Appendix_A_Jeirankechmez_pmag_interpretations.dir",
"id": "f2f4cba7-2411-4737-a9b2-f094db30dca1",
"content_details": {
"id": "994bc865-5300-4d76-a373-e528ccd830e8",
"sha256_hash": "2427c4b077372760973ce8224694f2a2ee5383c7f022ad818164d847a20e27cc",
"sha1_hash": "73792dc6d6eb2c1de1e04926ba5d4420dd0aaece",
"content_type": "application/x-director",
"size": 917022,
"created_date": "2019-01-03T00:00:00.000Z"
"download_expiry_time": "2019-01-29T13:52:25.729Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
},
{
"filename": "Appendix_B_Sample_locations_susceptibility.xlsx",
"id": "64241bf0-5279-49e8-a505-be9075b910e1",
"content_details": {
"id": "af8809d0-8e63-4599-abaa-e7af9ad39959",
"sha256_hash": "0588f44a0cbd477aa2798323e57ce0b2d4a118e767c0b1ffdc9eb1017e4d23c2",
"sha1_hash": "02e89f6f197ebf495e1e2c3d1aab250efc7545e7",
"content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"size": 24770,
"created_date": "2019-01-03T00:00:00.000Z"
,
"download_expiry_time": "2019-01-29T13:52:25.732Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
},
{
"filename": "Appendix_C_ArAr_data.xlsx",
"id": "2e912027-ff3f-48ad-98b9-b643b59ba0e3",
"content_details": {
"id": "4960377c-060d-41f6-b7af-150617d8ebeb",
"sha256_hash": "235dc32c1e99f350ee5c99908a5f5d72d1aeeab02f78c2e0181d585bd1880fa6",
"sha1_hash": "6483156e4577948cac5d2679eee862c76faed1c9",
"content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"size": 18510,
"created_date": "2019-01-03T00:00:00.000Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
}
],
"articles": [
{
"id": "10.1016/j.gloplacha.2019.01.007",
"title": "Flooding of the Caspian Sea at the intensification of Northern Hemisphere Glaciations",
"doi": "10.1016/j.gloplacha.2019.01.007",
"journal": {
"issn": "0921-8181",
"name": "Global and Planetary Change",
"url": "http://www.sciencedirect.com/science/journal/09218181"
}
}
],
"categories": [
{
"id": "http://com/vocabulary/OmniScience/Concept-170590667",
"label": "Geology"
},
{
"id": "http://data.elsevier.com/vocabulary/OmniScience/Concept-473860195",
"label": "Strontium Isotope"
}
],
"institutions": [ ],
"metrics": {
},
"available": true,
"related_links": [ ]
}
I am using $.contributors.profile_id from the above JSON to call a new endpoint (InvokeHTTP): https://api.xxx.com/profile/$.profile_id
The JSON response for this:
"contributors": [
{
“profile_id”:”cedferfiherhforhforf”
"first_name": “xxx”,
"last_name": "van Baak”,
“other_ids”:[] ,
“Other info”: “deeded” }
I have to call this endpoint once per object in contributors (say there are 5 objects in contributors, then I have to call the endpoint 5 times) and combine those 5 responses together.
Then I have to merge that combined response back into the main response above.
Just an example:
EvaluateJsonPath to extract "id" into an attribute, so you can join on it later
SplitJson to split your JSON by "contributors"
Call the endpoint (InvokeHTTP)
MergeContent to merge by "id", using the fragment count written by SplitJson (a rough sketch of the properties follows)
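Assuming the response shown above, the relevant property values might look roughly like this (the attribute names are illustrative, and you would add a second EvaluateJsonPath after the split to pull out each profile_id):
EvaluateJsonPath   Destination = flowfile-attribute;  dataset.id = $.id
SplitJson          JsonPath Expression = $.contributors
EvaluateJsonPath   Destination = flowfile-attribute;  profile.id = $.profile_id
InvokeHTTP         Remote URL = https://api.xxx.com/profile/${profile.id}
MergeContent       Merge Strategy = Defragment (uses the fragment.* attributes written by SplitJson),
                   or Bin-Packing with Correlation Attribute Name = dataset.id
The merged flowfile then holds all contributor responses together, correlated back to the original dataset by dataset.id.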

How can I create a FHIR ValueSet with hierarchical content?

I want to create a ValueSet with a link to my CodeSystem.
The ValueSet expansion with nested contains should look like this:
{
...
"expansion": {
"parameter": [
{
"name": "total",
"valueString": "282"
}
],
"contains": [
{
"code": "0",
"display": "asd",
"contains": [
{
"code": "High"
},
{
"code": "Okso",
"display": "1"
}
]
}
]
}
...
}
I tried to create a CodeSystem with hierarchical concepts, and a ValueSet with compose.include pointing to my CodeSystem. But I got an expansion with only one level of contains, like this:
"contains": [
{
"code": "0",
"display": "asd"
}, {
"code": "High"
}, {
"code": "Okso",
"display": "1"
}
]
I tried changing different CodeSystem attributes, but that brought no result.
Please help me with a simple example!
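For reference, here is a rough sketch of what I tried (the URLs and codes are just placeholders):
{
  "resourceType": "CodeSystem",
  "url": "http://example.org/fhir/CodeSystem/my-codes",
  "status": "draft",
  "content": "complete",
  "hierarchyMeaning": "is-a",
  "concept": [
    {
      "code": "0",
      "display": "asd",
      "concept": [
        { "code": "High" },
        { "code": "Okso", "display": "1" }
      ]
    }
  ]
}
and
{
  "resourceType": "ValueSet",
  "url": "http://example.org/fhir/ValueSet/my-codes",
  "status": "draft",
  "compose": {
    "include": [
      { "system": "http://example.org/fhir/CodeSystem/my-codes" }
    ]
  }
}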

Extracting data from Deeply Nested JSON in Ruby on Rails

I have a JSON output like:
{
"query": {
"results": {
"industry": [
{
"id": "112",
"name": "Agricultural Chemicals",
"company": [
{
"name": "Adarsh Plant",
"symbol": "ADARSHPL"
},
{
"name": "Agrium Inc",
"symbol": "AGU"
}
},
]
{
"id": "914",
"name": "Water Utilities",
"company": [
{
"name": "Acque Potabili",
"symbol": "ACP"
},
{
"name": "Water Resources Group",
"symbol": "WRG"
}
]
}
]
}
}
}
I need the output like: Company Name, Company Symbol, Company id, Company id name.
An example of the output would be:
Adarsh Plant, ADARSHPL, 112, Agricultural Chemicals
Agrium Inc, AGU, 112, Agricultural Chemicals
Acque Potabili, ACP, 914, Water Utilities
Water Resources Group, WRG, 914, Water Utilities
Any suggestions?
There's a typo in your sample json, but we'll talk about it later.
Assuming your JSON data has already been converted to a hash object like this:
json={
"query"=> {
"results"=> {
"industry"=> [
{
"id"=> "112",
"name"=> "Agricultural Chemicals",
"company"=> [
{
"name"=> "Adarsh Plant",
"symbol"=> "ADARSHPL"
},
{
"name"=> "Agrium Inc",
"symbol"=> "AGU"
}
]
},
{
"id"=> "914",
"name"=> "Water Utilities",
"company"=> [
{
"name"=> "Acque Potabili",
"symbol"=> "ACP"
},
{
"name"=> "Water Resources Group",
"symbol"=> "WRG"
}
]
}
]
}
}
}
You can use inject and map to handle the two-level array under industry; inject will iterate the outer array:
json["query"]["results"]["industry"].inject([]){|m,o|
m += o["company"].map{|x| [x["name"],x["symbol"],o["id"],o["name"]]}
}
The result is an array of arrays, in the order you want:
=> [["Adarsh Plant", "ADARSHPL", "112", "Agricultural Chemicals"],
["Agrium Inc", "AGU", "112", "Agricultural Chemicals"],
["Acque Potabili", "ACP", "914", "Water Utilities"],
["Water Resources Group", "WRG", "914", "Water Utilities"]]
If you want to get a single comma-delimited string, you can chain .flatten.join(",") on the end.
json["query"]["results"]["industry"].inject([]){|m,o|
m += o["company"].map{|x| [x["name"],x["symbol"],o["id"],o["name"]]}
}.flatten.join(",")
Result:
=> Adarsh Plant,ADARSHPL,112,Agricultural Chemicals,Agrium Inc,AGU,112,Agricultural Chemicals,Acque Potabili,ACP,914,Water Utilities,Water Resources Group,WRG,914,Water Utilities
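If you would rather have one comma-separated line per company instead of one long joined string, here is a small variant using flat_map (same json hash as above):
lines = json["query"]["results"]["industry"].flat_map do |industry|
  industry["company"].map do |company|
    # company name, company symbol, industry id, industry name
    [company["name"], company["symbol"], industry["id"], industry["name"]].join(", ")
  end
end

puts lines
# Adarsh Plant, ADARSHPL, 112, Agricultural Chemicals
# Agrium Inc, AGU, 112, Agricultural Chemicals
# Acque Potabili, ACP, 914, Water Utilities
# Water Resources Group, WRG, 914, Water Utilities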
The typo in your JSON data:
In the middle, }, ] { should be changed to ] },{ .
Convert JSON to hash:
https://stackoverflow.com/a/7964378/3630826
