Issues getting a flow to send the correct JSON body when using Power Automate's HTTP request

I'm using a Power Automate flow to call a native Smartsheet API that does a POST. The POST is working, but my MULTI_PICKLIST type field is not being populated correctly in Smartsheet because of the double quotes.
The API URL is: concat('https://api.smartsheet.com/2.0/sheets/', variables('vSheetID'), '/rows')
In the Body section of the HTTP action I build my JSON; the section of interest looks like this:
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      @{variables('vServices')}
    ]
  }
}
My vServices variable's raw output looks like this:
{
  "body": {
    "name": "vServices",
    "value": "Test1, Test2"
  }
}
The format needs to be like this (it works this way in Postman):
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      "Test1","Test2"
    ]
  }
}
As a step in formatting my vServices variable, I tried using a replace function to replace the ',' with '","', but this ultimately ends up as \",\" in the output.
Any suggestions on how to get around this? Ultimately, I need the JSON body to read like this, but haven't been able to achieve it in the Body section:
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      "Test1","Test2"
    ]
  }
}
rather than this (which is what I get when using the replace function):
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      "Test1\",\"Test2"
    ]
  }
}
Thank you in advance,

I resolved my issue by taking the original variable and sending it to a Compose step that did a split on the comma separator. I then added a step to set a new variable to the output of the Compose step. This left me with a perfectly set up array in the exact format I needed, and it resolved the issues I was having with double quotes and escape sequences.
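For reference, a minimal sketch of that approach (the action and variable names here are placeholders, not from the original flow): the Compose step's input is the expression

split(variables('vServices'), ', ')

and a Set variable step then assigns outputs('Compose') to an array variable, say vServicesArray. The HTTP body can reference the array directly, so the flow serializes it as proper JSON:

"values": @{variables('vServicesArray')}

which should render as "values": ["Test1","Test2"] with no manual quote handling.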

Related

Terraform - Place variable inside EOF tag

I have a Terraform file that I'm reusing to create several AWS EventBridge rules (as triggers for some Lambdas).
In a different part of the file I'm able to use for_each to create several EventBridge rules and name them accordingly. My problem is that I can't do the same thing inside the EOF block (whose content needs to be different for each EventBridge rule), since it takes everything as a string.
I need to replace the ARN in "prefix": "arn:aws:medialive:us-west-2:11111111111:channel:3434343" with a variable. How can I do that?
This is the EOF part of the Terraform code:
event_pattern = <<EOF
{
  "source": ["aws.medialive"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["medialive.amazonaws.com"],
    "eventName": ["StopChannel"],
    "responseElements": {
      "arn": [{
        "prefix": "arn:aws:medialive:us-west-2:11111111111:channel:3434343"
      }]
    }
  }
}
EOF
}
It's called a heredoc string, not an EOF tag. "EOF" just happens to be the string you are using to mark the beginning and end of a multi-line string. You could use anything there that doesn't occur in the actual multi-line string; you could replace "EOF" with "MYMULTILINESTRING".
To place the value of a variable in a heredoc string in Terraform, you do exactly the same thing you would do with other strings in Terraform: you use string interpolation.
event_pattern = <<EOF
{
  "source": ["aws.medialive"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["medialive.amazonaws.com"],
    "eventName": ["StopChannel"],
    "responseElements": {
      "arn": [{
        "prefix": "${var.my_arn_variable}"
      }]
    }
  }
}
EOF
}
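As an aside, a common alternative (a sketch only, assuming the same var.my_arn_variable) is to build the pattern with jsonencode so Terraform handles the JSON quoting and the interpolation for you:

# jsonencode produces the JSON string, so no heredoc or manual escaping is needed
event_pattern = jsonencode({
  source        = ["aws.medialive"]
  "detail-type" = ["AWS API Call via CloudTrail"]
  detail = {
    eventSource = ["medialive.amazonaws.com"]
    eventName   = ["StopChannel"]
    responseElements = {
      arn = [{
        prefix = var.my_arn_variable
      }]
    }
  }
})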

Azure Data Factory REST API paging with Elasticsearch

While developing a pipeline that uses Elasticsearch as a source, I ran into an issue related to paging. I am using the Elasticsearch SQL API. I started by making the request in Postman, and it works well. The request body looks like this:
{
  "query": "SELECT Id,name,ownership,modifiedDate FROM \"core\" ORDER BY Id",
  "fetch_size": 20,
  "cursor": ""
}
After the first run, the response body contains a cursor string, which is a pointer to the next page. If I send the request in Postman and provide the cursor value from the previous request, it returns the data for the second page, and so on. I am trying to achieve the same result in Azure Data Factory. For this I use a Copy activity, which stores the response to Azure Blob storage. The source setup is the following:
(screenshot: Copy activity source configuration)
This is the expression for the body:
{
  "query": "SELECT Id,name,ownership,modifiedDate FROM \"@{variables('TableName')}\" ORDER BY Id",
  "fetch_size": @{variables('Rows')},
  "cursor": ""
}
I have no idea how to correctly set up the pagination rule. The pipeline works properly, but only for the first request. I've tried setting Headers.cursor and the expression $.cursor, but that setup leads to an infinite loop and the pipeline fails with the Elasticsearch restriction.
I've also tried to read the documentation at https://learn.microsoft.com/en-us/azure/data-factory/connector-rest#pagination-support, but it seems pretty limited in terms of usage examples and is difficult to understand.
Could somebody help me understand how to build the pipeline with paging?
The response with the cursor looks like this:
{
  "columns": [
    {
      "name": "companyId",
      "type": "integer"
    },
    {
      "name": "name",
      "type": "text"
    },
    {
      "name": "ownership",
      "type": "keyword"
    },
    {
      "name": "modifiedDate",
      "type": "datetime"
    }
  ],
  "rows": [
    [
      2,
      "mic Inc.",
      "manufacture",
      "2021-03-31T12:57:51.000Z"
    ]
  ],
  "cursor": "g/WuAwFaAXNoRG5GMVpYSjVWR2hsYmtabGRHTm9BZ0FBQUFBRUp6VGxGbUpIZWxWaVMzcGhVWEJITUhkbmJsRlhlUzFtWjNjQUFBQUFCQ2MwNWhaaVIzcFZZa3Q2WVZGd1J6QjNaMjVSVjNrdFptZDP/////DwQBZgljb21wYW55SWQBCWNvbXBhbnlJZAEHaW50ZWdlcgAAAAFmBG5hbWUBBG5hbWUBBHRleHQAAAABZglvd25lcnNoaXABCW93bmVyc2hpcAEHa2V5d29yZAEAAAFmDG1vZGlmaWVkRGF0ZQEMbW9kaWZpZWREYXRlAQhkYXRldGltZQEAAAEP"
}
I finally found the solution; hopefully it will be useful for the community.
Basically, the solution needs to be split into a few steps.
Step 1: Make the first request as in the question description and stage the response file to blob storage.
Step 2: Read the blob file, get the cursor value, and set it to a variable (a sketch of the expression is at the end of this answer).
Step 3: Keep requesting data with a changed body:
{"cursor" : "#{variables('cursor')}" }
The pipeline looks like this:
(screenshot: pipeline)
The pagination configuration looks like the following:
(screenshot: pagination rules) It is a workaround, since the server ignores this header, but we need something that allows sending the request in a loop.
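For step 2, a minimal sketch of the kind of expression involved (the activity, dataset, and variable names here are placeholders, not from the original pipeline): point a Lookup activity with "First row only" enabled at the staged blob using a JSON dataset, then set the cursor variable in a Set variable activity with

@activity('Lookup staged response').output.firstRow.cursor

If the repeated requests are driven by an Until loop, its condition can check whether the cursor came back empty, for example @empty(variables('cursor')).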

Insert more than one record using GraphQL Mutation

I would like to insert more than one record using a GraphQL mutation, but it is giving an error. Here is the code I used:
input BusinessImageInput {
  business_id: Int
  image_url: String
}
mutation MyMutation($images: [BusinessImageInput!]) {
  insert_business_images(objects: [$images]) {
    affected_rows
  }
}
And here is the variable I want to pass as a parameter:
{"images": [
{
"business_id": 15,
"image_url": "https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcTVzlb1cEw8E0LeLJzk9c0OQV-N387Nt2Kn5w&usqp=CAU"
},
{
"business_id": 15,
"image_url": "https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcTVzlb1cEw8E0LeLJzk9c0OQV-N387Nt2Kn5w&usqp=CAU"
}
]
}
Here is the error
{
  "errors": [
    {
      "extensions": {
        "path": "$.query",
        "code": "bad-request"
      },
      "message": "not a valid GraphQL query"
    }
  ]
}
Please help me out.
There is one glaring issue in your code. This line
insert_business_images(objects: [$images]) {
should be
insert_business_images(objects: $images) {
Notice the removed square brackets.
If that does not help, then we'll need more information, such as:
what error do you get?
which implementation of GraphQL are you using both client-side and server-side?
what does the GraphQL code (and possibly resolvers) look like on the server? You have only given us the client-side of the equation.
It's as simple as
mutation MyMutation($images: [BusinessImageInput!]) {
  insert_business_images(images: $images) {
or
mutation MyMutation($objects: [BusinessImageInput!]) {
  insert_business_images(objects: $objects) {
It depends on the server's insert_business_images mutation definition and the name of its argument (images or objects?) - use the explorer to check. And, as you can see above, the input argument and the variable are usually given the same name; they differ only by the $ prefix.
https://graphql.org/learn/queries/#variables
You must also match the server's input types.
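Putting the pieces together, a sketch of the corrected request, assuming the server's argument really is named objects and that the input type name matches what the server exposes (check both in the explorer):

mutation MyMutation($images: [BusinessImageInput!]!) {
  insert_business_images(objects: $images) {
    affected_rows
  }
}

The variables are then passed exactly as in the question, with no extra square brackets around $images inside the mutation.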

How do I use FreeFormTextRecordSetWriter

In my NiFi controller service I want to configure the FreeFormTextRecordSetWriter, but I have no idea what I should put in the "Text" field. I'm getting the text from my source (in my case GetSolr) and just want to write it out, period.
The documentation and mailing list do not seem to tell me how this is done; any help is appreciated.
EDIT: Here are the sample input and the output I want to achieve (as you can see: no transformation needed, plain text, no JSON input).
EDIT: I now realize that I can't tell GetSolr to return just CSV data - I have to use JSON.
So referencing with an attribute seems to be fine. What the documentation omits is that the ${flowFile} attribute should contain the complete flowfile that is returned.
Sample input:
{
  "responseHeader": {
    "zkConnected": true,
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "*:*",
      "_": "1553686715465"
    }
  },
  "response": {
    "numFound": 3194,
    "start": 0,
    "docs": [
      {
        "id": "{402EBE69-0000-CD1D-8FFF-D07756271B4E}",
        "MimeType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        "FileName": "Test.docx",
        "DateLastModified": "2019-03-27T08:05:00.103Z",
        "_version_": 1629145864291221504,
        "LAST_UPDATE": "2019-03-27T08:16:08.451Z"
      }
    ]
  }
}
Wanted output
{402EBE69-0000-CD1D-8FFF-D07756271B4E}
BTW: The documentation says this:
The text to use when writing the results. This property will evaluate the Expression Language using any of the fields available in a Record.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
I want to use my source's text, so I'm confused
You need to use expression language as if the record's fields are the FlowFile's attributes.
Example:
Input:
{
  "t1": "test",
  "t2": "ttt",
  "hello": true,
  "testN": 1
}
Text property in FreeFormTextRecordSetWriter:
${t1} k!${t2} ${hello}:boolean
${testN}Num
Output (using ConvertRecord):
test k!ttt true:boolean
1Num
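Applied to the Solr record from the question, the Text property would then presumably be just

${id}

which should emit the wanted {402EBE69-0000-CD1D-8FFF-D07756271B4E} line for each record.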
EDIT:
It seems like what you need is to read from Solr and write a single-column CSV. For that you should use CSVRecordSetWriter. Two notes on that:
Consider upgrading to 1.9.1; starting from 1.9.0, the schema can be inferred for you.
Otherwise, you can set Schema Access Strategy to Use 'Schema Text' Property
and then use the following schema in Schema Text:
{
  "name": "MyClass",
  "type": "record",
  "namespace": "com.acme.avro",
  "fields": [
    {
      "name": "id",
      "type": "string"
    }
  ]
}
This should work.
I'll edit it into my answer. If it works for you, please choose my answer :)

How to get name/confidence individually from classify_text?

Most of the other methods in the Language API, such as analyze_syntax, analyze_sentiment, etc., can return their constituent elements, like
sentiment.score
sentiment.magnitude
token.part_of_speech.tag
and so on, but I have not found a way to return name and confidence in isolation from classify_text. It doesn't look like it's possible, but that seems weird. Am I missing something? Thanks
The language.documents.classifyText method returns ClassificationCategory objects, each of which contains name and confidence. If you only want one of the fields, you can filter by categories/name or categories/confidence. As an example, I executed:
POST https://language.googleapis.com/v1/documents:classifyText?fields=categories%2Fname&key={YOUR_API_KEY}
{
  "document": {
    "content": "this is a test for a StackOverflow question. I get an error because I need more words in the document and I don't know what else to say",
    "type": "PLAIN_TEXT"
  }
}
Which returns:
{
  "categories": [
    {
      "name": "/Science/Computer Science"
    },
    {
      "name": "/Computers & Electronics/Programming"
    },
    {
      "name": "/Jobs & Education"
    }
  ]
}
Direct link to API explorer for interactive testing of my example (change content, filters, etc.)
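If you are using the Python client library (which the sentiment.score / token.part_of_speech.tag accessors in the question suggest), the same fields are available on the classify_text response. A minimal sketch, assuming the google-cloud-language v2+ client:

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content=(
        "this is a test for a StackOverflow question. I get an error because "
        "I need more words in the document and I don't know what else to say"
    ),
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.classify_text(request={"document": document})

# Each ClassificationCategory exposes name and confidence individually.
for category in response.categories:
    print(category.name, category.confidence)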
