InfluxDB Task to Calculate/Insert Value - expression

I am trying to calculate and insert a value into my InfluxDB 2.0.8 database on a regular interval... can this be done with a task?
For an oversimplified example: how can I evaluate 1+1 once an hour and insert the result into the database?

I assume you are well aware of the general workings of tasks and how to set one up. The only thing missing, then, is how to generate data rows that do not come from a query. The function array.from() comes in handy for this.
Here is an example (I did not try it):
import "array"
option task = {name: "oneplusone", every: 1h}
array.from(rows: [
{_time: now(), _measurement: "calculation", _field: "result", _value: 1+1}
])|> to(bucket: "mybucket")
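To verify the task is writing points, a query along these lines (using the same assumed bucket, measurement, and field names as above) should show the hourly results:
from(bucket: "mybucket")
    |> range(start: -24h)
    |> filter(fn: (r) => r._measurement == "calculation" and r._field == "result")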

Related

Powerautomate Parsing JSON Array

I've seen the JSON array questions here and I'm still a little lost, so could use some extra help.
Here's the setup:
My Flow calls a sproc on my DB and that sproc returns this JSON:
{
    "ResultSets": {
        "Table1": [
            {
                "OrderID": 9518338,
                "BasketID": 9518338,
                "RefID": 65178176,
                "SiteConfigID": 237
            }
        ]
    },
    "OutputParameters": {}
}
Then I use a PARSE JSON action to get what looks like the same result, but now I'm told it's parsed and I can call variables.
The issue is that when I try to call just, say, SiteConfigID, I get: "The output you selected is inside a collection and needs to be looped over to be accessed. This action cannot be inside a foreach."
After some research, I know what's going on here. Table1 is an array, and I need to tell Power Automate to grab just the first record of that array so it knows it's working with a single record instead of a full array. Fair enough. So I spin up a "Return value(s) to Power Virtual Agents" action just to see my output. I know I'm supposed to use a 'first' expression or a 'get [0] from array' expression here, but I can't seem to make them work. Below are what I've tried and the errors I got:
Tried:
first(body('Parse-Sproc')?['Table1/SiteConfigID'])
Got: InvalidTemplate. Unable to process template language expressions in action 'Return_value(s)_to_Power_Virtual_Agents' inputs at line '0' and column '0': 'The template language function 'first' expects its parameter be an array or a string. The provided value is of type 'Null'. Please see https://aka.ms/logicexpressions#first for usage details.'.
Also Tried:
body('Parse-Sproc')?['Table1/SiteconfigID']
which just returns a null-valued variable
Finally I tried
outputs('Parse-Sproc')?['Table1']?['value'][0]?['SiteConfigID']
Which STILL gives me a null-valued variable. It's the worst.
In that last expression, I also switched the variable type in the return-to-PVA action to a string instead of a number; no dice.
I also swapped 'outputs' in that expression for 'body'; again no dice.
To be clear: the end result I'm looking for is for the system to just return "SiteConfigID" as a string or an int so that I can pipe that into a virtual agent.
I believe this is what you need as an expression ...
body('Parse-Sproc')?['ResultSets']['Table1'][0]?['SiteConfigID']
You can see I'm just traversing down to the object and through the array to get the value.
Naturally, I don't have your exact flow, but if I take your JSON and load it into a Parse JSON step to get the schema, I am able to get the result. I do get a different schema to yours, though, so it will be interesting to see whether it translates directly.
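For what it's worth, the first() function from the original attempts should also work once it is pointed at the full path to the array; this variant is untested and assumes the same schema:
first(body('Parse-Sproc')?['ResultSets']?['Table1'])?['SiteConfigID']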

Multiline string interpolation on Google Workflows

I'm trying to use Google Workflows to do some automated BigQuery scheduling tasks. The requirement is to run a query on multiple datasets, like the following:
- execute_query_job:
    call: execute_query_job
    args:
      query_text: >-
        SELECT
          * EXCEPT(row_number)
        FROM (
          SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number
          FROM
            `project.${database_id}.table`)
        WHERE
          row_number = 1
However, this doesn't work: the string is passed through as-is and no interpolation happens.
The ${} syntax will not span multiple lines, and Ansible-style {{ var }} syntax also did not work.
Try changing the query to a single line, in a similar fashion to:
- execute_query_job:
    call: execute_query_job
    args:
      query_text: ${"SELECT * EXCEPT(row_number) FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number FROM `project." + database_id + ".table`) WHERE row_number = 1"}
Notice that, as per the Workflows docs:
"Variables can be assigned to a particular value or to the result of an expression."
If that doesn't work, note that making a POST request to the BigQuery API jobs.insert method lets you specify a JobConfiguration, where you can set the defaultDataset field to a different value for each dataset at each iteration. The following sample shows how to iterate over the values of an array in Workflows; a sketch of the whole approach appears below.
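For illustration, a rough sketch of that jobs.insert approach in Workflows might look like this; it is untested and assumes that datasets (an array of dataset IDs) and project_id have already been assigned:
- run_per_dataset:
    for:
      value: dataset_id
      in: ${datasets}
      steps:
        - insert_job:
            call: http.post
            args:
              url: ${"https://bigquery.googleapis.com/bigquery/v2/projects/" + project_id + "/jobs"}
              auth:
                type: OAuth2
              body:
                configuration:
                  query:
                    # Unqualified table names resolve against defaultDataset.
                    query: "SELECT * FROM my_table"
                    useLegacySql: false
                    defaultDataset:
                      datasetId: ${dataset_id}
            result: insert_resp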
You may want to take a look at the official documentation: you can concatenate the variable over multiple lines.
- assign_vars:
    assign:
      - string: "say"
      - string: ${string + " " + "hello"}
      - string: ${string + " " + "to the world"}
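Applied to the query from the question, that concatenation pattern might look like this (assuming database_id is already assigned):
- assign_query:
    assign:
      - query_text: "SELECT * EXCEPT(row_number) FROM ("
      - query_text: ${query_text + " SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number"}
      - query_text: ${query_text + " FROM `project." + database_id + ".table`)"}
      - query_text: ${query_text + " WHERE row_number = 1"}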
This is a necessary feature...
But in the meantime I have a more elegant solution:
- assign_vars:
    assign:
      - multilineVar: |
          #!/bin/bash
          echo Hi ${workflowVar}!
      - workflowVar: StackOverflow
      - multilineExpanded: ${text.replace_all(multilineVar, "${workflowVar}", workflowVar)}

Need linq query

I have 3 fields (Name, Code, displayName) in a list. I need an output list with all the fields, but the displayName field must be split on colons, with one row added to the list per split value. Hence the output list will have 5 rows, since there are 5 display names across the 2 input rows.
I need the LINQ query for this.
Name Code displayName
Napkins_tableware - Napkins and tableware - 3_ply:conventional_napkins
hand-towel - Hand and Towel - 2_ply:towel roll:coloured
Output should be like this:
Name Code displayName
Napkins_tableware - Napkins and tableware - 3_ply
Napkins_tableware - Napkins and tableware - conventional_napkins
hand-towel - Hand and Towel - 2_ply
hand-towel - Hand and Towel - towel roll
hand-towel - Hand and Towel - coloured
The solution I tried in C#:
foreach (ProductDetailsWithFilters qs in CategoryProductList())
{
    foreach (string friendlyname in qs.displayName.Split(':'))
    {
        qs.displayName = friendlyname;
        tempCategoryProductList.Add(qs);
    }
}
If you're translating to LINQ, nested foreach loops correspond to multiple 'from' clauses in query syntax (in dot syntax, the subsequent ones become SelectMany; see below). The following should be close to what you want:
var query =
    from qs in CategoryProductList()
    from friendlyName in qs.displayName.Split(':')
    select new ProductDetailsWithFilters(qs.Name, qs.Code, friendlyName);
Note: because functional programming should be side-effect-free, it's better to select a new ProductDetailsWithFilters instance than to modify the existing one in your query. My presumption is that you can call a constructor to build a new one of these.
To modify the existing property as your loop implementation does, you would have to iterate over the result yourself; LINQ doesn't provide such an operation in the framework. Such side effects often lead to hard-to-find bugs.
To do the equivalent of the above query with a SelectMany and dot-syntax:
var query = CategoryProductList()
    .SelectMany(
        qs => qs.displayName.Split(':'),
        (qs, friendlyName) => new ProductDetailsWithFilters(qs.Name, qs.Code, friendlyName));
Both lead to functionally identical code. In this case, I tend to prefer the query syntax over the dot syntax, partly because there are several SelectMany overloads and handling the projection involves repeating the variables across both lambda expressions. If you had another 'from' to add, the query syntax hides the management of 'transparent identifiers' that you would otherwise have to deal with in the equivalent dot-syntax code. Whatever your preference, you now have both.
It's worth noting that queries are lazy: they do nothing until you iterate over their result. So it's really what you do with the result from here that is the interesting part: store it with .ToList(), data-bind it directly to a UI, use it to serve a web API, etc.
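For instance, a minimal sketch of materializing the result (assuming ProductDetailsWithFilters exposes the three properties from the question):
// Nothing has executed yet; 'query' is only a definition.
var results = query.ToList();   // the query runs here, during enumeration

foreach (var item in results)
{
    Console.WriteLine($"{item.Name} - {item.Code} - {item.displayName}");
}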

How to WRITE a structure?

How can I do the following:
DATA: ls_header TYPE bapimepoheader.
" fill it
WRITE ls_header.
Currently I'm getting an error because WRITE cannot convert the complex type to a string. Is there a simple way to get this code running in ABAP?
You could use something like:
DATA: g_struct TYPE bapimepoheader.

DO.
  ASSIGN COMPONENT sy-index OF STRUCTURE g_struct TO FIELD-SYMBOL(<f>).
  IF sy-subrc NE 0.
    EXIT.
  ENDIF.
  WRITE: / <f>.
ENDDO.
Perhaps not exactly the answer you expect: you can list each field explicitly.
This can be done quite easily via the Pattern mask in SE38:
Select the WRITE pattern.
Enter the structure you want.
Select the fields.
Confirm with "Copy", and you get:
WRITE: bapimepoheader-po_number,
bapimepoheader-comp_code,
bapimepoheader-doc_type,
bapimepoheader-delete_ind,
bapimepoheader-status,
bapimepoheader-creat_date,
bapimepoheader-created_by,
bapimepoheader-item_intvl,
bapimepoheader-vendor,
bapimepoheader-langu,
bapimepoheader-langu_iso,
bapimepoheader-pmnttrms,
bapimepoheader-dscnt1_to,
bapimepoheader-dscnt2_to,
bapimepoheader-dscnt3_to,
bapimepoheader-dsct_pct1,
bapimepoheader-dsct_pct2,
bapimepoheader-purch_org,
bapimepoheader-pur_group,
bapimepoheader-currency,
bapimepoheader-currency_iso,
bapimepoheader-exch_rate,
bapimepoheader-ex_rate_fx,
bapimepoheader-doc_date,
bapimepoheader-vper_start,
bapimepoheader-vper_end,
bapimepoheader-warranty,
bapimepoheader-quotation,
bapimepoheader-quot_date,
bapimepoheader-ref_1,
bapimepoheader-sales_pers,
bapimepoheader-telephone,
bapimepoheader-suppl_vend,
bapimepoheader-customer,
bapimepoheader-agreement,
bapimepoheader-gr_message,
bapimepoheader-suppl_plnt,
bapimepoheader-incoterms1,
bapimepoheader-incoterms2,
bapimepoheader-collect_no,
bapimepoheader-diff_inv,
bapimepoheader-our_ref,
bapimepoheader-logsystem,
bapimepoheader-subitemint,
bapimepoheader-po_rel_ind,
bapimepoheader-rel_status,
bapimepoheader-vat_cntry,
bapimepoheader-vat_cntry_iso,
bapimepoheader-reason_cancel,
bapimepoheader-reason_code,
bapimepoheader-retention_type,
bapimepoheader-retention_percentage,
bapimepoheader-downpay_type,
bapimepoheader-downpay_amount,
bapimepoheader-downpay_percent,
bapimepoheader-downpay_duedate,
bapimepoheader-memory,
bapimepoheader-memorytype,
bapimepoheader-shiptype,
bapimepoheader-handoverloc,
bapimepoheader-shipcond,
bapimepoheader-incotermsv,
bapimepoheader-incoterms2l,
bapimepoheader-incoterms3l.
Now you can do a simple replacement of bapimepoheader with ls_header and you have an output of all fields of the structure.
Maybe this is not elegant, and you must adapt your report if the structure changes. But I like this way, because often I don't need all the fields and I can select the ones I want easily.
I know two ways: one is procedural, the other is OOP.
Here is the procedural approach:
Select the structure's fields (or whatever else you might need) from the Data Dictionary table DD03L into a local internal table.
Loop over that table into a work area.
Check whether the current field is a flat single data type, and if so:
ASSIGN COMPONENT workarea-fieldname OF STRUCTURE ls_header to a field symbol.
WRITE the field symbol.
Do you need the code? A rough sketch follows.
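The following is only a sketch of that approach, untested, and assumes ls_header is the BAPIMEPOHEADER structure from the question:
" Fetch the structure's field list from the dictionary.
DATA lt_fields TYPE STANDARD TABLE OF dd03l.
SELECT * FROM dd03l INTO TABLE lt_fields
  WHERE tabname = 'BAPIMEPOHEADER'.

LOOP AT lt_fields INTO DATA(ls_field).
  " A full version would first check ls_field-inttype for a flat
  " elementary type, as described above.
  ASSIGN COMPONENT ls_field-fieldname OF STRUCTURE ls_header
    TO FIELD-SYMBOL(<lv_value>).
  IF sy-subrc = 0.
    WRITE: / ls_field-fieldname, <lv_value>.
  ENDIF.
ENDLOOP.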
The class CL_ABAP_CONTAINER_UTILITIES was introduced by SAP specifically for this.
Use the FILL_CONTAINER_C method to output the structure in a WRITE manner:
DATA: ls_header TYPE bapimepoheader.

CALL METHOD cl_abap_container_utilities=>fill_container_c
  EXPORTING
    im_value               = ls_header
  IMPORTING
    ex_container           = DATA(container)
  EXCEPTIONS
    illegal_parameter_type = 1
    OTHERS                 = 2.

WRITE container.
This writes your structure into a character container, which you can then output as a string. IDoc segments are created with the same method.

How do I create a compound multi-index in rethinkdb?

I am using Rethinkdb 1.10.1 with the official python driver. I have a table of tagged things which are associated to one user:
{
    "id": "PK",
    "user_id": "USER_PK",
    "tags": ["list", "of", "strings"],
    // Other fields...
}
I want to query by user_id and tag (say, to find all the things by user "tawmas" with tag "tag"). Starting with Rethinkdb 1.10 I can create a multi-index like this:
r.table('things').index_create('tags', multi=True).run(conn)
My query would then be:
res = (r.table('things')
       .get_all('TAG', index='tags')
       .filter(r.row['user_id'] == 'USER_PK')
       .run(conn))
However, this query still needs to scan all the documents with the given tag, so I would like to create a compound index based on the user_id and tags fields. Such an index would allow me to query with:
res = r.table('things').get_all(['USER_PK', 'TAG'], index='user_tags').run(conn)
There is nothing in the documentation about compound multi-indexes. However, I tried to use a custom index function combining the requirements for compound indexes and multi-indexes by returning a list of ["USER_PK", "tag"] pairs.
My first attempt was in python:
r.table('things').index_create(
    'user_tags',
    lambda each: [[each['user_id'], tag] for tag in each['tags']],
    multi=True).run(conn)
This makes the Python driver choke with a MemoryError while trying to parse the index function (I guess list comprehensions aren't really supported by the driver).
So, I turned to my (admittedly rusty) JavaScript and came up with this:
r.table('things').index_create(
    'user_tags',
    r.js(
        """(function (each) {
            var result = [];
            var user_id = each["user_id"];
            var tags = each["tags"];
            for (var i = 0; i < tags.length; i++) {
                result.push([user_id, tags[i]]);
            }
            return result;
        })"""),
    multi=True).run(conn)
This is rejected by the server with a curious exception: rethinkdb.errors.RqlRuntimeError: Could not prove function deterministic. Index functions must be deterministic.
So, what is the correct way to define a compound multi-index? Or is it something which is not supported at this time?
Short answer:
List comprehensions don't work in ReQL functions. You need to use map instead like so:
r.table('things').index_create(
    'user_tags',
    lambda each: each["tags"].map(lambda tag: [each['user_id'], tag]),
    multi=True).run(conn)
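Once that index is created, the compound query from the question should work as written:
res = r.table('things').get_all(['USER_PK', 'TAG'], index='user_tags').run(conn)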
Long answer:
This is actually a somewhat subtle aspect of how RethinkDB drivers work. The reason this doesn't work is that your Python code never actually sees real copies of each document. So in the expression:
lambda each: [[each['user_id'], tag] for tag in each['tags']]
each is never bound to an actual document from your database; it's bound to a special Python variable which represents the document. To demonstrate this, I'd actually try running the following:
q = r.table('things').index_create(
    'user_tags',
    lambda each: print(each))  # only works in python 3
And it will print out something like:
<RqlQuery instance: var_1 >
The driver only knows that this is a variable from the function; in particular, it has no idea whether each["tags"] is an array or something else (it's actually just another, very similar abstract object). So Python doesn't know how to iterate over that field. Basically, exactly the same problem exists in JavaScript.
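To see this for yourself, here is a small, hypothetical illustration (using the import style of the pre-2.4 driver that was current for this question): calling the lambda with r.row instead of a real document shows that it only ever builds an abstract query term.
import rethinkdb as r  # pre-2.4 driver import style

# The driver calls your lambda once, at query-build time, with a
# placeholder term -- never with a real document.
index_fn = lambda each: each['tags'].map(lambda tag: [each['user_id'], tag])

print(index_fn(r.row))  # prints a ReQL term, not data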
