I'm trying to use Google Workflows to do some automated BigQuery scheduling tasks. The requirement is to run a query on multiple datasets as the following:
- execute_query_job:
    call: execute_query_job
    args:
      query_text: >-
        SELECT
          * EXCEPT(row_number)
        FROM (
          SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number
          FROM `project.${database_id}.table`)
        WHERE
          row_number = 1
However, this doesn't work: the string is interpreted as-is and no interpolation happens. The ${} syntax does not span multiple lines, and Ansible-style {{ var }} syntax did not work either.
Try changing the query to a single line, in a similar fashion to:

- execute_query_job:
    call: execute_query_job
    args:
      query_text: ${"SELECT * EXCEPT(row_number) FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number FROM `project." + database_id + ".table`) WHERE row_number = 1"}
Notice that, as per the Workflows docs:

Variables can be assigned a particular value or the result of an expression.

If that doesn't work, note that making a POST request to the BigQuery API's jobs.insert method will allow you to specify a JobConfiguration, where you can set the defaultDataset field and change its value for each dataset on each iteration. The following sample shows how to iterate based on the values of an array in Workflows.
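A rough, untested sketch of that approach (the dataset list is a placeholder, and the exact shape of the BigQuery connector call is an assumption based on the jobs.insert API):

```yaml
main:
  steps:
    - init:
        assign:
          - datasets: ["dataset_a", "dataset_b"]   # hypothetical dataset ids
    - runPerDataset:
        for:
          value: database_id
          in: ${datasets}
          steps:
            - runQuery:
                call: googleapis.bigquery.v2.jobs.insert
                args:
                  projectId: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
                  body:
                    configuration:
                      query:
                        useLegacySql: false
                        defaultDataset:
                          datasetId: ${database_id}
                        query: SELECT * EXCEPT(row_number) FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number FROM table) WHERE row_number = 1
```

Because defaultDataset changes on each iteration, the table reference in the query can stay unqualified.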
You may want to take a look at the official documentation: you can concatenate the variable across multiple lines.
- assign_vars:
    assign:
      - string: "say"
      - string: ${string + " " + "hello"}
      - string: ${string + " " + "to the world"}
This is a necessary feature... But in the meantime I have a more elegant solution:

- assign_vars:
    assign:
      - multilineVar: |
          #!/bin/bash
          echo Hi ${workflowVar}!
      - workflowVar: StackOverflow
      - multilineExpanded: ${text.replace_all(multilineVar, "${workflowVar}", workflowVar)}
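For illustration only, here is the placeholder-replacement idea in plain Python (mirroring what text.replace_all does in Workflows):

```python
# The multi-line template keeps "${workflowVar}" as literal text, because
# Workflows does not interpolate multi-line strings; we substitute it afterwards.
multiline_var = "#!/bin/bash\necho Hi ${workflowVar}!\n"
workflow_var = "StackOverflow"

multiline_expanded = multiline_var.replace("${workflowVar}", workflow_var)
print(multiline_expanded)
```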
I am trying to calculate/insert a value into my InfluxDB 2.0.8 on a regular interval. Can this be done with a task?
For an oversimplified example: how can I evaluate 1 + 1 once an hour and insert the result into the database?
I assume you are well aware of the general intrinsics of tasks and how to set one up. The only thing missing then is how to generate data rows that do not result from querying. The function array.from() comes in handy for this.
Here is an example (I have not tried it):
import "array"

option task = {name: "oneplusone", every: 1h}

array.from(rows: [
    {_time: now(), _measurement: "calculation", _field: "result", _value: 1 + 1},
])
    |> to(bucket: "mybucket")
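Not Flux, but to illustrate what the task writes on each run: a Python sketch that formats the computed value as an InfluxDB line-protocol point (the measurement and field names are the ones used above):

```python
import time

def calculation_point(value: int, measurement: str = "calculation") -> str:
    # Line protocol: <measurement> <field>=<value> <timestamp-ns>
    # The trailing "i" marks the value as an integer field.
    ns = int(time.time() * 1e9)
    return f"{measurement} result={value}i {ns}"

line = calculation_point(1 + 1)
print(line)
```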
Using JDBC in JMeter, I need to escape a single quote in a variable.
My original query is:
select id from [Teams] where name = '${team}'
But when I get a team like Ain M'lila, the query fails to execute.
What I tried, and which is not working, is:

DECLARE @NevName nvarchar
SET @NevName = REPLACE('${team}', '''', '''''')
select id from [test8].[Team] where name = @NevName
Any solution is appreciated
In order to escape a single quote you need to add another single quote to it.
In your particular case you can escape ' with an extra ' using the __groovy() function, like:
${__groovy(vars.get('team').replaceAll("'"\, "''"),)}
Thanks to Dmitri T for his efforts, but the solution for me was a JSR223 PreProcessor with the following script:
String userString = vars.get("team");
String changedUserString = userString.replace("'","''");
vars.put("teamChanged", changedUserString);
and then used it in the query:
select id from [test8].[Team] where name = '${teamChanged}'
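The same quote-doubling rule, sketched in Python for illustration (the table name and team value are taken from the question):

```python
def escape_single_quotes(value: str) -> str:
    # Double every single quote so the value survives inside a SQL string literal.
    return value.replace("'", "''")

team = "Ain M'lila"
escaped = escape_single_quotes(team)
query = f"select id from [test8].[Team] where name = '{escaped}'"
print(query)
```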
I'm trying to define an array in a Mule YAML configuration and dynamically retrieve the key based on a value.
For example, I have a flow variable code = "finance". I use code to loop through the list and fetch the key roles (see the example below); likewise, if the variable holds "emp1" it should fetch the key employee.
Edited questions to give more clarity.
YAML configuration:
list:
  roles:
    - admin
    - finance
    - hr
    - sales
  employee:
    - emp1
    - emp2
    - emp3
I tried redefining the YAML file as described on this page https://www.w3schools.io/file/yaml-multiline-arrays/ as below (note the --- intended to mark this as a list in YAML, so that I can use p('list') in DataWeave to loop through it). Mule did not like it either:
list:
  ---
  roles:
    - admin
    - finance
    - hr
    - sales
  employee:
    - emp1
    - emp2
    - emp3
Why is Mule not accepting it, and how can I define a multi-line array and fetch the key dynamically?
Any advice and thoughts? Please let me know if the question is not clear. Thanks.
I understand that you want to find the key in the input YAML for a given value. You can use the function findKeyForValue() below. It traverses objects and arrays to find the value, then returns the key to which the value belongs. It may require changes for more complex structures.
%dw 2.0
output application/dw
import * from dw::core::Arrays

fun findKeyForValue(x, v) = do {
    fun findKeyInternal(x, v, lastKeyName) =
        x match {
            case o is Object -> namesOf(o) flatMap findKeyInternal(o[$], v, $) filter ($ != null)
            case a is Array -> if (a some ($ == v)) lastKeyName else null
            else -> null
        }
    var result = findKeyInternal(x, v, "")
    ---
    if (sizeOf(result) > 0) result[0] else null
}
---
findKeyForValue(p('list'), "emp2")
Output for the input in the question:
"employee"
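A Python sketch of the same idea, for illustration (the dict mirrors the YAML from the question):

```python
def find_key_for_value(data, value):
    """Return the key of the list that contains value, or None if absent."""
    for key, item in data.items():
        if isinstance(item, list) and value in item:
            return key
        if isinstance(item, dict):
            found = find_key_for_value(item, value)
            if found is not None:
                return found
    return None

config = {
    "list": {
        "roles": ["admin", "finance", "hr", "sales"],
        "employee": ["emp1", "emp2", "emp3"],
    }
}
print(find_key_for_value(config, "emp2"))  # employee
```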
The splitBy function splits a given string based on a value that matches part of that string.
If you change your DataWeave expression to:
%dw 2.0
output json
---
p('list.roles')
the output will be an array of strings:
[
  "admin",
  "finance",
  "hr",
  "sales"
]
So, what your DataWeave expression is trying to do is to apply the splitBy function over an array, which won't work.
In order to make your DataWeave expression work, you need to apply the filter directly, like in the following DataWeave expression:
%dw 2.0
output application/java
---
p('list.roles') filter $ == vars.code
This will return an array of strings containing zero or one elements, depending on the value of vars.code. In order to return either null or the role, you can use the following DataWeave expression:
%dw 2.0
output application/java
---
((p('list.roles') default []) filter $ == vars.code)[0]
That covers the technical part.
It seems that what you are actually trying to achieve is to check whether the role contained in vars.code exists in the configuration (if the value of vars.code exists in list.roles, the expression above returns that same value). If that's the case, the expression can be adjusted to return a boolean telling whether the role exists in the role list:
%dw 2.0
output application/java
---
!isEmpty((p('list.roles') default []) filter $ == vars.code)
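The filter logic above, sketched in Python for illustration (the role list and variable value come from the question):

```python
roles = ["admin", "finance", "hr", "sales"]
code = "finance"

# p('list.roles') filter $ == vars.code
matches = [r for r in roles if r == code]
# ((...) filter ...)[0] -> first match or None
first_match = matches[0] if matches else None
# boolean "does the role exist?"
role_exists = len(matches) > 0
```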
I have a column that stores C# code. How do I remove the last 3 parameters of "FunctionA"? Note that the column contains multiple functions, but I only need to change calls to "FunctionA", using PL/SQL. I know REGEXP_REPLACE might do the trick, but I can't seem to find a way to match/replace it.
Before:
Test=FunctionA(varID, 1234,"", A.B,"","","last");
Test=FunctionA(varID, 9876,"", C.D);
Test=FunctionB(varID, 5555,"", E.F,"","","last");
After:
Test=FunctionA(varID, 1234,"", A.B);
Test=FunctionA(varID, 9876,"", C.D);<- should not affect this
Test=FunctionB(varID, 5555,"", E.F,"","","last");<- should not affect this
Try finding this pattern:

^(Test=FunctionA\([^,]*,[^,]*,[^,]*,[^,]*),[^,]*,[^,]*,[^,]*\);$

and replacing it with \1);. Anchoring on the function name and the first four parameters ensures that the four-parameter call Test=FunctionA(varID, 9876,"", C.D); and the FunctionB call are left untouched. Here is a sample query:

SELECT
    REGEXP_REPLACE('Test=FunctionA(varID, 1234,"", A.B,"","","last");',
                   '^(Test=FunctionA\([^,]*,[^,]*,[^,]*,[^,]*),[^,]*,[^,]*,[^,]*\);$',
                   '\1);') AS output
FROM dual

Test=FunctionA(varID, 1234,"", A.B);

Edit: when applying this to your table, you can additionally add a WHERE clause which checks the function name, e.g. WHERE col LIKE 'Test=FunctionA(%', so rows for other functions are not even scanned.
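To sanity-check a pattern outside Oracle, here is a sketch using Python's re module with the three sample rows from the question (this variant anchors on the first four parameters so the shorter FunctionA call is left untouched even without a WHERE filter; the syntax used is common to both regex engines):

```python
import re

pattern = r'^(Test=FunctionA\([^,]*,[^,]*,[^,]*,[^,]*),[^,]*,[^,]*,[^,]*\);$'

rows = [
    'Test=FunctionA(varID, 1234,"", A.B,"","","last");',
    'Test=FunctionA(varID, 9876,"", C.D);',
    'Test=FunctionB(varID, 5555,"", E.F,"","","last");',
]

# Keep the first four parameters (group 1) and drop the trailing three.
results = [re.sub(pattern, r'\1);', row) for row in rows]
for r in results:
    print(r)
```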
First time using the pg gem to access a Postgres database. I've connected successfully and can run queries using #exec, but now a simple query built with #exec_params does not seem to be replacing parameters, i.e.:
get '/databases/:db/tables/:table' do |db_name, table_name|
  conn = connect(db_name)
  query_result = conn.exec_params("SELECT * FROM $1;", [table_name])
end
results in:

#<PG::SyntaxError: ERROR:  syntax error at or near "$1"
LINE 1: SELECT * FROM $1;
                      ^ >
This seems like such a simple example to get working - am I fundamentally misunderstanding how to use this method?
You can use placeholders for values, not for identifiers (such as table and column names). This is the one place where you're stuck using string interpolation to build your SQL. Of course, if you're using string wrangling for your SQL, you must be sure to properly quote/escape things; for identifiers, that means using quote_ident:
quote_ident(str) → Object
Returns a string that is safe for inclusion in a SQL query as an identifier. Note: this is not a quote function for values, but for identifiers.
So you'd say something like:
table_name = conn.quote_ident(table_name)
query_result = conn.exec("SELECT * FROM #{table_name}")
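For illustration, the quoting rule that quote_ident applies can be sketched in Python (this mimics, not replaces, the pg gem's implementation):

```python
def quote_ident(name: str) -> str:
    # Wrap the identifier in double quotes and double any embedded
    # double quotes, as PostgreSQL identifier quoting requires.
    return '"' + name.replace('"', '""') + '"'

print(quote_ident("users"))        # "users"
print(quote_ident('weird"name'))   # "weird""name"
```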