I am developing my Composer application using a TDD approach, so it's important that all of the code can run in the embedded runtime used by the tests.
My stumbling block is that I cannot get queries with an ORDER BY clause to work in tests.
This is a snippet of my model:
asset AssetStatement identified by statementRevisionId {
  o String statementRevisionId
  o String statementId
  o DateTime createdAt
  o String effectiveDate regex=/^[0-9]{4}-[0-9]{2}-[0-9]{2}$/
  o Integer revision range=[0,]
  o Boolean actual
  o AssetStatementInfo info
}
And this is the query:
query FindActualAssetStatement {
  description: "Finds actual asset statement with given (non-unique) id"
  statement:
    SELECT com.assetrisk.AssetStatement
      WHERE (actual == true AND statementId == _$id)
      ORDER BY revision DESC
      LIMIT 1
}
If I remove the ORDER BY line the query runs, but when it's there I get the following exception:
Error: Cannot sort on field(s) "revision" when using the default index
at validateSort (node_modules/pouchdb-find/lib/index.js:472:13)
at node_modules/pouchdb-find/lib/index.js:1138:5
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
The same error happens even if I use the asset's unique key field for sorting.
This seems to be a known limitation of PouchDB:
https://github.com/pouchdb/pouchdb/issues/6399
However, I don't seem to have access to the underlying database object in the embedded Composer environment, so I cannot configure the indices needed to make the tests work.
Is there a way this could be made to work?
Currently, we work around this by modifying the test command to comment out the ORDER BY statements before the tests run and restore them afterwards:
"test": "sed -i '' -e 's, ORDER BY,// ORDER BY,g' ./queries.qry && mocha -t 0 --recursive && sed -i '' -e 's,// ORDER BY, ORDER BY,g' ./queries.qry"
It is a bit hacky, and it does not solve the underlying issue of PouchDB's index handling, but it's probably better than not testing at all.
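Another stopgap is to run the query without ORDER BY and apply the selection logic client-side in the test helpers, since ORDER BY revision DESC LIMIT 1 just means "the matching statement with the highest revision". A language-neutral sketch of that logic in Python (field names taken from the model above; this is not Composer API code):

```python
# Emulates "WHERE (actual == true AND statementId == id)
#           ORDER BY revision DESC LIMIT 1"
# on results fetched without the ORDER BY clause.
def find_actual_statement(statements, statement_id):
    matches = [s for s in statements
               if s["actual"] and s["statementId"] == statement_id]
    if not matches:
        return None
    # Highest revision wins, replacing ORDER BY revision DESC LIMIT 1.
    return max(matches, key=lambda s: s["revision"])

statements = [
    {"statementId": "A1", "revision": 1, "actual": True},
    {"statementId": "A1", "revision": 3, "actual": True},
    {"statementId": "A1", "revision": 4, "actual": False},
    {"statementId": "B2", "revision": 2, "actual": True},
]
```

This keeps the .qry file untouched at the cost of pulling more rows into the test process.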
I want to use TimescaleDB in my Spring Boot application, like this:
@Query("""select time_bucket(':bucket' minutes) as bucket, avg(data) as value
    from h_table ht
    where
    ...
    group by bucket""", nativeQuery = true)
fun getByDeviceAndCode(@Param("bucket") bucket: Int,
    ...): List<ResultData>
but I get an error:
ERROR: invalid input syntax for type interval: ":bucket minutes"
How can I pass the :bucket parameter to the SQL?
Thanks,
Zamek
I think you have a mistake in the ending quote, and the time column seems to be missing. It should be as follows (where time_col should be replaced with the name of your time column):
select time_bucket(':bucket minutes', time_col) as bucket, avg(data) as value ...
https://docs.timescale.com/api/latest/hyperfunctions/time_bucket/#sample-usage
If you still get the error, try casting it explicitly to a Postgres interval, with something like this:
select time_bucket(cast(':bucket minutes' as interval), time_col) as bucket, avg(data) as value ...
To avoid problems, I would also change the type of bucket to a String and include the time unit (minutes) inside it. This way it will be treated as a whole string and avoid issues with single quotes:
select time_bucket(cast(:bucket as interval), time_col) as bucket, avg(data) as value ...
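For intuition about what the query computes: time_bucket(interval, time_col) floors each timestamp to the start of its bucket, so grouping by it averages the data per time window. A rough sketch of the bucketing arithmetic in Python (epoch seconds, minutes as the unit; an illustration, not TimescaleDB's implementation):

```python
def time_bucket(bucket_minutes, epoch_seconds):
    """Floor a Unix timestamp to the start of its bucket,
    analogous to TimescaleDB's time_bucket(interval, time_col)."""
    width = bucket_minutes * 60          # bucket width in seconds
    return (epoch_seconds // width) * width
```

Every timestamp inside the same 5-minute window maps to the same bucket start, which is why GROUP BY bucket aggregates per window.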
I'm trying to use Google Workflows to automate some BigQuery scheduling tasks. The requirement is to run a query on multiple datasets, as in the following:
- execute_query_job:
    call: execute_query_job
    args:
      query_text: >-
        SELECT
          * EXCEPT(row_number)
        FROM (
          SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number
          FROM
            `project.${database_id}.table`)
        WHERE
          row_number = 1
However, this doesn't work, since the string is interpreted as-is and no interpolation happens.
The ${} syntax cannot span multiple lines, and the Ansible-style {{ var }} syntax did not work either.
Try changing the query to a single line, in a similar fashion to:
- execute_query_job:
    call: execute_query_job
    args:
      query_text: ${"SELECT * EXCEPT(row_number) FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number FROM `project."+database_id+".table`) WHERE row_number = 1"}
Notice that, as per the Workflows docs:
Variables can be assigned to a particular value or to the result of an expression.
If that doesn't work, note that making a POST request to the BigQuery API's jobs.insert method will allow you to specify a JobConfiguration, where you could set the defaultDataset field and change this value for each dataset at each iteration. The following sample shows how to make iterations based on the values of an array in Workflows.
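Whichever route you take, the interpolation itself is ordinary string concatenation: a query template with the dataset id spliced in once per iteration. A hypothetical Python sketch of building the per-dataset query text (the dataset ids are made up for illustration):

```python
# Template mirroring the deduplication query above; {dataset} is the
# slot the workflow fills in for each dataset it iterates over.
QUERY_TEMPLATE = ("SELECT * EXCEPT(row_number) FROM "
                  "(SELECT *, ROW_NUMBER() OVER (PARTITION BY uuid) row_number "
                  "FROM `project.{dataset}.table`) WHERE row_number = 1")

def build_queries(dataset_ids):
    """Produce one concrete query string per dataset."""
    return [QUERY_TEMPLATE.format(dataset=d) for d in dataset_ids]
```

Each produced string is what would be passed as query_text for that iteration.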
You may want to take a look at the official documentation: you can concatenate the variable over multiple lines.
- assign_vars:
    assign:
      - string: "say"
      - string: ${string+" "+"hello"}
      - string: ${string+" "+"to the world"}
This is a necessary feature...
But in the meantime, here is a more elegant solution:
- assign_vars:
    assign:
      - multilineVar: |
          #!/bin/bash
          echo Hi ${workflowVar}!
      - workflowVar: StackOverflow
      - multilineExpanded: ${text.replace_all(multilineVar, "${workflowVar}", workflowVar)}
I need assistance with QueryDSL predicate composition: how do I write a QueryDSL predicate that compares two arrays (finds any UUID matches between the two arrays) using the && operator, like this:
select '{e48f54d5-9845-4987-a53d-e0ecfe3dbb43}'::uuid[] && '{e48f54d5-9845-4987-a53d-e0ecfe3dbb43,4e9a43f2-cb23-4f1b-9f7f-c09687d97570}'::uuid[];
Using:
Cockroach - v20.1.7,
QueryDSL - v4.3.1
I tried the following:
private BooleanBuilder createPredicates(QPlayer player, List<UUID> otherUuids) {
    BooleanBuilder predicates = new BooleanBuilder();
    // player.listOfUuids is of type ListPath<java.util.UUID, ComparablePath<java.util.UUID>>
    predicates.and(player.listOfUuids.any().in(otherUuids));
    return predicates;
}
But it raises an exception:
java.lang.IllegalStateException: name property not available for path of type COLLECTION_ANY. Use getElement() to access the generic path element.
I also tried to create a booleanTemplate like this:
predicates.and(Expressions.booleanTemplate("{0} && '{{1}}'::uuid[]", player.listOfUuids, StringUtils.join(",", otherUuids)));
This produces the following SQL:
select ... where player.business_unit_ids && '{$1}'::uuid[]
But executing it raises an exception:
io.r2dbc.postgresql.ExceptionFactory$PostgresqlNonTransientResourceException: [08P01] received too many type hints: 1 vs 0 placeholders in query
This is because the extra '{' and '}' (which are required to wrap the values into a uuid array) are interpreted as another placeholder, and neither special-character escaping nor Unicode escapes help.
Any thoughts on how comparing two arrays might be achieved using QueryDSL?
I figured out how to add the desired predicate with the && overlap operator:
predicates.and(Expressions.booleanTemplate("{0} && {1}::uuid[]", arg0, String.format("{%s}", arg1.stream().map(UUID::toString).collect(joining(",")))))
And it's working based on example query:
select '{e48f54d5-9845-4987-a53d-e0ecfe3dbb43,e48f54d5-9845-4987-a53d-e0ecfe3dbb45}'::uuid[] && '{e48f54d5-9845-4987-a53d-e0ecfe3dbb40,e48f54d5-9845-4987-a53d-e0ecfe3dbb45}'::uuid[];
I didn't find the && overlap operator among those QueryDSL supports in its Ops class, so I wasn't able to write this predicate in a different way.
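For readers unfamiliar with it: the Postgres/CockroachDB && (overlap) operator is true exactly when the two arrays share at least one element, which is what the template above delegates to the database. The equivalent check, sketched in Python:

```python
def arrays_overlap(a, b):
    """Mirror of SQL's && (overlap) operator:
    true if the two arrays share any element."""
    return not set(a).isdisjoint(b)
```

The example query above is true because e48f54d5-...-e0ecfe3dbb45 appears in both arrays.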
I have a custom gem which creates an ActiveRecord query from input that comes from an Elasticsearch instance.
# record_ids: are the returned ids of the ES results
# order: is the order of the of the ids that ES returns
search_class.where(search_class.primary_key => record_ids).order(order)
Right now the implementation builds the order string directly into the order variable, so it looks like this: ["\"positions\".\"id\" = 'fcdc924a-21da-440e-8d20-eec9a71321a7' DESC"]
This works fine but throws a deprecation warning, which means it will ultimately stop working in Rails 6.
DEPRECATION WARNING: Dangerous query method (method whose arguments are used as raw SQL) called with non-attribute argument(s): "\"positions\".\"id\" = 'fcdc924a-21da-440e-8d20-eec9a71321a7' DESC". Non-attribute arguments will be disallowed in Rails 6.0. This method should not be called with user-provided values, such as request parameters or model attributes. Known-safe values can be passed by wrapping them in Arel.sql()
So I tried a couple of different approaches, but all of them without success.
order = ["\"positions\".\"id\" = 'fcdc924a-21da-440e-8d20-eec9a71321a7' DESC"]
# Does not work since order is an array
.order(Arel.sql(order))
# No errors but only returns an ActiveRecord_Relation
# on .inspect it returns `PG::SyntaxError: ERROR: syntax error at or near "["`
.order(Arel.sql("#{order}"))
# .to_sql: ORDER BY [\"\\\"positions\\\".\\\"id\\\" = 'fcdc924a-21da-440e-8d20-eec9a71321a7' DESC\"]"
order = ['fcdc924a-21da-440e-8d20-eec9a71321a7', ...]
# Won't work since it's only for integer values
.order("idx(ARRAY#{order}, #{search_class.primary_key})")
# .to_sql ORDER BY idx(ARRAY[\"fcdc924a-21da-440e-8d20-eec9a71321a7\", ...], id)
# Only returns an ActiveRecord_Relation
# on .inspect it returns `PG::InFailedSqlTransaction: ERROR:`
.order("array_position(ARRAY#{order}, #{search_class.primary_key})")
# .to_sql : ORDER BY array_position(ARRAY[\"fcdc924a-21da-440e-8d20-eec9a71321a7\", ...], id)
I am sort of stuck, since Rails will force attribute arguments in the future and has no option to opt out of this. Since the order is a code-generated array and I have full control of the values, I am curious how I can implement this. Maybe someone has had this issue before and can give some useful insight or ideas?
You could try applying Arel.sql to the elements of the array; that should work, i.e.:
search_class.where(search_class.primary_key => record_ids)
.order(order.map {|i| i.is_a?(String) ? Arel.sql(i) : i})
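Stepping back, the intent of that ORDER BY is just to make the database rows come back in the id order Elasticsearch returned. The same effect, sketched in Python as a plain sort by position in the search result list (records modeled as dicts for illustration):

```python
def order_like_search(records, record_ids):
    """Sort DB rows so they follow the id order the search engine returned,
    i.e. the effect the generated ORDER BY clause is after."""
    position = {rid: i for i, rid in enumerate(record_ids)}
    return sorted(records, key=lambda r: position[r["id"]])
```

In SQL terms this is what array_position(ARRAY[...], id) computes; doing it in application code is an alternative when building a safe ORDER BY string proves awkward.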
This is my first time using the pg gem to access a Postgres database. I've connected successfully and can run queries using #exec, but now a simple query built with #exec_params does not seem to be replacing the parameters. I.e.:
get '/databases/:db/tables/:table' do |db_name, table_name|
  conn = connect(db_name)
  query_result = conn.exec_params("SELECT * FROM $1;", [table_name])
end
results in #<PG::SyntaxError: ERROR: syntax error at or near "$1" LINE 1: SELECT * FROM $1; ^ >
This seems like such a simple example to get working - am I fundamentally misunderstanding how to use this method?
You can use placeholders for values, not for identifiers (such as table and column names). This is the one place where you're stuck using string interpolation to build your SQL. Of course, if you're using string wrangling for your SQL, you must be sure to properly quote/escape things; for identifiers, that means using quote_ident:
+ (Object) quote_ident(str)
Returns a string that is safe for inclusion in a SQL query as an identifier. Note: this is not a quote function for values, but for identifiers.
So you'd say something like:
table_name = conn.quote_ident(table_name)
query_result = conn.exec("SELECT * FROM #{table_name}")
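This rule is not specific to the pg gem: in essentially every driver, placeholders bind values while identifiers must be spliced in as (properly quoted) text. A runnable illustration using Python's sqlite3 as a stand-in (the manual quoting mimics what quote_ident does: wrap in double quotes and double any embedded ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
# A placeholder in *value* position works as expected.
conn.execute("INSERT INTO items VALUES (?)", ("widget",))

# Identifiers must be quoted and interpolated by hand.
table = "items"
quoted = '"' + table.replace('"', '""') + '"'   # analogous to quote_ident
rows = conn.execute(f"SELECT * FROM {quoted}").fetchall()

# A placeholder in *identifier* position is a syntax error here too.
try:
    conn.execute("SELECT * FROM ?", (table,))
    failed = False
except sqlite3.OperationalError:
    failed = True
```

The failing branch reproduces the same class of error as the PG::SyntaxError above.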