If I run execute("SELECT * FROM users WHERE id = 1")[0][0], I'll get back the first field from the first row of that resultset.
If I run prepare("SELECT * FROM users WHERE id = ?").execute(1)[0][0], which to me seems like it should return an identical result, I get a NoMethodError for [].
I can't for the life of me figure out why and the documentation seems really sparse. What's going on?
Your two execute methods are not identical and are returning different things.
In the first instance, you are calling execute on Database, which returns a simple array. Then you correctly index into it with [].
However, prepare returns a Statement object which you then call execute on. This instead returns a ResultSet, which doesn't have the same semantics as an array.
You might be looking for execute!.
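A minimal sketch of the difference (assuming the sqlite3 gem and an existing users table; the database filename is made up):

require "sqlite3"

db = SQLite3::Database.new("app.db")

# Database#execute returns a plain Array of rows, so [] indexing works:
first_field = db.execute("SELECT * FROM users WHERE id = 1")[0][0]

stmt = db.prepare("SELECT * FROM users WHERE id = ?")

# Statement#execute returns a SQLite3::ResultSet; it is enumerable,
# but it does not respond to [], hence the NoMethodError:
result_set = stmt.execute(1)
row = result_set.next            # fetch rows one at a time instead

# Statement#execute! materializes the rows into an Array, like Database#execute:
first_field_again = stmt.execute!(1)[0][0]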
I have a stored procedure in the database that returns a table with the columns deviceId, operatorId, serialNumber, status, tailId, lastCheckIn, lastDownload, and filename. What I really want to do is grab from the result all the rows whose deviceId is contained in a list defined on the application side. I came up with two ways to solve this.
The first one, which is the most straightforward to me, is using the @Query annotation and running another query on the resultant table of that stored procedure. The thing is, the examples I found online lean towards passing parameters into the stored procedure rather than running a query over its results. I just want to know whether my first idea is valid and, if so, how to execute it. The second one is to pass the list in as a parameter, but I'm not sure exactly how to manipulate it in the stored procedure, or whether it can interpret such structures at all.
The list that I want to compare against the deviceId column is of type List<UUID>. To give more context on my environment, I'm working with Azure SQL databases.
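To make the first idea concrete, here is roughly the kind of filtering I mean, done on the application side after calling the procedure (all names here are hypothetical, and I'm using JdbcTemplate just for illustration):

import java.util.List;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;
import org.springframework.jdbc.core.JdbcTemplate;

public class DeviceLookup {
    private final JdbcTemplate jdbc;

    public DeviceLookup(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Call the stored procedure (name is made up) and keep only the rows
    // whose deviceId appears in the given list.
    public List<UUID> matchingDeviceIds(Set<UUID> wantedIds) {
        List<UUID> allIds = jdbc.query(
                "EXEC dbo.GetDeviceStatus",
                (rs, rowNum) -> UUID.fromString(rs.getString("deviceId")));
        return allIds.stream()
                     .filter(wantedIds::contains)
                     .collect(Collectors.toList());
    }
}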
Thanks.
Using Spring's JdbcTemplate, I want to execute a PostgreSQL function for certain rows. Currently, I do:
JdbcTemplate template = ...;
String q = "select my_function(table.identifier) from table where ...";
template.query(q, new Object[]{...}, result -> null);
Here result -> null is a ResultSetExtractor which just ignores the result set and returns null. I am doing this because I don't really care about the result of my_function; I just want to execute it for the selected rows.
Is there a cleaner way to do this? While it works perfectly, it's kind of hackish, in my opinion at least.
If you want to avoid the result being sent to the client at all, use PL/pgSQL's PERFORM in a DO statement:
DO $$BEGIN PERFORM myfunction(...) FROM ...; END;$$;
You simply put PERFORM in the place where you would normally write SELECT.
I will admit that this is also kind of a hack, but at least it has a clear performance advantage.
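For example, with the same JdbcTemplate, the call could look roughly like this (table and column names are made up; note that a DO block cannot take bind parameters, so anything dynamic has to be inlined or quoted safely):

String doBlock =
    "DO $$BEGIN " +
    "PERFORM my_function(t.identifier) FROM my_table t WHERE t.some_flag; " +
    "END;$$";
// JdbcTemplate#execute issues the statement without expecting a result set.
template.execute(doBlock);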
We are trying to use a simple user-defined function (UDF) in the where clause of a query in Azure Cosmos DB, but it's not working correctly. The end goal is to query all results where the timestamp _ts is greater than or equal to yesterday, but our first step is to get a UDF working.
The abbreviated data looks like this:
[
  {
    "_ts": 1500000007
  },
  {
    "_ts": 1500000005
  }
]
Using the Azure Portal's Cosmos DB Data Explorer, a simple query like the following will correctly return one result:
SELECT * FROM c WHERE c._ts >= 1500000006
A simple query using our UDF incorrectly returns zero results. This query is below:
SELECT * FROM c WHERE c._ts >= udf.getHardcodedTime()
The definition of the function is below:
function getHardcodedTime(){
return 1500000006;
}
And here is a screenshot of the UDF for verification:
As you can see, the only difference between the two queries is that one query uses a hard-coded value while the other query uses a UDF to get a hard-coded value. The problem is that the query using the UDF returns zero results instead of returning one result.
Are we using the UDF correctly?
Update 1
When the UDF is updated to return the number 1, then we get a different count of results each time.
New function:
function getHardcodedTime(){
return 1;
}
New query: SELECT count(1) FROM c WHERE c._ts >= udf.getHardcodedTime()
Results vary with 7240, 7236, 7233, 7264, etc. (This set is the actual order of responses from Cosmos DB.)
By your description, the most likely cause is that the UDF version is slow and returns a partial result with a continuation token instead of the final result.
The concept of continuation is explained here as:
If a query's results cannot fit within a single page of results, then the REST API returns a continuation token through the x-ms-continuation-token response header. Clients can paginate results by including the header in subsequent requests.
The observed count variation could be caused by the query stopping for the next page at slightly different points. Check the x-ms-continuation header to know if that's the case.
Why is UDF slow?
A UDF returning a constant is not the same as a constant in the query. Correct me if you know better, but from the Cosmos DB side, it does not know that this specific UDF is deterministic; it assumes the UDF could evaluate to any value and hence has to execute it for each document to check for a match. That means the query cannot use an index and has to do a slow full scan.
What can you do?
Option 1: Follow continuations
If you don't care about the performance, you could keep using the UDF and just follow the continuation until all rows are processed.
Some DocumentDB clients can do this for you (e.g., the .NET API), so this may be the fastest fix if you are in a hurry. But beware: this does not scale (it keeps getting slower and costing more and more RUs), and you should not consider it a long-term solution.
Option 2: Drop UDF, use parameters
You could pass the hardcodedTime as a query parameter instead. This way the query execution engine would know the value, could use a matching index, and would give you correct results without the hassle of continuations.
I don't know which API you use, but related reading just in case: Parameterized SQL queries.
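For illustration, the parameterized version of the query from the question would look something like this, with the value for @cutoff supplied from the client when the query is issued:

SELECT * FROM c WHERE c._ts >= @cutoff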
Option 3: Wrap in stored proc
If you really, really must control the hardcodedTime in a UDF, then you could implement a server-side stored procedure which would:
query the hardcodedTime from the UDF
query the documents, passing the hardcodedTime as a parameter
return the results as the SP output.
It would use the UDF AND the index, but it brings a hefty overhead in the amount of code required. Do the math on whether keeping the UDF is worth the extra effort in development and maintenance; a rough sketch follows below.
Related documentation about SPs: Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs
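A rough, untested sketch of such a stored procedure, using the server-side JavaScript API (the UDF's logic is inlined here as a local function for brevity):

function queryWithHardcodedTime() {
    var context = getContext();
    var collection = context.getCollection();

    // Step 1: get the cutoff (inlined stand-in for the UDF's logic).
    var cutoff = getHardcodedTime();

    // Step 2: run a parameterized query, which can use the index on _ts.
    collection.queryDocuments(
        collection.getSelfLink(),
        {
            query: "SELECT * FROM c WHERE c._ts >= @cutoff",
            parameters: [{ name: "@cutoff", value: cutoff }]
        },
        function (err, documents) {
            if (err) throw err;
            // Step 3: return the matching documents as the SP output.
            context.getResponse().setBody(documents);
        });

    function getHardcodedTime() {
        return 1500000006;
    }
}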
I'm wondering if anyone has any clarification on the difference between the following statements using sqlite3 gem with ruby 1.9.x:
#db.execute("INSERT INTO table(a,b,c) VALUES (?,?,?)",
some_int, other_int, some_string)
and
#db.execute("INSERT INTO table(a,b,c) VALUES (#{some_int},"+
+"#{some_int}, #{some_string})")
My problem is: When I use the first method for insertion, I can't query for the "c" column using the following statement:
SELECT * FROM table WHERE c='some magic value'
I can use this:
"SELECT * FROM table WHERE c=?", "some magic value"
but what I really want to use is
"SELECT * FROM table WHERE c IN ('#{options.join("','")}')"
And this doesn't work with the first type of insert.
Does anyone know what the difference is at the database level that is preventing the IN from working properly?
I figured this out quite a while ago, but forgot to come back and point it out, in case someone finds this question at another time.
The difference turns out to be blobs. Apparently, when you use the first form above (the substitution method using (?,?,?)), SQLite3 stores the data as blobs. However, if you construct an ordinary SQL statement, the value is inserted as a regular string, and the two aren't equivalent.
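One way to verify this is to ask SQLite itself what it stored, using the built-in typeof() function:

# Per the explanation above, rows inserted with bound parameters should
# show 'blob' here, while interpolated ones show 'text':
@db.execute("SELECT c, typeof(c) FROM table").each do |value, type|
  puts "#{value.inspect} is stored as #{type}"
end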
A raw query like this works for reading data, but not for inserts. Also, if you use SQLite in a mobile app, the raw query may not work there, even though the same raw query works when run in SQLite Browser.
I am looking for a way to have a query that returns a tuple, first sorted by one column and then grouped by another (in that order). Simply chaining .order_by().group_by() didn't appear to work. Now I tried the following, which broke the return value (I just got the ORM object, not the initial tuple), but read the details below:
Base scenario:
There is a query which queries for test ORM objects, linked from the test3 table through foreign keys.
This query also returns a column named linked that contains either true or false. It is originally ungrouped.
my_query = session.query(test_orm_object)
# ... lots of stuff like joining various things ...
# add_column(<condition that either puts 'true' or 'false' into the column>)
So the original return value is a tuple (the orm object, and additionally the true/false column).
Now this query should be grouped by the test ORM object (so by the test.id column), but before that it should be sorted by the linked column, so that entries with true are preferred during the grouping.
Assuming the current unsorted, ungrouped query is stored in my_query, my approach to achieve this was this:
# Get a sorted subquery
tmpquery = my_query.order_by(desc('linked')).subquery()
# Read the column out of the sub query
my_query = session.query(tmpquery).add_columns(getattr(tmpquery.c,'linked').label('linked'))
my_query = my_query.group_by(getattr(tmpquery.c, 'id')) # Group objects
The resulting SQL query when running this is shown below. It looks fine to me, by the way: inside the subquery anon_1 everything is properly sorted; then its id as well as the linked column (amongst a few other columns SQLAlchemy apparently wants) are fetched, and the result is properly grouped:
SELECT anon_1.id AS anon_1_id, anon_1.name AS anon_1_name, anon_1.fk_test3 AS anon_1_fk_test3,
       anon_1.linked AS anon_1_linked, anon_1.linked AS linked
FROM (
    SELECT test.id AS id, test.name AS name, test.fk_test3 AS fk_test3,
           CASE WHEN (anon_2.id = 87799534) THEN 'true' ELSE 'false' END AS linked
    FROM test
    LEFT OUTER JOIN (
        SELECT test3.id AS id, test3.fk_testvalue AS fk_testvalue
        FROM test3
    ) AS anon_2 ON anon_2.fk_testvalue = test.id
    ORDER BY linked DESC
) AS anon_1
GROUP BY anon_1.id
I tested it in phpMyAdmin, where it gave me, as expected, the id column (for the ORM object id), the additional columns SQLAlchemy seems to want there, and the linked column. So far, so good.
Now my expected return values would be, as they were from the original unsorted, ungrouped query:
A tuple: 'test' orm object (anon_1.id column), 'true'/'false' value (linked column)
The actual return value of the new sorted/grouped query, however, is (even though the original query DOES indeed return a tuple before the code above is applied):
'test' orm object only
Why is that so and how can I fix it?
Excuse me if that approach turns out to be somewhat flawed.
What I actually want is to have the original query simply sorted, then grouped, without touching the return values. As you can see above, my attempt was to 'restore' the additional return value, but that didn't work. What should I do instead, if this approach is fundamentally wrong?
Explanation for the subquery use:
The point of the whole subquery is to force SQLAlchemy to execute this query separately as a first step.
I want to order the results first and then group the ordered results. That seems hard to do properly in one step (when I tried it manually in SQL, I had trouble combining ORDER BY and GROUP BY in a single step the way I wanted).
Therefore I don't simply order and group; I order first, then wrap it as a subquery to enforce that the ordering step is actually completed first, and then I group it.
Judging from manual phpMyAdmin tests with the generated SQL, this seems to work fine. The actual problem is that the original query (which is now wrapped as the subquery you were confused about) had an added column, and by wrapping it up as a subquery, that column is gone from the overall result. My attempt to re-add it to the outer wrapping failed.
It would be much better if you provided examples. I don't know if these columns are in separate tables or not. Just going by your first paragraph, I would do something like this:
a = session.query(Table1, Table2.column).\
join(Table2, Table1.foreign_key == Table2.id).\
filter(...).group_by(Table2.id).order_by(Table1.property.desc()).all()
I don't know exactly what you're trying to do since I need to look at your actual model, but it should look something like this with maybe the tables/objs flipped around or more filters.
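If you do need the sorted subquery from your update, a rough, untested sketch of keeping the (object, linked) tuple shape would be to map the ORM entity onto the subquery with aliased() and select the extra column alongside it (names taken from your question):

from sqlalchemy import desc
from sqlalchemy.orm import aliased

# Wrap the sorted query, then map the entity class onto the subquery rows
# so the result is still a (test_orm_object, linked) tuple.
sorted_sq = my_query.order_by(desc('linked')).subquery()
test_alias = aliased(test_orm_object, sorted_sq)

my_query = (session.query(test_alias, sorted_sq.c.linked)
                   .group_by(sorted_sq.c.id))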