In SQLite, do prepared statements really improve performance?

I have heard that prepared statements with SQLite should improve performance. I wrote some code to test that, and did not see any performance difference when using them. So I thought maybe my code was incorrect. Please let me know if you see any errors in how I'm doing this...
[self testPrep:NO dbConn:dbConn];
[self testPrep:YES dbConn:dbConn];
reuse=0
recs=2000
2009-11-09 10:39:18 -0800
processing...
2009-11-09 10:39:32 -0800
reuse=1
recs=2000
2009-11-09 10:39:32 -0800
processing...
2009-11-09 10:39:46 -0800
-(void)testPrep:(BOOL)reuse dbConn:(sqlite3*)dbConn{
    int recs = 2000;
    NSString *sql;
    sqlite3_stmt *stmt;

    sql = @"DROP TABLE test";
    sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);

    sql = @"CREATE TABLE test (id INT,field1 INT, field2 INT,field3 INT,field4 INT,field5 INT,field6 INT,field7 INT,field8 INT,field9 INT,field10 INT)";
    sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);

    for(int i=0;i<recs;i++){
        sql = @"INSERT INTO test (id,field1,field2,field3,field4,field5,field6,field7,field8,field9,field10) VALUES (%d,1,2,3,4,5,6,7,8,9,10)";
        sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
    }

    sql = @"BEGIN";
    sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);

    if (reuse){
        sql = @"select * from test where field1=?1 and field2=?2 and field3=?3 and field4=?4 and field5=?5 and field6=?6 and field6=?6 and field8=?8 and field9=?9 and field10=?10 and id=?11";
        sqlite3_prepare_v2(dbConn, [sql UTF8String], -1, &stmt, NULL);
    }

    NSLog(@"reuse=%d",reuse);
    NSLog(@"recs=%d",recs);

    NSDate *before = [NSDate date];
    NSLog([before description]);
    NSLog(@"processing...");

    for(int i=0;i<recs;i++){
        if (!reuse){
            sql = @"select * from test where field1=?1 and field2=?2 and field3=?3 and field4=?4 and field5=?5 and field6=?6 and field6=?6 and field8=?8 and field9=?9 and field10=?10 and id=?11";
            sqlite3_prepare_v2(dbConn, [sql UTF8String], -1, &stmt, NULL);
        }
        sqlite3_bind_int(stmt, 1, 1);
        sqlite3_bind_int(stmt, 2, 2);
        sqlite3_bind_int(stmt, 3, 3);
        sqlite3_bind_int(stmt, 4, 4);
        sqlite3_bind_int(stmt, 5, 5);
        sqlite3_bind_int(stmt, 6, 6);
        sqlite3_bind_int(stmt, 7, 7);
        sqlite3_bind_int(stmt, 8, 8);
        sqlite3_bind_int(stmt, 9, 9);
        sqlite3_bind_int(stmt, 10, 10);
        sqlite3_bind_int(stmt, 11, i);
        while(sqlite3_step(stmt) == SQLITE_ROW) {
        }
        sqlite3_reset(stmt);
    }

    sql = @"BEGIN";
    sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);

    NSDate *after = [NSDate date];
    NSLog([after description]);
}

Prepared statements improve performance by caching the execution plan for a query after the query optimizer has found the best plan.
If the query you're using doesn't have a complicated plan (such as simple selects/inserts with no joins), then prepared statements won't give you a big improvement since the optimizer will quickly find the best plan.
However, if you ran the same test with a query that had a few joins and used some indexes, you would see the performance difference, since the optimizer wouldn't be run every time the query is executed.
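As a rough illustration of where that cost lives, here is a minimal C sketch (the tables t1/t2 and the helper name are hypothetical, not from the question) that runs the same join query either re-preparing the statement on every iteration or preparing it once and resetting it; the only difference between the two paths is how often sqlite3_prepare_v2(), where parsing and planning happen, gets called:

#include <sqlite3.h>
#include <time.h>

/* Returns elapsed seconds; assumes db is open and t1/t2 exist and are populated. */
static double time_join(sqlite3 *db, int reuse, int iterations) {
    const char *sql =
        "SELECT t1.id FROM t1 JOIN t2 ON t2.t1_id = t1.id WHERE t1.id = ?1";
    sqlite3_stmt *stmt = NULL;
    clock_t start = clock();

    if (reuse)
        sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);     /* parse + plan once */

    for (int i = 0; i < iterations; i++) {
        if (!reuse)
            sqlite3_prepare_v2(db, sql, -1, &stmt, NULL); /* parse + plan every time */

        sqlite3_bind_int(stmt, 1, i);
        while (sqlite3_step(stmt) == SQLITE_ROW) { /* drain result rows */ }

        if (reuse)
            sqlite3_reset(stmt);      /* keep the compiled statement for the next pass */
        else
            sqlite3_finalize(stmt);   /* discard it, forcing a recompile next pass */
    }
    if (reuse)
        sqlite3_finalize(stmt);

    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

The heavier the planning work (more joins, more candidate indexes), the larger the gap between time_join(db, 1, n) and time_join(db, 0, n) should be.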

Yes - it makes a huge difference whether you're using sqlite3_exec() vs. sqlite3_prepare_v2() / sqlite3_bind_xxx() / sqlite3_step() for bulk inserts.
sqlite3_exec() is only a convenience method. Internally it just runs the same sequence of sqlite3_prepare_v2() and sqlite3_step(). Your example code is calling sqlite3_exec() over and over on a literal string:
for(int i=0;i<recs;i++){
    sql = @"INSERT INTO test (id,field1,field2,field3,field4,field5,field6,field7,field8,field9,field10) VALUES (%d,1,2,3,4,5,6,7,8,9,10)";
    sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
}
I don't know the inner workings of the SQLite parser, but perhaps the parser is smart enough to recognize that you are passing the same literal string and skips re-parsing/re-compiling on every iteration.
If you try the same experiment with values that actually change, you'll see a much bigger difference in performance.
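To make that concrete, here is a minimal C sketch of the same bulk insert done both ways, with a value that really changes per row. The helper names are mine and it assumes an open database handle plus the test table from the question; both versions run inside one transaction so that the remaining difference is dominated by the parse/plan work:

#include <sqlite3.h>
#include <stdio.h>

/* Per-row sqlite3_exec(): every iteration builds a different SQL string,
 * so SQLite has to re-parse and re-plan the INSERT each time. */
static void insert_with_exec(sqlite3 *db, int recs) {
    char sql[128];
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    for (int i = 0; i < recs; i++) {
        snprintf(sql, sizeof sql,
                 "INSERT INTO test (id, field1) VALUES (%d, 1)", i);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
    }
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}

/* Prepared statement: parsed and planned once, then only rebound per row. */
static void insert_with_prepare(sqlite3 *db, int recs) {
    sqlite3_stmt *stmt = NULL;
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    sqlite3_prepare_v2(db,
        "INSERT INTO test (id, field1) VALUES (?1, ?2)", -1, &stmt, NULL);
    for (int i = 0; i < recs; i++) {
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_bind_int(stmt, 2, 1);
        sqlite3_step(stmt);     /* returns SQLITE_DONE for an INSERT */
        sqlite3_reset(stmt);    /* make the compiled statement runnable again */
    }
    sqlite3_finalize(stmt);
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}

Timing the two on the same table should show the prepared version pulling ahead as recs grows, because the exec path repeats the parse/plan step for every row.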

Using prepare + step instead of exec, huge performance improvements are possible.
In some cases the performance gain in execution time is more than 100%.

Related

How to calculate the number of times an event occurs in any month if I have the weekly schedule?

I am currently writing a Flutter app which includes displaying the weekly schedule of a class. I also have to calculate the attendance of each student. To do that, I need the number of times each subject is taught in any given month, and I am stumped as I can't think of a way to do that.
I have the weekly schedule of the class and I stored it in Firestore. It is formatted as below,
{Day: 1 , Major: 'EcE', Subject: 'Communication', Year: 5, Period: 1, ...}
[Screenshot of a timetable entry]
where Day refers to Monday, Tuesday, ...
It appears in the app like this: [screenshot]
My problem is that, to track and calculate the attendance of students, I have to know how many times each subject is taught in a month, but I don't think multiplying the weekly count by 4 is viable since the number of days in a month varies. I don't know how to work with a calendar programmatically, so I am currently out of ideas. Any hints and suggestions are appreciated. The suggestions can be in either Dart or Node.js, since I can implement them either client-side or in cloud functions. Thanks a lot.
P.S. I haven't provided any code for now, but please ask me for clarification and I will provide the related code. I just didn't want to bloat the post.
If I understood your question correctly, all you need is to count the occurrences of each weekday in a given month.
Here you go, in both JS and Dart:
JS:
var dtNow = new Date(); // replace this with the month/date of your choosing
var dtFirst = new Date(dtNow.getFullYear(), dtNow.getMonth(), 1); // first date in the month
var dtLast = new Date(dtNow.getFullYear(), dtNow.getMonth() + 1, 0); // last date in the month

// we need to keep track of weekday occurrences in the month; a map looks suitable for this
var dayOccurrence = {
    "Monday": 0,
    "Tuesday": 0,
    "Wednesday": 0,
    "Thursday": 0,
    "Friday": 0,
    "Saturday": 0,
    "Sunday": 0
};

var dtLoop = new Date(dtFirst); // temporary date, used for looping
while (dtLoop <= dtLast) {
    // getDay() returns the day of the week, from 0 (Sunday) to 6 (Saturday)
    switch (dtLoop.getDay()) {
        case 0: dayOccurrence.Sunday++; break;
        case 1: dayOccurrence.Monday++; break;
        case 2: dayOccurrence.Tuesday++; break;
        case 3: dayOccurrence.Wednesday++; break;
        case 4: dayOccurrence.Thursday++; break;
        case 5: dayOccurrence.Friday++; break;
        case 6: dayOccurrence.Saturday++; break;
        default: console.log("this should not happen");
    }
    dtLoop.setDate(dtLoop.getDate() + 1);
}

// log the results
var keys = Object.keys(dayOccurrence);
keys.forEach(key => {
    console.log(key + ' : ' + dayOccurrence[key]);
});
And here is the same thing in Dart:
void main() {
  DateTime dtNow = new DateTime.now(); // replace this with the month/date of your choosing
  DateTime dtFirst = new DateTime(dtNow.year, dtNow.month, 1); // first date in the month
  DateTime dtLast = new DateTime(dtNow.year, dtNow.month + 1, 0); // last date in the month

  // we need to keep track of weekday occurrences in the month; a map looks suitable for this
  Map<String, int> dayOccurrence = {
    'Monday': 0,
    'Tuesday': 0,
    'Wednesday': 0,
    'Thursday': 0,
    'Friday': 0,
    'Saturday': 0,
    'Sunday': 0
  };

  DateTime dtLoop = DateTime(dtFirst.year, dtFirst.month, dtFirst.day); // temporary date, used for looping
  while (DateTime(dtLoop.year, dtLoop.month, dtLoop.day) != dtLast.add(new Duration(days: 1))) {
    // weekday is the day of the week, from 1 (Monday) to 7 (Sunday)
    switch (dtLoop.weekday) {
      case 1: dayOccurrence['Monday']++; break;
      case 2: dayOccurrence['Tuesday']++; break;
      case 3: dayOccurrence['Wednesday']++; break;
      case 4: dayOccurrence['Thursday']++; break;
      case 5: dayOccurrence['Friday']++; break;
      case 6: dayOccurrence['Saturday']++; break;
      case 7: dayOccurrence['Sunday']++; break;
      default: print("this should not happen");
    }
    dtLoop = dtLoop.add(new Duration(days: 1));
  }

  // log the results
  dayOccurrence.forEach((k, v) => print('$k : $v'));
}

Oracle/PLSQL performance tips

I'm having a problem with my query. When I add ROUND and DECODE, the query takes too long, but when I remove them it returns values directly. When I search for SQL tuning advice, I don't find any. How can I fix these two expressions?
SELECT I.*,
Q.INVOICE_DATE,
Q.SERIES_ID IFS_SERIES_ID,
Q.INVOICE_NO,
Q.IDENTITY,
Q.IDENTITY_NAME,
Q.ASSOCIATION_NO,
Q.NET_CURR_AMOUNT,
Q.VAT_CURR_AMOUNT,
Q.TOTAL_CURR_AMOUNT,
Q.CURRENCY_CODE,
ROUND(Q.CURR_RATE,:"SYS_B_0") CURR_RATE,
Q.PROFILE_ID DEFAULT_PROFILE_ID,
X.XML_CONTENT,
DECODE(NVL(DBMS_LOB.INSTR(X.XML_CONTENT,:"SYS_B_1",:"SYS_B_2",:"SYS_B_3"), :"SYS_B_4"), :"SYS_B_5",:"SYS_B_6", :"SYS_B_7") SINGED_UBL,
X.SCHEMATRON_RESULT,
q.SHIPMENT_ID,
APP.TBN_UTILITY_API.GET_NUMBER(I.COMPANY, I.INVOICE_ID) DESPATCH_REFERENCE,
X.OBJID XML_OBJID,
X.OBJVERSION XML_OBJVERSION,
APP.API_MODULE.GET_DESC(I.MODULE_ID) MODULE_NAME
FROM APP.TREF_INVOICE I,
APP.TREF_INVOICE_INFO_QRY Q,
APP.TREF_XML_ARCHIVE X
WHERE Q.COMPANY = I.COMPANY
AND Q.INVOICE_ID = I.INVOICE_ID
AND X.XML_ARCHIVE_ID(+) = I.XML_ARCHIVE_ID
AND I.COMPANY = :COMPANY
AND I.INVOICE_ID = :INVOICE_ID
You should trace one or more executions of each statement to see exactly what it does. When you profile the trace data you will know what to do.
SQL> select value from v$diag_info where name = 'Default Trace File'/* name of the trace file */;
SQL> exec dbms_monitor.session_trace_enable(null, null, true, false, 'all_executions')
SQL> your query executed under normal circumstances
SQL> exec dbms_monitor.session_trace_disable(null, null)

Oracle calculations within Crystal report

I have this code in a Crystal report. It uses 2 fields, st.pass_total and st.fail_total to calculate the pass ratio. I'd like to replace this Crystal code with PL/SQL code to return just the pass_ratio:
if isnull({st.PASS_TOTAL})
and isnull({st.FAIL_TOTAL}) then pass_ratio:=""
else if (not isnull({st.PASS_TOTAL}))
and isnull({st.FAIL_TOTAL}) then pass_ratio:="100%"
else if (isnull({st.PASS_TOTAL})
or {st.PASS_TOTAL}=0)
and (not isnull({st.FAIL_TOTAL})) then pass_ratio:=""
else pass_ratio:=totext({st.PASS_TOTAL}/({st.PASS_TOTAL}+{st.FAIL_TOTAL})*100)+"%";
This is what I have in PL/SQL, is it correct?
decode((is_null(st.pass_total) AND is_null(st.fail_total)), "",
(not is_null(st.pass_total) AND not is_null(st.fail_total)), "100%",
((is_null(st.pass_total) OR st.pass_total=0) && not is_null(st.fail_total)), "",
(st.pass_total/(st.pass_total+st.fail_total)*100)||"%"))
I also have one that "calculates" the Cutoff value:
if {e.eve_cutoff}=0
or isnull({e.eve_cutoff}) then event_cutoff:="140"
else if {e.eve_cutoff}>0 then event_cutoff:=totext({e.eve_cutoff},0);
This is what I have in PL/SQL, is it correct?
decode(e.eve_cutoff, 0, "140",
e.eve_cutoff, NULL, "140",
eve_cutoff)
Your decode statements have several issues. The syntax can be greatly simplified by using the nvl() function:
select
  case
    when nvl(st.pass_total, 0) = 0 then ''
    else 100 * st.pass_total / (st.pass_total + nvl(st.fail_total, 0)) || '%'
  end ratio
from st
and:
select decode(nvl(eve_cutoff, 0), 0, '140', eve_cutoff) cutoff from e
[SQLFiddle1] [SQLFiddle2]
For the first select you may also want to round the values with the round() function, like I did in the SQLFiddle
(if you do not, you may get an overflow error in the report).

Unable to do simple insert with Oracle 11g

I'm trying to copy data from a table on a Sybase server into the same table on an Oracle server (Oracle 11g).
I thought it would be easier to do it with my ColdFusion web code because two different database servers are involved.
Unfortunately, I got the following error from Oracle. I don't think my syntax is wrong, because all the commas are there; there is no missing comma as the error claims. I think it may be due to the column that uses the DATE datatype.
Here is the error:
Error Executing Database Query.
[Macromedia][Oracle JDBC Driver][Oracle]ORA-00917: missing comma
The error occurred in C:\Inetpub\wwwroot\test.cfm: line 65
63 : #um_gs_dnrcnt_cfm_pp#,
64 : #um_gs_amt_cfm_pg_pp#,
65 : #um_gs_dnrcnt_cfm_pg_pp#)
66 : </cfquery>
67 : </cfoutput>
--------------------------------------------------------------------------------
SQLSTATE HY000
SQL INSERT INTO um_gift_sum (um_gs_fyr, um_gs_inst, um_gs_dept,
um_gs_dt_of_record, um_gs_fund_type,
um_gs_dnr_type,
um_gs_amt_cash, um_gs_dnrcnt_cash, um_gs_amt_pl,
um_gs_dnrcnt_pl, um_gs_amt_pp, um_gs_dnrcnt_pp,
um_gs_amt_pp_prior, um_gs_dnrcnt_pp_prior,
um_gs_amt_gik, um_gs_dnrcnt_gik,
um_gs_amt_pg_cash,
um_gs_dnrcnt_pg_cash, um_gs_amt_pg_pl,
um_gs_dnrcnt_pg_pl, um_gs_amt_pg_pp,
um_gs_dnrcnt_pg_pp, um_gs_amt_gft_mtch,
um_gs_dnrcnt_gft_mtch, um_gs_amt_cfm_pp,
um_gs_dnrcnt_cfm_pp, um_gs_amt_cfm_pg_pp,
um_gs_dnrcnt_cfm_pg_pp)
VALUES('1995', 'AB', 'MAA', 1995-01-31 00:00:00.0, '1', 'FR', 100.0000, 0,
0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0,
0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0)
Here is my insert statement:
<cfquery name="x" datasource="SybaseDB">
SELECT TOP 10 * FROM um_sum
</cfquery>
<cfoutput query="x">
<cfquery name="Y" datasource="OracleDB">
INSERT INTO um_sum (um_gs_fyr,
m_gs_inst,
um_gs_dept,
um_gs_dt_of_record,
um_gs_fund_type,
um_gs_dnr_type,
etc,
um_gs_dnrcnt_cfm_pp,
um_gs_amt_cfm_pg_pp,
um_gs_dnrcnt_cfm_pg_pp)
VALUES('#um_gs_fyr#',
'#um_gs_inst#',
'#um_gs_dept#',
#um_gs_dt_of_record#, <---- this is the DATE datatype column; I suspect this may be the problem?
'#um_gs_fund_type#',
'#um_gs_dnr_type#',
#um_gs_amt_cash#,
#um_gs_dnrcnt_cash#,
#um_gs_amt_pl#,
#um_gs_dnrcnt_pl#,
#um_gs_amt_pp#,
#um_gs_dnrcnt_pp#,
#um_gs_amt_pp_prior#,
#um_gs_dnrcnt_pp_prior#,
#um_gs_amt_gik#,
#um_gs_dnrcnt_gik#,
#um_gs_amt_pg_cash#,
#um_gs_dnrcnt_pg_cash#,
#um_gs_amt_pg_pl#,
#um_gs_dnrcnt_pg_pl#,
#um_gs_amt_pg_pp#,
#um_gs_dnrcnt_pg_pp#,
#um_gs_amt_gft_mtch#,
#um_gs_dnrcnt_gft_mtch#,
#um_gs_amt_cfm_pp#,
#um_gs_dnrcnt_cfm_pp#,
#um_gs_amt_cfm_pg_pp#,
#um_gs_dnrcnt_cfm_pg_pp#)
</cfquery>
</cfoutput>
This part is not surrounded with single quotes properly:
AA', 1995-01-31 00:00:00.0, '1'
Edit (based on comment): if the single quotes don't fix it, then you could explicitly declare the date format with a to_date() function.

Setting PDQ inside an SPL - local scope?

In order to fine tune allocation of PDQ resources depending on the time of day that batch jobs run, we have a utility that sets PDQPRIORITY based on some day of week / hour of day rules, eg:
PDQPRIORITY=$(throttle); export PDQPRIORITY
However, this is fixed at the time the script starts, so long running jobs never get throttled up or down as they progress. To rectify this, we've tried the following:
CREATE PROCEDURE informix.set_pdq() RETURNING VARCHAR(50);
    DEFINE pdq, dow SMALLINT;
    DEFINE hr SMALLINT;

    LET dow = WEEKDAY(CURRENT);
    LET hr = TO_CHAR(CURRENT, '%H');

    IF (dow == 0 OR dow == 6 OR hr < 8 OR hr > 14) THEN
        LET pdq = 100;
        SET PDQPRIORITY 100; -- SET PDQ does not accept a variable name arg.
    ELIF (hr >= 8 AND hr <= 10) THEN
        LET pdq = 40;
        SET PDQPRIORITY 40;
    ELIF (hr >= 11 AND hr <= 12) THEN
        LET pdq = 60;
        SET PDQPRIORITY 60;
    ELIF (hr >= 13 AND hr <= 14) THEN
        LET pdq = 80;
        SET PDQPRIORITY 80;
    END IF;

    RETURN "PDQPriority set to " || pdq;
END PROCEDURE;
At various intervals throughout the SQL, we've added:
EXECUTE PROCEDURE set_pdq();
However, although it doesn't fail, the scope of the SET PDQ seems to be local to the SPL. onstat -g mgm doesn't report any change to the original resources allocated. So adding these set_pdq() calls doesn't seem to have had any effect - the resources allocated at the program start remain fixed.
The code is embedded SQL in shell, ie:
dbaccess -e $DBNAME << EOSQL
SELECT .. INTO TEMP ..;
EXECUTE PROCEDURE set_pdq();
SELECT .. INTO TEMP ..;
--etc
EOSQL
So backticks or $( ) interpolation occurs at the start of the script, when the here document gets passed to dbaccess. (That eliminated the obvious: SET PDQPRIORITY $(throttle);)
Wow, that got wordy quickly. Can anyone suggest any way of achieving this that doesn't involve rewriting these batch jobs completely? Breaking the SQL down into smaller pieces is not an option because of the heavy reliance on temp tables.
As you will have deduced from the inordinate delay between the time when you asked the question and the first attempted answer, this is not trivial.
Part of the problem is, I think, that PDQPRIORITY is captured when a stored procedure is created or its statistics are updated. Indeed, that may be all of the problem. Now, temporary tables cause another set of problems with stored procedures - stored procedures often need reoptimizing when temporary tables are involved (unless, possibly, the SP itself creates the temporary table).
