ColdFusion | Query timeout error questions - caching

Here is my code
<cfquery name="employeeData" datasource="xyz" cachedwithin="#CreateTimeSpan(0,0,60,0)#">
SELECT employee, salary
FROM employee
</cfquery>
<cfquery name="wellPaidEmployee" dbtype="query">
SELECT employee, salary
FROM employeeData WHERE salary > <cfqueryparam cfsqltype="cf_sql_integer" value="10000">
</cfquery>
Condition:
The first query, employeeData, times out due to some issue and throws a "query timed out" error.
Question:
On the next call, will the query employeeData run, or will the "query timed out" error be served from the cache since we cached it using cachedwithin?
What will happen with wellPaidEmployee on the first run and on the next run?

Only successful db requests are cached, so the employeeData query will run on the next pass.
The wellPaidEmployee query will run if employeeData does not error.

Too long for a comment.
Queries timing out and caches expiring are two different things.
<cfquery name="employeeData" datasource="xyz" cachedwithin="#CreateTimeSpan(0,0,60,0)#">
SELECT employee, salary
FROM employee
</cfquery>
Will run when it is first hit, and it will cache its result for 60 minutes. Any request for the same query within those 60 minutes is served from the cache instead of hitting the database; once the timespan has passed, the next request runs against the database again and refreshes the cache.
As for
<cfquery name="wellPaidEmployee" dbtype="query">
SELECT employee, salary
FROM employeeData WHERE salary > <cfqueryparam cfsqltype="cf_sql_integer" value="10000">
</cfquery>
It does not know, nor does it care, whether the underlying data came from a cache or not. It will just return the results.
If you are getting a "query timed out" error, that is a completely different problem. There is something wrong with how ColdFusion is connecting to the database, or there is a problem with the database itself.

Related

How to resolve temp table space issue in Oracle

I need help from DBAs here.
I have a query that fetches around 1800 records from the DB.
However, it is observed that Oracle's temp tablespace is getting filled up, which makes Oracle respond too slowly.
I have identified the query that is causing the issue, and it is something like this.
SELECT * FROM A a, B b WHERE a.id = b.fieldId AND b.col1 = :1 AND b.col2 = :2 ORDER BY TO_NUMBER(b.col3) ASC
This query returns around 1800 records, and DBA segments show that 44 GB out of the 50 GB is occupied.
I am not sure what the solution for this could be.
I am using Oracle 12.1
Please look into this and suggest if I have to rewrite the query.
Thanks in Advance.
It is hard to tell how resource-intensive the query is without at least checking the query plan.
It might not be your query at all that "ate" all the TEMP space. Here is how to get the top 20 sessions with the highest TEMP usage.
select round(u.blocks*8192/1024/1024,2) "TEMP usage, Mb",
s.sid, s.osuser, s.machine, s.module, s.action, s.status, s.event, s.LAST_CALL_ET, s.WAIT_TIME, s.sql_id, s.sql_child_number
from v$session s,
v$sort_usage u
where s.saddr = u.session_addr
order by u.blocks desc
fetch first 20 rows with ties
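If you do want to look at the plan for the suspect statement itself, here is a minimal sketch using the query from the question (assuming you can run it from SQL*Plus or SQL Developer in the owning schema; DBMS_XPLAN is available in 12.1):
explain plan for
select * from A a, B b
where a.id = b.fieldId and b.col1 = :1 and b.col2 = :2
order by to_number(b.col3) asc;

select * from table(dbms_xplan.display);
A large sort or hash join spilling to disk will usually show up in the plan and points at where the TEMP space is going.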

ColdFusion Oracle Datasource Hangs After "Too Many" Rows

Using ColdFusion 10 on Windows, I have a datasource connected to Oracle 11g. I can submit a query as follows:
<cfquery name="qry_Test" datasource="dsn_orcl" maxrows="100">
SELECT TRANSID FROM TBL_TRANS
</cfquery>
This will return my 100 rows of transaction IDs. But as I increase the number of columns in the query, the number of rows I can successfully return goes down.
<cfquery name="qry_Test" datasource="dsn_orcl" maxrows="50">
SELECT TRANSID, TRANSDATE FROM TBL_TRANS
</cfquery>
The maxrows=50 setting is arbitrary, but if I exceed a certain number, say 50, the page just hangs and hangs. So, as the query width increases, its depth decreases. Never seen this before.
Anybody ever experienced this?
Instead of using the tag attribute to limit the number of rows returned, you can try limiting it on the database side.
Try this code:
<cfquery name="qry_Test" datasource="dsn_orcl">
SELECT * FROM (SELECT TRANSID, TRANSDATE
FROM TBL_TRANS ORDER BY TRANSID) TB_TRANSACTION
WHERE rownum <= 50
ORDER BY rownum
</cfquery>
Please let me know if this helps.

h2 index corruption? embedded database loaded with runscript has "invisible" rows

Using h2 in embedded mode, I am restoring an in memory database from a script backup that was previously generated by h2 using the SCRIPT command.
I use this URL:
jdbc:h2:mem:main
I am doing it like this:
FileReader script = new FileReader("db.sql");
RunScript.execute(conn,script);
which, according to the doc, should be similar to this SQL:
RUNSCRIPT FROM 'db.sql'
And inside my app the two do perform the same. But if I run the load from the web console (started via h2.bat), I get a different result.
Following the load of this data in my app, there are rows that I know are loaded but are not accessible to me via a query. And these queries demonstrate it:
select count(*) from MY_TABLE yields 96576
select count(*) from MY_TABLE where ID <> 3238396 yields 96575
select count(*) from MY_TABLE where ID = 3238396 yields 0
Loading the web console and using the same RUNSCRIPT command and file to load yields a database where I can find the row with that ID.
My first inclination was that I was dealing with some sort of locking issue. I have tried the following (with no change in results):
manually issuing a conn.commit() after the RunScript.execute()
appending ;LOCK_MODE=3 and then ;LOCK_MODE=0 to my URL
Any pointers in the right direction on how I can identify what is going on? I ended up inserting:
Server.createWebServer("-trace","-webPort","9083").start()
So that I could run these queries through the web console to sanity check what was coming back through JDBC. The problem happens consistently in my app and consistently doesn't happen via the web console. So there must be something at work.
The table schema is not exotic. This is the schema column from
select * from INFORMATION_SCHEMA.TABLES where TABLE_NAME='MY_TABLE'
CREATE MEMORY TABLE PUBLIC.MY_TABLE(
ID INTEGER SELECTIVITY 100,
P_ID INTEGER SELECTIVITY 4,
TYPE VARCHAR(10) SELECTIVITY 1,
P_ORDER DECIMAL(8, 0) SELECTIVITY 11,
E_GROUP INTEGER SELECTIVITY 1,
P_USAGE VARCHAR(16) SELECTIVITY 1
)
Any push in the right direction would be really appreciated.
EDIT
So it seems that the database is corrupted in some way just after running the RunScript command to load it. As I was trying to debug to find out what is going on, I tried executing the following:
delete from MY_TABLE where ID <> 3238396
And I ended up with:
Row not found when trying to delete from index "PUBLIC.MY_TABLE_IX1: 95326", SQL statement:
delete from MY_TABLE where ID <> 3238396 [90112-178] 90112/90112 (Help)
I then tried dropping and recreating all my indexes from within the context, but it had no effect on the overall problem.
Help!
EDIT 2
More information: The problem occurs due to the creation of an index. (I believe I have found a bug in h2, and I am working on creating a minimal case that reproduces it.) The simple code below will reproduce the problem, if you have the right set of data.
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.h2.tools.RunScript;

public class Repro
{
    public static void main(String[] args)
    {
        try
        {
            final String DB_H2URL = "jdbc:h2:mem:main;LOCK_MODE=3";
            Class.forName("org.h2.Driver");
            Connection c = DriverManager.getConnection(DB_H2URL, "sa", "");
            // restore the in-memory database from the script backup
            FileReader script = new FileReader("db.sql");
            RunScript.execute(c, script);
            script.close();
            // look for the row that the script just inserted
            Statement st = c.createStatement();
            ResultSet rs = st.executeQuery("select count(*) from MY_TABLE where P_ID = 3238396");
            rs.next();
            if (rs.getLong(1) == 0)
                System.err.println("It happened");
            else
                System.err.println("It didn't happen");
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
I have reduced the db.sql script to about 5000 rows and it still happens. When I attempted to go to 2500 rows, it stopped happening. If I remove the last line of the db.sql (which is the index creation), the problem will also stop happening. The last line is this:
CREATE INDEX PUBLIC.MY_TABLE_IX1 ON PUBLIC.MY_TABLE(P_ID);
But the data is an important player in this. It still appears to only ever be the one row and the index somehow makes it inaccessible.
EDIT 3
I have identified the minimal data example that reproduces it. I stripped the table schema down to a single column, and I found that the values in that column don't seem to matter -- just the number of rows. Here are the contents of my db.sql (with the obvious stuff snipped), generated via the SCRIPT command:
;
CREATE USER IF NOT EXISTS SA SALT '8eed806dbbd1ea59' HASH '6d55cf715c56f4ca392aca7389da216a97ae8c9785de5d071b49de5436b0c003' ADMIN;
CREATE MEMORY TABLE PUBLIC.MY_TABLE(
P_ID INTEGER SELECTIVITY 100
);
-- 5132 +/- SELECT COUNT(*) FROM PUBLIC.MY_TABLE;
INSERT INTO PUBLIC.MY_TABLE(P_ID) VALUES
(1),
(2),
(3),
... snipped you obviously have breaks in the bulk insert here ...
(5143),
(3238396);
CREATE INDEX PUBLIC.MY_TABLE_IX1 ON PUBLIC.MY_TABLE(P_ID);
But that will recreate the problem. [Note that my numbering skips a number every time there was a bulk-insert break, so there really are 5132 rows even though the values run up to 5143; select count(*) from MY_TABLE yields 5132.] Also, I seem to be able to recreate the problem in the web console directly now by doing:
drop table MY_TABLE
runscript from 'db.sql'
select count(*) from MY_TABLE where P_ID = 3238396
You have recreated the problem if you get 0 back from the select when you know you have a row in there.
Oddly enough, I seem to be able to do
select * from MY_TABLE order by P_ID desc
and I can see the row at this point. But going directly for the row:
select * from MY_TABLE where P_ID = 3238396
Yields nothing.
I just realized that I should note that I am using h2-1.4.178.jar
The h2 folks have already apparently resolved this.
https://code.google.com/p/h2database/issues/detail?id=566
You just need to either get the code from version control or wait for the next release build. Thanks Thomas.

Variable date depending on time

I got a question. I have a query that will be running as part of a night job. This query is supposed to give me all actions that have taken place during that day. However, and this is the tricky part, it won't always be run at the same time.
Because it is part of a night job, it could happen that on day 1 the query runs at 00:05, and on day 2 it runs at 23:55. This is complicating the query, because I can't just say use today's date or use yesterday's date.
I got the following query so far:
select deuda_id from deuda where n_expediente in
(select
(case when to_char(sysdate, 'HH24:MI:SS') between '00:00:00' and '12:00:00' then
(select n_expediente from cartas_enviadas where codigo_carta in ('OIEUR','OIGBP') and f_envio > trunc(sysdate-1)
) else
(select n_expediente from cartas_enviar where codigo_carta in ('OIEUR','OIGBP')
)
) from dual
);
A little explanation (the database is in Spanish/Italian):
deuda_id is the unique invoice number.
n_expediente is the case number (for this client always unique).
f_envio is the execution date.
codigo_carta is the action type.
Cartas_enviar holds all the actions that are due. When an action is taken, f_envio is entered. Overnight, all actions that have been executed are moved from cartas_enviar to cartas_enviadas.
What I am trying to do is the following:
I want to look at the current time. If it is before midnight, I want to look at the table cartas_enviar, and take the n_expediente from there, but only if the action is OIEUR or OIGBP. If it is after midnight, I want to look at the table cartas_enviadas, and take the n_expediente from there, but only if the action is OIEUR or OIGBP, and if the action has been executed yesterday.
However, when I try to execute this query, I get the following error message:
ORA-00905: missing keyword.
Could someone please help me with this query?
PS: It is an Oracle database
The problem is that you can't pull multiple values out of a subquery nested like that and pass them back up to the IN clause. Try this approach instead:
select deuda_id
from deuda
where
(to_char(sysdate, 'HH24:MI:SS') between '00:00:00' and '12:00:00'
AND n_expediente IN (select n_expediente
from cartas_enviadas
where codigo_carta in ('OIEUR','OIGBP')
and f_envio > trunc(sysdate-1) ))
OR (to_char(sysdate, 'HH24:MI:SS') NOT between '00:00:00' and '12:00:00'
AND n_expediente IN (select n_expediente
from cartas_enviar
where codigo_carta in ('OIEUR','OIGBP')))
;

How to check Oracle database for long running queries

My application, which uses an Oracle database, is going slow or appears to have stopped completely.
How can I find out which queries are the most expensive, so I can investigate further?
This one shows SQL that is currently "ACTIVE":
select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text
from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address
and t.hash_value = s.sql_hash_value
and s.status = 'ACTIVE'
and s.username <> 'SYSTEM'
order by s.sid,t.piece
/
This shows locks. Sometimes things are going slow, but it's because it is blocked waiting for a lock:
select
object_name,
object_type,
session_id,
type, -- Type of system/user lock
lmode, -- lock mode in which session holds lock
request,
block,
ctime -- Time since current mode was granted
from
v$locked_object, all_objects, v$lock
where
v$locked_object.object_id = all_objects.object_id AND
v$lock.id1 = all_objects.object_id AND
v$lock.sid = v$locked_object.session_id
order by
session_id, ctime desc, object_name
/
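If you specifically want to see who is blocking whom, here is a minimal companion sketch against v$session (the blocking_session column is populated from 10g onwards):
select sid, serial#, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null
order by blocking_session
/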
This is a good one for finding long operations (e.g. full table scans). If it is because of lots of short operations, nothing will show up.
COLUMN percent FORMAT 999.99
SELECT sid, to_char(start_time,'hh24:mi:ss') stime,
message,( sofar/totalwork)* 100 percent
FROM v$session_longops
WHERE totalwork > 0 AND sofar/totalwork < 1
/
Try this; it will give you queries currently running for more than 60 seconds. Note that it prints multiple lines per running query if the SQL has multiple lines. Look at sid, serial# to see what belongs together.
select s.username,s.sid,s.serial#,s.last_call_et/60 mins_running,q.sql_text from v$session s
join v$sqltext_with_newlines q
on s.sql_address = q.address
where status='ACTIVE'
and type <>'BACKGROUND'
and last_call_et> 60
order by sid,serial#,q.piece
v$session_longops
If you look for sofar != totalwork you'll see ones that haven't completed, but the entries aren't removed when the operation completes so you can see a lot of history there too.
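For example, a minimal sketch of that filter (the totalwork > 0 check just guards against division by zero):
select sid, opname, target, sofar, totalwork,
round(sofar/totalwork*100, 2) pct_done
from v$session_longops
where totalwork > 0
and sofar != totalwork;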
Step 1: Execute the query
column username format 'a10'
column osuser format 'a10'
column module format 'a16'
column program_name format 'a20'
column program format 'a20'
column machine format 'a20'
column action format 'a20'
column sid format '9999'
column serial# format '99999'
column spid format '99999'
set linesize 200
set pagesize 30
select
a.sid,a.serial#,a.username,a.osuser,c.start_time,
b.spid,a.status,a.machine,
a.action,a.module,a.program
from
v$session a, v$process b, v$transaction c,
v$sqlarea s
Where
a.paddr = b.addr
and a.saddr = c.ses_addr
and a.sql_address = s.address (+)
and to_date(c.start_time,'mm/dd/yy hh24:mi:ss') <= sysdate - (15/1440) -- running for 15 minutes
order by c.start_time
/
Step 2: desc v$session
Step 3: select sid, serial#, SQL_ADDRESS, status, PREV_SQL_ADDR from v$session where sid='xxxx' (enter the sid value)
Step 4: select sql_text from v$sqltext where address='XXXXXXXX';
Step 5: select piece, sql_text from v$sqltext where address='XXXXXX' order by piece;
You can use the v$sql_monitor view to find queries that are running longer than 5 seconds. This may only be available in Enterprise versions of Oracle. For example this query will identify slow running queries from my TEST_APP service:
select to_char(sql_exec_start, 'dd-Mon hh24:mi'), (elapsed_time / 1000000) run_time,
cpu_time, sql_id, sql_text
from v$sql_monitor
where service_name = 'TEST_APP'
order by 1 desc;
Note that elapsed_time is in microseconds, so divide by 1000000 to get something more readable.
You can generate an AWR (automatic workload repository) report from the database.
Run from the SQL*Plus command line:
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
Read the documentation on how to generate and understand an AWR report. It gives a complete view of database performance and resource issues. Once you are familiar with the AWR report, it is helpful for finding the top SQL that is consuming resources.
Also, in the 12c EM Express UI you can generate an AWR report.
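If there are no recent snapshots covering the period you care about, you can take one manually before and after the workload; a minimal sketch, assuming you are licensed for the Diagnostics Pack:
-- take a snapshot, run the workload, take another snapshot,
-- then pick those two snapshot IDs when awrrpt.sql prompts for them
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();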
You can check long-running query details, like the % completed and the time remaining, using the query below:
SELECT SID, SERIAL#, OPNAME, CONTEXT, SOFAR, TOTALWORK,
ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE", TIME_REMAINING
FROM V$SESSION_LONGOPS
WHERE OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0
AND SOFAR <> TOTALWORK;
For the complete list of troubleshooting steps, you can check here: Troubleshooting long running sessions
select sq.PARSING_SCHEMA_NAME, sq.LAST_LOAD_TIME, sq.ELAPSED_TIME, sq.ROWS_PROCESSED, ltrim(sq.sql_text), sq.SQL_FULLTEXT
from v$sql sq
order by sq.ELAPSED_TIME desc, sq.LAST_LOAD_TIME desc;
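If you only want the most expensive statements rather than the full list, here is a minimal variation with a row limit (assuming Oracle 12c or later for the FETCH FIRST syntax):
select sql_id, parsing_schema_name, elapsed_time/1000000 elapsed_seconds,
executions, substr(sql_text, 1, 100) sql_text_start
from v$sql
order by elapsed_time desc
fetch first 10 rows only;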
