Iterative search of a Teradata clob - for-loop

We have an account number stored in a CLOB field in a table...we'll call it tbl_accountdetail. I need to pull back all records from tbl_accountdetail whose account numbers appear in the results of another query...we'll call it sourcequery.
I can do this individually for each account number with:
Select * from Tbl_accountdetail where REGEXP_INSTR(CLOB,'accountnumber')>0
Naturally, my first thought was to do a cursor and loop through each account number from the sourcequery.
Declare #accountnumber varchar(30)
Declare Err_Cursor Cursor for
    Select accountnumber from ErrorTable;
Open Err_Cursor;
Fetch next from Err_Cursor into #accountnumber;
While ##Fetch_status = 0
Begin
    Select * from Tbl_accountdetail where REGEXP_INSTR(CLOB, #accountnumber) > 0
    Fetch next from Err_Cursor into #accountnumber
End;
Close Err_Cursor;
Deallocate Err_Cursor;
The more I read, the more confused I am about the best/most efficient way to get my desired results.
Every reference to cursors seems to require that they be wrapped in a stored procedure, and given how simple this is, it doesn't feel like it should need to be turned into a stored procedure. References to macros all seem to be macros that update/insert, etc., which I don't need. All I need to do is return the rows from Tbl_accountdetail that have the account number somewhere in the CLOB.
I'm new to Teradata and CLOBs. Can someone help me with the best way to search the CLOB, and to do so for a list of values?
Any help/suggestions greatly appreciated.

How is your CLOB data structured? Is the account number stored in a way that lets you extract it with a searchable pattern -- e.g. accountnumber=<10-digit-#>?
If you want to search for multiple accountnumber values, then I think the best way is to extract the accountnumber value(s) from each CLOB and then search on them. Something like:
SELECT *
FROM Tbl_accountdetail
WHERE <extracted_accountnumber> IN (
    SELECT account_number
    FROM account_table
)
You are right that cursors are only used in stored procedures. The main purpose for them is to process each row of a result set individually and perform any additional logic, which you don't seem to need in your case. You could either put the SQL into a macro or just run it as-is.
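If the goal is simply to avoid the cursor, another option is to join the detail table to the list of account numbers and let REGEXP_INSTR do the matching for each pair. Treat this as a sketch only: it assumes the account numbers live in ErrorTable (as in your cursor example) and that the CLOB column is named clob_col, so substitute your real names:
-- One row per detail record containing at least one listed account number
-- (clob_col and ErrorTable are assumed names)
SELECT DISTINCT d.*
FROM Tbl_accountdetail d
JOIN ErrorTable e
  ON REGEXP_INSTR(d.clob_col, e.accountnumber) > 0;
Bear in mind this scans every CLOB once per account number, so it can be slow on large tables; extracting the account number from the CLOB once per row, as in the update below, is usually the cheaper route.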
Update
Assuming you only have one accountnum value in your CLOB field with a format of "accountnum": "123456789", you should be able to do something like this:
SELECT *
FROM Tbl_accountdetail
WHERE REGEXP_SUBSTR(myclob, '"accountnum":\s+"([0-9]+)"', 1, 1, NULL) IN (
    SELECT account_number
    FROM account_table
)
This should extract the first accountnum match from your CLOB field and check whether that value also exists in the results of the IN sub-query.
I don't have a TD system to test on, so you may need to fiddle with the arguments a bit. Just replace myclob with the name of your CLOB field and update the sub-query in the IN(). Give that a try and let me know.
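One thing to double-check: I believe Teradata's REGEXP_SUBSTR returns the whole matched string rather than just the captured group, in which case the value being compared would still carry the "accountnum": prefix and quotes. If that turns out to be the case, a variation like this (same assumed names, equally untested) strips everything but the digits before the comparison:
SELECT *
FROM Tbl_accountdetail
WHERE REGEXP_REPLACE(
        REGEXP_SUBSTR(myclob, '"accountnum":\s*"[0-9]+"', 1, 1, NULL),  -- grab the whole key/value pair
        '[^0-9]', '', 1, 0, NULL                                        -- then strip everything except the digits
      ) IN (
    SELECT account_number
    FROM account_table
)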
SQL Fiddle (Oracle)
Regexp Tester
Teradata - REGEXP_SUBSTR

Related

Process SQL result set entirely

I need to work with a SQL result set in order to do some processing for each column (medians, standard deviations, and several control statements are involved).
The SQL is dynamic, so I don't know the number of columns or rows in advance.
First I tried to use temporary tables, views, etc. to store the results; however, I could not get around Oracle's 30-character limit on column names when using SQL like the one below:
create table (or view or global temporary table) as select * from (
  SELECT
    DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
    SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) <-- exceeds the 30 character limit
  FROM DMTTBF_MAT_MATURATO_BILL_POS
  WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= '201301'
  GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)
My second choice was to use some PL/SQL type to store the entire table's contents, so I could address it like in other programming languages (e.g. a matrix result[i][j]), but I could not find anything similar.
The third variant would be to use files for reading and writing: I have not tried it yet; I am still hoping for a more elegant PL/SQL solution.
It's possible that I have the wrong approach here so any advice is more than welcome.
UPDATE: Modifying the input SQL is not an option. The program has to accept any select statement.
Note that you can alias both tables and fields. Using a table alias keeps references to it from producing walls of text in the query. Using one for a field gives it a new name in the output.
SELECT A.LONG_FIELD_NAME_HERE AS SHORTNAME
FROM REALLY_LONG_TABLE_NAME_HERE A
Automatic naming adds _1, _2, etc. to differentiate the same column name coming from different table references, which often pushes a name that was already borderline over the limit. Giving the fields names yourself avoids this.
You can also add the alias in dynamic SQL:
sqlstr := 'create table (or view or global temporary table) as select * from (
  SELECT
    DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
    SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) AS "'
      || SUBSTR('SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI)', 1, 30)
      || '" FROM DMTTBF_MAT_MATURATO_BILL_POS
  WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= ''201301''
  GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)';
(The generated alias is wrapped in double quotes because it contains parentheses and would not be a valid unquoted identifier.)

Datatype difference in procedure parameter and sql query inside it

In my backend procedure I have a VARCHAR2 parameter and I am using it in a SQL query to search on a NUMBER column. Will this cause any kind of performance issue?
For example:
Proc (a varchar2)
is
begin
  select * from table where deptno = a;
end;
Here deptno is a NUMBER column in the table and a is a VARCHAR2.
It might do. The database will resolve the differences in datatype by casting DEPTNO to a VARCHAR2. This will prevent the optimizer from using any (normal) index you have on that column. Depending on the data volumes and distribution, an indexed read may not always be the most efficient access path, in which case the data conversion doesn't matter.
So it does depend. But what are your options if it does matter (you have a highly selective index on that column)?
One solution would be to apply an explicit data conversion in your query:
select * from table
where deptno = to_number(a);
This will cause the query to fail if A contains a value which won't convert to a number.
A better solution would be to change the datatype of A so that the calling program can only pass a numeric value. This throws the responsibility for duff data where it properly belongs.
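For illustration, a minimal sketch of that change; the procedure and table names here (get_dept_emps, emp) are made up:
-- Sketch only: p_deptno is now a NUMBER, so the comparison with the NUMBER
-- column needs no conversion and a normal index on deptno stays usable
create or replace procedure get_dept_emps (p_deptno in number)
is
begin
  for r in (select * from emp where deptno = p_deptno)
  loop
    null;  -- row-by-row processing would go here
  end loop;
end;
/
A caller passing a bad string now gets the conversion error on the call itself, which is exactly where you want it to surface.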
The least attractive solution is to keep the procedure's signature and the query as is, and build a function-based index on the column:
create index emp_deptchar_fbi on emp(to_char(deptno));
Read the documentation to find out more about function-based indexes.
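If you do go down that path, write the predicate against the indexed expression so the optimizer can match it to the function-based index (emp as in the index definition above):
-- The comparison is now VARCHAR2 to VARCHAR2 and matches the indexed expression
select * from emp
where to_char(deptno) = a;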

Executing triggers in Oracle for copying the old values to a Mirror table

We are trying to copy the current row of a table to a mirror table using a trigger that fires before delete/update. Below is the working trigger body:
BEFORE UPDATE OR DELETE
ON CurrentTable FOR EACH ROW
BEGIN
  INSERT INTO MirrorTable
    ( EMPFIRSTNAME,
      EMPLASTNAME,
      CELLNO,
      SALARY
    )
  VALUES
    ( :old.EMPFIRSTNAME,
      :old.EMPLASTNAME,
      :old.CELLNO,
      :old.SALARY
    );
END;
But the problem is that we have more than 50 columns in the current table and we don't want to list all of those column names. Is there a way to select all columns, something like
:old.*
SELECT * INTO MirrorTable FROM CurrentTable
Any suggestions would be helpful.
Thanks,
Realistically, no. You'll need to list all the columns.
You could, of course, dynamically generate the trigger code pulling the column names from DBA_TAB_COLUMNS. But that is going to be dramatically more work than simply typing in 50 column names.
If your table happens to be an object table, :new would be an instance of that object so you could insert that. But it would be rather rare to have an object table.
If your 'current' and 'mirror' tables have EXACTLY the same structure you may be able to use something like
INSERT INTO MirrorTable
SELECT *
FROM CurrentTable
WHERE CurrentTable.primary_key_column = :old.primary_key_column
Honestly, I think that this is a poor choice and wouldn't do it, but it's a more-or-less free world and you're free (more or less :-) to make your own choices.
Share and enjoy.
For what it's worth, I've been writing the same stuff and used this to generate the code:
SQL> set pagesize 0
SQL> select ':old.'||COLUMN_NAME||',' from all_tab_columns where table_name='BIGTABLE' and owner='BOB';
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
...
If you supply values for all of the columns, you don't need to list the column names as well (and you can use NULL for the columns you don't want to fill):
INSERT INTO bigtable VALUES (
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
NULL,
NULL);
People creating tables with that many columns should get no dessert ;-)

Display multiple columns SQL query into PL/SQL stored procedure

I have a SQL query that selects multiple columns from "db_cache_advice". I want to create a PL/SQL stored procedure from this script.
Here is the SQL script; can someone show me a small sample I can work from?
select name, size_for_estimate, size_factor, estd_physical_read_factor
from v$db_cache_advice;
If you are asking whether you can select multiple columns in a PL/SQL stored procedure: yes, you can. You just have to provide one variable for each column you are selecting:
select name, size_for_estimate, size_factor, estd_physical_read_factor
into l_name, l_size_for_estimate, l_size_factor, l_estd_physical_read_factor
from v$db_cache_advice;
Please note that the variables must correspond to the selected columns: same number, same order, and matching datatypes.
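For completeness, here is one way the whole thing might be wrapped in a procedure. This is only a sketch with an assumed procedure name: it uses a cursor FOR loop instead of a single-row SELECT ... INTO because V$DB_CACHE_ADVICE normally returns many rows (a plain SELECT ... INTO would raise TOO_MANY_ROWS), and it assumes the procedure owner has been granted SELECT on the view directly rather than through a role:
CREATE OR REPLACE PROCEDURE show_db_cache_advice  -- assumed name
IS
BEGIN
  -- Loop over every advice row and print it; replace DBMS_OUTPUT with
  -- whatever processing you actually need
  FOR r IN (SELECT name, size_for_estimate, size_factor, estd_physical_read_factor
            FROM v$db_cache_advice)
  LOOP
    DBMS_OUTPUT.PUT_LINE(r.name || ' ' || r.size_for_estimate || ' ' ||
                         r.size_factor || ' ' || r.estd_physical_read_factor);
  END LOOP;
END;
/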

How do I check an IN condition against a dynamic list in Oracle?

EDIT: changed the title to fit the code below.
I'm trying to retrieve a list of acceptable values from an Oracle table, then perform a SELECT against another table while comparing some fields against that list.
I was trying to do this with cursors (like below), but this fails.
DECLARE
  TYPE gcur IS REF CURSOR;
  TYPE list_record IS TABLE OF my_table.my_field%TYPE;
  c_GENERIC gcur;
  c_LIST list_record;
BEGIN
  OPEN c_GENERIC FOR
    SELECT my_field FROM my_table
    WHERE some_field = some_value;
  FETCH c_GENERIC BULK COLLECT INTO c_LIST;
  -- try to check against list
  SELECT * FROM some_other_table
  WHERE some_critical_field IN c_LIST;
END
Basically, what I'm trying to do is to cache the acceptable values list into a variable, because I will be checking against it repeatedly later.
How do you perform this in Oracle?
We can use collections to store values to suit your purposes, but they need to be declared as SQL types:
create type list_record is table of varchar2(128)
/
This is because we cannot use PL/SQL types in SQL statements. Alas this means we cannot use %TYPE or %ROWTYPE, because they are PL/SQL keywords.
Your procedure would then look like this:
DECLARE
  c_LIST list_record;
BEGIN
  SELECT my_field
  BULK COLLECT INTO c_LIST
  FROM my_table
  WHERE some_field = some_value;
  -- try to check against list
  SELECT * FROM some_other_table
  WHERE some_critical_field IN (SELECT * FROM TABLE(c_LIST));
END;
"I see that you still had to perform a
SELECT statement to populate the list
for the IN clause."
If the values are in a table there is no other way to get them into a variable :)
"I'm thinking that there's a
significant performance gain using
this over a direct semi-join"
Not necessarily. If you're only using the values once then the sub-query is certainly the better approach. But as you want to use the same values in a number of discrete queries then populating a collection is the more efficient approach.
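As a rough sketch of that reuse (any name not taken from the queries above, such as another_table and another_field, is made up for the example):
DECLARE
  c_LIST list_record;   -- the SQL-level nested table type created above
  l_cnt1 PLS_INTEGER;
  l_cnt2 PLS_INTEGER;
BEGIN
  -- populate the list once
  SELECT my_field
  BULK COLLECT INTO c_LIST
  FROM my_table
  WHERE some_field = some_value;

  -- ...then reuse it in as many statements as needed;
  -- COLUMN_VALUE is the pseudo-column TABLE() exposes for a scalar collection
  SELECT COUNT(*) INTO l_cnt1
  FROM some_other_table
  WHERE some_critical_field IN (SELECT column_value FROM TABLE(c_LIST));

  SELECT COUNT(*) INTO l_cnt2
  FROM another_table
  WHERE another_field IN (SELECT column_value FROM TABLE(c_LIST));
END;
/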
In 11g Enterprise Edition we have the option to use result set caching. This is a much better solution, but one which is not suited for all tables.
Why pull the list instead of using a semi-join?
SELECT *
FROM some_other_table
WHERE some_critical_field IN (SELECT my_field
                              FROM my_table
                              WHERE some_field = some_value);
