How to get, for each table in a list, the maximum value of one of its fields? (Oracle)

I have a list of Oracle tables and fields, and for each table I would like to determine the maximum value of the corresponding field from the list.
Input:
+------+--------+
| TAB  | FIELDS |
+------+--------+
| tab1 | field1 |
+------+--------+
| tab2 | field2 |
+------+--------+
Output:
+------+--------+-----------+
| TAB  | FIELDS | Max value |
+------+--------+-----------+
| tab1 | field1 | 10        |
+------+--------+-----------+
| tab2 | field2 | 15        |
+------+--------+-----------+
I want to write a PL/SQL function to create the loop, but I have very little knowledge of this language. Do you have any examples to show me?
The input table is dynamic, which is why I want to use a loop.
Thanks in advance.

The input is built from system tables like all_column_tab. The output must be stored in a table.
It is indeed not a great design for storing and retrieving data, but I presume something like this should work for you. I've used a VARCHAR2 variable to store the maximum value (rather than a NUMBER) so that MAX also works for non-numeric columns; the column in your output table that stores the max value should likewise be defined as VARCHAR2.
DECLARE
  v_maxVal VARCHAR2(400);
BEGIN
  -- Loop over every (table, column) pair to inspect
  FOR rec IN ( SELECT table_name, column_name
                 FROM user_tab_columns
                WHERE table_name IN ('TAB1', 'TAB2') )
  LOOP
    -- Build and run the MAX() query dynamically for the current pair
    EXECUTE IMMEDIATE
      'SELECT MAX(' || rec.column_name || ') FROM ' || rec.table_name
      INTO v_maxVal;

    INSERT INTO fieldstab (tab, fields, max_val)
    VALUES (rec.table_name, rec.column_name, v_maxVal);
  END LOOP;
END;
/
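The block inserts into an output table that must already exist; a minimal sketch of that table (the name comes from the INSERT above, the column sizes are assumptions):
CREATE TABLE fieldstab (
  tab     VARCHAR2(128),
  fields  VARCHAR2(128),
  max_val VARCHAR2(400)  -- VARCHAR2 so non-numeric MAX values fit, as noted above
);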

Related

Inconsistent error ORA-01722: invalid number

I got an error with the message "ORA-01722: invalid number" when I executed my query. As I dug in, I found out that I get this error when I compare a numeric value against a column that expects a varchar.
In my case my query had a CASE expression like
CASE WHEN CHAR_COLUMN = 1 THEN 'SOME VALUE' END
But the behavior was different between instances of the same Oracle version: the same query worked on one of our dev Oracle servers, but not on the other. I am curious whether there is any configuration that allows Oracle to compare a numeric value against a character column.
PS: The oracle version that we are using is Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
I never rely on implicit data type conversion but always make it explicit:
CASE WHEN CHAR_COLUMN = TO_CHAR(1) THEN 'SOME VALUE' END
Or even not convert at all:
CASE WHEN CHAR_COLUMN = '1' THEN 'SOME VALUE' END
The reason is that Oracle tends to convert the character string to a number, not the other way round. See the example from the linked manual page:
Text Literal Example
The text literal '10' has data type CHAR. Oracle implicitly converts it to the NUMBER data type if it appears in a numeric expression as in the following statement:
SELECT salary + '10'
FROM employees;
To reproduce the issue:
create table foo (
CHAR_COLUMN varchar2(10)
);
create table bar (
CHAR_COLUMN varchar2(10)
);
insert into foo (CHAR_COLUMN) values ('1');
insert into foo (CHAR_COLUMN) values ('2');
insert into bar (CHAR_COLUMN) values ('1');
insert into bar (CHAR_COLUMN) values ('yellow');
Then, when comparing against the numeric literal 1, the first query works and the second doesn't:
select * from foo where CHAR_COLUMN = 1;
select * from bar where CHAR_COLUMN = 1;
When you ask Oracle to resolve this comparison:
WHEN CHAR_COLUMN = 1
... Oracle converts the query internally to:
WHEN TO_NUMBER(CHAR_COLUMN) = 1
In my example, this can be spotted in the explain plan:
-------------------------------------------------------------------
| Id  | Operation         | Name | Rows | Bytes | Cost | Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    1 |     7 |    3 | 00:00:01 |
|*  1 | TABLE ACCESS FULL | FOO  |    1 |     7 |    3 | 00:00:01 |
-------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
*  1 - filter(TO_NUMBER("CHAR_COLUMN")=1)
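If the mixed data cannot be cleaned up and you still have to compare against a number, a small sketch against the bar table from the repro above (pushing the conversion onto the literal side, as recommended earlier) avoids the implicit TO_NUMBER:
select * from bar where CHAR_COLUMN = TO_CHAR(1);
-- or simply compare against a string literal
select * from bar where CHAR_COLUMN = '1';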

How to return the matching records based on a lookup table using Hive

Let's say we have a lookup table (Table_A) and another table (Table_B) as follows:
And we want to search the strings of Table_B for the keywords of Table_A, return the chemical type, and form Table_C, as follows:
How can we implement this with a Hive query in a Hadoop environment?
The challenging part is to search for multiple keywords within the same string and create a new row for each matched record.
Thank you!
I think you should structure Table_A differently (or keep the current structure but split by comma and use explode in Hive; see the sketch after the join query below), like so:
---------------------------
|         Table A         |
---------------------------
| Chemical Type | Keyword |
---------------------------
| HF            | 100HF   |
---------------------------
| HF            | 100:HF  |
---------------------------
| HCL           | HCL200  |
---------------------------
| HCL           | 500HCL  |
---------------------------
etc...
Then, it seems that you need to perform a cartesian product join:
select distinct b.machine, b.string, a.chemical_type
from Table_A as a, Table_B as b
where instr(b.string, a.keyword) > 0;
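If you would rather keep Table_A in its original comma-separated form, a minimal sketch of the split/explode variant mentioned above (assuming the comma-separated column is called keywords) could look like this:
select distinct b.machine, b.string, a.chemical_type
from (
  select chemical_type, kw as keyword
  from Table_A
  lateral view explode(split(keywords, ',')) k as kw
) a, Table_B b
where instr(b.string, a.keyword) > 0;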

Oracle CBO when using types [duplicate]

I'm trying to optimize a set of stored procs which go against many tables, including this view. The view is as follows:
We have TBL_A (id, hist_date, hist_type, other_columns) with two types of rows: hist_type 'O' vs. hist_type 'N'. The view self-joins TBL_A and transposes the N rows against the corresponding O rows. If no N row exists for an O row, the O row values are repeated. Like so:
CREATE OR REPLACE FORCE VIEW V_A (id, hist_date, hist_type, other_columns_o, other_columns_n) AS
select
  o.id, o.hist_date, o.hist_type,
  o.other_columns as other_columns_o,
  case when n.id is not null then n.other_columns else o.other_columns end as other_columns_n
from
  TBL_A o left outer join TBL_A n
    on o.id = n.id and o.hist_date = n.hist_date and n.hist_type = 'N'
where o.hist_type = 'O';
TBL_A has a unique index on: (id, hist_date, hist_type). It also has a unique index on: (hist_date, id, hist_type) and this is the primary key.
The following query is at issue (in a stored proc, with x declared as TYPE_TABLE_OF_NUMBER):
select b.id BULK COLLECT into x from TBL_B b where b.parent_id = input_id;
select v.id from v_a v
where v.id in (select column_value from table(x))
and v.hist_date = input_date
and v.status_new = 'CLOSED';
This query ignores the index on the id column when accessing TBL_A and instead does a range scan on the date to pick up all the rows for that date, then filters that set using the values from the array. However, if I simply give the list of ids as a literal list of numbers, the optimizer uses the index just fine:
select v.id from v_a v
where v.id in (123, 234, 345, 456, 567, 678, 789)
and v.hist_date = input_date
and v.status_new = 'CLOSED';
The problem also doesn't exist when going against TBL_A directly (and I have a workaround that does that, but it's not ideal). Is there a way to get the optimizer to first retrieve the array values and use them as predicates when accessing the table? Or a good way to restructure the view to achieve this?
Oracle does not use the index because it assumes select column_value from table(x) returns 8168 rows.
Indexes are faster for retrieving small amounts of data. At some point it's faster to scan the whole table than repeatedly walk the index tree.
Estimating the cardinality of a regular SQL statement is difficult enough; creating an accurate estimate for procedural code is almost impossible. I don't know where they came up with 8168, but since table functions are normally used with pipelined functions in data warehouses, a somewhat large default makes sense.
Dynamic sampling can generate a more accurate estimate and likely generate a plan that will use the index.
Here's an example of a bad cardinality estimate:
create or replace type type_table_of_number as table of number;
explain plan for
select * from table(type_table_of_number(1,2,3,4,5,6,7));
select * from table(dbms_xplan.display(format => '-cost -bytes'));
Plan hash value: 1748000095
-------------------------------------------------------------------------
| Id | Operation                              | Name | Rows  | Time     |
-------------------------------------------------------------------------
|  0 | SELECT STATEMENT                       |      |  8168 | 00:00:01 |
|  1 | COLLECTION ITERATOR CONSTRUCTOR FETCH  |      |  8168 | 00:00:01 |
-------------------------------------------------------------------------
Here's how to fix it:
explain plan for select /*+ dynamic_sampling(2) */ *
from table(type_table_of_number(1,2,3,4,5,6,7));
select * from table(dbms_xplan.display(format => '-cost -bytes'));
Plan hash value: 1748000095
-------------------------------------------------------------------------
| Id | Operation                              | Name | Rows  | Time     |
-------------------------------------------------------------------------
|  0 | SELECT STATEMENT                       |      |     7 | 00:00:01 |
|  1 | COLLECTION ITERATOR CONSTRUCTOR FETCH  |      |     7 | 00:00:01 |
-------------------------------------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
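Applied back to the problematic query, a minimal sketch (assuming the same view, the collection variable x, and the PL/SQL variables from the question) would be:
select /*+ dynamic_sampling(2) */ v.id
from v_a v
where v.id in (select column_value from table(x))
  and v.hist_date = input_date
  and v.status_new = 'CLOSED';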

LOOP update with column size

I tried to create a PL/SQL script that would update my table with the column size.
My table looks like this:
| ID | TEXT | SIZE |
--------------------
| 1 | .... | null |
| 2 | .... | null |
| 3 | .... | null |
...
I want the PL/SQL script to fill the SIZE column depending on the length of the text for a certain document and then delete the contents of the TEXT column.
Here's what I've tried:
DECLARE
  cursor s1 is select id from table where size is null;
BEGIN
  for d1 in s1 loop
    update table
       set size = (select length(TEXT) from table where id = d1.id)
     where id = d1.id;
  end loop;
END;
/
Unless there is a good reason, do this in pure SQL (or put the following statement into PL/SQL):
UPDATE t
SET size = LENGTH(text),
text = NULL
WHERE size IS NULL;
This is both easier to read and faster.
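If it really has to run from PL/SQL (for example inside an existing procedure), a minimal sketch, keeping the table and column names from the question, is simply the same statement wrapped in an anonymous block:
BEGIN
  UPDATE t
     SET size = LENGTH(text),
         text = NULL
   WHERE size IS NULL;
END;
/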

Use Oracle unnested VARRAYs instead of the IN operator

Let's say users have 1 to n accounts in a system. When they query the database, they may choose to select from m accounts, with m between 1 and n. Typically the SQL generated to fetch their data is something like
SELECT ... FROM ... WHERE account_id IN (?, ?, ..., ?)
So depending on the number of accounts a user has, this will cause a new hard-parse in Oracle, and a new execution plan, etc. Now there are a lot of queries like that and hence, a lot of hard-parses, and maybe the cursor/plan cache will be full quite early, resulting in even more hard-parses.
Instead, I could also write something like this
-- use any of these
CREATE TYPE numbers AS VARRAY(1000) of NUMBER(38);
CREATE TYPE numbers AS TABLE OF NUMBER(38);
SELECT ... FROM ... WHERE account_id IN (
SELECT column_value FROM TABLE(?)
)
-- or
SELECT ... FROM ... JOIN (
SELECT column_value FROM TABLE(?)
) ON column_value = account_id
And use JDBC to bind a java.sql.Array (i.e. an oracle.sql.ARRAY) to the single bind variable. Clearly, this will result in fewer hard parses and fewer cursors in the cache for functionally equivalent queries. But is there any general performance drawback, or any other issue that I might run into?
E.g: Does bind variable peeking work in a similar fashion for varrays or nested tables? Because the amount of data associated with every account may differ greatly.
I'm using Oracle 11g in this case, but I think the question is interesting for any Oracle version.
I suggest you try a plain old join, like:
SELECT Col1, Col2
FROM ACCOUNTS ACCT,
     TABLE TAB
WHERE ACCT.User = :ParamUser
AND TAB.account_id = ACCT.account_id;
An alternative could be a table subquery
SELECT Col1, Col2
FROM (
SELECT account_id
FROM ACCOUNTS
WHERE User = :ParamUser
) ACCT,
TABLE TAB
WHERE TAB.account_id = ACCT.account_id;
or a where subquery
SELECT Col1, Col2
FROM TABLE TAB
WHERE TAB.account_id IN
(
SELECT account_id
FROM ACCOUNTS
WHERE User = :ParamUser
);
The first one should be better for performance, but you'd better check them all with explain plan.
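Comparing them is straightforward; a quick sketch using the first variant (EXPLAIN PLAN is happy to parse the statement with the bind variable left unbound):
explain plan for
SELECT Col1, Col2
FROM ACCOUNTS ACCT,
     TABLE TAB
WHERE ACCT.User = :ParamUser
AND TAB.account_id = ACCT.account_id;

select * from table(dbms_xplan.display);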
Looking at V$SQL_BIND_CAPTURE in a 10g database, I have a few rows where the datatype is VARRAY or NESTED_TABLE; the actual bind values were not captured. In an 11g database, there is just one such row, but it also shows that the bind value is not captured. So I suspect that bind value peeking essentially does not happen for user-defined types.
In my experience, the main problem you run into using nested tables or varrays in this way is that the optimizer does not have a good estimate of the cardinality, which could lead it to generate bad plans. But, there is an (undocumented?) CARDINALITY hint that might be helpful. The problem with that is, if you calculate the actual cardinality of the nested table and include that in the query, you're back to having multiple distinct query texts. Perhaps if you expect that most or all users will have at most 10 accounts, using the hint to indicate that as the cardinality would be helpful. Of course, I'd try it without the hint first, you may not have an issue here at all.
(I also think that perhaps Miguel's answer is the right way to go.)
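For reference, a hedged sketch of that CARDINALITY hint applied to the collection form of the query from the question (the value 10 is only an assumed typical account count, and since the hint is undocumented you should verify the resulting plan):
SELECT ... FROM ... WHERE account_id IN (
  SELECT /*+ CARDINALITY(t, 10) */ column_value
  FROM TABLE(?) t
);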
For medium-sized lists (several thousand items) I would use this approach:
First, generate a prepared statement with an XMLTABLE joined to your main table.
For instance:
String myQuery = "SELECT ..."
    + " FROM ACCOUNTS A,"
    + " XMLTABLE('tab/row' PASSING XMLTYPE(?) COLUMNS id NUMBER PATH 'id') t"
    + " WHERE A.account_id = t.id";
then loop through your data and build a StringBuffer with this content:
StringBuffer idList = new StringBuffer("<tab><row><id>101</id></row><row><id>907</id></row> ...</tab>");
Finally, prepare and submit your statement, then fetch the results.
PreparedStatement stmt = connection.prepareStatement(myQuery); // 'connection' is an open java.sql.Connection
stmt.setString(1, idList.toString());
ResultSet rs = stmt.executeQuery();
while (rs.next()) {...}
Using this approach it is also possible to pass a multi-valued list, as in the select statement
SELECT * FROM TABLE t WHERE (t.COL1, t.COL2) in (SELECT X.COL1, X.COL2 FROM X);
In my experience performances are pretty good, and the approach is flexible enough to be used in very complex query scenarios.
The only limit is the size of the string passed to the DB, but I suppose it is possible to use a CLOB in place of String for an arbitrarily long XML wrapper around the input list.
This problem of binding a variable number of items into an IN list seems to come up a lot in various forms. One option is to concatenate the IDs into a comma-separated string and bind that, and then use a bit of a trick to split it into a table you can join against, e.g.:
with bound_inlist as
(
  select substr(txt,
                instr(txt, ',', 1, level) + 1,
                instr(txt, ',', 1, level + 1) - instr(txt, ',', 1, level) - 1) as token
  from (select ',' || :txt || ',' txt from dual)
  connect by level <= length(:txt) - length(replace(:txt, ',', '')) + 1
)
select *
from bound_inlist a, actual_table b
where a.token = b.token
Bind variable peeking is going to be a problem though.
Does the query plan actually change for larger numbers of accounts, i.e. would it be more efficient to move from an index to a full table scan in some cases, or is it borderline? As someone else suggested, you can use the CARDINALITY hint to indicate how many IDs are being bound; the following test case shows this actually works:
create table actual_table (id integer, padding varchar2(100));
create unique index actual_table_idx on actual_table(id);
insert into actual_table
select level, 'this is just some padding for '||level
from dual connect by level <= 1000;
explain plan for
with bound_inlist as
(
  select /*+ CARDINALITY(10) */
         substr(txt,
                instr(txt, ',', 1, level) + 1,
                instr(txt, ',', 1, level + 1) - instr(txt, ',', 1, level) - 1) as token
  from (select ',' || :txt || ',' txt from dual)
  connect by level <= length(:txt) - length(replace(:txt, ',', '')) + 1
)
select *
from bound_inlist a, actual_table b
where a.token = b.id;
-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name             | Rows | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                  |   10 |   840 |     2  (0) | 00:00:01 |
|   1 | NESTED LOOPS                  |                  |      |       |            |          |
|   2 | NESTED LOOPS                  |                  |   10 |   840 |     2  (0) | 00:00:01 |
|   3 | VIEW                          |                  |   10 |   190 |     2  (0) | 00:00:01 |
|*  4 | CONNECT BY WITHOUT FILTERING  |                  |      |       |            |          |
|   5 | FAST DUAL                     |                  |    1 |       |     2  (0) | 00:00:01 |
|*  6 | INDEX UNIQUE SCAN             | ACTUAL_TABLE_IDX |    1 |       |     0  (0) | 00:00:01 |
|   7 | TABLE ACCESS BY INDEX ROWID   | ACTUAL_TABLE     |    1 |    65 |     0  (0) | 00:00:01 |
-------------------------------------------------------------------------------------------------
Another option is to always use n bind variables in every query. Use null for m+1 to n.
Oracle ignores repeated items in the expression_list. Your queries will perform the same way and there will be fewer hard parses. But there will be extra overhead to bind all the variables and transfer the data. Unfortunately I have no idea what the overall effect on performance would be; you'd have to test it.
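To make the idea concrete, a small sketch with n fixed at four bind slots where the caller only has two real account ids (the NULL placeholders never match any row):
SELECT ... FROM ... WHERE account_id IN (:1, :2, :3, :4);
-- bound as :1 = 101, :2 = 907, :3 = NULL, :4 = NULL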
