Sequence generator in PL/SQL code in Oracle Apps

I have a sample table like below:
PO_HEADER | ITEM | LINE_NUM
1 | X | (null)
1 | Y | (null)
1 | Z | (null)
1 | A | (null)
1 | B | (null)
I want to update the sequence numbers in the LINE_NUM column, like 1...5, and when I enter another line, the next sequence number (6) should be generated automatically in LINE_NUM.
I want to write the code to update the sequence numbers in LINE_NUM, and also to capture the next sequence number, so that when I enter a new line I get the next number.

Demo: http://sqlfiddle.com/#!4/4d6ff/1
CREATE TABLE Table1
("PO_HEADER" int, "ITEM" varchar2(1), "LINE_NUM" int)
;
INSERT ALL
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'X', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'Y', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'Z', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'A', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'B', NULL)
SELECT * FROM dual
;
create sequence alamakota;
and now:
update table1 set LINE_NUM = alamakota.nextval;
select * from table1;
| PO_HEADER | ITEM | LINE_NUM |
|-----------|------|----------|
| 1 | X | 1 |
| 1 | Y | 2 |
| 1 | Z | 3 |
| 1 | A | 4 |
| 1 | B | 5 |

People usually use a sequence (an Oracle object), which provides uniqueness but is, generally speaking, not gapless. A sequence is easy to implement: you'd either call it during the insert, or create a database trigger that sets the column value to the next sequence number.
If you insist on the "MAX + 1" option, note that it is bound to fail in a multi-user environment: two or more users can fetch the same MAX value, and depending on the COMMIT moment and column uniqueness (primary/unique key, unique index), the first one will succeed and everyone else's insert will fail.
There's a workaround: an additional table whose sequence number is maintained through an autonomous transaction function.
If you're on 12c, use an identity column. Otherwise, I'd suggest a sequence.
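As a minimal sketch of the trigger approach, using the table and sequence from the demo above (the trigger name is my own):
create or replace trigger trg_table1_line_num
before insert on table1
for each row
begin
  -- assign the next sequence value when the caller didn't supply one
  if :new.line_num is null then
    :new.line_num := alamakota.nextval; -- 11g+; older versions need SELECT alamakota.nextval INTO :new.line_num FROM dual
  end if;
end;
/
Since the demo's UPDATE already consumed values 1 through 5, the next insert gets 6, as requested. To capture the number just assigned, INSERT ... RETURNING line_num INTO your variable works.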

I would not use a sequence for this.
If there is a parent table, to which PO_HEADER is a foreign key, grab a lock on the parent row. Then, SELECT MAX(line_num)+1... for that PO and use it. Do not release the lock on the parent table until you commit your insert to TABLE1.
If you do not have a parent table, you can use DBMS_LOCK to accomplish the same thing. (Allocate a user lock representing the PO and lock it in lieu of a parent table).
Assuming your application is well designed and all TABLE1 inserts happen through this code, you know it will work. The lock on the parent row ensures two sessions do not get the same next line number, because the lock forces them to act one after the other.
If your application is not so well designed, this won't help you.
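A sketch of the parent-row lock approach, assuming a hypothetical PO_HEADERS parent table keyed by PO_HEADER:
declare
  l_po     table1.po_header%type := 1;
  l_next   table1.line_num%type;
  l_locked po_headers.po_header%type; -- po_headers is the assumed parent table
begin
  -- lock the parent row; a competing session blocks here until we commit
  select po_header into l_locked
  from po_headers
  where po_header = l_po
  for update;
  -- MAX + 1 is now safe for this PO
  select nvl(max(line_num), 0) + 1 into l_next
  from table1
  where po_header = l_po;
  insert into table1 (po_header, item, line_num)
  values (l_po, 'C', l_next);
  commit; -- releases the lock
end;
/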

Related

How to get multiple values in same cell in Oracle

I have a table in Oracle with two columns. In the first column, there are sometimes duplicate values that correspond to different values in the second column. How can I write a query that shows only the unique values of the first column along with all possible values from the second column?
The table looks somewhat like below
COLUMN_1 | COLUMN_2
NUMBER_1 | 4
NUMBER_2 | 4
NUMBER_3 | 1
NUMBER_3 | 6
NUMBER_4 | 3
NUMBER_4 | 4
NUMBER_4 | 5
NUMBER_4 | 6
You can use listagg() if you are on Oracle 11g Release 2 or higher, like
SELECT
COLUMN_1,
LISTAGG(COLUMN_2, '|') WITHIN GROUP (ORDER BY COLUMN_2) "ListValues"
FROM table1
GROUP BY COLUMN_1
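For the sample data above, this returns:
COLUMN_1 | ListValues
NUMBER_1 | 4
NUMBER_2 | 4
NUMBER_3 | 1|6
NUMBER_4 | 3|4|5|6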
Otherwise, see this link for an alternative for lower versions:
Oracle equivalent of MySQL group_concat

Oracle CBO when using types [duplicate]

I'm trying to optimize a set of stored procs which are going against many tables including this view. The view is as such:
We have TBL_A (id, hist_date, hist_type, other_columns) with two types of rows: hist_type 'O' vs. hist_type 'N'. The view joins TBL_A to itself and transposes the N rows against the corresponding O rows. If no N row exists for an O row, the O row values are repeated. Like so:
CREATE OR REPLACE FORCE VIEW V_A (id, hist_date, hist_type, other_columns_o, other_columns_n) AS
select
o.id, o.hist_date, o.hist_type,
o.other_columns as other_columns_o,
case when n.id is not null then n.other_columns else o.other_columns end as other_columns_n
from
TBL_A o left outer join TBL_A n
on o.id=n.id and o.hist_date=n.hist_date and n.hist_type = 'N'
where o.hist_type = 'O';
TBL_A has a unique index on: (id, hist_date, hist_type). It also has a unique index on: (hist_date, id, hist_type) and this is the primary key.
The following query is at issue (in a stored proc, with x declared as TYPE_TABLE_OF_NUMBER):
select b.id BULK COLLECT into x from TBL_B b where b.parent_id = input_id;
select v.id from v_a v
where v.id in (select column_value from table(x))
and v.hist_date = input_date
and v.status_new = 'CLOSED';
This query ignores the index on the id column when accessing TBL_A and instead does a range scan using the date to pick up all the rows for that date, then filters that set using the values from the array. However, if I simply give the list of ids as a list of numbers, the optimizer uses the index just fine:
select v.id from v_a v
where v.id in (123, 234, 345, 456, 567, 678, 789)
and v.hist_date = input_date
and v.status_new = 'CLOSED';
The problem also doesn't exist when going against TBL_A directly (and I have a workaround that does that, but it's not ideal). Is there a way to get the optimizer to first retrieve the array values and use them as predicates when accessing the table? Or a good way to restructure the view to achieve this?
Oracle does not use the index because it assumes select column_value from table(x) returns 8168 rows.
Indexes are faster for retrieving small amounts of data. At some point it's faster to scan the whole table than repeatedly walk the index tree.
Estimating the cardinality of a regular SQL statement is difficult enough; creating an accurate estimate for procedural code is almost impossible. The 8168 is Oracle's default cardinality estimate for a collection function, derived from the database block size (8168 corresponds to the default 8K block). Table functions are normally used with pipelined functions in data warehouses, so a sorta-large default makes sense.
Dynamic sampling can generate a more accurate estimate and likely generate a plan that will use the index.
Here's an example of a bad cardinality estimate:
create or replace type type_table_of_number as table of number;
explain plan for
select * from table(type_table_of_number(1,2,3,4,5,6,7));
select * from table(dbms_xplan.display(format => '-cost -bytes'));
Plan hash value: 1748000095
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8168 | 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 00:00:01 |
-------------------------------------------------------------------------
Here's how to fix it:
explain plan for select /*+ dynamic_sampling(2) */ *
from table(type_table_of_number(1,2,3,4,5,6,7));
select * from table(dbms_xplan.display(format => '-cost -bytes'));
Plan hash value: 1748000095
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 7 | 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 7 | 00:00:01 |
-------------------------------------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
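Applied to the query at issue, the same statement-level hint would look something like this (a sketch; whether the index actually gets chosen still depends on your data):
select /*+ dynamic_sampling(2) */ v.id
from v_a v
where v.id in (select column_value from table(x))
and v.hist_date = input_date
and v.status_new = 'CLOSED';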

Determine table to select from based on indicator

I have a situation where there are three tables with the same design (same columns, same PK, same FK) that store different values for a given date; only one row is stored per day in each table. Each table has a flag column (Y or N) that determines which table's value should be reported on a specific day. Only one table's value will be flagged Y for a given CODE on a given day.
Example:
Table 1:
Date | Code | Value | Flag
01-DEC | ABC | 111 | N
Table 2:
Date | Code | Value | Flag
01-DEC | ABC | 222 | N
Table 3:
Date | Code | Value | Flag
01-DEC | ABC | 333 | Y
Referring to the example above, the value of code ABC for date 01-DEC should be 333, as it's flagged Y.
What should the SQL look like?
First off, I would strongly question the data model that has three identical tables that seem to store essentially the same information and serve the same purpose.
You could, assuming that there will be only one Y row, do
SELECT *
FROM( SELECT *
FROM table1
UNION ALL
SELECT *
FROM table2
UNION ALL
SELECT *
FROM table3 )
WHERE flag = 'Y'
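To fetch the reported value for one specific code and day, a sketch (I'm assuming the date column is named trade_date, since DATE itself is a reserved word):
SELECT value
FROM( SELECT *
FROM table1
UNION ALL
SELECT *
FROM table2
UNION ALL
SELECT *
FROM table3 )
WHERE flag = 'Y'
AND code = 'ABC'
AND trade_date = DATE '2015-12-01'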

Find string from delimited values (Oracle PL/SQL)

How can I write a query for an Oracle database to find a comma-delimited list of values in a column that itself contains a comma-delimited list of values? The :parameter passed to the SQL statement is also a comma-delimited list of the values the user selected.
For example, we have a column in a table that contains
1 | 'A','B','C'
2 | 'C','A'
3 | 'A','B'
on the web application interface we have multi select box that shows
A
B
C
and allows the user to select one or more items.
I want rows 1 and 3 to show up if they select A and B. If they select only A, then all three rows should show up, because rows 1 to 3 all contain 'A'.
This example will hopefully help; it matches the values irrespective of the order in which they appear in the string in the DB record.
Create example table:
CREATE TABLE t
(val VARCHAR2(100));
Insert records:
INSERT INTO t VALUES
('1|''A'',''B'',''C''');
INSERT INTO t VALUES
('2|''C'',''A''');
INSERT INTO t VALUES
('3|''A'',''B''');
Check values:
SELECT * FROM t;
1|'A','B','C'
2|'C','A'
3|'A','B'
Check solution for 'A':
SELECT val
FROM t
WHERE REGEXP_LIKE(val, '(A)');
1|'A','B','C'
2|'C','A'
3|'A','B'
Check solution for A and B
SELECT val
FROM t
WHERE REGEXP_LIKE(val, '(A|B).*(A|B)');
1|'A','B','C'
3|'A','B'
If you want to make sure the 1| part of the result isn't matched by anything then you could query using:
SELECT val
FROM t
WHERE REGEXP_LIKE(val, '(.\|.*)(A)');
and
SELECT val
FROM t
WHERE REGEXP_LIKE(val, '(.\|.*)(A|B).*(A|B)');
Hope this helps...
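Note that '(A|B).*(A|B)' would also match a row containing A twice but no B. A stricter variant (my tweak, not part of the answer above) is one REGEXP_LIKE per selected value, matching the quoted token in any order:
SELECT val
FROM t
WHERE REGEXP_LIKE(val, '''A''')
AND REGEXP_LIKE(val, '''B''');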
You could use a WHERE clause with a varying number of bind values, depending on the number of selected options:
TEST#PRJ> create table t (c varchar2(100));
TEST#PRJ> insert into t values ('1 | ''A'',''B'',''C''');
TEST#PRJ> insert into t values ('2 | ''C'',''A''');
TEST#PRJ> insert into t values ('3 | ''A'',''B''');
TEST#PRJ> select * from t where c like '%''A''%' and c like '%''B''%';
C
----------------------------------------------------------------------------------------------------------------
1 | 'A','B','C'
3 | 'A','B'
TEST#PRJ> select * from t where c like '%''A''%';
C
----------------------------------------------------------------------------------------------------------------
1 | 'A','B','C'
2 | 'C','A'
3 | 'A','B'
If the values are stored in order you could use a single bind value:
TEST#PRJ> select * from t where c like '%''A''%''B''%';
C
-----------------------------------------------------------------------------------------
1 | 'A','B','C'
3 | 'A','B'

Use Oracle unnested VARRAYs instead of IN operator

Let's say users have 1 to n accounts in a system. When they query the database, they may choose to select from m accounts, with m between 1 and n. Typically the SQL generated to fetch their data is something like
SELECT ... FROM ... WHERE account_id IN (?, ?, ..., ?)
So depending on the number of accounts a user has, this will cause a new hard-parse in Oracle, and a new execution plan, etc. Now there are a lot of queries like that and hence, a lot of hard-parses, and maybe the cursor/plan cache will be full quite early, resulting in even more hard-parses.
Instead, I could also write something like this
-- use any of these
CREATE TYPE numbers AS VARRAY(1000) of NUMBER(38);
CREATE TYPE numbers AS TABLE OF NUMBER(38);
SELECT ... FROM ... WHERE account_id IN (
SELECT column_value FROM TABLE(?)
)
-- or
SELECT ... FROM ... JOIN (
SELECT column_value FROM TABLE(?)
) ON column_value = account_id
And use JDBC to bind a java.sql.Array (i.e. an oracle.sql.ARRAY) to the single bind variable. Clearly, this will result in fewer hard parses and fewer cursors in the cache for functionally equivalent queries. But is there any general performance drawback, or are there other issues that I might run into?
E.g: Does bind variable peeking work in a similar fashion for varrays or nested tables? Because the amount of data associated with every account may differ greatly.
I'm using Oracle 11g in this case, but I think the question is interesting for any Oracle version.
I suggest you try a plain old join, as in
SELECT Col1, Col2
FROM ACCOUNTS ACCT,
TABLE TAB
WHERE ACCT.User = :ParamUser
AND TAB.account_id = ACCT.account_id;
An alternative could be a table subquery
SELECT Col1, Col2
FROM (
SELECT account_id
FROM ACCOUNTS
WHERE User = :ParamUser
) ACCT,
TABLE TAB
WHERE TAB.account_id = ACCT.account_id;
or a where subquery
SELECT Col1, Col2
FROM TABLE TAB
WHERE TAB.account_id IN
(
SELECT account_id
FROM ACCOUNTS
WHERE User = :ParamUser
);
The first one should be better for performance, but you'd better check them all with explain plan.
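To compare them, something along these lines works (dbms_xplan is also used further down this page):
EXPLAIN PLAN FOR
SELECT Col1, Col2
FROM ACCOUNTS ACCT,
TABLE TAB
WHERE ACCT.User = :ParamUser
AND TAB.account_id = ACCT.account_id;
SELECT * FROM TABLE(dbms_xplan.display);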
Looking at V$SQL_BIND_CAPTURE in a 10g database, I have a few rows where the datatype is VARRAY or NESTED_TABLE; the actual bind values were not captured. In an 11g database, there is just one such row, but it also shows that the bind value is not captured. So I suspect that bind value peeking essentially does not happen for user-defined types.
In my experience, the main problem you run into using nested tables or varrays in this way is that the optimizer does not have a good estimate of the cardinality, which could lead it to generate bad plans. But, there is an (undocumented?) CARDINALITY hint that might be helpful. The problem with that is, if you calculate the actual cardinality of the nested table and include that in the query, you're back to having multiple distinct query texts. Perhaps if you expect that most or all users will have at most 10 accounts, using the hint to indicate that as the cardinality would be helpful. Of course, I'd try it without the hint first, you may not have an issue here at all.
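For instance, a sketch of that hint on the collection subquery (the hint is undocumented, so the exact syntax may vary by version; 10 is an assumed typical account count):
SELECT ... FROM ... WHERE account_id IN (
SELECT /*+ CARDINALITY(t 10) */ column_value FROM TABLE(?) t
)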
(I also think that perhaps Miguel's answer is the right way to go.)
For a medium-sized list (several thousand items) I would use this approach:
First, generate a prepared statement with an XMLTABLE joined to your main table. For instance:
String myQuery = "SELECT ..."
+ " FROM ACCOUNTS A,"
+ " XMLTABLE('tab/row' PASSING XMLTYPE(?) COLUMNS id NUMBER PATH 'id') t"
+ " WHERE A.account_id = t.id";
Then loop through your data and build a String with this content:
String idList = "<tab><row><id>101</id></row><row><id>907</id></row> ...</tab>";
Eventually, prepare and submit your statement, then fetch the results:
PreparedStatement stmt = connection.prepareStatement(myQuery);
stmt.setString(1, idList);
ResultSet rs = stmt.executeQuery();
while (rs.next()) { ... }
Using this approach it is also possible to pass a multi-valued list, as in the select statement
SELECT * FROM TABLE t WHERE (t.COL1, t.COL2) in (SELECT X.COL1, X.COL2 FROM X);
In my experience performance is pretty good, and the approach is flexible enough to be used in very complex query scenarios.
The only limit is the size of the string passed to the DB, but I suppose it is possible to use a CLOB in place of String for an arbitrarily long XML wrapper around the input list.
This problem of binding a variable number of items into an IN list seems to come up a lot in various forms. One option is to concatenate the IDs into a comma-separated string and bind that, then use a bit of a trick to split it into a table you can join against, e.g.:
with bound_inlist
as
(
select
substr(txt,
instr (txt, ',', 1, level ) + 1,
instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 )
as token
from (select ','||:txt||',' txt from dual)
connect by level <= length(:txt)-length(replace(:txt,',',''))+1
)
select *
from bound_inlist a, actual_table b
where a.token = b.token
Bind variable peeking is going to be a problem, though.
Does the query plan actually change for a larger number of accounts, i.e. would it be more efficient to move from an index to a full table scan in some cases, or is it borderline? As someone else suggested, you could use the CARDINALITY hint to indicate how many IDs are being bound; the following test case proves this actually works:
create table actual_table (id integer, padding varchar2(100));
create unique index actual_table_idx on actual_table(id);
insert into actual_table
select level, 'this is just some padding for '||level
from dual connect by level <= 1000;
explain plan for
with bound_inlist
as
(
select /*+ CARDINALITY(10) */
substr(txt,
instr (txt, ',', 1, level ) + 1,
instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 )
as token
from (select ','||:txt||',' txt from dual)
connect by level <= length(:txt)-length(replace(:txt,',',''))+1
)
select *
from bound_inlist a, actual_table b
where a.token = b.id;
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 840 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 10 | 840 | 2 (0)| 00:00:01 |
| 3 | VIEW | | 10 | 190 | 2 (0)| 00:00:01 |
|* 4 | CONNECT BY WITHOUT FILTERING| | | | | |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | ACTUAL_TABLE_IDX | 1 | | 0 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | ACTUAL_TABLE | 1 | 65 | 0 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
Another option is to always use n bind variables in every query. Use null for m+1 to n.
Oracle ignores repeated items in the expression_list. Your queries will perform the same way and there will be fewer hard parses. But there will be extra overhead to bind all the variables and transfer the data. Unfortunately I have no idea what the overall effect on performance would be; you'd have to test it.
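A sketch with n = 4 slots and only two real IDs bound (the NULL padding slots match no rows, since NULL never compares equal to anything):
SELECT ... FROM ... WHERE account_id IN (?, ?, ?, ?);
-- bind: 101, 907, NULL, NULL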
