Can you please advise me on the below?
I'm extracting XML from a CLOB column, but the query returns multiple non-distinct values concatenated into a single cell. I need the values returned as separate rows in a column, or failing that, just the first or the last node value.
Select d.b_id,
       xmltype('<rds><startRD>'||u.con_data||'</startRD></rds>').extract('//rds/startRD/text()').getStringVal() start_RD
From sp_d_a d
Left Outer Join c1_u u
On d.b_id = u.b_id
Where d.b_id In ('4017')
It returns the values concatenated, as below:
353.000000524.000000527.306000722.430000
I require them as separate rows in a column, like below:
b_id | startRD
==================
12 | 353.000000
12 | 524.000000
12 | 527.306000
12 | 722.430000
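A minimal sketch of one way to get separate rows, using XMLTABLE. This assumes con_data actually carries repeating <startRD> elements once wrapped (which the concatenated output suggests); an inner join is used for brevity:

SELECT d.b_id,
       x.start_rd
FROM   sp_d_a d,
       c1_u u,
       XMLTABLE('/rds/startRD'
                PASSING xmltype('<rds><startRD>' || u.con_data || '</startRD></rds>')
                COLUMNS start_rd VARCHAR2(30) PATH '.') x
WHERE  d.b_id = u.b_id
AND    d.b_id IN ('4017');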
I have a sample table like below
PO_HEADER | ITEM | LINE_NUM
===========================
1         | X    |
1         | Y    |
1         | Z    |
1         | A    |
1         | B    |
I want to update the sequence numbers in the LINE_NUM column, like 1..5, and when I enter another line, the next sequence number (6) should be generated automatically in LINE_NUM.
I want to write the code to update the sequence numbers in LINE_NUM, and also to capture the next sequence number, so that when I enter a new line I get the next sequence number.
Demo: http://sqlfiddle.com/#!4/4d6ff/1
CREATE TABLE Table1
("PO_HEADER" int, "ITEM" varchar2(1), "LINE_NUM" int)
;
INSERT ALL
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'X', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'Y', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'Z', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'A', NULL)
INTO Table1 ("PO_HEADER", "ITEM", "LINE_NUM")
VALUES (1, 'B', NULL)
SELECT * FROM dual
;
create sequence alamakota;
and now:
update table1 set LINE_NUM = alamakota.nextval;
select * from table1;
| PO_HEADER | ITEM | LINE_NUM |
|-----------|------|----------|
| 1 | X | 1 |
| 1 | Y | 2 |
| 1 | Z | 3 |
| 1 | A | 4 |
| 1 | B | 5 |
People usually use a sequence (an Oracle object) which provides uniqueness but is (generally speaking) not gapless. A sequence is easy to implement; you'd either call it during the insert, or create a database trigger which sets the column value to the next sequence number.
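A minimal sketch of the trigger variant, reusing the ALAMAKOTA sequence from the demo above:

CREATE OR REPLACE TRIGGER table1_line_num_trg
BEFORE INSERT ON table1
FOR EACH ROW
WHEN (new.line_num IS NULL)
BEGIN
  -- direct sequence assignment works on 11g+; on older versions
  -- use SELECT alamakota.nextval INTO :new.line_num FROM dual;
  :new.line_num := alamakota.nextval;
END;
/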
If you insist on the "MAX + 1" option, note that it is bound to fail in a multi-user environment: two or more users fetch the same MAX value and, depending on the COMMIT moment and column uniqueness (primary/unique key, unique index), the first one will succeed while everyone else's insert fails.
There's a workaround - an additional table whose sequence number is maintained through an autonomous transaction function.
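A rough sketch of that workaround; the counter table and function names here are illustrative assumptions:

CREATE TABLE po_line_counter (po_header INT PRIMARY KEY, last_num INT NOT NULL);

CREATE OR REPLACE FUNCTION next_line_num (p_po_header IN INT) RETURN INT IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_num INT;
BEGIN
  UPDATE po_line_counter
     SET last_num = last_num + 1
   WHERE po_header = p_po_header
  RETURNING last_num INTO l_num;
  COMMIT; -- the autonomous transaction commits independently of the caller
  RETURN l_num;
END;
/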
If you're on 12c, use the identity column. Otherwise, I'd suggest a sequence.
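For the 12c route, a sketch of the demo table rewritten with an identity column, which removes the need for a separate sequence or trigger:

CREATE TABLE Table1
("PO_HEADER" int, "ITEM" varchar2(1),
 "LINE_NUM" int GENERATED ALWAYS AS IDENTITY);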
I would not use a sequence for this.
If there is a parent table, to which PO_HEADER is a foreign key, grab a lock on the parent row. Then, SELECT MAX(line_num)+1... for that PO and use it. Do not release the lock on the parent table until you commit your insert to TABLE1.
If you do not have a parent table, you can use DBMS_LOCK to accomplish the same thing. (Allocate a user lock representing the PO and lock it in lieu of a parent table).
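A rough sketch of the parent-table approach, assuming a hypothetical parent table PO_HEADERS keyed by PO_HEADER:

DECLARE
  l_po   table1.po_header%TYPE := 1;
  l_next table1.line_num%TYPE;
BEGIN
  -- serialize per PO: a competing session blocks here until we commit
  SELECT po_header INTO l_po
    FROM po_headers
   WHERE po_header = l_po
     FOR UPDATE;

  SELECT NVL(MAX(line_num), 0) + 1
    INTO l_next
    FROM table1
   WHERE po_header = l_po;

  INSERT INTO table1 (po_header, item, line_num)
  VALUES (l_po, 'C', l_next);

  COMMIT; -- releases the parent-row lock
END;
/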
Since your application is well designed and all TABLE1 inserts happen through this code, you know it will work. The lock on the parent table ensures two sessions do not get the same next line number because the lock forces them to act one after the other.
If your application is not so well designed, this won't help you.
I have a table in Oracle with two columns. In the first column, sometimes there are duplicate values that correspond to different values in the second column. How can I write a query that shows only the unique values of the first column and all possible values from the second column?
The table looks somewhat like below
COLUMN_1 | COLUMN_2
NUMBER_1 | 4
NUMBER_2 | 4
NUMBER_3 | 1
NUMBER_3 | 6
NUMBER_4 | 3
NUMBER_4 | 4
NUMBER_4 | 5
NUMBER_4 | 6
You can use LISTAGG() if you are using Oracle 11g or higher, like:
SELECT
COLUMN_1,
LISTAGG(COLUMN_2, '|') WITHIN GROUP (ORDER BY COLUMN_2) "ListValues"
FROM table1
GROUP BY COLUMN_1
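Applied to the sample data above, this should return:

COLUMN_1 | ListValues
NUMBER_1 | 4
NUMBER_2 | 4
NUMBER_3 | 1|6
NUMBER_4 | 3|4|5|6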
Else, see this link for an alternative for lower versions
Oracle equivalent of MySQL group_concat
I have a Hive table with the following fields:
id STRING , x STRING
where x can have values such as 'c'.
I need a query that displays the number of rows where column x contains the value 'c' and the number of rows where x has a value other than 'c'.
id | count(x='c') | count(x<>'c')
---|--------------|--------------
1 | 3 | 7
I don't know if it's possible.
You can try:
SELECT sum(if(x='c',1,0)), sum(if(x!='c',1,0)) FROM table_name;
This will print two columns. I didn't understand the id field in your sample output.
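If the per-id breakdown from the sample output is in fact wanted, a GROUP BY variant might look like this (an assumption, since the role of id is unclear):

SELECT id,
       SUM(IF(x = 'c', 1, 0))  AS cnt_c,
       SUM(IF(x != 'c', 1, 0)) AS cnt_other
FROM table_name
GROUP BY id;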
Let's say users have 1 - n accounts in a system. When they query the database, they may choose to select from m accounts, with m between 1 and n. Typically the SQL generated to fetch their data is something like
SELECT ... FROM ... WHERE account_id IN (?, ?, ..., ?)
So depending on the number of accounts a user has, this will cause a new hard-parse in Oracle, and a new execution plan, etc. Now there are a lot of queries like that, and hence a lot of hard-parses, and the cursor/plan cache may fill up quite early, resulting in even more hard-parses.
Instead, I could also write something like this
-- use any of these
CREATE TYPE numbers AS VARRAY(1000) of NUMBER(38);
CREATE TYPE numbers AS TABLE OF NUMBER(38);
SELECT ... FROM ... WHERE account_id IN (
SELECT column_value FROM TABLE(?)
)
-- or
SELECT ... FROM ... JOIN (
SELECT column_value FROM TABLE(?)
) ON column_value = account_id
And use JDBC to bind a java.sql.Array (i.e. an oracle.sql.ARRAY) to the single bind variable. Clearly, this will result in fewer hard-parses and fewer cursors in the cache for functionally equivalent queries. But is there any general performance drawback, or any other issues that I might run into?
E.g.: does bind variable peeking work in a similar fashion for varrays or nested tables? The amount of data associated with every account may differ greatly.
I'm using Oracle 11g in this case, but I think the question is interesting for any Oracle version.
I suggest you try a plain old join, as in
SELECT Col1, Col2
FROM ACCOUNTS ACCT,
     TABLE TAB
WHERE ACCT.User = :ParamUser
AND TAB.account_id = ACCT.account_id;
An alternative could be a table subquery
SELECT Col1, Col2
FROM (
SELECT account_id
FROM ACCOUNTS
WHERE User = :ParamUser
) ACCT,
TABLE TAB
WHERE TAB.account_id = ACCT.account_id;
or a where subquery
SELECT Col1, Col2
FROM TABLE TAB
WHERE TAB.account_id IN
(
SELECT account_id
FROM ACCOUNTS
WHERE User = :ParamUser
);
The first one should be better for performance, but you had better check them all with explain plan.
Looking at V$SQL_BIND_CAPTURE in a 10g database, I have a few rows where the datatype is VARRAY or NESTED_TABLE; the actual bind values were not captured. In an 11g database, there is just one such row, but it also shows that the bind value is not captured. So I suspect that bind value peeking essentially does not happen for user-defined types.
In my experience, the main problem you run into using nested tables or varrays in this way is that the optimizer does not have a good estimate of the cardinality, which could lead it to generate bad plans. But, there is an (undocumented?) CARDINALITY hint that might be helpful. The problem with that is, if you calculate the actual cardinality of the nested table and include that in the query, you're back to having multiple distinct query texts. Perhaps if you expect that most or all users will have at most 10 accounts, using the hint to indicate that as the cardinality would be helpful. Of course, I'd try it without the hint first, you may not have an issue here at all.
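For illustration only, a hedged sketch of that hint applied to the nested-table form from the question; the table name is an assumption, and the hint itself is undocumented:

SELECT /*+ CARDINALITY(t, 10) */ a.*
  FROM accounts_data a,
       TABLE(CAST(? AS numbers)) t
 WHERE a.account_id = t.column_value;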
(I also think that perhaps Miguel's answer is the right way to go.)
For a medium-sized list (several thousand items) I would use this approach:
First, generate a prepared statement with an XMLTABLE joined to your main table.
For instance:
String myQuery = "SELECT ... "
               + "FROM ACCOUNTS A, "
               + "XMLTABLE('tab/row' passing XMLTYPE(?) COLUMNS id NUMBER path 'id') t "
               + "WHERE A.account_id = t.id";
then loop through your data and build a StringBuffer with this content:
StringBuffer idList = new StringBuffer("<tab><row><id>101</id></row><row><id>907</id></row> ...</tab>");
eventually, prepare and submit your statement, then fetch the results:
PreparedStatement stmt = connection.prepareStatement(myQuery);
stmt.setString(1, idList.toString());
ResultSet rs = stmt.executeQuery();
while (rs.next()) {...}
Using this approach it is also possible to pass a multi-valued list, as in the select statement
SELECT * FROM TABLE t WHERE (t.COL1, t.COL2) in (SELECT X.COL1, X.COL2 FROM X);
In my experience performance is pretty good, and the approach is flexible enough to be used in very complex query scenarios.
The only limit is the size of the string passed to the DB, but I suppose it is possible to use a CLOB in place of a String for an arbitrarily long XML wrapper around the input list.
This problem of binding a variable number of items into an IN list seems to come up a lot in various forms. One option is to concatenate the IDs into a comma-separated string and bind that, then use a bit of a trick to split it into a table you can join against, e.g.:
with bound_inlist
as
(
select
substr(txt,
instr (txt, ',', 1, level ) + 1,
instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 )
as token
from (select ','||:txt||',' txt from dual)
connect by level <= length(:txt)-length(replace(:txt,',',''))+1
)
select *
from bound_inlist a, actual_table b
where a.token = b.token
Bind variable peeking is going to be a problem though.
Does the query plan actually change for a larger number of accounts, i.e. would it be more efficient to move from an index to a full table scan in some cases, or is it borderline? As someone else suggested, you could use the CARDINALITY hint to indicate how many IDs are being bound; the following test case proves this actually works:
create table actual_table (id integer, padding varchar2(100));
create unique index actual_table_idx on actual_table(id);
insert into actual_table
select level, 'this is just some padding for '||level
from dual connect by level <= 1000;
explain plan for
with bound_inlist
as
(
select /*+ CARDINALITY(10) */
substr(txt,
instr (txt, ',', 1, level ) + 1,
instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 )
as token
from (select ','||:txt||',' txt from dual)
connect by level <= length(:txt)-length(replace(:txt,',',''))+1
)
select *
from bound_inlist a, actual_table b
where a.token = b.id;
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 840 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 10 | 840 | 2 (0)| 00:00:01 |
| 3 | VIEW | | 10 | 190 | 2 (0)| 00:00:01 |
|* 4 | CONNECT BY WITHOUT FILTERING| | | | | |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | ACTUAL_TABLE_IDX | 1 | | 0 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | ACTUAL_TABLE | 1 | 65 | 0 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
Another option is to always use n bind variables in every query. Use null for m+1 to n.
Oracle ignores repeated items in the expression_list. Your queries will perform the same way and there will be fewer hard parses. But there will be extra overhead to bind all the variables and transfer the data. Unfortunately I have no idea what the overall effect on performance would be; you'd have to test it.
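A tiny sketch of that padding idea, with n = 5 slots and m = 2 real IDs; the trailing NULLs are safe because account_id = NULL never evaluates to true:

SELECT ... FROM ... WHERE account_id IN (:1, :2, :3, :4, :5);
-- bind :1 => 101, :2 => 207, :3 .. :5 => NULL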