Split the table into equal chunks based on a varchar column - Oracle

I have a huge table with 20 million records and I want to split it into 10 equal chunks.
The problem is that the table only has varchar columns. I am able to use the ROWNUM pseudocolumn to split the table into equal chunks, but I couldn't seem to get the start and end values of the varchar column into the query result set. Below is the query.
with bkt as (
    select rownum as rn,
           width_bucket(rownum, 1, 100100, 10) as id_bucket
    from "BOOKER"."test"
)
select id_bucket
     , min(rn) as bkt_start
     , max(rn) as bkt_end
     , count(*)
from bkt
group by id_bucket
order by 1;
Please advise how I can add the varchar column to this query so it returns the start and end varchar values of the column.
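One approach that should work, sketched here under the assumption that the varchar column is called NAME (substitute the real column name): carry the column through the CTE alongside the aliased ROWNUM, then pick up its value at each bucket's first and last row with KEEP (DENSE_RANK FIRST/LAST):

with bkt as (
    select name,
           rownum as rn,
           width_bucket(rownum, 1, 100100, 10) as id_bucket
    from "BOOKER"."test"
)
select id_bucket
     , min(rn) as bkt_start
     , max(rn) as bkt_end
     , min(name) keep (dense_rank first order by rn) as val_start  -- varchar value at bkt_start
     , max(name) keep (dense_rank last  order by rn) as val_end    -- varchar value at bkt_end
     , count(*)
from bkt
group by id_bucket
order by 1;

Note that ROWNUM reflects an arbitrary fetch order; if the chunks should instead follow the ordering of the varchar column itself, NTILE(10) OVER (ORDER BY name) would be the more natural bucketing.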

Related

Combining CLOB columns in Query

I have a table with a CLOB column. What I need to do is query the table and combine the CLOB column of each row into a single CLOB value.
So, say I have something like:
ABC CLOB_VALUE1
ABC CLOB_VALUE2
ABC CLOB_VALUE3
What I need at output is:
ABC Combined Value (CLOB_VALUE1, CLOB_VALUE2, CLOB_VALUE3)
LISTAGG will not work due to the length, and I'm not having any luck with XMLAGG (unless I am doing it wrong).
I tried this, but it is not retrieving all the records:
SELECT id,
       XMLAGG(XMLELEMENT(E, price_string || ',') ORDER BY price_date)
           .EXTRACT('//text()').getclobval() AS daily_7d_prices
FROM daily_price_coll
WHERE price_date >= TRUNC(SYSDATE) - 7
GROUP BY id;
I'm only getting the most recent row, when there are actually 3 rows in the table.
Any ideas?
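If XMLAGG keeps misbehaving, one hedged alternative is to aggregate the CLOBs in PL/SQL with DBMS_LOB, which has no length ceiling; my_table, id, and clob_value below are placeholder names, not from the question:

create or replace function combine_clobs (p_id in varchar2)
    return clob
is
    l_result clob;
begin
    dbms_lob.createtemporary(l_result, true);
    for r in (select clob_value from my_table where id = p_id) loop
        dbms_lob.append(l_result, r.clob_value);  -- concatenate each row's CLOB
    end loop;
    return l_result;
end combine_clobs;
/

-- usage: select combine_clobs('ABC') from dual;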

How to retrieve workflow attribute values from workflow table?

I have a situation where I need to take values from a table column whose data depends on another column in the same table.
There are two such column values, and they need to be compared with another table.
Scenario:
Column 1 query:
SELECT text_value
FROM WF_ITEM_ATTRIBUTE_VALUES
WHERE name LIKE 'ORDER_ID' --AND number_value IS NOT NULL
AND Item_type LIKE 'ABC'
This query returns 14 unique records.
Column 2 query:
SELECT number_value
FROM WF_ITEM_ATTRIBUTE_VALUES
WHERE name LIKE 'Source_ID' --AND number_value IS NOT NULL
AND Item_type LIKE 'ABC'
This also returns 14 records.
The ORDER_ID from the first query is associated with the Source_ID from the second query. Using these two column values, I want to compare the 14 combined (order_id, source_id) records with the corresponding columns of another table, Sales_tbl: sal_order_id and sal_source_id.
Sample Data from WF_ITEM_ATTRIBUTE_VALUES:
Note: the sales_tbl table holds the same data, but order_id is called sal_order_id and source_id is called sal_source_id.
Order_id: 204994 205000 205348 198517 198176 196856 204225 205348 203510 206528 196886 198971 194076 197940
Source_id: 92262138 92261783 92262005 92262615 92374992 92375051 92374948 92375000 92375011 92336793 92374960 92691360 92695445 92695880
Desired output based on the comparison:
Please help me write the query.
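In WF_ITEM_ATTRIBUTE_VALUES each attribute is a separate row, so the usual pattern is a self-join on ITEM_TYPE and ITEM_KEY to pair ORDER_ID with Source_ID, then a join to the sales table. A sketch, assuming ITEM_KEY is what ties the two attribute rows together and that the datatypes line up:

select o.text_value   as order_id,
       s.number_value as source_id,
       t.*
from   wf_item_attribute_values o
join   wf_item_attribute_values s
       on  s.item_type = o.item_type
       and s.item_key  = o.item_key
       and s.name      = 'Source_ID'
join   sales_tbl t
       on  t.sal_order_id  = o.text_value
       and t.sal_source_id = s.number_value
where  o.item_type = 'ABC'
and    o.name      = 'ORDER_ID';

Note that LIKE without a wildcard behaves like =, so the equality form is used here.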

How to select multiple column names (not values) in a table when a column value is specified - Oracle

I have a table which has around 30 columns. Out of those 30 columns, I need to retrieve the names of around 25 columns where the value is set to some specified value (say 1).
I am not able to find a way to do this. Will multiple conditional expressions work, as below?
select case when columnname1 = '1' then 'columnname1' end,
       case when columnname2 = '1' then 'columnname2' end
from my_table;
In case the value is not set to 1, I don't want to retrieve the column name.
The query below can give me the column names, but I can't filter on the value with it:
select distinct column_name from all_tab_columns where table_name = 'TABLE_NAME';
Does this work for you?
SELECT column_name
FROM all_tab_columns
WHERE table_name = 'TABLE_NAME'
  AND INSTR(column_name, '1') > 0;
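Be aware that the data-dictionary query above matches column names containing the character '1', not columns whose value is 1. To test values, the row has to be unpivoted; a sketch assuming the columns share a datatype (columnname1..columnname3 and my_table stand in for the real names):

select col_name
from   my_table
unpivot (col_value for col_name in (columnname1, columnname2, columnname3))
where  col_value = '1';

UNPIVOT turns each listed column into its own row, with the column name in col_name, so the WHERE clause can filter on the value and return only the matching names.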

Oracle finding last row inserted

Say I have a table whose values (which are varchar) are:
values
a
o
g
t
And I insert a new value called V:
values
V
a
o
g
t
Is there a way or a query that can identify the last value inserted into the column? The desired query would be something like:
select * from dual where rown_num = count(*) -- just an example
and the result would be V.
Rows in a table have no inherent order. rownum is a pseudocolumn assigned as part of the select, so it isn't useful here. There is no way to tell where in storage a new row will physically be placed, so you can't rely on rowid, for example.
The only way to do this reliably is to have a timestamp column (maybe set by a trigger so you don't have to worry about it). That would let you order the rows by timestamp and find the row with the highest (most recent) timestamp.
You are still restricted by the precision of the timestamp, as I discovered creating a SQL Fiddle demo; without forcing a small gap between the inserts the timestamps were all the same, but then it only seems to support `timestamp(3)`. That probably won't be a significant issue in the real world, unless you're doing bulk inserts, but then "the last row inserted" is still a bit of an arbitrary concept.
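A minimal sketch of that timestamp approach, with my_tab standing in for the real table (a trigger could populate the column instead of the default, as suggested above):

alter table my_tab add (created_at timestamp(6) default systimestamp);

select data
from (
    select data, row_number() over (order by created_at desc) as rn
    from my_tab
)
where rn = 1;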
As quite correctly pointed out in the comments, if the actual time doesn't need to be known, a numeric field populated by a sequence would be more reliable and performant; there is another SQL Fiddle demo here, and this is the gist:
create table t42 (data varchar2(10), id number);
create sequence seq_t42;

create trigger bi_t42
before insert on t42
for each row
begin
    :new.id := seq_t42.nextval;
end;
/

insert into t42 (data) values ('a');
insert into t42 (data) values ('o');
insert into t42 (data) values ('g');
insert into t42 (data) values ('t');
insert into t42 (data) values ('V');

select data
from (
    select data, row_number() over (order by id desc) as rn
    from t42
)
where rn = 1;

Adding new column and index to a table with a billion records

I want to add a new column to a table with a billion records. To speed up the select statement, I need to add a new index which will contain this column and the PK column.
How long will it take to add a new index to a billion-record table?
The new column, for example [Field], will hold the values 0, 1, 2, or 9, and most records will be 9. The conditions Field=0, Field=1, and Field=2 will be used in selects, but Field=9 will not. For example, in the billion-record table:
Field with value 0: 100,000 records
Field with value 1: 100,000 records
Field with value 2: 100,000 records
Field with value 9: a billion - 300,000 records
Should I create an index on the column? If not, will a select containing the condition Field=0 be too slow to return results?
If most of the values are 9s, then you can avoid including them in the index with:
create index my_index on my_table (case column_name when 9 then null else column_name end);
Then query with, for example:
select ...
from ...
where case column_name when 9 then null else column_name end = 2;
The time taken will be the time required to scan the entire table, then sort the 300,000 records that will go in the index. Faster with a parallel index build, of course.
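A sketch of the build itself, reusing the names from the answer (a parallel degree of 8 is an arbitrary example):

create index my_index
    on my_table (case column_name when 9 then null else column_name end)
    parallel 8;

alter index my_index noparallel;  -- reset so later query plans aren't parallelised by default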
