I have a table:
+-------+-------+----------+
| GROUP | State | Priority |
+-------+-------+----------+
| 1     | MI    | 1        |
| 1     | IA    | 2        |
| 1     | CA    | 3        |
| 1     | ND    | 4        |
| 1     | AZ    | 5        |
| 2     | IA    | 2        |
| 2     | NJ    | 1        |
| 2     | NH    | 3        |
+-------+-------+----------+
And so on...
How do I write a query that builds all the cumulative sets of states for each group, in priority order? Like so:
+-------+--------------------+
| GROUP | SET |
+-------+--------------------+
| 1 | MI |
| 1 | MI, IA |
| 1 | MI, IA, CA |
| 1 | MI, IA, CA, ND |
| 1 | MI, IA, CA, ND, AZ |
| 2 | NJ |
| 2 | NJ, IA |
| 2 | NJ, IA, NH |
+-------+--------------------+
This is similar to my question here and I've tried to modify that solution but, I'm just a forty watt bulb and it's a sixty watt problem...
This problem actually looks simpler than the answer to the question you linked, which is an excellent solution to that problem. Nevertheless, this uses the same hierarchical queries, with CONNECT BY.
If priority is always a continuous sequence of numbers, this will work:
SELECT t.grp, level, ltrim(SYS_CONNECT_BY_PATH(state,','),',') as "set"
from t
start with priority = 1
connect by priority = prior priority + 1
and grp = prior grp
However, if that's not always true, we need ROW_NUMBER() to define the sequence based on the order of priority (which need not be consecutive integers):
with t2 AS
(
select t.*, row_number()
over ( partition by grp order by priority) as rn from t
)
SELECT t2.grp, ltrim(SYS_CONNECT_BY_PATH(state,','),',') as "set"
from t2
start with rn = 1
connect by rn = prior rn + 1
and grp = prior grp
DEMO
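For readers not on Oracle, the same cumulative paths can be built with a standard recursive CTE. A minimal sketch using Python's bundled SQLite (the table name `t` and column `grp` mirror the answer's query, not the asker's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (grp INTEGER, state TEXT, priority INTEGER);
    INSERT INTO t VALUES
        (1, 'MI', 1), (1, 'IA', 2), (1, 'CA', 3), (1, 'ND', 4), (1, 'AZ', 5),
        (2, 'IA', 2), (2, 'NJ', 1), (2, 'NH', 3);
""")

# Each recursive step appends the state with the next priority to the path,
# mirroring CONNECT BY priority = PRIOR priority + 1 AND grp = PRIOR grp.
rows = conn.execute("""
    WITH RECURSIVE paths (grp, priority, path) AS (
        SELECT grp, priority, state FROM t WHERE priority = 1
        UNION ALL
        SELECT t.grp, t.priority, p.path || ', ' || t.state
        FROM t JOIN paths p ON t.grp = p.grp AND t.priority = p.priority + 1
    )
    SELECT grp, path FROM paths ORDER BY grp, priority
""").fetchall()

for grp, path in rows:
    print(grp, path)
```

This prints the eight cumulative rows from the question, `1 MI` through `2 NJ, IA, NH`.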
I realize this has already been answered, but I wanted to see if I could do this using ANSI-standard syntax. CONNECT BY is an Oracle-only feature; the following will work on multiple databases:
WITH
-- ASET is just setting up the sample dataset
aset AS
(SELECT 1 AS grp, 'MI' AS state, 1 AS priority FROM DUAL
UNION ALL
SELECT 1 AS grp, 'IA', 2 FROM DUAL
UNION ALL
SELECT 1 AS grp, 'CA', 3 FROM DUAL
UNION ALL
SELECT 1 AS grp, 'ND', 4 FROM DUAL
UNION ALL
SELECT 1 AS grp, 'AZ', 5 FROM DUAL
UNION ALL
SELECT 2 AS grp, 'IA', 2 FROM DUAL
UNION ALL
SELECT 2 AS grp, 'NJ', 1 FROM DUAL
UNION ALL
SELECT 2 AS grp, 'NH', 3 FROM DUAL),
bset AS
-- In BSET we convert the ASET records into comma separated values
( SELECT grp, LISTAGG( state, ',' ) WITHIN GROUP (ORDER BY priority) AS set1
FROM aset
GROUP BY grp),
cset ( grp
, set1
, set2
, pos ) AS
-- CSET breaks our comma separated values up into multiple rows
-- Each row adding the next CSV value
(SELECT grp AS grp
, set1 AS set1
, SUBSTR( set1 || ',', 1, INSTR( set1 || ',', ',' ) - 1 ) AS set2
, 1 AS pos
FROM bset
UNION ALL
SELECT grp AS grp
, set1 AS set1
, SUBSTR( set1 || ','
, 1
, INSTR( set1 || ','
, ','
, 1
, pos + 1 )
- 1 ) AS set2
, pos + 1 AS pos
FROM cset
WHERE INSTR( set1 || ','
, ','
, 1
, pos + 1 ) > 0)
SELECT grp, set2
FROM cset
ORDER BY grp, pos;
I have the following table.
Table 1
+---------+-----------+
| col_key | col_value |
+---------+-----------+
| key1 | value1 |
| key1 | value2 |
| key1 | value3 |
| key1 | value2 |
| key1 | value2 |
| key1 | value2 |
| key1 | value1 |
| key1 | value1 |
| key1 | value3 |
| key1 | value3 |
| key1 | value3 |
| key1 | value2 |
+---------+-----------+
I want to get the following output:
Table 2
+---------+--------------------------------------------------------------+
| col_key | col_value                                                    |
+---------+--------------------------------------------------------------+
| key1    | value1 | value2 | value3 | value2 | value1 | value3 | value2 |
+---------+--------------------------------------------------------------+
Algorithm: if consecutive values are the same, merge them into one.
I am using Oracle and listagg function:
select
col_key,
listagg(distinct col_value, ' | ') within group (order by col_key) as col_value
from
sample_table
group by col_key
order by col_key
But LISTAGG with the DISTINCT keyword removes all duplicates, not just consecutive ones.
So, is it possible to make this in Oracle (like in table 2)?
Oracle versions (12c and 18c)
From Oracle 12, you can use MATCH_RECOGNIZE to perform row-by-row comparisons and aggregate the adjacent duplicates and then you can use LISTAGG to aggregate the unique values:
SELECT col_key,
LISTAGG(value, ' | ') WITHIN GROUP (ORDER BY mno) AS col_value
FROM (SELECT t.*,
ROWNUM AS rn -- You need to provide a way of getting this order!
FROM table1 t)
MATCH_RECOGNIZE(
PARTITION BY col_key
ORDER BY rn
MEASURES
MATCH_NUMBER() AS mno,
FIRST(col_value) AS value
PATTERN (same_value+)
DEFINE
same_value AS FIRST(col_value) = col_value
)
GROUP BY col_key;
Which, for the sample data:
CREATE TABLE Table1 (col_key, col_value) AS
SELECT 'key1', 'value1' FROM DUAL UNION ALL
SELECT 'key1', 'value2' FROM DUAL UNION ALL
SELECT 'key1', 'value3' FROM DUAL UNION ALL
SELECT 'key1', 'value2' FROM DUAL UNION ALL
SELECT 'key1', 'value2' FROM DUAL UNION ALL
SELECT 'key1', 'value2' FROM DUAL UNION ALL
SELECT 'key1', 'value1' FROM DUAL UNION ALL
SELECT 'key1', 'value1' FROM DUAL UNION ALL
SELECT 'key1', 'value3' FROM DUAL UNION ALL
SELECT 'key1', 'value3' FROM DUAL UNION ALL
SELECT 'key1', 'value3' FROM DUAL UNION ALL
SELECT 'key1', 'value2' FROM DUAL;
Outputs:
COL_KEY  COL_VALUE
-------  ------------------------------------------------------------
key1     value1 | value2 | value3 | value2 | value1 | value3 | value2
db<>fiddle here
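Outside the database, the same adjacent-duplicate collapse that MATCH_RECOGNIZE performs here is a one-liner with `itertools.groupby`; a sketch using the question's values:

```python
from itertools import groupby

values = ['value1', 'value2', 'value3', 'value2', 'value2', 'value2',
          'value1', 'value1', 'value3', 'value3', 'value3', 'value2']

# groupby() collapses runs of adjacent equal values, which is exactly
# what the same_value+ pattern does in the MATCH_RECOGNIZE clause.
collapsed = ' | '.join(key for key, _run in groupby(values))
print(collapsed)
# value1 | value2 | value3 | value2 | value1 | value3 | value2
```

As with the SQL, this assumes the list is already in the intended order; `groupby` only merges *adjacent* duplicates.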
You said you'd edit the question and provide a sorting column; so far you haven't, so I used ROWID instead. Change it to your own column (once you find out which one to use).
Read comments within code.
with
temp as
  -- find COL_VALUE and the value that follows it (NEXT_VALUE in this query;
  -- as I already said, I used ROWID for sorting purposes)
  (select col_key,
          col_value,
          rowid rid,
          lead(col_value) over (partition by col_key order by rowid) next_value
   from sample_table
  ),
temp2 as
  -- "new" COL_VALUE will be the "original" COL_VALUE if it is different from its
  -- next value (or - for the last row - if there's no next value)
  (select col_key,
          rid,
          case when col_value <> next_value or next_value is null then col_value
               else null
          end col_value
   from temp
  )
-- finally, aggregate the result
select col_key,
       listagg(col_value, ' | ') within group (order by rid) col_value
from temp2
group by col_key;

COL_KEY    COL_VALUE
---------- ------------------------------------------------------------
key1       value1 | value2 | value3 | value2 | value1 | value3 | value2
Here's another take on it:
WITH
cte1 AS (SELECT COL_KEY,
COL_VALUE,
LAG(COL_VALUE) OVER (ORDER BY ROWNUM) AS PREV_COL_VALUE
FROM TEST_TAB),
cte2 AS (SELECT COL_KEY,
COL_VALUE,
CASE
WHEN COL_VALUE = PREV_COL_VALUE THEN 0
ELSE 1
END AS FLAG
FROM cte1),
cte3 AS (SELECT COL_KEY,
COL_VALUE
FROM cte2
WHERE FLAG = 1)
SELECT COL_KEY,
LISTAGG(COL_VALUE, ' | ') WITHIN GROUP (ORDER BY COL_KEY, ROWNUM) AS COL_VALUES
FROM cte3
GROUP BY COL_KEY
This produces:
COL_KEY COL_VALUES
------- ------------------------------------------------------------
key1 value1 | value2 | value3 | value2 | value1 | value3 | value2
which matches your desired output, but without an ordering column in your data you're at the mercy of the database and how it chooses to return the data. Remember: a basic rule of relational databases is that tables are unordered collections of rows, and there must be an ORDER BY clause to impose an order on those rows when they are returned by a query.
db<>fiddle here
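The LAG-and-flag approach in this answer ports to any engine with window functions. A sketch using Python's bundled SQLite (3.25+), with an explicit `seq` column standing in for ROWNUM, since SQLite rows have no reliable implicit order either:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_tab (col_key TEXT, col_value TEXT, seq INTEGER)")
vals = ['value1', 'value2', 'value3', 'value2', 'value2', 'value2',
        'value1', 'value1', 'value3', 'value3', 'value3', 'value2']
conn.executemany("INSERT INTO test_tab VALUES ('key1', ?, ?)",
                 [(v, i) for i, v in enumerate(vals)])

# Keep a row only when it differs from the previous row (or has no previous
# row), then aggregate in order -- the same LAG-and-flag idea as the CTEs above.
rows = conn.execute("""
    WITH flagged AS (
        SELECT col_value, seq,
               LAG(col_value) OVER (PARTITION BY col_key ORDER BY seq) AS prev_val
        FROM test_tab
    )
    SELECT col_value FROM flagged
    WHERE prev_val IS NULL OR col_value <> prev_val
    ORDER BY seq
""").fetchall()

result = ' | '.join(v for (v,) in rows)
print(result)
```

The join is done in Python because SQLite's `GROUP_CONCAT` does not guarantee aggregation order the way `LISTAGG ... WITHIN GROUP` does.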
I have a column in database which stores data like follows
{0}="[DYD666020115982-ZO]",{1}="SomeText"
I want to get the 0th value, i.e. DYD666020115982-ZO, printed in the output.
I have tried SUBSTR('{0}="[DYD666020115982-ZO]",{1}="SomeText"', 7, 18), which gives me the output, but I wanted to know if there is any other way without hard-coding the position.
The old-fashioned SUBSTR + INSTR combination works well and fast (res_1).
A regular expression (res_2), along with trims (because of the square brackets), is another option:
with test (col) as
  (select '{0}="[DYD666020115982-ZO]",{1}="SomeText"' from dual)
select substr(col, instr(col, '[') + 1,
              instr(col, ']') - instr(col, '[') - 1
       ) res_1,
       --
       rtrim(ltrim(regexp_substr(col, '\[.+\]'), '['), ']') res_2
from test;

RES_1              RES_2
------------------ ------------------
DYD666020115982-ZO DYD666020115982-ZO
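The same two approaches translate directly to string slicing and a regular expression, here sketched in Python for comparison:

```python
import re

s = '{0}="[DYD666020115982-ZO]",{1}="SomeText"'

# SUBSTR + INSTR equivalent: slice between the first '[' and the first ']'
res_1 = s[s.index('[') + 1 : s.index(']')]

# REGEXP_SUBSTR equivalent: capture what the brackets enclose
res_2 = re.search(r'\[(.+)\]', s).group(1)

print(res_1, res_2)
# DYD666020115982-ZO DYD666020115982-ZO
```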
You can use REGEXP_SUBSTR to match the pattern (^|,)\{0\}="(([^"]|\\")*?)"(,|$):
Oracle Setup:
CREATE TABLE test_data ( id, list ) AS
SELECT 1, '{0}="[DYD666020115982-ZO]",{1}="SomeText"' FROM DUAL UNION ALL
SELECT 2, '{1}="SomeText with a \"Quote\"",{0}="[DYD666020115982-ZO]"' FROM DUAL UNION ALL
SELECT 3, '{1}="SomeText"' FROM DUAL UNION ALL
SELECT 4, '{0}="[DYD666020115982-ZO-2]",{1}="SomeText",{0}="[DYD666020115982-ZO-1]"' FROM DUAL UNION ALL
SELECT 5, '{1}="{0}=\"[DYD666020115982-ZO]\""' FROM DUAL;
Query:
SELECT REGEXP_SUBSTR( list, '(^|,)\{0\}="(([^"]|\\")*?)"(,|$)', 1, 1, NULL, 2 ) AS value
FROM test_data
Output:
| VALUE |
| :--------------------- |
| [DYD666020115982-ZO] |
| [DYD666020115982-ZO] |
| null |
| [DYD666020115982-ZO-2] |
| null |
or, if you want to strip the leading and trailing characters then:
SELECT SUBSTR( value, 2, LENGTH( value ) - 2 ) AS value
FROM (
SELECT REGEXP_SUBSTR( list, '(^|,)\{0\}="(([^"]|\\")*?)"(,|$)', 1, 1, NULL, 2 ) AS value
FROM test_data
)
which outputs:
| VALUE |
| :------------------- |
| DYD666020115982-ZO |
| DYD666020115982-ZO |
| null |
| DYD666020115982-ZO-2 |
| null |
If you want to get all the values then you can use a recursive sub-query and REGEXP_SUBSTR:
Query:
WITH data ( id, list, key, value, idx, max_idx ) AS (
SELECT id,
list,
TO_NUMBER( REGEXP_SUBSTR( list, '\{(\d+)\}="(([^"]|\\")*?)"(,|$)', 1, 1, NULL, 1 ) ),
REGEXP_SUBSTR( list, '\{(\d+)\}="(([^"]|\\")*?)"(,|$)', 1, 1, NULL, 2 ),
1,
REGEXP_COUNT( list, '\{(\d+)\}="(([^"]|\\")*?)"(,|$)' )
FROM test_data
UNION ALL
SELECT id,
list,
TO_NUMBER( REGEXP_SUBSTR( list, '\{(\d+)\}="(([^"]|\\")*?)"(,|$)', 1, idx + 1, NULL, 1 ) ),
REGEXP_SUBSTR( list, '\{(\d+)\}="(([^"]|\\")*?)"(,|$)', 1, idx + 1, NULL, 2 ),
idx + 1,
max_idx
FROM data
WHERE idx < max_idx
)
SELECT id, key, value
FROM data
WHERE idx <= max_idx
ORDER BY id, key
Output:
ID | KEY | VALUE
-: | --: | :---------------------------
1 | 0 | [DYD666020115982-ZO]
1 | 1 | SomeText
2 | 0 | [DYD666020115982-ZO]
2 | 1 | SomeText with a \"Quote\"
3 | 1 | SomeText
4 | 0 | [DYD666020115982-ZO-2]
4 | 0 | [DYD666020115982-ZO-1]
4 | 1 | SomeText
5 | 1 | {0}=\"[DYD666020115982-ZO]\"
If you just want the values where the key is 0 then add AND key = 0 to the end of the query.
db<>fiddle here
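For reference, the same pair extraction can be sketched in Python. Note one assumed adjustment: a lookahead `(?=,|$)` replaces the trailing `(,|$)` group, because `re.finditer` would otherwise consume the comma that begins the next pair:

```python
import re

# Key/value pairs of the form {n}="...", where \" inside the value is an
# escaped quote. Group 1 is the numeric key, group 2 the raw value.
PAIR = re.compile(r'(?:^|,)\{(\d+)\}="((?:[^"]|\\")*?)"(?=,|$)')

def all_pairs(s):
    return [(int(m.group(1)), m.group(2)) for m in PAIR.finditer(s)]

s = '{1}="SomeText with a \\"Quote\\"",{0}="[DYD666020115982-ZO]"'
print(all_pairs(s))
```

Filtering this list for key 0 gives the same result as the `AND key = 0` suggestion above.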
How do I convert the SQL Server recursive query below to Vertica? I know that Vertica does not support recursive queries. I tried using SUM() OVER with LAG, but I am still not able to achieve the final expected output.
with Product as (
select * from (
VALUES
(1, '2018-12-25','2019-01-05' ),
(1, '2019-03-01','2019-03-10' ),
(1, '2019-03-15','2019-03-19' ),
(1, '2019-03-22','2019-03-28' ),
(1, '2019-03-30','2019-04-02' ),
(1, '2019-04-10','2019-04-15' ),
(1, '2019-04-18','2019-04-25' )
) as a1 (ProductId ,ProductStartDt ,ProductEndDt)
), OrderedProduct as (
select *, ROW_NUMBER() over (order by ProductStartDt) as RowNum
from Product
), DateGroupsInterim (RowNum, GroupNum, GrpStartDt, Indx) as (
select RowNum, 1, ProductEndDt,1
from OrderedProduct
where RowNum=1
union all
select OrderedProduct.RowNum,
CASE WHEN OrderedProduct.ProductStartDt <= dateadd(day, 15, dgi.GrpStartDt)
THEN dgi.GroupNum
ELSE dgi.GroupNum + 1
END,
CASE WHEN OrderedProduct.ProductStartDt <= dateadd(day, 15, dgi.GrpStartDt)
THEN dgi.GrpStartDt
ELSE OrderedProduct.ProductEndDt
END,
CASE WHEN OrderedProduct.ProductStartDt <= dateadd(day, 15, dgi.GrpStartDt)
THEN 0
ELSE 1
END
from DateGroupsInterim dgi
join OrderedProduct on OrderedProduct.RowNum=dgi.RowNum+1
) select OrderedProduct.ProductId, OrderedProduct.ProductStartDt, OrderedProduct.ProductEndDt, DateGroupsInterim.GrpStartDt, DateGroupsInterim.GroupNum, Indx
from DateGroupsInterim
JOIN OrderedProduct on OrderedProduct.RowNum = DateGroupsInterim.RowNum
order by 2
Below is what the expected output looks like.
The operation you want to do is also called "sessionization": the operation of splitting a time series into groups / sub-series that belong together.
The way you describe it, it does not seem to be possible:
The next group relies on both the start of its previous group (15 days later than the end date of the previous group's first row) and the end of the previous group's last row. This needs a loop or a recursion, which is not offered by Vertica.
I managed to join the table with itself and get a session id for consecutive rows within 15 days. But, as of now, they're overlapping, and I found no way to determine which group I want to keep...
Like so:
WITH product(productid ,productstartdt ,productenddt) AS (
SELECT 1, DATE '2018-12-25',DATE '2019-01-05'
UNION ALL SELECT 1, DATE '2019-03-01',DATE '2019-03-10'
UNION ALL SELECT 1, DATE '2019-03-15',DATE '2019-03-19'
UNION ALL SELECT 1, DATE '2019-03-22',DATE '2019-03-28'
UNION ALL SELECT 1, DATE '2019-03-30',DATE '2019-04-02'
UNION ALL SELECT 1, DATE '2019-04-10',DATE '2019-04-15'
UNION ALL SELECT 1, DATE '2019-04-18',DATE '2019-04-25'
)
,
groups AS (
SELECT
a.productstartdt AS in_productstartdt
, b.*
, CONDITIONAL_CHANGE_EVENT(a.productstartdt) OVER(PARTITION BY a.productid ORDER BY a.productstartdt) AS grp
FROM product a
LEFT JOIN product b
ON a.productid = b.productid
AND a.productstartdt <= b.productstartdt
AND (a.productstartdt=b.productstartdt OR b.productstartdt <= a.productenddt + 15)
)
SELECT * FROM groups;
-- out in_productstartdt | productid | productstartdt | productenddt | grp
-- out -------------------+-----------+----------------+--------------+-----
-- out 2018-12-25 | 1 | 2018-12-25 | 2019-01-05 | 0
-- out 2019-03-01 | 1 | 2019-03-01 | 2019-03-10 | 1
-- out 2019-03-01 | 1 | 2019-03-22 | 2019-03-28 | 1
-- out 2019-03-01 | 1 | 2019-03-15 | 2019-03-19 | 1
-- out 2019-03-15 | 1 | 2019-03-15 | 2019-03-19 | 2
-- out 2019-03-15 | 1 | 2019-03-22 | 2019-03-28 | 2
-- out 2019-03-15 | 1 | 2019-03-30 | 2019-04-02 | 2
-- out 2019-03-22 | 1 | 2019-03-22 | 2019-03-28 | 3
-- out 2019-03-22 | 1 | 2019-03-30 | 2019-04-02 | 3
-- out 2019-03-22 | 1 | 2019-04-10 | 2019-04-15 | 3
-- out 2019-03-30 | 1 | 2019-04-10 | 2019-04-15 | 4
-- out 2019-03-30 | 1 | 2019-03-30 | 2019-04-02 | 4
-- out 2019-04-10 | 1 | 2019-04-10 | 2019-04-15 | 5
-- out 2019-04-10 | 1 | 2019-04-18 | 2019-04-25 | 5
-- out 2019-04-18 | 1 | 2019-04-18 | 2019-04-25 | 6
-- out (15 rows)
-- out
-- out Time: First fetch (15 rows): 35.454 ms. All rows formatted: 35.503 ms
The next difficulty is how to get rid of grp-s 2, 3 and 5...
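Since the grouping truly depends on the previous group's anchor, one practical workaround is to compute the group numbers procedurally, client-side or in a UDx, rather than in SQL. A sketch of the loop the recursive CTE expresses, assuming rows sorted by start date and the 15-day window from the original `dateadd(day, 15, ...)`:

```python
from datetime import date, timedelta

rows = [(date(2018, 12, 25), date(2019, 1, 5)),
        (date(2019, 3, 1),  date(2019, 3, 10)),
        (date(2019, 3, 15), date(2019, 3, 19)),
        (date(2019, 3, 22), date(2019, 3, 28)),
        (date(2019, 3, 30), date(2019, 4, 2)),
        (date(2019, 4, 10), date(2019, 4, 15)),
        (date(2019, 4, 18), date(2019, 4, 25))]  # sorted by start date

groups = [1]
grp_start = rows[0][1]           # the group anchor is its first row's end date
for start, end in rows[1:]:
    if start <= grp_start + timedelta(days=15):
        groups.append(groups[-1])         # within 15 days of anchor: same group
    else:
        groups.append(groups[-1] + 1)     # too far: open a new group...
        grp_start = end                   # ...anchored at this row's end date
print(groups)
# [1, 2, 2, 2, 3, 3, 4]
```

The printed list matches the GroupNum column the SQL Server query produces for this data.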
TABLE: HIST
CUSTOMER MONTH PLAN
1 1 A
1 2 B
1 2 C
1 3 D
If I query:
select h.*, lead(plan) over (partition by customer order by month) np from HIST h
I get:
CUSTOMER MONTH PLAN np
1 1 A B
1 2 B C
1 2 C D
1 3 D (null)
But I wanted
CUSTOMER MONTH PLAN np
1 1 A B
1 2 B D
1 2 C D
1 3 D (null)
Reason being, next month to 2 is 3, with D. I'm guessing partition by customer order by month doesn't work the way I thought.
Is there a way to achieve this in Oracle 12c?
One way to do it is to use a RANGE window with the MIN analytic function. Like this:
select h.*,
min(plan) over
(partition by customer
order by month
range between 1 following and 1 following) np
from HIST h;
+----------+-------+------+----+
| CUSTOMER | MONTH | PLAN | NP |
+----------+-------+------+----+
| 1 | 1 | A | B |
| 1 | 2 | B | D |
| 1 | 2 | C | D |
| 1 | 3 | D | |
+----------+-------+------+----+
When you use a RANGE window, you are telling Oracle to build the window based on the values of the column you are ordering by, rather than on the physical rows.
So, e.g.,
ROWS BETWEEN 1 following and 1 following
... will make a window containing the next row.
RANGE BETWEEN 1 following and 1 following
... will make a window containing all the rows having the next value for month.
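The ROWS vs RANGE distinction can be illustrated with a small pure-Python sketch over the sample (month, plan) rows:

```python
rows = [(1, 'A'), (2, 'B'), (2, 'C'), (3, 'D')]  # (month, plan), one customer

def np_rows(i):
    # ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING: just the next physical row
    return rows[i + 1][1] if i + 1 < len(rows) else None

def np_range(i):
    # RANGE BETWEEN 1 FOLLOWING AND 1 FOLLOWING: every row whose month is
    # exactly current month + 1; MIN then picks the smallest plan in that window
    window = [plan for month, plan in rows if month == rows[i][0] + 1]
    return min(window) if window else None

print([np_rows(i) for i in range(4)])   # ['B', 'C', 'D', None]
print([np_range(i) for i in range(4)])  # ['B', 'D', 'D', None]
```

The RANGE list matches the NP column in the answer's first result table; the ROWS list shows why a row-based window gives the wrong 'C' for month 2.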
UPDATE
If it is possible that some values for MONTH might be skipped for a given customer, you can use this variant:
select h.*,
first_value(plan) over
(partition by customer
order by month
range between 1 following and unbounded following) np
from HIST h;
+----------+-------+------+----+
| CUSTOMER | MONTH | PLAN | NP |
+----------+-------+------+----+
| 1 | 1 | A | B |
| 1 | 3 | B | D |
| 1 | 3 | C | D |
| 1 | 4 | D | |
+----------+-------+------+----+
You can use LAG/LEAD twice. The first time to check for duplicate months and to set the value to NULL in those months and the second time use IGNORE NULLS to get the next monthly value.
It has the additional benefit that if months are skipped then it will still find the next value.
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE HIST ( CUSTOMER, MONTH, PLAN ) AS
SELECT 1, 1, 'A' FROM DUAL UNION ALL
SELECT 1, 2, 'B' FROM DUAL UNION ALL
SELECT 1, 2, 'C' FROM DUAL UNION ALL
SELECT 1, 3, 'D' FROM DUAL UNION ALL
SELECT 2, 1, 'E' FROM DUAL UNION ALL
SELECT 2, 1, 'F' FROM DUAL UNION ALL
SELECT 2, 3, 'G' FROM DUAL UNION ALL
SELECT 2, 5, 'H' FROM DUAL;
Query 1:
SELECT CUSTOMER,
MONTH,
PLAN,
LEAD( np ) IGNORE NULLS OVER ( PARTITION BY CUSTOMER ORDER BY MONTH, PLAN, ROWNUM ) AS np
FROM (
SELECT h.*,
CASE MONTH
WHEN LAG( MONTH ) OVER ( PARTITION BY CUSTOMER ORDER BY MONTH, PLAN, ROWNUM )
THEN NULL
ELSE PLAN
END AS np
FROM hist h
)
Results:
| CUSTOMER | MONTH | PLAN | NP |
|----------|-------|------|--------|
| 1 | 1 | A | B |
| 1 | 2 | B | D |
| 1 | 2 | C | D |
| 1 | 3 | D | (null) |
| 2 | 1 | E | G |
| 2 | 1 | F | G |
| 2 | 3 | G | H |
| 2 | 5 | H | (null) |
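The two-pass logic can be sketched procedurally; pass 1 is the LAG step and pass 2 is the LEAD ... IGNORE NULLS step (plain Python here, just to show the mechanics):

```python
rows = [(1, 'A'), (2, 'B'), (2, 'C'), (3, 'D')]  # (month, plan), one customer

# Pass 1 (the LAG step): blank out the plan whenever the month repeats the
# previous row's month, so each month keeps exactly one non-null marker.
marked = [plan if i == 0 or month != rows[i - 1][0] else None
          for i, (month, plan) in enumerate(rows)]

# Pass 2 (the LEAD ... IGNORE NULLS step): for each row, take the next
# non-null marker further down the list.
def next_non_null(i):
    return next((v for v in marked[i + 1:] if v is not None), None)

np = [next_non_null(i) for i in range(len(rows))]
print(np)  # ['B', 'D', 'D', None]
```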
Just so that it is listed here as an option for Oracle 12c (onward), you can use an apply operator for this style of problem
select
h.customer, h.month, h.plan, oa.np
from hist h
outer apply (
select
h2.plan as np
from hist h2
where h2.customer = h.customer
and h2.month > h.month
order by month
fetch first 1 rows only
) oa
order by
h.customer, h.month, h.plan
I don't know of any Oracle 12c public fiddles, so here is an example in SQL Server: http://sqlfiddle.com/#!18/cd95e/1
| customer | month | plan | np |
|----------|-------|------|--------|
| 1 | 1 | A | C |
| 1 | 2 | B | D |
| 1 | 2 | C | D |
| 1 | 3 | D | (null) |
I have a table that looks like the following.
create table Testing(
inv_num varchar2(100),
po_num varchar2(100),
line_num varchar2(100)
)
with the following data:
Insert into Testing (INV_NUM,PO_num,line_num) values ('19782594','P0254836',1);
Insert into Testing (INV_NUM,PO_num,line_num) values ('19782594','P0254836',1);
Insert into Testing (INV_NUM,PO_num,line_num) values ('19968276','P0254836',1);
Insert into Testing (INV_NUM,PO_num,line_num) values ('19968276','P0254836',1);
What I'm trying to do is identify the items within the table with the same PO_NUM but a different INV_NUM.
I have tried this:
SELECT
T1.inv_num,
T1.Po_num,
T1.LINE_num ,
count(*) over( partition by
T1.inv_num)myRecords
FROM testing T1
where T1.Po_num = 'P0254836'
group by
T1.inv_num,
T1.Po_num,
T1.LINE_num
order by t1.inv_num
but this does not give me the desired end result.
I would like to end up with the following:
INV_NUM PO_NUM LINE_NUM Myrecords
19782594 P0254836 1 1
19782594 P0254836 1 1
19968276 P0254836 1 2
19968276 P0254836 1 2
Where am I going wrong? I'd really like to identify the change in INV_NUM for that PO.
Please be aware this is part of a much larger project and I have only picked a small subset to show here.
Updated:
SELECT
inv_num
, po_num
, line_num
, DENSE_RANK() OVER (ORDER BY inv_num) "MyRecords"
FROM (
SELECT
po_num
, inv_num
, line_num
, COUNT(line_num) OVER (PARTITION BY po_num, inv_num ORDER BY NULL) cnt
FROM testing
)
WHERE cnt > 1;
returns
| INV_NUM | PO_NUM | LINE_NUM | MYRECORDS |
|----------|----------|----------|-----------|
| 19782594 | P0254836 | 1 | 1 |
| 19782594 | P0254836 | 1 | 1 |
| 19968276 | P0254836 | 1 | 2 |
| 19968276 | P0254836 | 1 | 2 |
SQL Fiddle
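DENSE_RANK is ANSI-standard, so the core of the updated query can be checked on any engine with window functions; a sketch with Python's bundled SQLite (3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE testing (inv_num TEXT, po_num TEXT, line_num TEXT);
    INSERT INTO testing VALUES
        ('19782594', 'P0254836', '1'), ('19782594', 'P0254836', '1'),
        ('19968276', 'P0254836', '1'), ('19968276', 'P0254836', '1');
""")

# DENSE_RANK numbers the distinct INV_NUM values 1, 2, ... without gaps,
# so every row of the same invoice gets the same MyRecords value.
rows = conn.execute("""
    SELECT inv_num, po_num, line_num,
           DENSE_RANK() OVER (ORDER BY inv_num) AS myrecords
    FROM testing
    ORDER BY inv_num
""").fetchall()

for row in rows:
    print(row)
```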
Maybe this helps:
SELECT inv_num,
po_num,
line_num,
DENSE_RANK() OVER (ORDER BY inv_num) AS rn
FROM testing