How to implement multidimensional sequences in Oracle

For example, here is a yearly sequence, where no increments independently within each year:
| no | year |
+----+------+
|  1 | 2016 |
|  2 | 2016 |
|  3 | 2016 |
|  1 | 2017 |
|  2 | 2017 |
|  4 | 2016 |
For now I have created a sequence for each year, but the problem is that Oracle will not automatically create a new sequence when the next year arrives.
Another problem arises if I want a 3D sequence, incrementing within year and type:
| no | year | type |
+----+------+------+
|  1 | 2016 | a    |
|  2 | 2016 | a    |
|  1 | 2016 | b    |
|  1 | 2017 | b    |
|  2 | 2017 | b    |
|  1 | 2017 | c    |
That would be far too many sequences in the database.
I have ruled out max(no) because of parallel access issues.
I tried locking the table before getting max(no) in a trigger, but it resulted in a deadlock.

The only way to do this is with a code control table ...
create table code_control
(year number(4,0) not null
, type varchar2(1) not null
, last_number number(38,0) default 1 not null
, primary key (year,type)
)
organization index
/
... which is maintained like this ...
create or replace function get_next_number
    (p_year in number, p_type in varchar2)
    return number
is
    pragma autonomous_transaction;
    -- lock the counter row for this (year, type) pair
    cursor cur_cc is
        select last_number + 1
        from   code_control cc
        where  cc.year = p_year
        and    cc.type = p_type
        for update of last_number;
    next_number number;
begin
    open cur_cc;
    fetch cur_cc into next_number;
    if cur_cc%found then
        -- existing pair: store the incremented counter
        update code_control
        set    last_number = next_number
        where  current of cur_cc;
    else
        -- first number for this pair: seed the row (last_number defaults to 1)
        insert into code_control (year, type)
        values (p_year, p_type)
        returning last_number into next_number;
    end if;
    close cur_cc;
    commit;
    return next_number;
end;
/
The important thing is the SELECT ... FOR UPDATE. Pessimistic locking guarantees uniqueness in a multi-user environment. The PRAGMA ensures that maintaining code_control doesn't pollute the broader transaction. It allows us to call the function in a trigger without deadlocks.
Here is a table with a key like yours:
create table t42
(year number(4,0) not null
, type varchar2(1) not null
, id number(38,0)
, primary key (year,type, id)
)
/
create or replace trigger t42_trg
before insert on t42 for each row
begin
:new.id := get_next_number(:new.year, :new.type);
end;
/
There's nothing up my sleeves before I populate t42:
SQL> select * from code_control;
no rows selected
SQL> select * from t42;
no rows selected
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'B');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2017, 'A');
1 row created.
SQL> select * from t42;
YEAR T ID
---------- - ----------
2016 A 1
2016 A 2
2016 A 3
2016 A 4
2016 B 1
2017 A 1
6 rows selected.
SQL> select * from code_control;
YEAR T LAST_NUMBER
---------- - -----------
2016 A 4
2016 B 1
2017 A 1
SQL>
So the obvious objection to this implementation is scalability: inserting transactions are serialized on the code_control table. That's absolutely true, but the lock is held for the shortest possible time, so this should not be an issue even if t42 receives many inserts per second.
If the table is subjected to massive numbers of concurrent inserts, however, the locking may become a problem. It is crucial that the table has sufficient interested-transaction slots (INITRANS, MAXTRANS) to cope with concurrent demand. Very busy systems may need a smarter implementation, perhaps generating the IDs in batches, as sketched below; otherwise, abandon the compound key in favour of a plain sequence, because sequences do scale in multi-user environments.
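For illustration, here is one possible shape for such a batched variant. This is a sketch, not part of the solution above; the function name and batch-size parameter are my own:
create or replace function get_next_number_batch
    (p_year in number, p_type in varchar2, p_batch_size in number)
    return number
is
    pragma autonomous_transaction;
    last_in_batch number;
begin
    -- claim a whole block of IDs with a single row update
    update code_control
    set    last_number = last_number + p_batch_size
    where  year = p_year
    and    type = p_type
    returning last_number into last_in_batch;
    if sql%rowcount = 0 then
        -- first request for this (year, type); this sketch ignores the race
        -- where two sessions insert simultaneously, which a real version
        -- would handle (e.g. by catching DUP_VAL_ON_INDEX and retrying)
        insert into code_control (year, type, last_number)
        values (p_year, p_type, p_batch_size)
        returning last_number into last_in_batch;
    end if;
    commit;
    -- caller now owns IDs (last_in_batch - p_batch_size + 1) .. last_in_batch
    return last_in_batch - p_batch_size + 1;
end;
/
The caller caches the returned range (say, in a PL/SQL package variable) and hands out IDs locally, touching code_control only once per batch; the trade-off is gaps whenever a session ends with unused IDs.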

Related

PL/SQL insert record using max function

Create a PL/SQL block to insert a new record into the Department table. Fetch the maximum department id from the Department table and add 10 to it; take this value for department id; 'TESTING' is the value for department name and CHN-102 is the value for Location ID.
Note: Use '/' to terminate your query before compilation and evaluation
Table name : Department
Column name | Data type | Constraints
DEPARTMENT_ID | NUMBER(5) | PK
DEPARTMENT_NAME | VARCHAR2(25) | NOT NULL
LOCATION_ID | VARCHAR2(15)
Sample Output:
DEPARTMENT_ID DEPARTMENT_NAME LOCATION_ID
------------- --------------- -----------
XXXX TESTING CHN-102
The way you described it, it would look like this:
SQL> declare
2 l_department_id department.department_id%type;
3 l_department_name department.department_name%type := 'TESTING';
4 l_location_id department.location_id%type := 'CHN-102';
5 begin
6 select nvl(max(department_id), 10)
7 into l_department_id
8 from department;
9
10 insert into department (department_id, department_name, location_id)
11 values (l_department_id + 10, l_department_name, l_location_id);
12 end;
13 /
PL/SQL procedure successfully completed.
SQL> select * From department;
DEPARTMENT_ID DEPARTMENT_NAME LOCATION_I
------------- -------------------- ----------
20 TESTING CHN-102
SQL>
Note, though, that MAX + 10 is the wrong approach. If two (or more) users run the same procedure at the same time, only the first one to commit will succeed; the other user(s) will violate the primary key constraint, because that department_id already exists (it was inserted moments ago by someone else). Use a sequence instead.
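A sketch of the sequence-based alternative (the sequence name is my own, and START WITH / INCREMENT BY merely mimic the "+10" convention from the exercise):
create sequence department_seq start with 20 increment by 10;

insert into department (department_id, department_name, location_id)
values (department_seq.nextval, 'TESTING', 'CHN-102');
Unlike MAX + 10, NEXTVAL never hands the same value to two sessions, so concurrent inserts cannot collide on the primary key.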

What is going on underneath FDA queries?

Let's say I want to pull data from the TEST_TABLE table as of some date, so I write a query with flashback syntax:
select * from TEST_TABLE as of timestamp (timestamp '2021-05-05 15:00:15');
I want to check what this query actually looks like inside the Oracle engine, i.e. what conditions it applies and which tables the data is taken from.
Execution plan returned me this info:
Predicate Information (identified by operation id):
------------------------------------------
* 4 - filter(("STARTSCN"<=148411288669 OR "STARTSCN" IS NULL) AND "ENDSCN">148411288669 AND ("OPERATION"<>'D' OR "OPERATION" IS NULL) AND "ENDSCN"<=155682149589)
* 5 - filter("STARTSCN"<=148411288669 OR "STARTSCN" IS NULL)
* 7 - filter(("T"."VERSIONS_STARTSCN" IS NULL OR "T"."VERSIONS_STARTSCN"<=148411288669) AND ("T"."VERSIONS_ENDSCN" IS NULL OR "T"."VERSIONS_ENDSCN">148411288669) AND ("T"."VERSIONS_OPERATION" IS NULL
OR "T"."VERSIONS_OPERATION"<>'D'))
* 8 - filter(("ENDSCN"(+) IS NULL OR "ENDSCN"(+)>155682149589) AND ("STARTSCN"(+)<155682149589 OR "STARTSCN"(+) IS NULL))
* 9 - access("RID"(+)=ROWIDTOCHAR("T".ROWID))
But it's not quite what I'm looking for: when I add these predicates to a WHERE clause on TEST_TABLE, the results are not the same.
If you are asking which tables are used by Flashback Data Archive (a.k.a. FDA), you first need to understand how Oracle handles a flashback query.
Let me show you an example. I will create a small flashback archive group, and a table will be assigned to it.
SQL> create flashback archive fda_test tablespace tbrepdata quota 1g retention 1 year ;
Flashback archive created.
SQL> grant flashback archive on fda_test to test ;
Grant succeeded.
SQL> grant flashback archive administer to test ;
Grant succeeded.
SQL> GRANT EXECUTE ON DBMS_FLASHBACK_ARCHIVE TO test;
Grant succeeded.
SQL> create table test.t1 ( c1 number, c2 number ) flashback archive fda_test ;
Table created.
SQL> insert into test.t1 values ( 1 , 1 ) ;
1 row created.
SQL> insert into test.t1 values ( 2 , 2 ) ;
1 row created.
SQL> insert into test.t1 values ( 3, 3 ) ;
1 row created.
SQL> commit ;
Commit complete.
SQL> update test.t1 set c1=4,c2=4 where c1=3 ;
1 row updated.
SQL> commit ;
Commit complete.
Now, if I run a query:
SQL> col versions_startscn format 9999999999999999
SQL> col versions_endscn format 9999999999999999
SQL> r
1 SELECT versions_startscn,
2 --versions_starttime,
3 versions_endscn,
4 --versions_endtime,
5 versions_xid,
6 versions_operation,
7 c1,
8 c2
9* from test.t1 versions between scn minvalue and maxvalue
VERSIONS_STARTSCN VERSIONS_ENDSCN VERSIONS_XID V C1 C2
----------------- ----------------- ---------------- - ---------- ----------
13142361651647 13001C0000AB0000 U 4 4
13142361651581 13142361651647 20002A00BD960000 I 3 3
13142361651581 20002A00BD960000 I 2 2
13142361651581 20002A00BD960000 I 1 1
Let's check the plan
SQL> set autotrace traceonly
SQL> r
1 SELECT versions_startscn,
2 --versions_starttime,
3 versions_endscn,
4 --versions_endtime,
5 versions_xid,
6 versions_operation,
7 c1,
8 c2
9* from test.t1 versions between scn minvalue and maxvalue
Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 164 | 4264 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T1 | 164 | 4264 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
Statistics
----------------------------------------------------------
5 recursive calls
4 db block gets
22 consistent gets
0 physical reads
0 redo size
1091 bytes sent via SQL*Net to client
591 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
As you can see, Oracle is just accessing the table. Why? Because the data is still in the undo tablespace; the undo blocks have not yet expired. With FDA enabled, Oracle always takes this approach for a flashback query:
If the data is still in the undo tablespace, it is recovered from there.
If the data is no longer available in the undo tablespace, the rows are retrieved from the underlying FDA history table.
That underlying table contains the archived data, subject to the retention period established for the archive group:
SQL> set lines 200
SQL> SELECT owner_name,
2 table_name,
3 flashback_archive_name,
4 archive_table_name,
5 status
6* FROM dba_flashback_archive_tables where owner_name = 'TEST' and table_name = 'T1'
OWNER_NAME TABLE_NAME FLASHBACK_ARCHIVE_NAME ARCHIVE_TABLE_NAME STATUS
------------------------------ ------------------------------ ------------------------------ ------------------------------ -------------
TEST T1 FDA_TEST SYS_FBA_HIST_2779773 ENABLED
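Incidentally, you can also query that history table directly; here is a sketch, assuming it lives in the TEST schema alongside the base table. Its STARTSCN, ENDSCN and OPERATION columns are the same ones that appear in the predicates quoted in the question:
select startscn, endscn, operation, c1, c2
from test.sys_fba_hist_2779773;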
If you are sure that the data you are recovering with AS OF TIMESTAMP is no longer in the undo tablespace, you can use a 10046 event to generate a trace file and see how Oracle really gets the data.
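A minimal sketch of that, using the standard event syntax (the tracefile identifier and the one-hour offset are arbitrary):
SQL> alter session set tracefile_identifier = 'fda_test';
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> select * from test.t1 as of timestamp (systimestamp - interval '1' hour);
SQL> alter session set events '10046 trace name context off';
The recursive statements in the trace file (format it with tkprof if you like) show whether the rows came from the base table via undo or from the SYS_FBA_HIST_% archive table.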
Although I do wonder what it is you are looking for that needs that level of detail.

Oracle: constraint preventing insert more than N (variable) rows

I've got a table defining the max number of objects for each customer.
Table_1
id_table_1 numeric primary key
cd_object varchar2(20)
max_number number
Another table stores the objects assigned to each customer
Table_2
id_table_2 numeric primary key
cd_customer varchar2(20)
cd_object varchar2(20)
How can I set up a constraint on table_2 to prevent more than max_number records for each customer-object pair?
For example:
Table_1
cd_object / max_number
xxx / 1
yyy / 2
Table_2
insert "customer_1", "xxx" -> OK!
insert "customer_1", "xxx" -> KO!
insert "customer_1", "yyy" -> OK!
insert "customer_1", "yyy" -> OK!
insert "customer_1", "yyy" -> KO!
Thanks in advance for your replies.
This constraint is more complex than a CHECK constraint can handle. One day, we hope, Oracle will support SQL assertions, which are constraints of arbitrary complexity.
Meanwhile, this can be done (with caution regarding performance) using materialized views (MVs) and constraints. I blogged about this many years ago; your requirement is very similar to example 3 there. Applied to your case, it would be something like:
create materialized view table_2_mv
build immediate
refresh complete on commit as
select t2.cd_customer, t2.cd_object, t1.max_number, count(*) cnt
from table_2 t2
join table_1 t1 on t1.cd_object = t2.cd_object
group by t2.cd_customer, t2.cd_object, t1.max_number;
alter table table_2_mv
add constraint table_2_mv_chk
check (cnt <= max_number)
deferrable;
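Note that with an ON COMMIT materialized view the check fires at commit time, not at insert time. A sketch, using the sample data from the question (the IDs are arbitrary):
insert into table_2 (id_table_2, cd_customer, cd_object)
values (1, 'customer_1', 'xxx');   -- fine: first 'xxx' for this customer

insert into table_2 (id_table_2, cd_customer, cd_object)
values (2, 'customer_1', 'xxx');   -- accepted for now...

commit;  -- ...but the on-commit refresh raises the check violation here
So the application must be prepared to handle the error at COMMIT rather than at INSERT.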
Pure trigger-based solutions tend to fail in the real world: when two users simultaneously add a record that takes the count up to the maximum, both succeed, and once both commit, the table holds more rows than the maximum!
However, given your comment that you have 2M rows in table_2, which perhaps makes the MV approach above unusable, there is another approach that does involve triggers:
Create a table that denormalizes information from table_1 and table_2, like this:
create table denorm as
select t2.cd_customer, t2.cd_object, t1.max_number, count(*) cnt
from table_2 t2
join table_1 t1 on t1.cd_object = t2.cd_object
group by t2.cd_customer, t2.cd_object, t1.max_number;
Use a trigger or triggers on table_1 to ensure denorm.max_number is always correct - e.g. when table_1.max_number is updated to a new value, update the corresponding denorm rows.
Use a trigger or triggers on table_2 to maintain the denorm.cnt value - e.g. when a row is added, increment denorm.cnt; when a row is deleted, decrement it (see the sketch after the constraint below).
Add a check constraint to denorm
alter table denorm
add constraint denorm_chk
check (cnt <= max_number);
This is essentially the same as the MV solution, but avoids the full refresh by using triggers to maintain the denorm table as you go along. It works in a multi-user system because the updates to the denorm table serialize changes to table_2, so two users cannot modify it simultaneously and break the rules.
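A minimal sketch of the table_2 maintenance trigger described above (names taken from the denorm DDL; the table_1 triggers and update handling are left out):
create or replace trigger table_2_denorm_trg
after insert or delete on table_2
for each row
begin
    if inserting then
        -- the row lock on denorm serializes concurrent inserts for this pair,
        -- and the check constraint rejects any increment past max_number
        update denorm
        set    cnt = cnt + 1
        where  cd_customer = :new.cd_customer
        and    cd_object   = :new.cd_object;
        if sql%rowcount = 0 then
            -- first row for this customer/object pair
            insert into denorm (cd_customer, cd_object, max_number, cnt)
            select :new.cd_customer, :new.cd_object, t1.max_number, 1
            from   table_1 t1
            where  t1.cd_object = :new.cd_object;
        end if;
    else
        update denorm
        set    cnt = cnt - 1
        where  cd_customer = :old.cd_customer
        and    cd_object   = :old.cd_object;
    end if;
end;
/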
You can also use a trigger on TABLE_2, as follows:
-- creating the tables
SQL> CREATE TABLE TABLE_1 (
2 ID_TABLE_1 NUMBER PRIMARY KEY,
3 CD_OBJECT VARCHAR2(20),
4 MAX_NUMBER NUMBER
5 );
Table created.
SQL> CREATE TABLE TABLE_2 (
2 ID_TABLE_2 NUMBER PRIMARY KEY,
3 CD_CUSTOMER VARCHAR2(20),
4 CD_OBJECT VARCHAR2(20)
5 );
Table created.
-- creating the trigger
SQL> CREATE OR REPLACE TRIGGER TRG_TABLE_2_MAX_OBJECT BEFORE
2 INSERT OR UPDATE ON TABLE_2
3 FOR EACH ROW
4 DECLARE
5 LV_MAX_NUMBER TABLE_1.MAX_NUMBER%TYPE;
6 LV_COUNT NUMBER;
7 BEGIN
8 BEGIN
9 SELECT
10 MAX_NUMBER
11 INTO LV_MAX_NUMBER
12 FROM
13 TABLE_1
14 WHERE
15 CD_OBJECT = :NEW.CD_OBJECT;
16
17 EXCEPTION
18 WHEN OTHERS THEN
19 LV_MAX_NUMBER := -1;
20 END;
21
22 SELECT
23 COUNT(1)
24 INTO LV_COUNT
25 FROM
26 TABLE_2
27 WHERE
28 CD_OBJECT = :NEW.CD_OBJECT;
29
30 IF LV_MAX_NUMBER = LV_COUNT AND LV_MAX_NUMBER >= 0 THEN
31 RAISE_APPLICATION_ERROR(-20000, 'Not allowed - KO');
32 END IF;
33
34 END;
35 /
Trigger created.
-- testing the code
SQL> INSERT INTO TABLE_1 VALUES (1,'xxx',1);
1 row created.
SQL> INSERT INTO TABLE_1 VALUES (2,'yyy',2);
1 row created.
SQL> INSERT INTO TABLE_2 VALUES (1,'customer_1','xxx');
1 row created.
SQL> INSERT INTO TABLE_2 VALUES (2,'customer_1','xxx');
INSERT INTO TABLE_2 VALUES (2,'customer_1','xxx')
*
ERROR at line 1:
ORA-20000: Not allowed - KO
ORA-06512: at "TEJASH.TRG_TABLE_2_MAX_OBJECT", line 28
ORA-04088: error during execution of trigger 'TEJASH.TRG_TABLE_2_MAX_OBJECT'
SQL> INSERT INTO TABLE_2 VALUES (3,'customer_1','yyy');
1 row created.
SQL> INSERT INTO TABLE_2 VALUES (4,'customer_1','yyy');
1 row created.
SQL> INSERT INTO TABLE_2 VALUES (5,'customer_1','yyy');
INSERT INTO TABLE_2 VALUES (5,'customer_1','yyy')
*
ERROR at line 1:
ORA-20000: Not allowed - KO
ORA-06512: at "TEJASH.TRG_TABLE_2_MAX_OBJECT", line 28
ORA-04088: error during execution of trigger 'TEJASH.TRG_TABLE_2_MAX_OBJECT'
SQL>
-- Checking the data in the TABLE_2
SQL> SELECT * FROM TABLE_2;
ID_TABLE_2 CD_CUSTOMER CD_OBJECT
---------- -------------------- --------------------
1 customer_1 xxx
3 customer_1 yyy
4 customer_1 yyy
SQL>
Cheers!!

Getting data from different tables between dates when each table has its own date modified

I'd like to get some views from you all regarding a scenario I'm currently struggling with. Below is the problem statement.
I have tables A, B and C.
A has columns: user | modified_date | worked_on_A | ...some more related to user operations
B has columns: user | modified_date | worked_on_B | ...some other columns
C has columns: user | modified_date | worked_on_C | ...some other columns
These tables have no direct relation to each other except the user.
We have to pull data from these tables for a user between given dates, with a count of the actions or work done in that date range.
My struggle is that each table has its own modified date. If the selected date range does not appear in one of the tables, I still need to pull the data, since the user has worked on something in between those dates.
Is it possible to select all these dates into one column, so that it can go in the WHERE clause, with outer joins to pull the other records?
Sorry for the big problem statement. Any suggestions are very much appreciated.
Below is a use case, extending the test case given by Littlefoot.
First, test case:
SQL> create table a (cuser varchar2(10), modified_date date, action varchar2(20));
Table created.
SQL> create table b (cuser varchar2(10), modified_date date, action varchar2(20));
Table created.
SQL> create table c (cuser varchar2(10), modified_date date, action varchar2(20));
Table created.
SQL> insert into a values ('lf', date '2018-05-01', 'issue raised');
1 row created.
SQL> insert into a values ('mc', date '2018-05-01', 'issue raised');
1 row created.
SQL> insert into b values ('lf', date '2018-05-01', 'issue raised');
1 row created.
SQL> insert into b values ('lf', date '2018-05-01', 'issue resolved');
1 row created.
SQL> insert into c values ('lf', date '2018-05-28', 'issue resolved');
1 row created.
SQL> insert into c values ('mc', date '2018-05-13', 'issue raised');
1 row created.
SQL> insert into c values ('mc', date '2018-05-13', 'issue resolved');
1 row created.
SQL> alter session set nls_date_format = 'yyyy-mm-dd';
Session altered.
SQL> select * from a;
CUSER      MODIFIED_D ACTION
---------- ---------- --------------------
lf         2018-05-01 issue raised
mc         2018-05-01 issue raised
SQL> select * from b;
CUSER      MODIFIED_D ACTION
---------- ---------- --------------------
lf         2018-05-01 issue raised
lf         2018-05-01 issue resolved
SQL> select * from c;
CUSER      MODIFIED_D ACTION
---------- ---------- --------------------
lf         2018-05-28 issue resolved
mc         2018-05-13 issue raised
mc         2018-05-13 issue resolved
The desired result is the count of each action per user and date:
CUSER  DATE       CNT_ISSUE_RAISED CNT_ISSUE_RESOLVED
------ ---------- ---------------- ------------------
lf     2018-05-01                2                  1
lf     2018-05-28                0                  1
mc     2018-05-01                1                  0
mc     2018-05-13                1                  1
This is how I understood the question.
First, test case:
SQL> create table a (cuser varchar2(10), modified_date date);
Table created.
SQL> create table b (cuser varchar2(10), modified_date date);
Table created.
SQL> create table c (cuser varchar2(10), modified_date date);
Table created.
SQL> insert into a values ('lf', date '2018-05-01');
1 row created.
SQL> insert into a values ('mc', date '2018-05-15');
1 row created.
SQL> insert into b values ('lf', date '2018-05-07');
1 row created.
SQL> insert into b values ('lf', date '2018-05-08');
1 row created.
SQL> insert into c values ('jw', date '2018-05-28');
1 row created.
SQL> insert into c values ('mc', date '2018-05-13');
1 row created.
SQL> insert into c values ('mc', date '2018-05-22');
1 row created.
SQL> alter session set nls_date_format = 'yyyy-mm-dd';
Session altered.
SQL> select * from a;
CUSER MODIFIED_D
---------- ----------
lf 2018-05-01
mc 2018-05-15
SQL> select * from b;
CUSER MODIFIED_D
---------- ----------
lf 2018-05-07
lf 2018-05-08
SQL> select * from c;
CUSER MODIFIED_D
---------- ----------
jw 2018-05-28
mc 2018-05-13
mc 2018-05-22
Here is a query which returns the desired result - the number of rows per user in every table, within the chosen date period. As I use SQL*Plus, substitution variables are preceded by && to avoid being prompted for the same value repeatedly. In the tool you use, that might be a colon (:) bind variable instead.
SQL> select nvl(nvl(a.cuser, b.cuser), c.cuser) cuser,
2 count(distinct a.modified_date) cnt_a,
3 count(distinct b.modified_date) cnt_b,
4 count(distinct c.modified_date) cnt_c
5 from a full outer join b on a.cuser = b.cuser
6 full outer join c on a.cuser = c.cuser
7 where a.modified_date between &&date_from and &&date_to
8 or b.modified_date between &&date_from and &&date_to
9 or c.modified_date between &&date_from and &&date_to
10 group by nvl(nvl(a.cuser, b.cuser), c.cuser)
11 order by 1;
Enter value for date_from: '2018-05-01'
Enter value for date_to: '2018-06-01'
CUSER CNT_A CNT_B CNT_C
---------- ---------- ---------- ----------
jw 0 0 1
lf 1 2 0
mc 1 0 2
SQL>
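As to the question's idea of getting all the dates into one column first: that also works. A sketch, assuming the same a/b/c tables, with a UNION ALL over the three tables and a literal date range standing in for the substitution variables:
select cuser,
       modified_date,
       count(case when src = 'A' then 1 end) cnt_a,
       count(case when src = 'B' then 1 end) cnt_b,
       count(case when src = 'C' then 1 end) cnt_c
from  (select cuser, modified_date, 'A' src from a
       union all
       select cuser, modified_date, 'B' src from b
       union all
       select cuser, modified_date, 'C' src from c)
where modified_date between date '2018-05-01' and date '2018-06-01'
group by cuser, modified_date
order by cuser, modified_date;
This avoids the full outer joins entirely and also gives a per-date breakdown, at the cost of scanning all three tables.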

Finding summary & basic statistics from data in Vertica

Recently I have been exploring HPE Vertica a bit. Is it possible to get summary statistics (mean, standard deviation, quartiles, max, min, counts, etc.) from a data table loaded in Vertica?
These two links:
https://my.vertica.com/docs/7.0.x/HTML/Content/Authoring/SQLReferenceManual/Functions/VerticaFunctions/ANALYZE_STATISTICS.htm
https://my.vertica.com/docs/7.0.x/HTML/Content/Authoring/SQLReferenceManual/Functions/VerticaFunctions/ANALYZE_HISTOGRAM.htm
say that we can get statistics and a histogram from the data, but the result makes no sense to me.
According to them, ANALYZE_STATISTICS returns 0 on successful execution, like this:
NEWDB_aug17=> SELECT ANALYZE_STATISTICS ('MM_schema.capitalline');
ANALYZE_STATISTICS
--------------------
0
(1 row)
Here NEWDB_aug17 is the database and MM_schema is the schema holding the capitalline table. But where are the summary measures, i.e. the numbers we are actually looking for? A lone 0 is not going to serve my purpose.
Can you please guide me in this context?
Vertica saves the statistics collected by ANALYZE_STATISTICS() in the catalog location. These statistics are later used to work out the best query execution plan.
You can find the statistics details in the system table v_internal.dc_analyze_statistics:
[dbadmin@vertica-1 ~]$ vsql
dbadmin=> \x
Expanded display is on.
dbadmin=> select * from v_internal.dc_analyze_statistics limit 1;
-[ RECORD 1 ]----+-----------------------------------
time | 2017-08-21 02:07:03.287895+00
node_name | v_test_node0001
session_id | v_test_node0001-502811:0x834a4
user_id | 45035996273704962
user_name | dbadmin
transaction_id | 45035996307673368
statement_id | 9
request_id | 1
table_name | test_table
proj_column_name | test_column
proj_name | test_table_sp_v11_b1
table_oid | 45036013037102108
proj_column_oid | 45036013037111264
proj_row_count | 119878353211
disk_percent | 10
disk_read_rows | 11987835321
sample_rows | 131072
sample_bytes | 7602176
start_time | 2017-08-21 02:07:03.657377+00
end_time | 2017-08-21 02:07:24.799398+00
Time: First fetch (1 row): 849.467 ms. All rows formatted: 849.594 ms
Or at this path:
{your_catalog_location}/{db_name}/{node_name}_catalog/DataCollector/AnalyzeStatistics_*.log
Vertica's percentile_cont analytic function is helpful for retrieving quartiles:
create table test
(metric_value integer);
insert into test values(1);
insert into test values(2);
insert into test values(3);
insert into test values(4);
insert into test values(5);
insert into test values(6);
insert into test values(7);
insert into test values(8);
insert into test values(9);
insert into test values(10);
alter table test add column metric varchar(100) default 'abc';
select
metric_value,
percentile_cont(1) within group (order by metric_value) over (partition by metric) as max,
percentile_cont(.75) within group (order by metric_value ) over (partition by metric) as q3,
percentile_cont(.5) within group (order by metric_value ) over (partition by metric) as median,
percentile_cont(.25) within group (order by metric_value ) over (partition by metric) as q1,
percentile_cont(0) within group (order by metric_value ) over (partition by metric) as min
from test ;
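For the other summary measures the question asks about (count, mean, standard deviation, min, max), plain aggregate functions are enough; a sketch against the same test table:
select count(metric_value)  as n,
       avg(metric_value)    as mean,
       stddev(metric_value) as sd,
       min(metric_value)    as min_val,
       max(metric_value)    as max_val
from test;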
