How to find the max value of an alphanumeric field in Oracle?

I have the data below, and ID is of VARCHAR2 type
Table Name :EMP
ID    TST_DATE
----  ----------
A035  05/12/2015
BAB0  05/12/2015
701   07/12/2015
81    07/12/2015
I used the query below to get the max of ID grouped by TST_DATE.
SELECT TST_DATE,MAX(ID) from EMP group by TST_DATE;
TST_DATE    MAX(ID)
----------  -------
05/12/2015  BAB0
07/12/2015  81
In the second row it returns 81 instead of 701.

To sort strings that represent (hex) numbers in numeric, rather than lexicographical, order you need to convert them to actual numbers:
SELECT TST_DATE, ID, TO_NUMBER(ID, 'XXXXXXXXXX') from EMP
ORDER BY TO_NUMBER(ID, 'XXXXXXXXXX');
TST_DATE ID TO_NUMBER(ID,'XXXXXXXXXX')
---------- ---- ---------------------------------------
07/12/2015 81 129
07/12/2015 701 1793
05/12/2015 A035 41013
05/12/2015 BAB0 47792
You can use that numeric form within your max() and convert back to a hex string for display:
SELECT TST_DATE,
TO_CHAR(MAX(TO_NUMBER(ID, 'XXXXXXXXXX')), 'XXXXXXXXXX')
from EMP group by TST_DATE;
TST_DATE TO_CHAR(MAX
---------- -----------
07/12/2015 701
05/12/2015 BAB0
With a suitable number of Xs in the format models of course; how many depends on the size of your varchar2 column.
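If you would rather return the stored string exactly as it is (for example, to preserve leading zeros that the TO_CHAR round-trip would drop), a sketch of an alternative is to aggregate the string itself while ranking by its numeric form:
SELECT TST_DATE,
       MAX(ID) KEEP (DENSE_RANK LAST ORDER BY TO_NUMBER(ID, 'XXXXXXXXXX')) AS MAX_ID
FROM EMP
GROUP BY TST_DATE;
-- KEEP (DENSE_RANK LAST ...) picks the ID with the highest numeric value,
-- but returns the original VARCHAR2 untouched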

Related

Get closest date with id and value Oracle

I ran into a problem, and maybe there is someone experienced here who can help me figure it out:
I have a table with rows:
ID    VALUE  DATE
----  -----  -------------------
2827  0      20.07.2022 10:40:01
490   27432  20.07.2022 10:40:01
565   189    20.07.2022 9:51:03
200   1      20.07.2022 9:50:01
731   0.91   20.07.2022 9:43:21
161   13004  19.07.2022 16:11:01
The table has a million records and about 1000 distinct IDs; for a given ID, only the date of the value change, and therefore the value itself, differs between rows.
When an ID's value changes, a row is added to this table:
ID | Time the value was changed (DATE) | VALUE
My task is to get every ID's value closest to an input date.
I mean: if I input the date "20.07.2022 10:00:00",
I want to get each ID (1-1000) with its "value, date" row having the last date before "20.07.2022 10:00:00":
ID    VALUE  DATE
----  -----  -------------------
2827  0      20.07.2022 9:59:11
490   27432  20.07.2022 9:40:01
565   189    20.07.2022 9:51:03
200   1      20.07.2022 9:50:01
731   0.91   20.07.2022 8:43:21
161   13004  19.07.2022 16:11:01
What would be the most efficient and correct query in this case?
If you want the data for each ID with the latest change up to, but not after, your input date then you can just filter on that date, and use aggregate functions to get the most recent data in that filtered range:
select id,
       max(change_time) as change_time,
       max(value) keep (dense_rank last order by change_time) as value
from your_table
where change_time <= <your input date>
group by id
With your previous sample data, using midnight this morning as the input date would give:
select id,
       max(change_time) as change_time,
       max(value) keep (dense_rank last order by change_time) as value
from your_table
where change_time <= timestamp '2022-07-28 00:00:00'
group by id
order by id
ID  CHANGE_TIME          VALUE
--  -------------------  -----
1   2022-07-24 10:00:00  900
2   2022-07-22 21:51:00  422
3   2022-07-24 13:01:00  1
4   2022-07-24 10:48:00  67
and using midday today would give:
select id,
       max(change_time) as change_time,
       max(value) keep (dense_rank last order by change_time) as value
from your_table
where change_time <= timestamp '2022-07-28 12:00:00'
group by id
order by id
ID  CHANGE_TIME          VALUE
--  -------------------  -----
1   2022-07-24 10:00:00  900
2   2022-07-22 21:51:00  422
3   2022-07-28 11:59:00  12
4   2022-07-28 11:45:00  63
5   2022-07-28 10:20:00  55
db<>fiddle with some other input dates to show the result set changing.
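If the KEEP (DENSE_RANK LAST ...) syntax is unfamiliar, a sketch of an equivalent approach uses the analytic ROW_NUMBER() in an inline view (same hypothetical your_table and input date as above):
select id, change_time, value
from (
  select id, change_time, value,
         -- number each ID's changes from newest to oldest
         row_number() over (partition by id order by change_time desc) as rn
  from your_table
  where change_time <= timestamp '2022-07-28 12:00:00'
)
where rn = 1
order by id;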

When to add a sequence of fields in a materialized view

Good evening,
I am trying to understand in which cases the sequence would be used, for the example below, since the rowids would not always give me a single row with which to manage the changes.
Why consider a sequence of additional fields?
I would be grateful if you could clear up this doubt with an example.
Thank you so much,
Greetings.
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE(amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;
Now imagine that you want to create a materialized view that contains aggregates on this table. Because the materialized view log has been created with all referenced columns in the materialized view's defining query, the materialized view is fast refreshable. If DML is applied against the sales table, then the changes are reflected in the materialized view when the commit is issued.
CREATE MATERIALIZED VIEW sum_sales
REFRESH FAST ON COMMIT AS
SELECT s.time_id, COUNT(*) AS count_grp,
SUM(s.amount_sold) AS sum_dollar_sales,
COUNT(s.amount_sold) AS count_dollar_sales
FROM sales s
GROUP BY s.time_id;
Without the SEQUENCE clause, you will get the following error:
ORA-12033: cannot use filter columns from materialized view log on "ADMIN"."SALES"
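For contrast, this is the shape of the log that produces that error (a hypothetical variant of the log above with the SEQUENCE clause left out):
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID (amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;
-- with this log, CREATE MATERIALIZED VIEW sum_sales ... REFRESH FAST ON COMMIT
-- raises ORA-12033, because the log cannot support a fast refresh of the aggregate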
So let me try to explain why, using a test case:
drop table sales;
create table sales (time_id number, prod_id number, amount_sold number);
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE(amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;
truncate table sales;
insert into sales values (1,1,23);
insert into sales values (1,2,23);
commit;
select time_id, sum(amount_sold) from sales group by time_id;
TIME_ID SUM(AMOUNT_SOLD)
------- ----------------
1 46
Now imagine that you modify a row multiple times:
update sales set amount_sold = 55 where time_id = 1 and PROD_ID = 1;
update sales set amount_sold = 12 where time_id = 1 and PROD_ID = 1;
select time_id, sum(amount_sold) from sales group by time_id;
TIME_ID SUM(AMOUNT_SOLD)
------- ----------------
1 35
The new sum of amount_sold is 35. But how can Oracle do a fast refresh without reading the value of row (1,2), when only row (1,1) was modified?
select * from MLOG$_SALES where DMLTYPE$$ != 'I' order by SEQUENCE$$;
AMOUNT_SOLD TIME_ID PROD_ID M_ROW$$ SEQUENCE$$ SNAPTIME$$ DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$ XID$$
----------- ------- ------- ------------------ ---------- -------------------- --------- --------- --------------- ----------------
23 1 1 AAAwaSAAAAAAFpTAAA 105 4000-01-01T00:00:00Z U U CA== 4222223434942993
55 1 1 AAAwaSAAAAAAFpTAAA 106 4000-01-01T00:00:00Z U N CA== 4222223434942993
55 1 1 AAAwaSAAAAAAFpTAAA 107 4000-01-01T00:00:00Z U U CA== 4222223434942993
12 1 1 AAAwaSAAAAAAFpTAAA 108 4000-01-01T00:00:00Z U N CA== 4222223434942993
So you can take the previous value 46 and increment/decrement it using the old/new values, as follows:
select 46 - 23 + 55 - 55 + 12 as newval from dual;
NEWVAL
------
35
You can also do the same for a delete, as illustrated below.
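Continuing the same test case as an illustration (the numbers follow from the rows above):
delete from sales where time_id = 1 and prod_id = 1;
commit;
-- the log records the deleted row with DMLTYPE$$ = 'D' and its old value 12,
-- so the refresh can derive the new sum incrementally, without reading row (1,2):
select 35 - 12 as newval from dual;

NEWVAL
------
    23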
None of this is possible with only the rowid: to generate the new value 35 you would need to read an unmodified row, so a fast refresh could not be done.
Hope this helps you understand in which cases the SEQUENCE clause is used.

What is the exact NULL value for a field in Oracle?

Title says it all, pretty much. What is the exact value that is assigned to an a) arithmetic, b) string, c) logical field to represent NULL in Oracle?
Thank you for your time!
Null is the absence of meaning, the absence of value. What gets assigned is null: not even an ASCII NUL (ASCII value 0), but nothing.
That's why there's a special operation to test for null. This will never be true (the comparison evaluates to unknown):
...
where col1 = null
We need to test for:
where col1 is null
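A minimal demonstration against DUAL (nothing here depends on any real table):
select 'found' from dual where 1 = null;     -- no rows: the comparison is unknown
select 'found' from dual where null is null; -- one row: IS NULL evaluates to true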
"we were asked by a professor at uni to find what exactly that value is in these 3 respective cases"
Okay, let's investigate that. Here is a table with two rows:
SQL> create table t42 (
2 type varchar2(10)
3 , colv varchar2(10)
4 , coln number
5 , cold date
6 )
7 /
Table created.
SQL> insert into t42 values ('not null', 'X', 1, sysdate);
1 row created.
SQL> insert into t42 values ('all null', null, null, null);
1 row created.
SQL>
Oracle has a function dump() which shows us the datatype and content of the passed value.
What does dump() tell us about our two rows?
SQL> select type
2 , dump(colv) as colv
3 , dump(coln) as coln
4 , dump(cold) as cold
5 from t42;
TYPE COLV COLN COLD
---------- -------------------- -------------------- ----------------------------------
not null Typ=1 Len=1: 88 Typ=2 Len=2: 193,2 Typ=12 Len=7: 120,117,4,29,6,60,44
all null NULL NULL NULL
SQL>
So: the null columns have no data type, no value.
"I don't think dump is suitable for supporting any argument over what "exactly" gets stored to represent a null - because if the expression is null, it simply returns null by definition "
#JeffreyKemp makes a fair point. So let's dip a toe into the internals. The first step is to dump the data block(s);l the dump is written to a trace file:
SQL> conn / as sysdba
Connected.
USER is "SYS"
SQL> select dbms_rowid.rowid_relative_fno(t42.rowid) as fno
2 , dbms_rowid.rowid_block_number(t42.rowid) as blk
3 from a.t42
4 /
FNO BLK
-------- --------
11 132
11 132
SQL> alter system dump datafile 11 block 132;
System altered.
SQL> select value from v$diag_info where name = 'Default Trace File';
VALUE
--------------------------------------------------------------------------------
/home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3275.trc
SQL>
Because T42 is small it fits into only one block. Here is the interesting bit of the dump:
data_block_dump,data header at 0x805664
===============
tsiz: 0x1f98
hsiz: 0x16
pbl: 0x00805664
76543210
flag=--------
ntab=1
nrow=2
frre=-1
fsbo=0x16
fseo=0x1f73
avsp=0x1f5d
tosp=0x1f5d
0xe:pti[0] nrow=2 offs=0
0x12:pri[0] offs=0x1f7f
0x14:pri[1] offs=0x1f73
block_row_dump:
tab 0, row 0, #0x1f7f
tl: 25 fb: --H-FL-- lb: 0x1 cc: 4
col 0: [ 8] 6e 6f 74 20 6e 75 6c 6c
col 1: [ 1] 58
col 2: [ 2] c1 02
col 3: [ 7] 78 75 05 01 02 08 08
tab 0, row 1, #0x1f73
tl: 12 fb: --H-FL-- lb: 0x1 cc: 1
col 0: [ 8] 61 6c 6c 20 6e 75 6c 6c
end_of_block_dump
End dump data blocks tsn: 33 file#: 11 minblk 132 maxblk 132
We can see there are two rows in the table. The first row has entries for four columns; this is the 'not null' row. The second row has only one column: this is the 'all null' row. So, Jeffrey is quite right. All the trailing fields are null so Oracle stores nothing for them.
APC's answer is fully right; let me add some information on what null means in each case:
Arithmetic: NULL basically means "not defined". Every math operation with NULL (i.e. "not defined") also returns NULL.
String: NULL is an empty string, i.e. '' IS NULL returns TRUE; this behavior of Oracle is different from many other RDBMSs.
Logical: I assume you mean what happens to BOOLEAN data types. Unlike in almost any other programming language, a BOOLEAN variable in PL/SQL can have three different states: TRUE, FALSE and NULL. Be aware of this special behavior when you work with BOOLEAN in PL/SQL (see the sketch below).
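A short PL/SQL sketch of that three-state behavior (assumes SET SERVEROUTPUT ON in SQL*Plus or SQLcl):
declare
  b boolean;  -- declared but never assigned, so it is null
begin
  if b then
    dbms_output.put_line('b is true');
  elsif not b then
    dbms_output.put_line('b is false');  -- NOT NULL is also null, so this is skipped too
  else
    dbms_output.put_line('b is neither true nor false: it is null');  -- this prints
  end if;
end;
/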
In addition to @APC's answer: databases use a kind of 'ternary logic' in comparison operations. A comparison can say the values are equal, not equal, or "we don't know", because a value is absent (NULL). Even comparing two NULL values gives NULL, meaning we have no information about the operands.
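That third state is easy to see in a CASE expression (another sketch against DUAL):
select case when null = null then 'equal'
            when null <> null then 'not equal'
            else 'unknown'
       end as cmp
from dual;

CMP
-------
unknown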

Oracle - Insert x amount of rows with random data

I am currently doing some testing and am in the need for a large amount of data (around 1 million rows)
I am using the following table:
CREATE TABLE OrderTable(
OrderID INTEGER NOT NULL,
StaffID INTEGER,
TotalOrderValue DECIMAL (8,2),
CustomerID INTEGER);
ALTER TABLE OrderTable ADD CONSTRAINT OrderID_PK PRIMARY KEY (OrderID);
CREATE SEQUENCE seq_OrderTable
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 10000;
and want to randomly insert 1000000 rows into it with the following rules:
OrderID needs to be sequential (1, 2, 3 etc...)
StaffID needs to be a random number between 1 and 1000
CustomerID needs to be a random number between 1 and 10000
TotalOrderValue needs to be a random decimal value between 0.00 and 9999.99
Is this even possible to do? I know I could generate each of these values using an update statement like the one below, but I am not sure how to generate a million rows in one go.
Thanks for any help on this matter
This is how I would randomly generate the number in an update:
UPDATE StaffTable SET DepartmentID = DBMS_RANDOM.value(low => 1, high => 5);
For testing purposes I created the table and populated it in one shot, with this query:
CREATE TABLE OrderTable(OrderID, StaffID, CustomerID, TotalOrderValue)
as (select level, ceil(dbms_random.value(0, 1000)),
ceil(dbms_random.value(0,10000)),
round(dbms_random.value(0,10000),2)
from dual
connect by level <= 1000000)
/
A few notes: it is better to use NUMBER as the data type; NUMBER(8,2) is the Oracle equivalent of DECIMAL(8,2). For populating this kind of table it is much more efficient to use the "hierarchical query without PRIOR" trick (the "connect by level <= ..." trick) to generate the order IDs.
If your table is created already, insert into OrderTable (select level ...) (the same subquery as in my code, spelled out below) should work just as well. You may be better off adding the PK constraint only after you create the data, though, so as not to slow things down.
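Spelled out, that insert would be (an untested sketch using the same expressions as the CTAS above):
insert into OrderTable (OrderID, StaffID, CustomerID, TotalOrderValue)
select level,                                  -- sequential 1, 2, 3, ...
       ceil(dbms_random.value(0, 1000)),       -- StaffID in 1..1000
       ceil(dbms_random.value(0, 10000)),      -- CustomerID in 1..10000
       round(dbms_random.value(0, 10000), 2)   -- order value with 2 decimals
from dual
connect by level <= 1000000;
commit;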
A small sample from the created table (the total time to create the table, 1,000,000 rows, on my cheap laptop was 7.6 seconds):
SQL> select * from OrderTable where orderid between 500020 and 500030;
ORDERID STAFFID CUSTOMERID TOTALORDERVALUE
---------- ---------- ---------- ---------------
500020 666 879 6068.63
500021 189 6444 1323.82
500022 533 2609 1847.21
500023 409 895 207.88
500024 80 2125 1314.13
500025 247 3772 5081.62
500026 922 9523 1160.38
500027 818 5197 5009.02
500028 393 6870 5067.81
500029 358 4063 858.44
500030 316 8134 3479.47

Sorting Matrix Columns in RDLC Report

I get the following query result:
EmployeeName payelement payelementValue payelementOrder
------------ ---------- --------------- ---------------
emp1 PE1 122 2
emp1 PE2 122 1
emp2 PE1 122 2
emp2 PE2 122 1
emp3 PE1 122 2
emp3 PE2 122 1
Which results in a report that looks like:
Employee Name  PE2  PE1
emp1           122  122
emp2           122  122
emp3           122  122
I have created a matrix in an RDLC report, putting 'payelement' in the column field, 'payelementValue' in the value field, and 'employeeName' in the rows field. The problem is that I want to sort 'payelement' by the field 'payelementOrder', which holds the pay elements' order in their actual table, but by default they come out sorted alphabetically, i.e. PE1 then PE2. Any help would be greatly appreciated.
I solved it by doing this:
In the .rdlc, check the Row Groups pane (bottom left). Under it you will find the grouped column name (the one from your tables). Right-click it, go to Group Properties... -> Sorting, set "Sort by" to the column you want to sort on, and click OK.
And you are done.
When you created the matrix you got a column group. In the group properties of that column group you can set the sort order to a specific field (payelementOrder in your case).
