A useful example of when to use vsize function instead of length function in Oracle?

It seems vsize() and length() return the same results. Does anyone know of a practical example of when to use vsize instead of length?
select vsize(object_name), length(object_name) from user_objects
Result:
/468ba408_LDAPHelper 20 20
/de807749_LDAPHelper 20 20
A4201_A4201_UK 14 14
A4201_PGM_FK_I 14 14
A4201_PHC_FK_I 14 14

Well, Length() takes a character argument (CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB), whereas VSize() takes just about any data type, so if you pass Length() a non-character data type there has to be an implicit conversion.
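For instance, passing a NUMBER to Length() goes through an implicit TO_CHAR() first:
select length(12345) from dual;
-- returns 5, the length of the implicit string '12345'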
Length() is also sensitive to character sets and NLS settings, as this date example shows:
drop table daa_test;
create table daa_test as select sysdate dt from dual;
alter session set nls_date_format = 'YYYY-MM-DD';
select vsize(dt) from daa_test;
select length(dt) from daa_test;
alter session set nls_date_format = 'YYYY-MM-DD HH24:mi:ss';
select vsize(dt) from daa_test;
select length(dt) from daa_test;
... giving ...
drop table daa_test succeeded.
create table succeeded.
alter session set succeeded.
VSIZE(DT)
----------------------
7
1 rows selected
LENGTH(DT)
----------------------
10
1 rows selected
alter session set succeeded.
VSIZE(DT)
----------------------
7
1 rows selected
LENGTH(DT)
----------------------
19
1 rows selected
VSize() is really of use, IMHO, in understanding the internal storage requirements of data.
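For example, a quick sketch (the table name is made up) showing how VSize() exposes Oracle's variable-length NUMBER storage:
create table num_test (n number);
insert into num_test values (0);
insert into num_test values (1);
insert into num_test values (123456789);
select n, vsize(n) from num_test;
-- expect 1, 2 and 6 bytes respectively: one exponent byte plus
-- one mantissa byte per pair of significant digits (zero stores no mantissa)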

see: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1897591221788

Related

When we declare a NUMBER datatype in Oracle, what default datatype will it take?

I declared one column in Oracle with the NUMBER datatype, e.g. cust_acc_no NUMBER(9) DEFAULT 0.
After creating the table, why does that column take a double datatype?
That column holds account numbers, so when I select that particular field it shows the account numbers with decimals.
If you didn't put any decimal numbers into it, then they aren't decimal numbers. If you think they are, please post an example - copy/paste the SQL*Plus session which shows what you're saying. I suspect that it is a matter of formatting, not data storage.
By the way, you could have used the INT datatype, e.g.
SQL> create table test (cust_acc_no int default 0);
Table created.
SQL> insert into test
2 select 100 from dual union
3 select 0.5 from dual union
4 select 20.6 from dual;
3 rows created.
SQL> select * from test;
CUST_ACC_NO
-----------
1
21
100
SQL>
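If you want to check what is actually stored, rather than how your client formats it, DUMP might help; Typ=2 is the internal NUMBER type, and a stored whole number will show no extra mantissa bytes for a fraction:
select cust_acc_no, dump(cust_acc_no) from test;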

What is the best datatype for a field of format "YYYY-MM-DD" in Oracle 11g?

I'm creating tables in Oracle 11g and came across one date field of format "YYYY-MM-DD".
I don't want to use VARCHAR2 for this, and when I use NUMBER(5) it still accepts the input. What is the meaning of the limit 5 here, then?
Please suggest the best datatype I can use here.
This is, obviously, a date format mask. If you're about to store dates into that column, you should use the DATE datatype, such as
SQL> create table test
2 (datum date);
Table created.
Don't use the VARCHAR2 (put strings into it, not dates) or NUMBER (put numbers into it, not dates) datatypes for that. You'll regret it sooner than you think.
I'm going to enter some values into the table, showing different ways you could do that - it is important that you insert dates, not strings. Never rely on Oracle implicitly converting strings you might provide into dates; sooner or later, it'll produce an error.
SQL> insert into test values (date '2018-12-25');
1 row created.
SQL> insert into test values (to_date('09.05.2018', 'dd.mm.yyyy'));
1 row created.
SQL> insert into test values (sysdate);
1 row created.
Now, several ways of selecting that value:
This one returns the date in the format currently set by my database's NLS settings:
SQL> select * from test;
DATUM
--------
25.12.18
09.05.18
09.05.18
Here I'm forcing it to return values in the desired format, using ALTER SESSION:
SQL> alter session set nls_date_format = 'yyyy-mm-dd';
Session altered.
SQL> select * from test;
DATUM
----------
2018-12-25
2018-05-09
2018-05-09
Yet another format; note that the value inserted via the SYSDATE function (which returns DATE) contains a date and a time component. It was "invisible" in the previous examples:
SQL> alter session set nls_date_format = 'dd.mm.yyyy hh24:mi:ss';
Session altered.
SQL> select * from test;
DATUM
-------------------
25.12.2018 00:00:00
09.05.2018 00:00:00
09.05.2018 08:03:50
Using the TO_CHAR function with some format (such as dd-mon-yyyy). I'm also requesting Oracle to "translate" the month name into English (as my database works in Croatian):
SQL> select to_char(datum, 'dd-mon-yyyy', 'nls_date_language = english') datum from test;
DATUM
-----------
25-dec-2018
09-may-2018
09-may-2018
SQL>
[EDIT]
Oracle doesn't store DATE values in any "human-readable" format (there's more to read about that on the Internet; Google for it). It is the format mask that presents the value to you.
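If you're curious, DUMP shows the seven internal bytes (century and year in excess-100 notation; hour, minute and second stored plus one). For the 2018-12-25 row above, I'd expect something like:
select dump(datum) from test;
-- Typ=12 Len=7: 120,118,12,25,1,1,1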
I strongly suggest you NOT store dates in a column of any datatype but DATE. It's a time bomb, waiting to explode (and then it'll hurt). Nobody stops you from entering a value such as '1234-99-66' or '12-345-678'; what will you do with it then?
Consider creating a view on top of the table which uses the TO_CHAR function and returns the value in the format you want ('yyyy-mm-dd'). The DATE column in the table makes sure that the values are valid, and the view lets the third-party application see them in the format it finds appropriate.
For example:
SQL> create view v_test as
2 select to_char(datum, 'yyyy-mm-dd') datum
3 from test;
View created.
SQL> select * from v_test;
DATUM
----------
2018-12-25
2018-05-09
2018-05-09
SQL>
So: you wouldn't let the third-party application access the table, but the view instead.
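For example (the grantee name is made up), you'd grant privileges on the view only:
grant select on v_test to third_party_app;
-- no grant on TEST itself, so the application sees only
-- the 'yyyy-mm-dd' representation the view exposes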

Trying to export Oracle data via PL/SQL gives a date of 0000-00-00

I have inherited an Oracle .dmp file which I'm trying to get into CSV so that I can load it into MySQL.
The general approach I'm using is described here. I'm having a problem with one row though. It contains a date of 5544-09-14 like so:
alter session set nls_date_format = 'dd-MON-yyyy';
select OID, REF, TRADING_DATE From LOAN WHERE REF = 'XXXX';
OID REF TRADING_DATE
--- -------------------- ------------
1523 XXXX 14-SEP-5544
This is garbage data from the legacy system, which didn't validate the input dates. I'm wondering, though, why my PL/SQL function to export the data chokes on this value?
It exports that row with a TRADING_DATE value of '0000-00-00T00:00:00', and I'm not sure why.
SELECT dump(TRADING_DATE) FROM LOAN WHERE REF = 'XXXX';
DUMP(TRADING_DATE)
--------------------------------------------------------------------------------
Typ=12 Len=7: 44,156,9,14,1,1,1
and
SELECT to_char(trading_date, 'YYYYMMDDHH24MISS') FROM LOAN WHERE REF = 'XXXX';
TO_CHAR(TRADIN
--------------
00000000000000
The value stored in that column is not a valid date. The first byte of the dump should be the century, which according to Oracle support note 69028.1 is stored in 'excess-100' notation, which means it should have a value of 100 + the actual century; so 1900 would be 119, 2000 would be 120, and 5500 would be 155. So 44 would represent -5600; the date you have stored appears to actually represent 5544-09-14 BC. As Oracle only supports dates with years between -4713 and +9999, this isn't recognised.
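You can do the excess-100 arithmetic on the first two dump bytes yourself to see which year they encode:
select (44 - 100) * 100 + (156 - 100) as decoded_year from dual;
-- -5544, i.e. 5544 BC - outside the supported -4713 to +9999 range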
You can recreate this fairly easily; the trickiest bit is getting the invalid date into the database in the first place:
create table t42(dt date);
Table created.
declare
d date;
begin
dbms_stats.convert_raw_value('2c9c090e010101', d);
insert into t42 (dt) values (d);
end;
/
PL/SQL procedure successfully completed.
select dump(dt), dump(dt, 1016) from t42;
DUMP(DT)
--------------------------------------------------------------------------------
DUMP(DT,1016)
--------------------------------------------------------------------------------
Typ=12 Len=7: 45,56,9,14,1,1,1
Typ=12 Len=7: 2d,38,9,e,1,1,1
So this has a single row with the same data you do. Using alter session I can see what looks like a valid date:
alter session set nls_date_format = 'DD-Mon-YYYY';
select dt from t42;
DT
-----------
14-Sep-5544
alter session set nls_date_format = 'YYYYMMDDHH24MISS';
select dt from t42;
DT
--------------
55440914000000
But if I use an explicit format mask, I just get zeros:
select to_char(dt, 'DD-Mon-YYYY'), to_char(dt, 'YYYYMMDDHH24MISS') from t42;
TO_CHAR(DT,'DD-MON-Y TO_CHAR(DT,'YY
-------------------- --------------
00-000-0000 00000000000000
And if I run your procedure:
exec dump_table_to_csv('T42');
The resultant CSV has:
"DT"
"0000-00-00T00:00:00"
I think the difference is that those that attempt to show the date are sticking with internal date data type 12, while those that show zeros are using external data type 13, as mentioned in note 69028.1.
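You can see external type 13 for yourself: SYSDATE evaluated directly in SQL is dumped in the 8-byte external representation, unlike the stored column above (exact bytes vary by platform):
select dump(sysdate) from dual;
-- Typ=13 Len=8: year as a two-byte integer, then month, day,
-- hour, minute, second and a trailing zero byte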
So in short, your procedure isn't doing anything wrong; the date it's trying to export is invalid internally. Unless you know what date it was supposed to be, which seems unlikely given your starting point, I don't think there's much you can do about it other than guess or ignore it. Unless, perhaps, you know how the data was inserted and can work out how it got corrupted.
I think it's more likely to be from an OCI program than what I did here; this 'raw' trick was originally from here. You might also want to look at note 331831.1. And this previous question is somewhat related.

How to index a date column with null values?

How should I index a date column when some rows have null values?
We have to select rows within a date range, plus rows with null dates.
We use Oracle 9.2 and higher.
Options I found
Using a bitmap index on the date column
Using an index on the date column and an index on a state field whose value is 1 when the date is null
Using a composite index on the date column and another, guaranteed not-null, column
My thoughts on the options are:
Re 1: the table has too many distinct values to use a bitmap index
Re 2: I would have to add a field just for this purpose, and change the query whenever I want to retrieve the null date rows
Re 3: looks tricky to add a field to an index which is not really needed
What is the best practice for this case?
Thanks in advance
Some info I have read:
Oracle Date Index
When does Oracle index null column values?
Edit
Our table has 300,000 records. 1,000 to 10,000 records are inserted and deleted every day. 280,000 records have a null delivered_at date. It is a kind of picking buffer.
Our structure (translated to English) is:
create table orders
(
orderid VARCHAR2(6) not null,
customerid VARCHAR2(6) not null,
compartment VARCHAR2(8),
externalstorage NUMBER(1) default 0 not null,
created_at DATE not null,
last_update DATE not null,
latest_delivery DATE not null,
delivered_at DATE,
delivery_group VARCHAR2(9),
fast_order NUMBER(1) default 0 not null,
order_type NUMBER(1) default 0 not null,
produkt_group VARCHAR2(30)
)
In addition to Tony's excellent advice, there is also the option of indexing your column in such a way that you don't need to adjust your queries. The trick is to add a constant value to the index and nothing else: because the constant is never null, every row - including those with a NULL a_date - gets an index entry.
A demonstration:
Create a table with 10,000 rows out of which only 6 contain a NULL value for the a_date column.
SQL> create table mytable (id,a_date,filler)
2 as
3 select level
4 , case when level < 9995 then date '1999-12-31' + level end
5 , lpad('*',1000,'*')
6 from dual
7 connect by level <= 10000
8 /
Table created.
First I'll show that if you just create an index on the a_date column, the index is not used when you use the predicate "where a_date is null":
SQL> create index i1 on mytable (a_date)
2 /
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'mytable',cascade=>true)
PL/SQL procedure successfully completed.
SQL> set autotrace on
SQL> select id
2 , a_date
3 from mytable
4 where a_date is null
5 /
ID A_DATE
---------- -------------------
9995
9996
9997
9998
9999
10000
6 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=72 Card=6 Bytes=72)
1 0 TABLE ACCESS (FULL) OF 'MYTABLE' (Cost=72 Card=6 Bytes=72)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
720 consistent gets
0 physical reads
0 redo size
285 bytes sent via SQL*Net to client
234 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
6 rows processed
720 consistent gets and a full table scan.
Now change the index to include the constant 1, and repeat the test:
SQL> set autotrace off
SQL> drop index i1
2 /
Index dropped.
SQL> create index i1 on mytable (a_date,1)
2 /
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'mytable',cascade=>true)
PL/SQL procedure successfully completed.
SQL> set autotrace on
SQL> select id
2 , a_date
3 from mytable
4 where a_date is null
5 /
ID A_DATE
---------- -------------------
9995
9996
9997
9998
9999
10000
6 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=6 Bytes=72)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'MYTABLE' (Cost=2 Card=6 Bytes=72)
2 1 INDEX (RANGE SCAN) OF 'I1' (NON-UNIQUE) (Cost=2 Card=6)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
285 bytes sent via SQL*Net to client
234 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
6 rows processed
6 consistent gets and an index range scan.
Regards,
Rob.
"Our table has 300,000 records....
280,000 records have a null
delivered_at date. "
In other words, almost the entire table satisfies a query which searches on WHERE DELIVERED_AT IS NULL. An index is completely inappropriate for that search; a full table scan is much the best approach.
If you have an Enterprise Edition license and you have the CPUs to spare, using a parallel query would reduce the elapsed time.
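For example (the degree of parallelism here is picked arbitrarily):
select /*+ full(o) parallel(o, 4) */ *
from orders o
where delivered_at is null;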
Do you mean that your queries will be like this?
select ...
from mytable
where (datecol between :from and :to
or datecol is null);
It would only be worth indexing the nulls if they were relatively few in the table - otherwise a full table scan may be the most efficient way to find them. Assuming it is worth indexing them you could create a function-based index like this:
create index mytable_fbi on mytable (case when datecol is null then 1 end);
Then change your query to:
select ...
from mytable
where (datecol between :from and :to
or case when datecol is null then 1 end = 1);
You could wrap the case in a function to make it slicker:
create or replace function isnull (p_date date) return varchar2
DETERMINISTIC
is
begin
return case when p_date is null then 'Y' end;
end;
/
create index mytable_fbi on mytable (isnull(datecol));
select ...
from mytable
where (datecol between :from and :to
or isnull(datecol) = 'Y');
I made sure the function returns NULL when the date is not null so that only the null dates are stored in the index. Also I had to declare the function as DETERMINISTIC. (I changed it to return 'Y' instead of 1 merely because to me the name "isnull" suggests it should; feel free to ignore my preference!)
Avoid the table lookup and create the index like this (because ID is NOT NULL, every row - including those with a NULL A_DATE - appears in the index, and a query selecting only ID and A_DATE can be answered from the index alone):
create index i1 on mytable (a_date,id) ;

Obtaining an inserted recordid on Oracle db

I'm using Oracle on the database server, from an XP client, using VB6 and ADO. In one transaction, I'm inserting one record into a parent table, which has a trigger and a sequence to create a unique recordid; that recordid is then used for the relationship to a child table, which receives a variable number of inserts. For performance, this is being sent in one execute command from my client app. For instance (simplified example):
declare Recordid int;
begin
insert into ParentTable (_field list_) Values (_data list_);
Select ParentTableSequence.currVal into Recordid from dual;
insert into ChildTable (RecordID, _field list_) Values (Recordid, _data list_);
insert into ChildTable (RecordID, _field list_) Values (Recordid, _data list_);
... multiple, variable number of additional ChildTable inserts
commit;
end;
This is working fine. My question is: I also need to return to the client the Recordid that was created for the inserts. On SQL Server, I can add something like a select of Scope_Identity() after the commit to return a recordset to the client with the unique id.
But how can I do something similar for Oracle (it doesn't have to be a recordset, I just need that long integer value)? I've tried a number of things based on results from searching the 'net, but have failed to find a solution.
These two lines can be compressed into a single statement:
-- insert into ParentTable (field list) Values (data list);
-- Select ParentTableSequence.currVal into Recordid from dual;
insert into ParentTable (field list) Values (data list)
returning ParentTable.ID into Recordid;
If you want to pass the ID back to the calling program you will need to define your program as a stored procedure or function, returning Recordid as an OUT parameter or a RETURN value respectively.
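For illustration only (the column name is invented, since your real field list was elided), a function version might look like this:
create or replace function insert_parent (p_name in varchar2)
return number
is
  l_recordid number;
begin
  insert into ParentTable (name)  -- hypothetical column
  values (p_name)
  returning ID into l_recordid;
  return l_recordid;
end insert_parent;
/
The client then binds an output parameter for the return value, so no second round trip to the sequence is needed.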
Edit
MarkL commented:
This is more of an Oracle PL/SQL question than anything else, I believe.
I confess that I know nothing about ADO, so I don't know whether the following example will work in your case. It involves building some infrastructure which allows us to pass an array of values into a procedure. The following example creates a new department, promotes an existing employee to manage it and assigns two new hires.
SQL> create or replace type new_emp_t as object
2 (ename varchar2(10)
3 , sal number (7,2)
4 , job varchar2(10));
5 /
Type created.
SQL>
SQL> create or replace type new_emp_nt as table of new_emp_t;
2 /
Type created.
SQL>
SQL> create or replace procedure pop_new_dept
2 (p_dname in dept.dname%type
3 , p_loc in dept.loc%type
4 , p_mgr in emp.empno%type
5 , p_staff in new_emp_nt
6 , p_deptno out dept.deptno%type)
7 is
8 l_deptno dept.deptno%type;
9 begin
10 insert into dept
11 (dname, loc)
12 values
13 (p_dname, p_loc)
14 returning deptno into l_deptno;
15 update emp
16 set deptno = l_deptno
17 , job = 'MANAGER'
18 , mgr = 7839
19 where empno = p_mgr;
20 forall i in p_staff.first()..p_staff.last()
21 insert into emp
22 (ename
23 , sal
24 , job
25 , hiredate
26 , mgr
27 , deptno)
28 values
29 (p_staff(i).ename
30 , p_staff(i).sal
31 , p_staff(i).job
32 , sysdate
33 , p_mgr
34 , l_deptno);
35 p_deptno := l_deptno;
36 end pop_new_dept;
37 /
Procedure created.
SQL>
SQL> set serveroutput on
SQL>
SQL> declare
2 dept_staff new_emp_nt;
3 new_dept dept.deptno%type;
4 begin
5 dept_staff := new_emp_nt(new_emp_t('MARKL', 4200, 'DEVELOPER')
6 , new_emp_t('APC', 2300, 'DEVELOPER'));
7 pop_new_dept('IT', 'BRNO', 7844, dept_staff, new_dept);
8 dbms_output.put_line('New DEPTNO = '||new_dept);
9 end;
10 /
New DEPTNO = 70
PL/SQL procedure successfully completed.
SQL>
The primary keys for both DEPT and EMP are assigned through triggers. The FORALL syntax is a very efficient way of inserting records (it also works for UPDATE and DELETE). This could be written as a FUNCTION to return the new DEPTNO instead, but it is generally considered better practice to use a PROCEDURE when inserting, updating or deleting.
That would be my preferred approach but I admit it's not to everybody's taste.
Edit 2
With regard to performance, bulk operations using FORALL will definitely perform better than a handful of individual inserts. In SQL, set operations are always preferable to row-by-row processing. However, if we are dealing with only a handful of records each time, it can be hard to notice the difference.
Building a PL/SQL collection (what you think of as a temporary table in SQL Server) can be expensive in terms of memory. This is especially true when many users are running the code, because it comes out of the session-level memory allocation, not the Shared Global Area. When we're dealing with a large number of records, it is better to populate the array in chunks, perhaps using the BULK COLLECT syntax with a LIMIT clause.
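A sketch of that pattern (the staging table and its columns are invented for the example):
declare
  cursor c is select ename, sal, job from emp_staging;  -- hypothetical source
  type ename_tt is table of emp.ename%type;
  type sal_tt is table of emp.sal%type;
  type job_tt is table of emp.job%type;
  l_enames ename_tt;
  l_sals sal_tt;
  l_jobs job_tt;
begin
  open c;
  loop
    fetch c bulk collect into l_enames, l_sals, l_jobs limit 500;
    exit when l_enames.count = 0;
    forall i in 1 .. l_enames.count
      insert into emp (ename, sal, job)
      values (l_enames(i), l_sals(i), l_jobs(i));
  end loop;
  close c;
end;
/
Each iteration holds at most 500 rows' worth of collection memory, however large the source table is.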
The Oracle online documentation set is pretty good. The PL/SQL Developer's Guide has a whole chapter on Collections. Find out more.
