I have a problem with an external table that has 3.2 million records.
The source file is a .csv with 3.2 million records, and I created the external table in Oracle like this:
CREATE TABLE SCOT.RXD32LExt
(
SORT_CODE NUMBER(9),
ACCOUNT_NUM NUMBER(11),
BANK_NAME VARCHAR2(6),
TRAN_DEBIT NUMBER(9),
TRAN_CREDIT NUMBER(9),
SEQ_NUM NUMBER(11),
RXD_FLAG NUMBER(4),
REPORT_FLAG NUMBER(4),
FLAG VARCHAR2(4)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY GK3_F
ACCESS PARAMETERS
(
records delimited by newline
fields terminated by ','
optionally enclosed by '"'
missing field values are null
(
SORT_CODE, ACCOUNT_NUM, BANK_NAME, TRAN_DEBIT, TRAN_CREDIT,
SEQ_NUM, RXD_FLAG, REPORT_FLAG, FLAG
)
)
LOCATION ('32LRecords.csv')
)
PARALLEL
REJECT LIMIT UNLIMITED;
Table Created.
SQL> SELECT * FROM SCOT.RXD32LExt;
When I query the table, the database runs into an out-of-space error, since the SYSTEM user does not have sufficient space in the database. I checked the tablespaces with the query below:
SELECT TABLESPACE_NAME,
SUM(bytes)/1024/1024 "USED MEGABYTES",
SUM(maxbytes)/1024/1024 "MAX MEGABYTES"
FROM dba_data_files
WHERE tablespace_name IN ('SYSTEM', 'USERS')
GROUP BY tablespace_name;
Output
------
TABLESPACE_NAME - USED MEGABYTES - MAX MEGABYTES
----------------------------------------------
USERS 100 11264
SYSTEM 600 600
If I need to DECREASE the USED MEGABYTES, what do I need to do? And if I need to INCREASE the MAX MEGABYTES, what do I need to do?
I ran the following statement:
ALTER DATABASE DATAFILE
'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF' RESIZE 10G;
but the ALTER statement above increased the used megabytes, not the max megabytes:
TABLESPACE_NAME - USED MEGABYTES - MAX MEGABYTES
----------------------------------------------
USERS 100 11264
SYSTEM 3200 600
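The MAX MEGABYTES figure comes from dba_data_files.maxbytes, which is only set when the datafile has AUTOEXTEND enabled; RESIZE grows the file's current allocation, not its growth limit. To raise the limit instead, something like this should work (datafile path taken from the question; the NEXT and MAXSIZE values are example figures to adjust):

```sql
-- Enable autoextend so the file can grow in 100 MB steps up to 10 GB
ALTER DATABASE DATAFILE 'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF'
  AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
```

That said, rather than growing SYSTEM, it is usually better to create your own objects in a dedicated tablespace (such as USERS) and leave SYSTEM to the data dictionary.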
I am looking for an SQL query to give me that kind of output:
table_owner table_name partition_name data (bytes) indexes (bytes)
MY_OWNER MY_TABLE P01 12345678 23456789
MY_OWNER MY_TABLE P02 34567890 45678901
MY_OWNER MY_TABLE P03 56789012 67890123
...
I visited How do I calculate tables size in Oracle but didn't find what I am looking for, because it is always per table, per owner/schema but never per partition.
I know that the amount in bytes may not really be representative of the reality since it could be "real bytes in disk" and/or "bytes preallocated" (and real data could only use 1% of this preallocated space), but I am open to have this value even if it is the preallocated or the real amount of bytes used.
Notes:
Using Oracle 18c Enterprise
We do NOT care about system tables, ORACLE tables, maintenance tables, etc. just focusing on tables created by myself for the data
Tablespace name is ALWAYS the same for all partitions in all tables in the same schema, and of course each tablespace name is different for each schema
No need to round in Kb, Mb, Gb or even Tb, if possible I would prefer it in bytes.
No need of pourcentage, no need of max space available
I only use partitions, NOT any subpartitions
I can use a PL/SQL block if we need to loop on each schema, table and partition
Guessing I can run this query by any user
Any idea?
Hi, something like this? We use a similar query that I've modified. You would have to change the ORDER BY clause as you wish, and maybe limit it to one owner, as joining to dba_segments for the size of every segment in the database could take long.
select "OWNER", "OBJECT", "TYPE", "PARTITION_NAME", "NUM_ROWS", "GB", "PARTITION_POSITION", "DUMMY"
from ((select tp.table_owner owner, tp.table_name object, 'TABLE' type, tp.partition_name,
              round(s.bytes/1024/1024/1024, 2) gb, tp.num_rows, tp.partition_position, 1 dummy
       from dba_tab_partitions tp, dba_segments s
       where s.owner = tp.table_owner and s.segment_name = tp.table_name
         and s.partition_name = tp.partition_name)
      union
      (select ip.index_owner owner, ip.index_name object, 'INDEX' type, ip.partition_name,
              round(s.bytes/1024/1024/1024, 2) gb, ip.num_rows, ip.partition_position, 2 dummy
       from dba_ind_partitions ip, dba_segments s
       where s.owner = ip.index_owner and s.segment_name = ip.index_name
         and s.partition_name = ip.partition_name)
      union
      (select lp.table_owner owner, lp.lob_name object, 'LOB' type, lp.partition_name,
              round(s.bytes/1024/1024/1024, 2) gb, 0 num_rows, lp.partition_position, 3 dummy
       from dba_lob_partitions lp, dba_segments s
       where s.owner = lp.table_owner and s.segment_name = lp.lob_name
         and s.partition_name = lp.lob_partition_name))
order by 8, 1, 2, 7;
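If allocated bytes per partition are enough, a simpler sketch (assuming you have access to DBA_SEGMENTS; otherwise swap in USER_SEGMENTS and drop the owner column) is to query the segment views directly:

```sql
-- Allocated space per table/index partition, in bytes
select owner, segment_name, partition_name, segment_type, bytes
from   dba_segments
where  segment_type in ('TABLE PARTITION', 'INDEX PARTITION')
and    owner not in ('SYS', 'SYSTEM')   -- skip dictionary/maintenance objects
order  by owner, segment_name, partition_name;
```

Note that this reports preallocated segment space rather than actual row data, which you said is acceptable.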
I am trying to copy the values of one column into another column (with a different name) of the same table with the code below, but I am running into the error mentioned in the subject line. How can I fix it?
INSERT INTO T1 (C1)
SELECT C2
FROM T1;
The user you are connected as seems to have exceeded its tablespace quota!
First, check your quotas:
SQL> select * from user_ts_quotas;
TABLESPACE_NAME BYTES MAX_BYTES BLOCKS MAX_BLOCKS DRO
--------------- ---------- ---------- ---------- ---------- ---
USERS 0 1048576 0 128 NO
SQL>
and then either remove the limit:
alter user your_user_name quota unlimited on your_tablespace_name;
or change your quota:
alter user your_user_name quota 10M on your_tablespace_name;
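As an aside, if the goal is to copy C2 into C1 for the rows that already exist (rather than append new rows), an UPDATE is usually what is wanted; the INSERT shown in the question duplicates every row instead:

```sql
-- Copy C2 into C1 on the existing rows, without adding rows
UPDATE T1 SET C1 = C2;
```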
I'm trying to insert data from PURCHASE_HIST_D to PURCHASE_HIST.
(different schemas in different servers, with DBLINK).
The target table has a lot of deleted data blocks.
This is how I'm checking the segments vs the used blocks:
-- result : 199.8743480481207370758056640625 GB
select (AVG_ROW_LEN*NUM_ROWS)/1024/1024/1024 from DBA_TABLES where TABLE_NAME='PURCHASE_HIST';
-- result: 250.7939453125 GB
select SUM(BYTES)/1024/1024/1024 from DBA_SEGMENTS where SEGMENT_NAME='PURCHASE_HIST';
which means that there are about 50 GB of allocated blocks that can be reused for the new data.
I run the same queries for the source table:
-- result: 21.8079682849347591400146484375
select (AVG_ROW_LEN*NUM_ROWS)/1024/1024/1024 from DBA_TABLES where TABLE_NAME='PURCHASE_HIST_D';
-- result: 27.447265625
select SUM(BYTES)/1024/1024/1024 from DBA_SEGMENTS where SEGMENT_NAME='PURCHASE_HIST_D';
The source is only 27 GB, so it looks like I don't need to add more space to the tablespace.
This is the free tablespace information:
-- result: 1889477 (Used MB) 4923 (Free MB) 1894400 (Total MB)
select
fs.tablespace_name "Tablespace",
(df.totalspace - fs.freespace) "Used MB",
fs.freespace "Free MB",
df.totalspace "Total MB",
round(100 * (fs.freespace / df.totalspace)) "Pct. Free"
from
(select
tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from
dba_data_files
group by
tablespace_name
) df,
(select
tablespace_name,
round(sum(bytes) / 1048576) FreeSpace
from
dba_free_space
group by
tablespace_name
) fs
WHERE
DF.TABLESPACE_NAME = FS.TABLESPACE_NAME
and df.TABLESPACE_NAME = 'TS_DWHDATA';
So why, when I execute the insert (even with the NOAPPEND hint), do I get an error that there is not enough space in the tablespace?
-- example of the insert
INSERT
/*+ monitor NOAPPEND parallel(64) statement_queuing */
INTO DWH.PURCHASE_HIST
SELECT *
FROM DWH_MIG.PURCHASE_HIST_D#DWH_MIG ;
The exception:
ORA-01653: unable to extend table DWH.PURCHASE_HIST by 8192 in tablespace TS_DWHDATA
Your query shows free space, but the table's high-water mark is still at its old position, and the space below it is not being reused for your insert.
You need to reclaim that space using the following:
alter table your_table enable row movement;
alter table your_table shrink space cascade;
Also, you will need to rebuild the table's indexes after such an operation.
Refer to the Oracle documentation on reclaiming wasted space.
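A sketch for finding the affected indexes and rebuilding them afterwards (the index name below is hypothetical; DBA_INDEXES lists the real ones):

```sql
-- List the table's indexes and their status
select owner, index_name, status
from   dba_indexes
where  table_name = 'PURCHASE_HIST';

-- Rebuild each one (hypothetical index name)
alter index dwh.purchase_hist_idx1 rebuild;
```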
Cheers!!
Hi I want to capture all the Oracle Errors for my DML operations in the manually created table with columns as ErrorID and Error_Descr.
How to get ORA_ERR_NUMBER$ and ORA_ERR_MESG$ values in the above columns?
This table contains user defined errors as well so I do not want to limit it to the Oracle Errors.
Is there any way of capturing Oracle as well as User Defined Errors in the User Defined Tables?
Thanks in Advance!
As per the documentation link,
Oracle allows you to use a manually created table for error logging only if it includes these mandatory columns:
ORA_ERR_NUMBER$
ORA_ERR_MESG$
ORA_ERR_ROWID$
ORA_ERR_OPTYP$
ORA_ERR_TAG$
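For reference, the documented way to generate a conforming log table is the DBMS_ERRLOG package, which creates these five mandatory columns plus a copy of each source column:

```sql
-- Creates MY_LOG_TABLE with the five ORA_ERR_* columns
-- plus VARCHAR2 copies of T_EMP's columns
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name     => 'T_EMP',
                               err_log_table_name => 'MY_LOG_TABLE');
END;
/
```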
If you want other columns to capture the information in those two columns, you can define them as virtual columns:
CREATE TABLE my_log_table (
ORA_ERR_NUMBER$ NUMBER
,ORA_ERR_MESG$ VARCHAR2(2000)
,ORA_ERR_ROWID$ ROWID
,ORA_ERR_OPTYP$ VARCHAR2(2)
,ORA_ERR_TAG$ VARCHAR2(2000)
,ErrorID NUMBER AS (COALESCE(ORA_ERR_NUMBER$, ORA_ERR_NUMBER$))
,Error_Descr VARCHAR2(2000) AS (COALESCE(ORA_ERR_MESG$, ORA_ERR_MESG$))
);
Using COALESCE is a hack because Oracle doesn't allow a virtual column to simply alias another column directly.
Now you can run your error-logging DML as usual, naming the logging table:
INSERT INTO t_emp
SELECT employee_id * 10000
,first_name
,last_name
,hire_date
,salary
,department_id
FROM hr.employees
WHERE salary > 10000
LOG ERRORS INTO my_log_table ('ERR_SAL_LOAD') REJECT LIMIT 25;
0 row(s) inserted.
select ORA_ERR_TAG$,ErrorID,Error_Descr FROM my_log_table ;
ORA_ERR_TAG$ ERRORID ERROR_DESCR
ERR_SAL_LOAD 1438 ORA-01438: value larger than specified precision allowed for this column
ERR_SAL_LOAD 1438 ORA-01438: value larger than specified precision allowed for this column
I am trying to create table2 on Oracle 11.2.0.3 with:
CREATE TABLE table2
LOGGING TABLESPACE TS_table1_2014 PCTFREE 10 INITRANS 1 STORAGE ( INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT ) NOCOMPRESS
as (select * from table1 where date_text <= '2015-12-31');
and I received the error below when I tried to exchange table2 with a partitioned table3:
alter table table3 exchange partition partition_name WITH TABLE table2;
Error report -
SQL Error: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
14097. 00000 - "column type or size mismatch in ALTER TABLE EXCHANGE PARTITION"
*Cause: The corresponding columns in the tables specified in the
ALTER TABLE EXCHANGE PARTITION are of different type or size
*Action: Ensure that the two tables have the same number of columns
with the same type and size.
I have checked for differences with the query below:
Select a.COLUMN_NAME
, a.DATA_TYPE, b.DATA_TYPE
, a.data_length, b.data_length
, a.data_precision, b.data_precision
, a.data_scale, b.data_scale
, a.nullable, b.nullable
from ALL_TAB_COLUMNS a
full outer join ALL_TAB_COLUMNS b on a.column_name=b.column_name
and b.owner=user and b.table_name='&table2'
where a.owner=user and a.table_name='&table1'
and (
nvl(a.data_type,'#')!=nvl(b.data_type,'#')
or nvl(a.data_length,-1)!=nvl(b.data_length,-1)
or nvl(a.data_precision,-100)!=nvl(b.data_precision,-100)
or nvl(a.data_scale,-100)!=nvl(b.data_scale,-100)
or nvl(a.nullable,'#')!=nvl(b.nullable,'#')
)
;
Some of the resulting differences are in column sizes. CREATE TABLE ... AS SELECT did not keep the column order and sizes for the new table.
How can I create table2 as a select from table1 while forcing it to keep the same column sizes as the source table1?
Thanks!
I can't find any differences in your DDL. What I suggest is to use the same DDL to create table2, then do:
insert into table2 select * from table1;
You need to use the dbms_metadata package, or query a number of data dictionary views such as all_tab_columns, to get the metadata of the existing table so you can construct the correct SQL for the swap table used in the exchange-partition operation. CTAS does not transfer DEFAULT values, for example, nor constraints other than NOT NULL checks.
The best practice is to create/re-create/modify this swap table at the same time as the partitioned table.
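A sketch of extracting the exact DDL with DBMS_METADATA (table and schema taken from the question; edit the table name in the output before running it):

```sql
-- Get the full CREATE TABLE statement for table1 in the current schema,
-- then change the table name to table2 and run the result
SELECT DBMS_METADATA.GET_DDL('TABLE', 'TABLE1', USER) FROM dual;
```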