Copying rows from a remote Oracle database

I am using Oracle XE 10.2. I am trying to copy 2,653,347 rows from a remote database with the statement:
INSERT INTO AUTOSCOPIA
  (field1, field2, ..., field47)
SELECT * FROM AUTOS@REMOTE;
I am trying to copy all 47 columns for all 2 million rows locally. After running for a few minutes, however, I get the error:
ORA-12952: The request exceeds the maximum allowed database size of 4 GB data.
How can I avoid this error?
Details: I have 3 indexes on the local table (where I want to insert the remote information).

You're using the Express Edition of Oracle 10.2, which comes with a number of limitations. The one you're running into is that you are limited to 4 GB of space for your tables and your indexes.
How big is the table in GB? With 2.6 million rows, if each row is more than roughly 1,575 bytes, then what you want to do isn't possible. You'd have to either limit the amount of data you're copying over (not getting every row, not getting every column, or not getting all the data in some columns would be options) or install a version and edition that allows you to store that much data. The Express Edition of 11.2 allows you to store 11 GB of data and is free just like the Express Edition of 10.2, so that would be the easiest option.
You can see how much space the table consumes in the remote database by querying the all_segments view over the database link; that should approximate the amount of space you'd need in your local database.
Note that this ignores the space used by out-of-line LOB segments as well as indexes:
SELECT sum(bytes)/1024/1024/1024 size_in_gb
  FROM all_segments@remote
 WHERE owner = <<owner of table in remote database>>
   AND segment_name = 'AUTOS';
If the table is less than 4 GB but the size of the table + indexes is greater than 4 GB, then you could copy the data locally but you would need to drop one or more of the indexes you've created on the local table before copying the data over. That, of course, may lead to performance issues but you would at least be able to get the data into your local system.
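If you go that route, a minimal sketch of the sequence might look like the following; the index name is hypothetical, and you would only recreate the indexes that still fit within the 4 GB cap:

-- Hypothetical index name; drop it to free space before the load
DROP INDEX autoscopia_idx1;

-- Copy the remote rows (full 47-column list elided, as in the question)
INSERT INTO autoscopia (field1, field2 /* ... */, field47)
SELECT * FROM autos@remote;
COMMIT;

-- Recreate the index afterwards, provided it still fits under the limit
CREATE INDEX autoscopia_idx1 ON autoscopia (field1);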
If you (or anyone else) has created any tables in this database, those tables count against your 4 GB database limit as well. Dropping them would free up some space that you could use for this table.
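If you want to see what is currently consuming space in the local database, a query along these lines (run as the schema owner) lists the largest segments first:

SELECT segment_name, segment_type, ROUND(bytes/1024/1024) size_in_mb
  FROM user_segments
 ORDER BY bytes DESC;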
Assuming that you will not be modifying the data in this table once you copy it locally, you may want to use a PCTFREE of 0 when defining the table. That will minimize the amount of space reserved in each block for subsequent updates.
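For example, assuming the local table is being created from scratch rather than already existing, the definition might look like this (the column names and types here are just placeholders for the real 47 columns):

CREATE TABLE autoscopia (
  field1 NUMBER,
  field2 VARCHAR2(100)
  -- ... remaining columns ...
) PCTFREE 0;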

Related

Merge Query in oracle throws "ORA-01652: unable to extend temp segment by 64 in tablespace TEMP"

We are using a one-time MERGE query in Oracle 12c which fetches around 22,633,334 records and updates them in the target table, but every time the query runs it throws "ORA-01652: unable to extend temp segment by 64 in tablespace TEMP". The DBA has already extended the TEMP tablespace to 60 GB. Can anyone tell me how this can be resolved, or what the ideal temp space to allocate for this volume of data would be?
Oracle uses that space to build temporary result sets and perform sorts. The amount of space required depends on the precise explain plan of your query (check it; it should give you some clues), but it could be several times the actual amount of data in the tables if there are joins. The number of records matters not at all; it's about the size of the records and what Oracle is trying to do with them. You want to avoid full table scans, unnecessary sorts, and that sort of thing. Make sure your table statistics are up to date. In the end, it needs what it needs, and all you can do is increase your TEMP tablespace size until it works.
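To see what the statement is actually doing, and how much TEMP it consumes while it runs, something along these lines can help (the MERGE shown is only a placeholder for your real one-time query; the table and column names are made up):

EXPLAIN PLAN FOR
  MERGE INTO target_table t
  USING source_table s ON (t.id = s.id)
  WHEN MATCHED THEN UPDATE SET t.col1 = s.col1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Temp space in use per session while the merge is running
-- (assumes an 8 KB block size)
SELECT username, sql_id, segtype, blocks * 8 / 1024 AS mb_used
  FROM v$tempseg_usage;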

Full table scan behaviour with cache and nocache in oracle 12c

I have the same query running on two different DB servers with almost identical configuration. The query is doing a full table scan (FTS) on one table:
SELECT COUNT (1) FROM tax_proposal_dtl WHERE tax_proposal_no = :b1 AND taxid != :b2 AND INSTR(:b3 , ',' || STATUS || ',' ) > 0 ;
On the 1st DB I get the result in less than 3 seconds with 0 disk reads, while on the 2nd DB disk reads are high and the elapsed time is approximately 9 seconds.
The only difference in the table configuration between the two DBs is that on the 1st the table has CACHE = 'Y' while on the 2nd it has CACHE = 'N'. My understanding is that in the case of an FTS the cache won't be used and a direct path read will be used instead. So why is the performance of the same query impacted by CACHE/NOCACHE (that is the only difference between the two environments, and even the execution plan is the same)?
As suggested by Jon, and after doing further research on this topic (especially with regard to _SMALL_TABLE_THRESHOLD), I am adding more details.
Current version: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit
Details of 2nd DB:
Total block count of table from DBA_SEGMENTS = 196736
Details of 1st DB:
Total block count of table from DBA_SEGMENTS = 172288
The execution plan on both DBs is the same, but there are two major differences:
a) On the 2nd DB the cache option is false on the table (I tried ALTER TABLE ... CACHE, but it still had no impact on performance).
b) On the 2nd DB the _STT parameter is 23920, so as per the 5*_STT rule the table does not qualify as a medium-sized table, while on the 1st DB the _STT parameter is 48496, so as per the 5*_STT rule the table does qualify as a medium-sized table.
Below is a chart, based on my research so far into the _STT and CACHE parameters, of how the system behaves for different table sizes.
Please let me know if my understanding is correct in assuming that the CACHE option has no impact on medium or large tables, but helps retain small tables in the LRU list for longer. Based on the above assumptions and the chart presented, I am concluding that in the case of the 2nd DB the table is classified as large, hence direct path reads and more elapsed time, while in the case of the 1st DB it is classified as medium, hence cached reads and less elapsed time.
As per this link, I have set the _STT parameter at session level on the 2nd DB:
alter session set "_small_table_threshold"=300000;
Performance has improved considerably and is now almost the same as on the 1st DB, with 0 disk reads, which implies the table is now treated as small.
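As a side note, one way to confirm whether a particular run used direct path reads or buffered reads is to compare session statistics immediately after running the query; the statistic names below are standard, and the approach itself is just a rough check:

SELECT n.name, s.value
  FROM v$mystat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name IN ('physical reads', 'physical reads direct');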
I have used the following articles in my research:
https://jonathanlewis.wordpress.com/2011/03/24/small-tables/
https://hoopercharles.wordpress.com/2010/06/17/_small_table_threshold-parameter-and-buffer-cache-what-is-wrong-with-this-quote/?unapproved=43522&moderation-hash=be8d35c5530411ff0ca96388a6fa8099#comment-43522
https://dioncho.wordpress.com/tag/full-table-scan/
https://mikesmithers.wordpress.com/2016/06/23/oracle-pinning-table-data-in-the-buffer-cache/
http://afatkulin.blogspot.com/2012/07/serial-direct-path-reads-in-11gr2-and.html
http://afatkulin.blogspot.com/2009/01/11g-adaptive-direct-path-reads-what-is.html
The keywords CACHE and NOCACHE are a bit misleading - they don't simply enable or disable caching, they only make cache reads more or less likely by changing how the data is stored in the cache. Like most memory systems, the Oracle buffer cache is constantly adding new data and aging out old data. The default, NOCACHE, will still add table data from full table scans to the buffer cache, but it will mark it as the first piece of data to age out.
According to the SQL Language Reference:
CACHE
For data that is accessed frequently, this clause indicates that the blocks retrieved for this table are placed at the most recently used end of the least recently used (LRU) list in the buffer cache when a full table scan is performed. This attribute is useful for small lookup tables.
...
NOCACHE
For data that is not accessed frequently, this clause indicates that the blocks retrieved for this table are placed at the least recently used end of the LRU list in the buffer cache when a full table scan is performed. NOCACHE is the default for LOB storage.
The real behavior can be much more complicated. The in-memory option, result caching, OS and SAN caching, direct path reads (usually for parallelism), the small table threshold (where Oracle doesn't cache the whole table if it exceeds a threshold), and probably other features I can't think of may affect how data is cached and read.
Edit: I'm not sure if I can add much to your analysis. There's not a lot of official documentation around these thresholds and table scan types. Looks like you know as much about the subject as anyone else.
I would caution that this kind of full table scan optimization should only be needed in rare situations. Why is a query frequently doing a full table scan of a 1GB table? Isn't there an index or a materialized view that could help instead? Or maybe you just need to add more memory if you need the development environment to match production.
Another option, instead of changing the small table threshold, is to change the perceived size of the table. Modify the statistics so that Oracle thinks the table is small. This way no other tables are affected.
begin
dbms_stats.set_table_stats(ownname => user, tabname => 'TAX_PROPOSAL_DTL', numblks => 999);
dbms_stats.lock_table_stats(ownname => user, tabname => 'TAX_PROPOSAL_DTL');
end;
/
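If the statistics ever need to be put back (for example after the table grows or the access pattern changes), they can be unlocked and re-gathered; this is plain DBMS_STATS usage rather than anything specific to the workaround above:

begin
  dbms_stats.unlock_table_stats(ownname => user, tabname => 'TAX_PROPOSAL_DTL');
  dbms_stats.gather_table_stats(ownname => user, tabname => 'TAX_PROPOSAL_DTL');
end;
/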

Inserting data into temporary tables in PostgreSQL is significantly slower compared to Oracle

Our application supports multiple databases, including Oracle and PostgreSQL. In several use cases, multiple queries are run to fetch the necessary data. The data obtained from one or more queries is filtered based on business logic, and the filtered data is then inserted into a temporary table using a parameterized INSERT statement. This temporary table is then joined with other tables in a subsequent query. We have noticed that with PostgreSQL the time taken to insert data into the temporary table increases linearly with the number of rows inserted. The temporary table has only one varchar column of 15 bytes. Inserting 80 rows takes 16 ms, 160 rows takes 32 ms, 280 rows takes 63 ms, and so on. The same operations with an Oracle database take about 1 ms for these inserts.
We are using PostgreSQL 10.4 with psqlODBC driver version 10.03. We have configured the temp_buffers (256MB), shared_buffers (8GB), work_mem (128MB) and maintenance_work_mem (512MB) parameters based on the guidelines in the PostgreSQL documentation.
Are there any other configuration options we could try to improve the performance of temporary table inserts in PostgreSQL? Please suggest.
You haven't really identified the temporary table as the problem.
For example, below is a quick test of inserts into a 15-character (not the same as bytes, of course) varchar column:
=> CREATE TEMP TABLE tt (vc varchar(15));
CREATE TABLE
=> \timing on
Timing is on.
=> INSERT INTO tt SELECT to_char(i, '0000000000') FROM generate_series(1,100) i;
INSERT 0 100
Time: 0.758 ms
This is on my cheap, several-years-old laptop. Unless you are running your PostgreSQL database on a Raspberry Pi, I don't think temporary table speed is the problem for you.
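If the time really does grow linearly with the row count, the per-statement round trip (one parameterized INSERT per row through ODBC) is a more likely culprit than the temporary table itself. A rough server-side comparison, reusing the tt table created above, is to time a row-at-a-time loop against a single set-based INSERT:

-- One INSERT per row, executed server-side (a client-side loop over
-- ODBC would be slower still because of network round trips)
DO $$
BEGIN
    FOR i IN 1..100 LOOP
        INSERT INTO tt VALUES (to_char(i, '0000000000'));
    END LOOP;
END $$;

-- The same 100 rows in a single set-based statement
INSERT INTO tt SELECT to_char(i, '0000000000') FROM generate_series(1,100) i;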

Migration issue while using DMS: incorrect junk data for empty columns

While migrating from MySQL to Oracle using the AWS DMS service, on the source side (MySQL DB instance) some huge column (mediumtext) values are empty for 75% of the rows in a table, whereas in the target (Oracle) they are migrated with some other value (not junk values). To me it looks like the column values are being copied incorrectly between rows.
Wherever there are empty values in the source-side columns, it copied some other data. Around 75% of the table data for some of the CLOB columns with empty values on the source side is incorrectly mapped to some other data on the Oracle side. We used full LOB mode and 10000 KB as the chunk size.
Some questions or requests:
1. Could you share the table DDL from source and target?
2. Are you sure there is no workload running on the target that could change values in the table outside the DMS process?
3. Full LOB mode migrates LOBs in chunks. Why are we specifying such a high LOB chunk size? Also, do we not know the max LOB size, so that limited LOB mode could be used?
4. Could you paste the task ARN here? I work for AWS DMS and can look into what is going on. Once I find the root cause, I will also make sure I post an analysis here for all Stack Overflow users.
Let me know.

SQL Server 2008 R2 Express 10GB Filesize limit

I have reached the file size limit on my SQL Server 2008 R2 Express database, which I believe is 10 GB. I know this because I see Event ID 1101 in the event log:
Could not allocate a new page for database 'ExchangeBackup' because of insufficient disk space in filegroup 'PRIMARY'
I have removed some historic data to work around the problem for now, but it is only a temporary fix. One table (PP4_MailBackup) is much larger than the others, so when I created this database 12 months ago I converted it to a FILESTREAM table, and that data is stored outside the filegroup in the file system. This appeared to be working successfully until I received the error and new data was no longer being added to my database.
When I do a report on table sizes I see the Reserved (KB) column adds up to almost 10 GB.
The folder that holds my FILESTREAM data is 176 GB.
The database .mdf file is indeed 10 GB.
Does anyone have any idea why the table PP4_MailBackup is still using nearly 7GB?
Here is the "Standard Reports -> Disk Usage report" for this database:
Thanks in advance
David
Update
Here is some more info.
There are 868,520 rows in this table.
This command returns 1, so I'm assuming ANSI_PADDING is on. I have never changed this from the default:
SELECT SESSIONPROPERTY('ANSI_PADDING')
The columns are defined like this
Even if every record for every column filled the full record size, by my rough calculation the table would be around 4,125,470,000 bytes. I understand that the nvarchar columns only use the actual space required.
I'm still missing a lot of space.
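For what it's worth, the rough calculation above works out to about 4,750 bytes per row (868,520 × 4,750 ≈ 4,125,470,000 bytes). One way to see how the reserved space for this one table splits between data, indexes and unused space is sp_spaceused, plus the partition stats DMV for in-row versus LOB/row-overflow pages:

-- Space used by just this table (reserved, data, index_size, unused)
EXEC sp_spaceused 'PP4_MailBackup';

-- Page counts broken down by in-row, LOB and row-overflow data
SELECT index_id, in_row_data_page_count, lob_used_page_count, row_overflow_used_page_count
  FROM sys.dm_db_partition_stats
 WHERE object_id = OBJECT_ID('PP4_MailBackup');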
Not really an answer but more of a conclusion.
I have given up on this problem and resigned myself to removing data to stay under the 10 GB primary file size limit. I figured out that nvarchar columns store 2 bytes per character in order to handle Unicode, although they do only use the space required and don't pad the column out with spaces. So this would account for some of the space I couldn't find.
I tried to convert my char(500) columns to varchar(500) by adding new columns with the correct type, copying data into them, and then removing the old columns. This worked, but the table actually got bigger, because removing a column is only a metadata change and does not actually remove the data. To recover the space I would need to create a new table, copy the data across and then remove the old table; of course I don't have enough space in the primary file to do that.
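For reference, a minimal sketch of that column-swap approach (the column name here is hypothetical) looks like the following; the final rebuild is what actually reclaims the dropped column's bytes, and it is exactly the step that needs free space the 10 GB cap doesn't leave room for:

-- Hypothetical column name; add the new variable-length column
ALTER TABLE PP4_MailBackup ADD Subject_new varchar(500) NULL;

-- Copy the data across, then drop the old fixed-length column
UPDATE PP4_MailBackup SET Subject_new = Subject;
ALTER TABLE PP4_MailBackup DROP COLUMN Subject;
EXEC sp_rename 'PP4_MailBackup.Subject_new', 'Subject', 'COLUMN';

-- Dropping the column is only a metadata change; the space only comes
-- back after a rebuild, which itself needs free room in the data file
ALTER TABLE PP4_MailBackup REBUILD;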
I thought about copying the table to tempdb, removing the original table, and then copying it back, but tempdb doesn't support FILESTREAM columns (at least to my knowledge), so I would need to hold all 170 GB within the tempdb table. This sounded like a dubious solution, and my test server didn't have enough space on the partition where tempdb was stored. I couldn't find anything on the file size limit of tempdb on SQL Server 2008 Express, but at this point it was all getting too hard.
