I want to move tables from my Sybase database to my Oracle database. However, some tables in my Sybase database have long identifiers or table names (above 30 characters), so the "Copy to Oracle" function in Oracle SQL Developer keeps failing.
How can I migrate TABLE DATA only to my oracle schema?
Also note, I tried the data migration flow, but when I get to the step to move data, it doesn't let me move table data; the option just isn't visible. It will only let me move procedures and such.
Do I have something disabled?
You need to enable long identifiers in your 12cR2 (12.2.x.y) or newer version of Oracle Database.
As soon as you set COMPATIBLE to “12.2.0” or higher for your database, you can use long identifiers for every object in Oracle, with the following exceptions (a quick way to verify the setting is sketched after the list):
Database name ≤ 8 bytes
Disk groups ≤ 30 bytes
PDB names ≤ 30 bytes
Rollback segments ≤ 30 bytes
Tablespace names ≤ 30 bytes
Tablespace sets ≤ 30 bytes
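Before relying on this, it's worth confirming what your database actually reports. A minimal sketch, assuming a privileged session:

-- Confirm the COMPATIBLE setting; long identifiers need 12.2.0 or higher
SELECT name, value
FROM v$parameter
WHERE name = 'compatible';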
So let's do a table then... I'm running 12.2 in this scenario, but it would be fine in 18c or 19c as well.
Click OK and I get - Success!
Copied Objects:
Sybase.copy_to_oracle.dbo.TABLE.this_is_a_table_to_hold_employees_please_dont_put_customers_in_it
Drop Target: HR
Copy DDL: Yes
Do Not Replace Existing Objects
Copy Data: Yes
Append Existing Objects
Task Succeeded.
1 table(s) copied.
Created table this_is_a_table_to_hold_employees_please_dont_put_customers_in_it and inserted 0 row(s)
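If you want to double-check the result yourself, a quick sketch (assuming you're connected as the target HR schema owner; unquoted identifiers are stored uppercase):

-- Verify the long-named table landed in the target schema
SELECT table_name
FROM user_tables
WHERE table_name = UPPER('this_is_a_table_to_hold_employees_please_dont_put_customers_in_it');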
Using DBeaver, I'm trying to migrate a table from one Oracle instance to another. I just right-click the desired table, select Export Data, and follow the wizard.
My problem is that the CLOB column is truncated. In the source database instance the max CLOB length is 6046297, but in the target it is 970823. The source has 340 records with the CLOB column value larger than 970823.
I've just noticed that the source table has 24806 rows and the target has 12876. For the table's sequence id, the max value is 70191 in the source and 58185 in the target. The source has 22716 records with an id less than 58185 and the target has 12876, so it wasn't just truncation. DBeaver is not transferring half of the records.
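For reference, a sketch of the checks behind these numbers, run against both the source and the target (table and column names are simplified, not my real schema):

-- Row count, longest CLOB, and highest id on each side
SELECT COUNT(*) AS row_count,
       MAX(DBMS_LOB.GETLENGTH(clob_col)) AS max_clob_length,
       MAX(id) AS max_id
FROM my_table;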
I'm connecting to Oracle with the JDBC driver. Is there a configuration in DBeaver, in the connection, or in the driver that would allow me to transfer this table? Maybe I should just try another tool.
Our application supports multiple databases, including Oracle and PostgreSQL. In several use cases, multiple queries are run to fetch the necessary data. The data obtained from one or more queries is filtered based on business logic, and the filtered data is then inserted into a temporary table using a parameterized INSERT statement. This temporary table is then joined with other tables in a subsequent query. We have noticed that the time taken to insert data into the temporary table increases linearly with the number of rows when using PostgreSQL. This temporary table has only one varchar column of 15 bytes. Inserting 80 rows takes 16 ms, 160 rows takes 32 ms, 280 rows takes 63 ms, and so on. The same operations on an Oracle database take about 1 ms for these inserts.
We are using PostgreSQL 10.4 with psqlODBC driver 10.03 version. We have configured temp_buffers (256MB), shared_buffers (8GB), work_mem (128MB) and maintenance_work_mem (512MB) parameters based on the guidelines provided in PostgreSQL documentation.
Are there any other configuration options we could try to improve the performance of temp table inserts in PostgreSQL database? Please suggest.
You haven't really identified the temporary table as the problem.
For example, below is a quick test of inserts into a 15-character (not the same as bytes, of course) varchar column:
=> CREATE TEMP TABLE tt (vc varchar(15));
CREATE TABLE
=> \timing on
Timing is on.
=> INSERT INTO tt SELECT to_char(i, '0000000000') FROM generate_series(1,100) i;
INSERT 0 100
Time: 0.758 ms
This is on my cheap, several-years-old laptop. Unless you are running your PostgreSQL database on a Raspberry Pi, I don't think temporary table speed is a problem for you.
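If the linear growth you see is really per-statement round-trip overhead on the ODBC side rather than the temp table itself, batching many rows into one INSERT usually helps. A minimal sketch (values illustrative):

-- One statement carrying many rows instead of one statement per row
INSERT INTO tt (vc) VALUES
  ('value_001'),
  ('value_002'),
  ('value_003'); -- ...extend to a few hundred rows per statement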
I am using Oracle XE 10.2. I am trying to copy 2,653,347 rows from a remote database with the statement
INSERT INTO AUTOSCOPIA
(field1,field2...field47)
SELECT * FROM AUTOS@REMOTE;
I am trying to copy all 47 columns for all 2 million rows locally. After running for a few minutes, however, I get the error:
ORA-12952: The request exceeds the maximum allowed database size of 4 GB
How can I avoid this error?
Details: I have 3 indexes on my local table (where I want to insert the remote information).
You're using the express edition of Oracle 10.2 which includes a number of limitations. The one that you're running into is that you are limited to 4 GB of space for your tables and your indexes.
How big is the table in GB? If the table has 2.6 million rows and each row is more than ~1575 bytes, then what you want to do isn't possible. You'd have to either limit the amount of data you're copying over (not getting every row, not getting every column, or not getting all the data in some columns would be options) or you would need to install a version and edition that allows you to store that much data. The express edition of 11.2 allows you to store 11 GB of data and is free just like the express edition of 10.2, so that would be the easiest option. You can see how much space the table consumes in the remote database by querying the all_segments view in the remote database; that should approximate the amount of space you'd need locally.
Note that this ignores the space used by out-of-line LOB segments as well as indexes:
SELECT sum(bytes)/1024/1024/1024 size_in_gb
FROM all_segments@remote
WHERE owner = <<owner of table in remote database>>
AND segment_name = 'AUTOS';
If the table is less than 4 GB but the size of the table + indexes is greater than 4 GB, then you could copy the data locally but you would need to drop one or more of the indexes you've created on the local table before copying the data over. That, of course, may lead to performance issues but you would at least be able to get the data into your local system.
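A sketch of that approach, assuming the local table is AUTOSCOPIA (the index name is a placeholder):

-- List the local indexes, then drop what you can spare before the copy
SELECT index_name FROM user_indexes WHERE table_name = 'AUTOSCOPIA';
DROP INDEX autoscopia_idx1; -- repeat for each index returned above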
If you (or anyone else) has created any tables in this database, those tables count against your 4 GB database limit as well. Dropping them would free up some space that you could use for this table.
Assuming that you will not be modifying the data in this table once you copy it locally, you may want to use a PCTFREE of 0 when defining the table. That will minimize the amount of space reserved in each block for subsequent updates.
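For example, a minimal sketch of such a definition (the column list is abbreviated and illustrative):

-- PCTFREE 0 reserves no space in each block for future updates
CREATE TABLE autoscopia (
  field1 NUMBER,
  field2 VARCHAR2(100)
  -- ...remaining columns...
) PCTFREE 0;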
For Source: OLE DB Source - SQL Command
SELECT -- The destination table's Id column has IDENTITY(1,1), so I didn't include it here
[GsmUserId]
,[GsmOperatorId]
,[SenderHeader]
,[SenderNo]
,[SendDate]
,[ErrorCodeId]
,[OriginalMessageId]
,[OutgoingSmsId]
,24 AS [MigrateTypeId] --This is a static value
FROM [MyDb].[migrate].[MySource] WITH (NOLOCK)
To Destination: OLE DB Destination
It takes 5 or more minutes to insert 1,000,000 rows. I even unchecked Check Constraints.
Then, with the same SSIS configuration, I wanted to test it with another table exactly like the destination table. So I re-created the destination table (with the same constraints but without the existing data) and named it dbo.MyDestination.
But it takes about 30 seconds or less to complete with the same amount of data.
Why is it significantly faster with the test table than with the original table? Is it because the original table already has 107,000,000 rows?
Check for indexes/triggers/constraints etc. on your destination table. These may slow things down considerably.
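A quick sketch of that check (the object name here assumes the dbo.MyDestination table from the question; substitute your real destination):

-- Indexes on the destination table
SELECT name, type_desc FROM sys.indexes WHERE object_id = OBJECT_ID('dbo.MyDestination');
-- Triggers on the destination table
SELECT name FROM sys.triggers WHERE parent_id = OBJECT_ID('dbo.MyDestination');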
Check the OLE DB connection manager's Packet Size and set it appropriately; you can follow this article to set it to the right value.
If you are familiar with SQL Server Profiler, use it to get more insight, especially into what happens when you insert data into the re-created table versus the original table.
The simple version of this question is: Is it possible to export all the data from an Oracle 10g XE database that has reached its 4 GB maximum size limit?
Now, here's the background:
My (Windows) Oracle 10g XE database has reached its maximum allowed database size of 4 GB. The solution I intended to implement was to upgrade to Oracle 11g XE, which has a larger maximum size limit and better reflects our production environment anyway. Of course, in typical Oracle fashion, there is no upgrade-in-place option (at least none that I could find for XE). So I decided to follow the instructions in the "Importing and Exporting Data between 10.2 XE and 11.2 XE" section of the Oracle 11g XE Installation Guide. After fighting with SQL*Plus for a while, I eventually reached step 3d of the instructions, which tells the user to enter the following (it doesn't specify that this is a command-line rather than a SQL*Plus command, but it means the command line):
expdp system/system_password full=Y EXCLUDE=SCHEMA:\"LIKE \'APEX_%\'\",SCHEMA:\"LIKE \'FLOWS_%\'\" directory=DUMP_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
That command results in the following output:
Export: Release 10.2.0.1.0 - Production on Thursday, 29 September, 2011 10:19:11
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.SYS_EXPORT_FULL_06"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-12952: The request exceeds the maximum allowed database size of 4 GB
I have deleted quite a bit of data from the USERS tablespace, but am unable to resize it because of the physical locations of the data. And no matter what I do, I always get that same output. I have tried running "Compact Storage" from the admin web application with no effect.
So my question is, am I missing something? Or is Oracle really so incompetent as to leave people completely out of luck if their XE databases fill up?
You can get to the point where you can export your data; it sounds like you just need some help coalescing the data so you can reduce the USERS tablespace size and increase the SYSTEM tablespace size to get past your issue.
You mentioned that you removed data from the USERS tablespace but can't resize it. Since you can't shrink a tablespace below its highest allocated block, reorganize your table data by executing the following command for each table:
ALTER TABLE <table_name> MOVE <tablespace_name>;
The tablespace name can be the same tablespace the table currently lives in; the MOVE will still reorganize and coalesce the data.
This statement will give you the text for this command for all the tables that live in USERS tablespace:
select 'ALTER TABLE '||OWNER||'.'||TABLE_NAME||' MOVE '||TABLESPACE_NAME||';' From dba_tables where tablespace_name='USERS';
Indexes will also have to be rebuilt (ALTER INDEX <index_name> REBUILD;), because the MOVE command invalidates them: it changes the physical organization of the table data (blocks) rather than relocating rows one by one.
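In the same spirit as the generator above, this builds the rebuild commands for any index the moves left unusable:

select 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD;' From dba_indexes where status='UNUSABLE';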
After the data is coalesced you can resize the USERS tablespace to reflect the data size.
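The resize itself is a single command; a sketch with an illustrative Windows XE datafile path and target size:

ALTER DATABASE DATAFILE 'C:\oraclexe\oradata\XE\USERS.DBF' RESIZE 2G;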
Is it a pain? Yes. Is Oracle user-friendly? They would love you to think so, but it's really not, especially when you hit some weird corner case that keeps you from doing the kinds of things you want to do.
As you can see, you need some free space in the SYSTEM tablespace in order to export, and Oracle XE refuses to allocate it because the sum of SYSTEM+USERS has reached 4 GB.
I would try to install an Oracle 10gR2 Standard Edition instance on a similar architecture, then shut down Oracle XE and make a copy of your existing USERS data file. Using ALTER TABLESPACE commands on the Standard Edition instance, you should be able to link the USERS tablespace to your existing data file, then export the data.