Oracle: How to efficiently copy a table from one schema to another on a different database and server

I have a large table (3.5MM records) that I need to copy from one schema/database to another schema/database. I tried TOAD's "copy data from table" feature, but got errors and it never fully copied, in part because the connection kept getting dropped. I'm trying the object copy feature of SQL Developer, and after 11 minutes it's still copying. I tried the SQL*Plus COPY command but got a syntax error (help needed). I'm still open to extracting the data as INSERT statements that I can just run directly.
1) SQL*Plus COPY, as follows:
copy from report_new/mypassword@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.15.15.20)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=STAGE))) to report/mypassword@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.18.22.25)(PORT=1550))(CONNECT_DATA=(SERVICE_NAME=DEV))) CREATE USER_USAGE_COUNT USING SELECT * FROM USER_USAGE_COUNT
The above gives me
SQL> start copy_user_count_table.sql
SP2-0758: FROM clause missing username
2) I tried TOAD
The TOAD "Copy data to another schema" fails due to the connection getting
dropped. I set the commit threshold first to 5000 then to 500.
3) I'm trying SQL Developer's copy function, but I think it's not going to finish anytime soon, and it gives me no real progress indication. For all I know it could be hung, but it just doesn't want to tell me.
4) I thought about creating a database link, but I don't have the authority to create one, and it's in a corporate environment where the DBAs don't respond in under 3 days.
Should I write my own Java code to just do this one record at a time? I shouldn't have to do this, but somehow it's easier to send a man to the moon than to copy data from one schema to another.

You can use the copy command of SQLcl, which is part of newer SQL Developer releases. SQLcl is found in the sqldeveloper\bin directory and is named sql.exe (Windows) or sql (Unix/Linux/Mac). The steps to follow are:
Connect to the destination database with SQLcl:
sql username/password@destinationdb
Use the copy command:
copy from username@sourcedatabase create newtablename using select * from sourcetable;
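Applied to the question's setup, a sketch (passwords and the EZConnect host:port/service strings are placeholders taken from the question; if your SQLcl build doesn't accept EZConnect in COPY, define tnsnames.ora aliases instead):
sql report/mypassword@10.18.22.25:1550/DEV
copy from report_new/mypassword@10.15.15.20:1541/STAGE create user_usage_count using select * from user_usage_count;
Because SQLcl commits in batches as it copies, this also sidesteps the dropped-connection problem that bites row-by-row tools.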

Related

How to download all views stored in snowflake to local machine

I want to create a backup of all SQL scripts (views) that are saved in Snowflake (not the data). How can I do it? Obviously, manual copying and pasting is not an answer.
Expected result: I have all views (SQL scripts) from the Snowflake database on my local machine, one file per view.
Expected result, perfect version: I have all views (SQL scripts) from Snowflake on my local machine, where folders correspond to schemas in Snowflake and files correspond to views (files are also placed in the correct folders).
SHOW VIEWS includes the DDL in the text column. To get all views in the database:
show views in database my_database;
select "text" from table(result_scan(-1));
You can invoke it from the CLI with SNOWSQL.
You can run:
SELECT get_ddl('schema', '<schema_name>');
This will get you the DDL of all objects in the schema, which you can then save to a file in a folder.
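If you want it view by view instead, a sketch (database, schema, and view names are placeholders): list the views from the information schema, then call GET_DDL for each one:
select table_schema, table_name
from my_database.information_schema.views
where table_schema <> 'INFORMATION_SCHEMA';
select get_ddl('view', 'my_schema.my_view');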
You can also just download the full database DDL.
Option 1
Run select get_ddl('database', '<databasename>'), then copy the output and save it to your machine.
Option 2
Write a Python script to get a list of schemas, views, tables, stored procedures, etc. and save each one to its corresponding folder on your local machine. Something like the code below; you just have to extend it to get your perfect-version output. Install the Snowflake Python connector to run it.
import snowflake.connector

con = snowflake.connector.connect(
    user='YourUsername',
    password='YourPassword',
    account='your_snowflake_account',
    database='databasename',
    warehouse='datawarehousename',
    role='dbrole'
)
cur = con.cursor()
try:
    cur.execute("SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE FROM information_schema.tables")
    for (table_schema, table_name, table_type) in cur:
        print('{0}, {1}, {2}'.format(table_schema, table_name, table_type))
        # Add a second cursor here to get the DDL for each object and save it
        # to a file/folder structure, something like:
        # cur2.execute("SELECT GET_DDL('<object type from this query>', '<object name>')")
finally:
    cur.close()
    con.close()
Great answers above, but you could take it a step further and manage all your DDL through a version-controlled tool like dbt.
That way you would have a mechanism not only to store your DDL in text files, but also to run that DDL (instead of relying on error-prone manual processes).
Otherwise, how would you know whether or not your text files are up to date?

PL/SQL Errors Suddenly

I am developing a PL/SQL script using TOAD. At this point of the development I am debugging it. This has involved: wrap a section in begin/end, run it with F5, receive error info, fix the problem, repeat.
All of a sudden, out of nowhere, I am receiving
ORA-00604: error occurred at recursive SQL level 2
ORA-01654: unable to extend index SYS.I_OBJ5 by 128 in tablespace SYSTEM
The script begins with a drop table/create table set of instructions for a simple 2-field table, in my logon schema. After this started happening, I narrowed the part I am re-running to just one line: drop table <tblName>
In trying to narrow this down, I finally went to the TOAD Schema Browser, right-clicked on the table, and selected "Drop table" from the context menu - same result.
I must have run this statement 120 times yesterday without this act giving me any trouble. Now? Not happening! I am really stumped. Did all those runs maybe load up some area that is now full? Part of this script opens file system files. I didn't know I had to close them afterwards, and I ran into "this action would result in 'too many files open'" (each iterative run opened one more). Have I done something like that by dropping and recreating this table so many times?
I agree with @Peter M: most likely your SYSTEM tablespace is full.
The error message says it quite clearly: unable to extend index ... in tablespace SYSTEM means that Oracle ran out of space while trying to make an index bigger. The SYSTEM tablespace is used by Oracle for internal purposes, for instance the list of tables and columns. It is therefore quite important, normally well supervised by DBAs, and kept clean of other objects like developer tables. The schema name SYS also points in this direction.
The other hint is recursive SQL: Oracle runs not only your SQL (like CREATE TABLE) but sometimes needs to do some housekeeping, like updating said list of tables, which is also done in SQL. This second flavour is called recursive.
My guess is therefore that it is not your table that is causing the SYSTEM tablespace to overflow, but the many repeated changes.
If this happened at my place of work, I'd have gotten a friendly phone call from a DBA by now, asking what's going on...
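If you have access to the DBA views, a quick sanity check (a sketch; DBA_FREE_SPACE needs privileges a developer account may not have):
select tablespace_name,
       round(sum(bytes) / 1024 / 1024) as free_mb
from dba_free_space
where tablespace_name = 'SYSTEM'
group by tablespace_name;
If free_mb is near zero, only a DBA can fix it, by adding a datafile or enabling autoextend.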

How to copy data from one database/table to another database/table in oracle using toad

I am trying to copy table data from a dev box DB to a UAT DB, which are two different databases. I am trying this in Toad. All the connection details are correct, but it's not working and throws the following error:
[Error] Execution (12: 1): ORA-00900: invalid SQL statement
This is what I am trying:
copy from abc/cde@//abc.abc.com:1521/devbox to abc/cde@//abc.abc.com/uatbox
INSERT TOOL_SERVICE_MAPPING (*)
USING (SELECT * FROM TOOL_SERVICE_MAPPING)
If your table doesn't have a huge number of rows, you can use Toad's Export function: it creates an insert statement for each row. You can then run these statements in the destination DB to re-create your table's data.
Here are the steps:
A. Create a copy of the table in destination DB
in source DB in a schema browser window click on the table you want to copy, select "script" tab in the right part of the window: you will find the script to re-create your table; copy this script
paste the script in a new SQL editor window in destination DB and run it. This should create the new table
B. Copy data in new table
in a schema browser window right click on table name in source DB
select "Export Data" from context menu
write "where" statement of your export query (leave it blank if you want to copy the entire table)
select destination: clipboard
click "ok" (now insert statements are stored in your clipboard)
paste insert statements in a new SQL editor window in destination DB
run statements as script (shortcut F5)
copy is a SQL*Plus command, not a SQL statement. I would be surprised if Toad had implemented that particular SQL*Plus command (it does implement many of the simpler ones). If you want to use the copy command, you need to use SQL*Plus, not Toad.
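For reference, a sketch of the SQL*Plus form, assuming devbox and uatbox are aliases defined in your tnsnames.ora (COPY predates long connect descriptors, so aliases are the safest bet; the trailing hyphens are SQL*Plus line-continuation characters):
COPY FROM abc/cde@devbox TO abc/cde@uatbox -
INSERT TOOL_SERVICE_MAPPING -
USING SELECT * FROM TOOL_SERVICE_MAPPING
INSERT requires the target table to already exist; use CREATE instead to create it on the fly.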
If you want to use Toad, you would need to use a SQL statement to copy the data. You could create a database link in the destination database that points to the source database and then
INSERT INTO tool_service_mapping
SELECT *
FROM tool_service_mapping@<<db link to source database>>
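A sketch of the link creation, using the placeholder credentials from the question (requires the CREATE DATABASE LINK privilege):
CREATE DATABASE LINK devbox_link
  CONNECT TO abc IDENTIFIED BY cde
  USING '//abc.abc.com:1521/devbox';
INSERT INTO tool_service_mapping
  SELECT * FROM tool_service_mapping@devbox_link;
COMMIT;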
The easiest and most error-free way I have experienced so far is: Database -> Compare -> Schemas.
It's not as complicated as it looks (lots of checkboxes): you tick boxes for the objects you need to be created in an empty database, and at the end of the comparison you end up with a SQL script including all the objects (triggers, views, sequences, packages) that you selected.
I clearly see all tables, triggers, data, etc. in the generated SQL script and can even untick those I don't wish to create (if any)... Before executing the script, TOAD asks you to confirm which database you are running the script against - this has saved me a few times... As awkward as it looks, it works perfectly.
I have around 200 tables; I don't know whether this is suitable for huge databases.

SAS 9.2 running Oracle query indefinitely

I'm running a pretty large query against an Oracle database using SAS for Windows 9.2. The query is large in that I wrote a sub-query in a WITH clause and used it 4 times. It runs fine in SQL*Plus and SQL Developer, but when I run it using SAS, the program hangs after 20 minutes and I can't even see the log window. I have never worked with SAS and am not sure how to proceed, but I tried the following option:
I created a SAS code file and ran it from a Windows batch file, hoping to get the log written to the Windows file system, but even this runs indefinitely and I don't see anything written to the log file.
Can someone direct me here? How can I use the ALTLOG option to get the log file written to the Windows file system so that I can see the exact error message? By the way, the DBAs have mentioned that the query runs fine and rows are returned on the server side, but for some reason the SAS program is not able to show this data. I get about 45,000 records from the query.
Thanks
I'll break it into two points:
1) running an existing Oracle SQL query in SAS without prior SAS experience:
The best way for you is to embed your Oracle SQL code in so-called PROC SQL explicit pass-through:
proc sql;
  connect to oracle as db1 (user=user1 pw=pasw1 path=DB1);
  create table test_table as
    select *
    from connection to db1
    ( /* here we're in oracle */
      select * from test.table1 where rownum < 20
    );
  disconnect from db1;
quit;
(borrowed from my answer to another question Limiting results in PROC SQL)
The point is not to try to translate it into SAS SQL (I don't know if you tried that or not).
Also make sure you're creating a SAS table (as in the example) from the query result, not writing it to the SAS OUTPUT window.
2) Regarding getting the log: the log for an action is in general written once the action is done, so if the query really runs for a long time, you won't see any intermediate log messages.
Also, log buffering is the default setting for batch jobs, so log messages are written only after the buffer fills.
To get log messages written immediately to the log file, set the LOGPARM option:
-LOGPARM="WRITE=IMMEDIATE"
The opposite value is BUFFERED. To answer the ALTLOG part of your question: the ALTLOG system option (e.g. -ALTLOG=c:\saslogs\job.log, on the command line or in the config file) writes a copy of the log to the file you specify.
To find out which config file(s) are used, run the following in your SAS session:
proc options option=config; run;
Then enter the option above on a separate line in the config file.

SQL identity column insert using pentaho data integration

I am new to the Pentaho Data Integration tool. I am trying to move data from a source table into a target table; both are in SQL Server. The tables are identical and have an identity column.
I tried many options, but it gives an error every time saying "identity insert is set to OFF".
I tried introducing a hop in between to execute the SQL statement "SET identity_insert tblname ON", but it still didn't work.
Any suggestions would be highly appreciated.
Thanks.
Putting that in a hop certainly won't work, because PDI/Kettle uses a connection (or several) per step. You need to put that setting in the advanced options of the database connection, and then you should be OK - it will then be used for all instances of that database connection.
Also make sure you "share" your database connections; otherwise, if you create them by hand in every transformation, you'll need to apply that setting to every single database connection in each transformation. (Unless you're using a database or EE repository, in which case the connections are centralised, so you're OK.)
One other thing you can try is to remove the identity column from the SELECT you are using to pass rows from the source to the destination.
This way, SQL Server will generate a new identity value for each row instead of trying to insert the source values.
You should add the SET IDENTITY_INSERT command right after the DB connection is established.
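Concretely, a sketch of the statement to execute on connect (the table name is a placeholder; in PDI it goes in the connection's advanced options, in the box for SQL to run right after connecting, assuming your PDI version has that field):
SET IDENTITY_INSERT dbo.tblname ON;
Note that SQL Server allows IDENTITY_INSERT to be ON for only one table per session, which is another reason it belongs on the connection rather than in a separate step.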
