I am using Oracle 12c on an instance of Amazon Web Services EC2.
I want to export data from one Oracle table with 5M records to a local folder in CSV format.
Is there a script or program to do that quickly in a Redhat/bash environment?
I am looking for minimal installation and setup.
You want it quickly? How about a simple SQL*Plus SPOOL command? You can make it prettier using different SET commands (type HELP SET at the SQL*Plus prompt), but the general idea is as follows:
SQL> set colsep ','
SQL> spool emp.csv
SQL> select employee_id, first_name, last_name
2 from employees
3 where rownum < 5;
EMPLOYEE_ID,FIRST_NAME ,LAST_NAME
-----------,--------------------,-------------------------
100,Steven ,King
101,Neena ,Kochhar
102,Lex ,De Haan
103,Alexander ,Hunold
SQL> spool off;
The spooled file contains everything that appeared on screen, including the command echo; display it with the host command:
SQL> $type emp.csv
SQL> select employee_id, first_name, last_name
2 from employees
3 where rownum < 5;
EMPLOYEE_ID,FIRST_NAME ,LAST_NAME
-----------,--------------------,-------------------------
100,Steven ,King
101,Neena ,Kochhar
102,Lex ,De Haan
103,Alexander ,Hunold
SQL> spool off;
SQL>
[EDITED by LF, after seeing OP's comment]
OK then, as you didn't take the effort to examine what SET offers, here you go: if you want clean output (no headings, underlines, the echoed SELECT command, etc.), create a SQL file (let's name it SP.SQL):
SET ECHO OFF
SET VERIFY OFF
SET TRIMSPOOL ON
SET TRIMOUT ON
SET LINESIZE 9999
SET PAGESIZE 0
SET FEEDBACK OFF
SET TIMING OFF
SET TIME OFF
SET COLSEP ','
SPOOL emp.csv
SELECT employee_id, first_name, last_name
FROM employees
WHERE rownum < 5;
SPOOL OFF
Now connect to SQL*Plus and run that script:
SQL> @sp
100,Steven ,King
101,Neena ,Kochhar
102,Lex ,De Haan
103,Alexander ,Hunold
SQL>
Finally, if you take a look at EMP.CSV, you'll see that it is nice and clean.
Satisfied?
Related
I have a requirement where there are about 2 million rows of data and 1,000 columns in one table in an Oracle database. I want this data dumped out from the table into a .txt file with PIPE as the separator. The file should be created on the UNIX application server, not on the database server. After that, the file will be FTPed to another server, where it will be loaded by another application. This process of generating the file should be fast (one hour max).
Any suggestions as to the best way to do this?
Assuming: The application server has Oracle software installed, including SQL*Plus, and can access the database server.
To expand on @Popeye's comment, create a file dump_stuff.sql and put in it something like:
set pagesize 0
set linesize 50000
set trimspool on
set trimout on
set feedback off
set markup csv on delimiter | quote off
col owner format a30
col object_name format a50
col subobject_name format a50
col object_type format a30
spool stuff_output.txt
select owner, object_name, subobject_name, object_type from dba_objects;
spool off
Invoke SQL*Plus on the application server, connect to the target database, and do the following:
set termout off
#dump_stuff
On my database, dumping 78,000 rows from dba_objects took just over 2 seconds. The set termout off command prevents output from going to the screen, but it still goes to the spool file.
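For reference, invoking this from the application server's shell could look like the following sketch (scott/tiger@mydb is a placeholder connect string; substitute your own credentials and TNS alias):
sqlplus -s scott/tiger@mydb <<'EOF'
set termout off
@dump_stuff
exit
EOF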
Sample output:
OWNER|OBJECT_NAME|SUBOBJECT_NAME|OBJECT_TYPE
SYS|UNDO$||TABLE
SYS|C_COBJ#||CLUSTER
SYS|PROXY_ROLE_DATA$||TABLE
SYS|I_CDEF2||INDEX
SYS|I_PROXY_ROLE_DATA$_1||INDEX
SYS|FILE$||TABLE
SYS|UET$||TABLE
SYS|I_FILE#_BLOCK#||INDEX
...
You will need to modify the value of set linesize 50000 to be long enough to prevent the data from your 1,000 columns from wrapping.
Documentation of the SQL*Plus set command: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqpug/SET-system-variable-summary.html#GUID-A6A5ADFF-4119-4BA4-A13E-BC8D29166FAE
create table samp
(
empno number(2),
ename varchar2(30),
sal number(7,2),
dob date
)
SQL> /
SQL> insert into samp values(1,'MASTAN',24000,'24-JUL-1987');
1 row created.
Here I did not commit the data, so it is still in the redo log buffer. But when retrieving, how does the query below return the data? How does this work internally? Kindly suggest.
SQL> SELECT * FROM SAMP;
EMPNO ENAME SAL DOB
---------- ------------------------------ ---------- ---------
1 MASTAN 24000 24-JUL-87
It seems you did not commit or roll back, so you are seeing the correct result from your SELECT statement because it happens within the same transaction: a session always sees its own uncommitted changes. Try a ROLLBACK and check the results. Another good way to understand transactions is to open two separate SQL*Plus sessions and try INSERT statements in one and SELECT statements in the other.
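For illustration, a sketch of that two-session experiment using the SAMP table from above (the second row's values are made up):
-- Session 1: insert without committing
SQL> insert into samp values(2,'SMITH',18000,'01-JAN-1990');
1 row created.
-- Session 2: the uncommitted row is not visible
SQL> select count(*) from samp where empno = 2;
  COUNT(*)
----------
         0
-- Session 1: now commit
SQL> commit;
Commit complete.
-- Session 2: the row appears after the commit
SQL> select count(*) from samp where empno = 2;
  COUNT(*)
----------
         1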
I want to insert millions of records into a file using Oracle, without using a loop or Java code.
While browsing, I found something called UTL_FILE in Oracle which is used to write rows to a file, but I was unable to use it. Can someone please help me understand or write the code to write the result set returned by a SELECT query into a file? Even a procedure would work.
I tried the procedure below; it runs successfully but does not create a file with data.
create or replace procedure familydetails
  ( p_file_dir IN varchar2, p_filename IN varchar2 )
IS
  v_filehandle utl_file.file_type;
  cursor family IS
    select * from fmly_detl;
begin
  v_filehandle := utl_file.fopen('C:\tempfolder','sqltest.txt','W');
  utl_file.new_line(v_filehandle);
  for fmly IN family LOOP
    utl_file.putf(v_filehandle,'family %s %s details: %s\n',fmly.last_name,fmly.first_name,fmly.indv_id);
  end loop;
  utl_file.put_line(v_filehandle,'END OF REPORT');
  utl_file.fclose(v_filehandle);
end familydetails;
1) If you use SQL*Plus on Unix, here is a simple solution: put the below in a file script_name.ksh and execute it (ksh script_name.ksh):
sqlplus -s user_id/password@sid << ! >> ~/sql_output.txt
set feedback off;
set pages 0;
set linesize 20000;
set arraysize 1000;
set tab off;
-- Your query, with proper padding or pipe/comma separated, e.g.:
-- select employee_id||'|'||employee_name||'|'||employee_age
-- from employee;
!
2) If you use an IDE like SQL Developer or TOAD, you can execute the query and export the result.
3) For PL/SQL:
test_dir, mentioned below, is a directory object pointing to a directory on the host machine that is accessible to Oracle. It should be listed in the ALL_DIRECTORIES dictionary view.
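If the directory object does not exist yet, a minimal sketch of creating it (run as a suitably privileged user; /u01/app/exports and the grantee SCOTT are placeholders, and note the object name is stored in uppercase):
CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/exports';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
-- verify it is visible
SELECT directory_name, directory_path
FROM all_directories
WHERE directory_name = 'TEST_DIR';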
DECLARE
   fileHandler UTL_FILE.FILE_TYPE;
   cursor emp_cursor is select employee_id, employee_name, employee_age FROM employee;
   TYPE emp_type IS TABLE OF emp_cursor%ROWTYPE INDEX BY PLS_INTEGER;
   l_emp_type emp_type;
   v_fetch_limit NUMBER := 10000; -- controls the fetch size
BEGIN
   fileHandler := UTL_FILE.FOPEN('TEST_DIR', 'sql_output.txt', 'W', max_linesize => 4000);
   open emp_cursor;
   LOOP
      FETCH emp_cursor BULK COLLECT INTO l_emp_type LIMIT v_fetch_limit;
      EXIT WHEN l_emp_type.COUNT = 0; -- stop once the cursor is exhausted
      FOR i IN 1 .. l_emp_type.COUNT LOOP
         UTL_FILE.PUTF(fileHandler, '%s|%s|%s\n',
                       l_emp_type(i).employee_id,
                       l_emp_type(i).employee_name,
                       l_emp_type(i).employee_age);
      END LOOP;
      UTL_FILE.FFLUSH(fileHandler); -- flush the buffer to the file
   END LOOP;
   CLOSE emp_cursor;
   UTL_FILE.FCLOSE(fileHandler);
EXCEPTION
   WHEN utl_file.invalid_path THEN
      IF emp_cursor%ISOPEN THEN
         CLOSE emp_cursor;
      END IF;
      raise_application_error(-20000, 'ERROR: Invalid path for file.');
END;
/
Finally, copy the file from the UTL_FILE directory on the server. The directory may also be a NAS mount; Oracle just needs write access to it.
4) Like PL/SQL, a Pro*C program or any OCI interface will work too. Generally, options 3 and 4 give you good control over the process.
Good luck!
EDIT: Added improvements over fetch size and flushing.
Is there an option to see if an existing table/record in an Oracle database has been updated?
From a monitoring perspective (not intended to find previous changes), you have several options including but not limited to triggers, Streams, and a column with a default value of SYSDATE. A trigger will allow you to execute a bit of programming logic (stored directly in the trigger or in an external database object) whenever a record changes (insert, update, delete). Streams can be used to track changes by monitoring the redo logs. One of the easiest may be to add a date column with a default value of SYSDATE, as sketched below.
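A minimal sketch of the default-column approach, kept current by a trigger (MY_TABLE and the column name are hypothetical):
-- add a column recording when each row was last touched
ALTER TABLE my_table ADD (last_updated DATE DEFAULT SYSDATE);
-- keep it current on inserts and updates
CREATE OR REPLACE TRIGGER my_table_biu
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
   :NEW.last_updated := SYSDATE;
END;
/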
Are you talking about within a transaction or outside of it?
Within our program we can use things like SQL%ROWCOUNT to see whether our DML succeeded...
SQL> set serveroutput on size unlimited
SQL> begin
2 update emp
3 set job = 'SALESMAN', COMM=10
4 where empno = 8083;
5 dbms_output.put_line('Number of records updated = '||sql%rowcount);
6 end;
7 /
Number of records updated = 1
PL/SQL procedure successfully completed.
SQL>
Alternatively we might test for SQL%FOUND (or SQL%NOTFOUND).
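For instance, a small sketch along the same lines as the block above (empno 9999 is a made-up key that matches nothing):
SQL> begin
  2  update emp
  3  set comm = 15
  4  where empno = 9999;
  5  if sql%notfound then
  6  dbms_output.put_line('No record was updated.');
  7  end if;
  8  end;
  9  /
No record was updated.
PL/SQL procedure successfully completed.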
From outside the transaction we can monitor ORA_ROWSCN to see whether a record has changed.
SQL> select ora_rowscn from emp
2 where empno = 8083
3 /
ORA_ROWSCN
----------
83828715
SQL> update emp
2 set comm = 25
3 where empno = 8083
4 /
1 row updated.
SQL> commit
2 /
Commit complete.
SQL> select ora_rowscn from emp
2 where empno = 8083
3 /
ORA_ROWSCN
----------
83828780
SQL>
By default ORA_ROWSCN is tracked at the block level. If you want to track it at the row level, you need to create the table with the ROWDEPENDENCIES keyword.
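For example (EMP_RD is a hypothetical table name):
SQL> create table emp_rd
  2  ( empno number,
  3  ename varchar2(30)
  4  ) rowdependencies;
Table created.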
These are ad hoc solutions. If you want proactive monitoring then you need to implement some form of logging. Using triggers to write log records is a common solution. If you have Enterprise Edition you should consider using Fine-Grained Auditing: Dan Morgan's library has a useful demo of how to use FGA to track changes.
You can see if a table's definition has changed by querying the last_ddl_time column of the user_objects view.
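For example:
SQL> select object_name, last_ddl_time
  2  from user_objects
  3  where object_name = 'EMP';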
Without using triggers or materialized view logs (which would be a total hack), there is no way I know of to see when any particular row in a table was last updated.
How do I import a script with 3,954,275 lines of INSERT statements into Oracle 10g? I can do it with sqlplus user/pass @script.sql, but this is damn slow (even worse, the commit is at the end of this 900MB file; I don't know if my Oracle configuration can handle this). Is there a better (faster) way to import the data?
Btw, the DB is empty before the import.
Use SQL*Loader.
It can parse even your INSERT commands if you don't have your data in another format.
SQL*Loader is a good alternative if your 900MB file contains insert statements to the same table. It will be cumbersome if it contains numerous tables. It is the fastest option, however.
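For reference, a sketch of what that could look like if the data is first extracted to a CSV file (the file names and the column list for table T are placeholders):
-- emp.ctl (SQL*Loader control file)
LOAD DATA
INFILE 'emp.csv'
INTO TABLE t
FIELDS TERMINATED BY ','
(id, name)
It would then be invoked from the shell with something like sqlldr user/pass@sid control=emp.ctl direct=true, where direct=true requests the faster direct-path load.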
If for some reason a little improvement is good enough, then make sure your session's CURSOR_SHARING parameter is set to FORCE or SIMILAR. Each insert statement in your file will likely be the same except for the values. If CURSOR_SHARING is set to EXACT, then each of the insert statements needs to be hard parsed, because each one is unique. FORCE and SIMILAR automatically turn the literals in your VALUES clause into bind variables, removing the need for the hard parse over and over again.
You can use the script below to test this:
set echo on
alter system flush shared_pool
/
create table t
( id int
, name varchar2(30)
)
/
set echo off
set feedback off
set heading off
set termout off
spool sof11.txt
prompt begin
select 'insert into t (id,name) values (' || to_char(level) || ', ''name' || to_char(level) || ''');'
from dual
connect by level <= 10000
/
prompt end;;
prompt /
spool off
set termout on
set heading on
set feedback on
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = force
/
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = exact
/
set echo off
drop table t purge
/
The example executes 10,000 statements like "insert into t (id,name) values (1, 'name1');". The output on my laptop:
SQL> alter system flush shared_pool
2 /
System altered.
SQL> create table t
2 ( id int
3 , name varchar2(30)
4 )
5 /
Table created.
SQL> set echo off
PL/SQL procedure successfully completed.
Elapsed: 00:00:17.10
Session altered.
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.50
More than 3 times as fast with CURSOR_SHARING set to FORCE.
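To apply this to the original import, the setting could be switched on just before running the big script (a sketch; it lasts only for the current session):
SQL> alter session set cursor_sharing = force;
Session altered.
SQL> @script.sql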
Hope this helps.
Regards,
Rob.
Agreed with the above: use SQL*Loader.
However, if that is not an option, you can adjust the size of the blocks that SQL*Plus brings in by putting the statement
SET arraysize 1000;
at the beginning of your script. This is just an example from my own scripts, and you may have to fine-tune it to your needs considering latency, etc. I think it defaults to 15, so you're getting a lot of overhead in your script.