I want to generate the CREATE TABLE script for an existing table in Oracle, for which I searched and found some SQL in the question below,
How can I generate (or get) a ddl script on an existing table in oracle? I have to re-create them in Hive
which is as follows:
SQL> set long 100000
SQL> set head off
SQL> set echo off
SQL> set pagesize 0
SQL> set verify off
SQL> set feedback off
SQL> select dbms_metadata.get_ddl('TABLE', 'EMP') from dual;
But after executing this in SQL*Plus I get the error below:
ORA-24813: cannot send or receive an unsupported LOB
Please tell me how I can get the DDL.
Using this command, I am able to create a table from another schema, but it does not include triggers. Is it possible to create a table from another schema, including triggers?
create table B.tablename unrecoverable as select * from A.tablename where 1 = 0;
The first option is to run the CREATE script for those objects, if you have a code repository. I suppose you don't.
If you use a GUI tool, things get simpler, as most contain a SCRIPT tab that lets you copy code from the source and pastete it into the target user.
If you're on SQL*Plus, you'll have to do a bit more of the work yourself. Here's a short demo.
SQL> connect hr/hr#xe
Connected.
SQL> create table detail (id number);
Table created.
SQL> create or replace trigger trg_det
2 before insert on detail
3 for each row
4 begin
5 :new.id := 1000;
6 end;
7 /
Trigger created.
SQL>
SQL> -- you'll have to grant privileges on table to another user
SQL> grant all on detail to scott;
Grant succeeded.
Connect as SCOTT and check what we've got:
SQL> connect scott/tiger#xe
Connected.
SQL> -- now, query ALL_SOURCE and you'll get trigger code
SQL> set pagesize 0
SQL> col text format a50
SQL> select text from all_source where name = 'TRG_DET' order by line;
trigger trg_det
before insert on detail
for each row
begin
:new.id := 1000;
end;
6 rows selected.
SQL>
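Note that the text in ALL_SOURCE starts at the trigger keyword, so to recreate the trigger in the target schema you'd prepend CREATE OR REPLACE yourself. A minimal sketch, assuming SCOTT has his own copy of the DETAIL table:

```sql
-- ALL_SOURCE stores the code from the "trigger" keyword onward,
-- so prefix it with CREATE OR REPLACE when recreating it:
create or replace trigger trg_det
before insert on detail
for each row
begin
  :new.id := 1000;
end;
/
```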
Yet another option is to export & import the table, which brings the trigger along as well (I've removed parts of the output that aren't relevant, such as the Oracle database version):
C:\>exp hr/hr#xe tables=detail file=detail.dmp
About to export specified tables via Conventional Path ...
. . exporting table DETAIL 0 rows exported
Export terminated successfully without warnings.
C:\>imp scott/tiger#xe file=detail.dmp full=y
. importing HR's objects into SCOTT
. importing HR's objects into SCOTT
. . importing table "DETAIL" 0 rows imported
Import terminated successfully without warnings.
C:\>
Check what's imported (should be both table and trigger):
SQL> desc detail
Name Null? Type
----------------------------------------- -------- ---------------
ID NUMBER
SQL> select * From detail;
no rows selected
SQL> insert into detail (id) values (-1);
1 row created.
SQL> select * From detail;
ID
----------
1000
SQL>
Cool; even the trigger works.
There might be some other options, but these 4 should be enough to get you started.
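If DBMS_METADATA works in your environment, one such option is to extract the DDL for the table and its dependent triggers, then run the output as the other user. A sketch, assuming the HR/DETAIL names from the demo above:

```sql
set long 100000
set pagesize 0
-- DDL for the table itself
select dbms_metadata.get_ddl('TABLE', 'DETAIL', 'HR') from dual;
-- DDL for triggers defined on that table
select dbms_metadata.get_dependent_ddl('TRIGGER', 'DETAIL', 'HR') from dual;
```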
I am trying to access a sequence that is:
A. located in another schema, and
B. actually a synonym pointing at another database through a dblink.
What works:
select schema.sequence#dblink.nextval from dual;
What doesn't work:
select schema.synonym.sequence.nextval from dual;
The above returns an 'invalid identifier' error (ORA-00904).
Is it possible to access the remote sequence without using the dblink annotation?
Yes, it is possible to use a synonym for a remote sequence object.
Database 1
SQL> conn jay
SQL> create sequence myseq increment by 1;
Sequence created.
Database 2
SQL> conn jay
SQL> create database link dbl_db1 connect to jay identified by jay using 'DB1';
Database link created.
SQL> create synonym myseq_syno for jay.myseq#dbl_db1;
Synonym created.
SQL> select myseq_syno.nextval from dual;
NEXTVAL
----------
1
I need help getting the TKPROF output of a SQL query.
The Oracle documentation says to check the following parameters before enabling SQL trace:
TIMED_STATISTICS, MAX_DUMP_FILE_SIZE, and USER_DUMP_DEST
Can someone tell me how to check their values for the current session, and how to set them if they are incorrect?
I tried
show parameter max_dump;
show parameter timed_statistics;
from SQL*Plus (Windows), but I get an ORA-00942: table or view does not exist error.
Also, any help on the subsequent TKPROF steps would be highly appreciated.
You need to grant SELECT on v_$parameter:
SQL> show parameter max_dump
ORA-00942: table or view does not exist
SQL> conn / as sysdba
Connected.
SQL> grant select on v_$parameter to hr;
Grant succeeded.
SQL> conn hr/hr
Connected.
SQL> show parameter max_dump
NAME TYPE VALUE
------------------------------------ ----------- ---------
max_dump_file_size string unlimited
SQL>
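Once you can see the parameters, the remaining steps are to set them for your session, enable the trace, run the query, and format the trace file with tkprof. A sketch (the trace file name is hypothetical; USER_DUMP_DEST tells you which directory to look in):

```sql
alter session set timed_statistics = true;
alter session set max_dump_file_size = unlimited;
alter session set sql_trace = true;
-- ... run the query you want to trace ...
alter session set sql_trace = false;
```

Then, on the operating system command line:

```sql
-- tkprof <trace file> <report file>; sys=no hides recursive SYS statements
-- tkprof ora_12345.trc report.txt sys=no
```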
My scripts use SQL*Plus error logging to track errors during installation.
The scripts start like this - they enable error logging and truncate any existing entries:
SQL> set errorlogging on truncate
SQL> select * from table_does_not_exist;
select * from table_does_not_exist
*
ERROR at line 1:
ORA-00942: table or view does not exist
Then at the very end I query sperrorlog to see what went wrong:
SQL> select statement from sperrorlog;
STATEMENT
--------------------------------------------------------------------------------
select * from table_does_not_exist
But every now and then the truncate does not work, and I get errors from previous installations. Why doesn't truncate work?
Despite its name, the SQL*Plus error logging truncate option does not actually truncate the table. It deletes the data and does not commit.
This SQL*Plus session enables error logging and creates an error. A second call to enable error logging with truncate does clear out the data, but a rollback undoes that delete.
SQL> set errorlogging on truncate
SQL> select * from table_does_not_exist;
select * from table_does_not_exist
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> commit;
Commit complete.
SQL> set errorlogging on truncate
SQL> select statement from sperrorlog;
no rows selected
SQL> rollback;
Rollback complete.
SQL> select statement from sperrorlog;
STATEMENT
--------------------------------------------------------------------------------
select * from table_does_not_exist
SQL>
To be safe, you should always issue a commit right after set errorlogging on truncate.
"To be safe, you should always issue a commit right after set errorlogging on truncate"
Or do an explicit TRUNCATE, which performs an implicit commit since it is a DDL statement. Of course, that is effectively the same as not using the truncate option, and the rollback problem remains either way. I found a workaround for the rollback issue and shared it in my blog post SQL*Plus error logging – workaround for ROLLBACK issue.
Coming back to your issue, I trust an explicit truncate more:
SQL> set errorlogging on
SQL> select * from table_does_not_exist;
select * from table_does_not_exist
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select statement from sperrorlog;
STATEMENT
----------------------------------------
select * from table_does_not_exist
SQL> truncate table sperrorlog;
Table truncated.
SQL> select statement from sperrorlog;
no rows selected
SQL> rollback;
Rollback complete.
SQL> select statement from sperrorlog;
no rows selected
SQL>
Alternatively, you could use a global temporary table for the sperrorlog table and make it ON COMMIT DELETE ROWS.
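A sketch of that global temporary table approach (the column layout follows the documented SPERRORLOG shape; double-check it against your SQL*Plus version before dropping anything):

```sql
drop table sperrorlog;

create global temporary table sperrorlog (
    username   varchar2(256),
    timestamp  timestamp,
    script     varchar2(1024),
    identifier varchar2(256),
    message    clob,
    statement  clob
) on commit delete rows;
```

With ON COMMIT DELETE ROWS, each commit clears the log automatically, so query sperrorlog before committing.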
Is there a way to force an implicit date conversion on an insert statement (i.e. without using TO_DATE)?
Context: I'm importing a data dump from PostgreSQL into Oracle. Everything is working well except for date formatting. I would prefer not to have to munge the PostgreSQL output.
(update: this is with 10.2. Strangely, changing the format to RRRR-MM-DD makes everything work!)
SQL> create table a(b date);
Table created.
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';
Session altered.
SQL> insert into a values('2009-12-03');
insert into a values('2009-12-03')
*
ERROR at line 1:
ORA-01843: not a valid month
SQL> insert into a values(to_date('2009-12-03','YYYY-MM-DD'));
1 row created.
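Per the update above, the questioner reports that switching the session format to RRRR makes the implicit conversion succeed on 10.2. An untested sketch of that workaround:

```sql
alter session set nls_date_format = 'RRRR-MM-DD';
insert into a values ('2009-12-03');
```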