I'm getting an error message when trying to create a temporary table. I copied this code directly from Oracle's website, and I also downloaded the latest version, 18.2. What am I missing here?
CREATE PRIVATE TEMPORARY TABLE ora$ptt_my_temp_table
(
id NUMBER(10,2),
description VARCHAR2(20)
)
ON COMMIT PRESERVE DEFINITION;
Error Message:
Error starting at line : 1 in command -
CREATE PRIVATE TEMPORARY TABLE ora$ptt_my_temp_table
(
id NUMBER(10,2),
description VARCHAR2(20)
)
ON COMMIT PRESERVE DEFINITION
Error report -
ORA-00905: missing keyword
00905. 00000 - "missing keyword"
*Cause:
*Action:
I presume you're not on Oracle 18c but on some lower version (which doesn't support private temporary tables). Therefore, I suggest you run
CREATE GLOBAL TEMPORARY TABLE ora$ptt_my_temp_table
(
id NUMBER(10,2),
description VARCHAR2(20)
)
ON COMMIT PRESERVE ROWS;
and move on.
The problem is the ON COMMIT PRESERVE ROWS line. That syntax is only used for Global Temporary Tables. For Private Temporary Tables, you need to use one of the following:
ON COMMIT DROP DEFINITION
This drops the table at the end of the transaction (or at the end of the session if transactions are not being used).
This is the default, so this line can be omitted if this is the behavior you want.
ON COMMIT PRESERVE DEFINITION
The table will persist beyond any transactions, but will still be deleted at the end of the session.
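As a minimal sketch (assuming an 18c or later database), the table from the question with the default behaviour would look like this:
CREATE PRIVATE TEMPORARY TABLE ora$ptt_my_temp_table
(
id NUMBER(10,2),
description VARCHAR2(20)
)
ON COMMIT DROP DEFINITION;  -- dropped at the end of the transaction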
See: https://oracle-base.com/articles/18c/private-temporary-tables-18c
Related
alter table tablename rename column zl_divn_nbr to div_loc_nbr;
Error while executing the above statement. Please help.
SQL Error: ORA-54032: column to be renamed is used in a virtual column expression
54032. 0000 - "column to be renamed is used in a virtual column expression"
*Cause: Attempted to rename a column that was used in a virtual column
expression.
*Action: Drop the virtual column first or change the virtual column
expression to eliminate dependency on the column to be renamed
Run the following SQL query in your database using the table name mentioned in the error message. For example, in the error message shown in this article, the table name is 'tablename'. Note that whilst the table name appears in lower case in the error message, it may be upper case in your DB. This query is case sensitive so if you receive no results, check whether the table name is upper case inside your database.
SELECT COLUMN_NAME, DATA_DEFAULT, HIDDEN_COLUMN
FROM USER_TAB_COLS
WHERE TABLE_NAME = 'tablename';
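If the lower-case name returns no rows, try the upper-case form (Oracle stores unquoted identifiers in upper case; 'TABLENAME' here stands in for your actual table name):
SELECT COLUMN_NAME, DATA_DEFAULT, HIDDEN_COLUMN
FROM USER_TAB_COLS
WHERE TABLE_NAME = 'TABLENAME';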
Before proceeding, make sure the Bitbucket Server process is not running. If Extended Statistics has been enabled, contact your database administrator to have them drop the Extended Statistics metadata from the table, then proceed with your upgrade. If you wish to enable Extended Statistics again after the upgrade you may do so; however, be aware that you may need to repeat this process for subsequent upgrades, otherwise you risk running into this issue again.
Removing columns created by Extended Statistics requires using a built-in stored procedure,
DBMS_STATS.DROP_EXTENDED_STATS().
Usage of this stored procedure is covered further in ORA-54033 and the Hidden Virtual Column Mystery, and looks similar to the following:
EXEC DBMS_STATS.DROP_EXTENDED_STATS(ownname=>'<YOUR_DB_USERNAME>', tabname=>'tablename', extension=>'("PR_ROLE", "USER_ID", "PR_APPROVED")')
References
Database Upgrade Error: column to be renamed
Thanks.
You probably have a table like this:
CREATE TABLE tablename(
id NUMBER,
zl_divn_nbr NUMBER,
zl_divn_percent NUMBER GENERATED ALWAYS AS (ROUND(zl_divn_nbr/100,2)) VIRTUAL
);
where the zl_divn_nbr column is used in the computation of the virtual column zl_divn_percent.
To rename zl_divn_nbr, all virtual columns that reference it must be dropped first; they can be re-created afterwards, as sketched below.
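A minimal sketch of that sequence, using the example table above and the new column name from the question:
-- Drop the dependent virtual column, rename the base column, then re-create the virtual column.
ALTER TABLE tablename DROP COLUMN zl_divn_percent;
ALTER TABLE tablename RENAME COLUMN zl_divn_nbr TO div_loc_nbr;
ALTER TABLE tablename ADD (zl_divn_percent NUMBER GENERATED ALWAYS AS (ROUND(div_loc_nbr/100,2)) VIRTUAL);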
The syntax for defining a virtual column is:
column_name [datatype] [GENERATED ALWAYS] AS (expression) [VIRTUAL]
Virtual columns have been available since Oracle 11g Release 1.
ALTER TABLE ... RENAME COLUMN ... TO ...
For tables with virtual or 'group extension' columns, the above statement returns an error before Oracle 12cR2. On Oracle 12cR2 or newer versions the statement runs fine, because the rename-column command is decoupled from the group extension aspect.
I have problems with an Oracle external table:
create table myExternalTable
(field1, field2....)
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY myDirectory
ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE
NOLOGFILE
NOBADFILE
NODISCARDFILE
FIELDS TERMINATED BY '$')
LOCATION ('data.dsv'));
commit;
alter table myExternalTable reject limit unlimited; --solve reject limit reached problem
select * from myExternalTable;
When I select from the table I get this error:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file data.dsv in myDirectory not found
It seems that the error description is not right, because normally the table would already have been loaded with data.dsv when it was created.
Plus, data.dsv does exist in myDirectory.
What is going on? Can somebody help?
Note :
Instead of the SELECT, this is what I normally do:
merge into myDatabaseTable
using
(select field1, field2,.... from myExternalTable) temp
on (temp.field1= myDatabaseTable.field1)
when matched then update
set myDatabaseTable.field1 = temp.field1,
myDatabaseTable.field2 = temp.field2,
......;
This works fine in my development environment, but in some other environments I get the error I mentioned before:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file data.dsv in myDirectory not found
At first I thought that, in the environment where it does not work, the directory did not point where it should, but by selecting from the dba_directories view I could see that the path of the directory is correct.
The problem is related to the access rights of the user on the operating-system side. It is described in Oracle Support Note Create Database Directory on Remote Share/Server (Doc ID 739772.1).
In my case, I created the directory as SYSDBA, and then, to allow other users to access that external table, I created another table from it with a CREATE TABLE AS SELECT (CTAS) statement. Otherwise, I would need to map the Windows Oracle Service owner user to the exact Oracle user already defined in the note.
So it is more of a well-fitting workaround for my case.
To sum up the steps in a nutshell:
1- Create the external table T2
2- Create a table named T1 with CTAS from the external table
3- Grant SELECT rights on T1
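A minimal sketch of steps 2 and 3 (SOME_USER is just a placeholder for the account that needs access):
CREATE TABLE t1 AS SELECT * FROM t2;   -- copy the external table into a regular table
GRANT SELECT ON t1 TO some_user;       -- other users query T1 instead of the external table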
Hope this helps for your case.
I want to handle the SQL*Plus error for when Oracle finds a record, or more records, that already exist, and just ignore it/them. This is an example:
sqlError=`egrep "ORA-[0-9][0-9][0-9][0-9][0-9]" ${FILE_SPOOL_DAT} | awk '{print $0}';`
if test ! -f ${FILE_SPOOL_DAT}
then
echo "Error request " >> ${FILE_SPOOL_DAT}
else
if [ ! "$sqlError" = "" ] #controls if the variable $sqlError contains a value different from spaces, i think this is the point to change
then
echo "Error $sqlError" >> ${FILE_SPOOL_DAT}
fi
fi
In this example the script checks whether the variable $sqlError contains a value other than spaces. How can I change this condition to catch the DUPKEY error? Thanks
If you're using 11g, the IGNORE_ROW_ON_DUPKEY_INDEX hint can help.
SQL> create table table1(a number primary key);
Table created.
SQL> insert into table1 values(1);
1 row created.
SQL> insert into table1 values(1);
insert into table1 values(1)
*
ERROR at line 1:
ORA-00001: unique constraint (JHELLER.SYS_C00810741) violated
SQL> insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table1(a))*/ into table1 values(1);
0 rows created.
Yikes, that's dangerous! If Oracle finds one or more records that already exist, it will normally roll back the transaction. Suppressing the relevant error messages is way too late.
A much better approach is to instruct Oracle to ignore duplicate records directly during the INSERT, for instance by using the MERGE command:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9016.htm#i2081218
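As a rough sketch, reusing the table1(a) example from the other answer, a MERGE that only inserts rows that are not already present could look like this:
MERGE INTO table1 t
USING (SELECT 1 AS a FROM dual) src
ON (t.a = src.a)
WHEN NOT MATCHED THEN
  INSERT (a) VALUES (src.a);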
When you get a duplicate key error it really means you have violated a constraint set on a table.
Assuming whoever designed the table understood the data model, what you want to do is a BAD idea.
If it is a bad design, change the table's metadata by removing the constraint.
The MERGE command may let you work around it, which I doubt, but with the subsequent data mess left behind I do not believe it is worth the risk.
If you are just playing, remove the constraint anyway. I don't know how your table is set up,
but you will probably have to ALTER or DROP and CREATE an index, or ALTER the table.
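For example, a hedged sketch (the exact statement depends on how the key is defined; the names below are illustrative, not from your schema):
-- If the duplicate key comes from the primary key:
ALTER TABLE table1 DROP PRIMARY KEY;
-- If it comes from a separate unique constraint or index:
ALTER TABLE table1 DROP CONSTRAINT table1_uk;
DROP INDEX table1_uk_idx;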
If this is development for production and thus paid work, don't try to get around the constraint without talking to the person(s) who designed the table to start with.
I have an issue that is unknown to me, and I don't know the logic/cause behind it. When I try to insert a record into a table I get a DB2 error saying:
[SQL0803] Duplicate key value specified: A unique index or unique constraint *N in *N
exists over one or more columns of table TABLEXXX in SCHEMAYYY. The operation cannot
be performed because one or more values would have produced a duplicate key in
the unique index or constraint.
Which is quite a clear message to me. But actually there would be no duplicate key if I inserted my new record, judging by the records that are already in there. When I do a SELECT COUNT(*) FROM SCHEMAYYY.TABLEXXX and then try to insert the record, it works flawlessly.
How can it be that when performing the SELECT COUNT(*) I can suddenly insert the records? Is there some sort of index associated with it which might give issues because it is out of sync? I didn't design the data model, so I don't have deep knowledge of the system yet.
The original DB2 SQL is:
-- Generate SQL
-- Version: V6R1M0 080215
-- Generated on: 19/12/12 10:28:39
-- Relational Database: S656C89D
-- Standards Option: DB2 for i
CREATE TABLE TZVDB.PRODUCTCOSTS (
ID INTEGER GENERATED BY DEFAULT AS IDENTITY (
START WITH 1 INCREMENT BY 1
MINVALUE 1 MAXVALUE 2147483647
NO CYCLE NO ORDER
CACHE 20 )
,
PRODUCT_ID INTEGER DEFAULT NULL ,
STARTPRICE DECIMAL(7, 2) DEFAULT NULL ,
FROMDATE TIMESTAMP DEFAULT NULL ,
TILLDATE TIMESTAMP DEFAULT NULL ,
CONSTRAINT TZVDB.PRODUCTCOSTS_PK PRIMARY KEY( ID ) ) ;
ALTER TABLE TZVDB.PRODUCTCOSTS
ADD CONSTRAINT TZVDB.PRODCSTS_PRDCT_FK
FOREIGN KEY( PRODUCT_ID )
REFERENCES TZVDB.PRODUCT ( ID )
ON DELETE RESTRICT
ON UPDATE NO ACTION;
I'd like to see the statements...but since this question is a year old...I won't hold my breath.
I'm thinking the problem may be the
GENERATED BY DEFAULT
And instead of passing NULL for the identity column, you're accidentally passing zero or some other duplicate value the first time around.
Either always pass NULL, pass a non-duplicate value, or switch to GENERATED ALWAYS.
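A hypothetical illustration of that failure mode with the table above (the values are made up):
-- Explicitly supplying an ID the identity has already handed out (or will hand out) raises SQL0803:
INSERT INTO TZVDB.PRODUCTCOSTS (ID, STARTPRICE) VALUES (1, 9.99);
INSERT INTO TZVDB.PRODUCTCOSTS (ID, STARTPRICE) VALUES (1, 10.99);  -- duplicate ID
-- Letting DB2 generate the value avoids the clash:
INSERT INTO TZVDB.PRODUCTCOSTS (STARTPRICE) VALUES (11.99);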
Look at preceding messages in the joblog for specifics as to what caused this. I don't understand how the INSERT can suddenly work after the COUNT(*). Please let us know what you find.
Since it shows *N (i.e. n/a) as the name of the index or constraint, this suggests to me that it is not a standard DB2 object, and therefore may be a "logical file" [LF] defined with DDS rather than SQL, with a key structure different from the one you were doing your COUNT(*) on.
Your shop may have better tools to view keys on dependent files, but the method below will work anywhere.
If your table might not be the actual "physical file", check this using Display File Description, DSPFD TZVDB.PRODUCTCOSTS, in a 5250 ("green screen") session.
Use the Display Database Relations command, DSPDBR TZVDB.PRODUCTCOSTS, to find what files are defined over your table. You can then DSPFD on each of these files to see the definition of the index key. Also check there that each of these indexes is maintained *IMMED, rather than *REBUILD or *DELAY. (A wild longshot guess as to a remotely possible cause of your strange anomaly.)
You will find the DB2 for i message finder here in the IBM i 7.1 Information Center or other releases
Is it a paging issue? We seem to get -0803 on inserts occasionally when a row is being held for update and it locks a page that probably contains the index needed for the insert. This is only a guess, but it appears to me that is what is happening.
I know it is an old topic, but this is what Google showed me in the first place.
I had the same issue yesterday, and it caused me a lot of headache. I did the same as above: checked the table definitions, keys, existing items...
Then I found out the problem was with my INSERT statement. It was trying to insert two identical records at once, but as the constraint prevented the commit, I could not find anything in the database.
Advice: review your INSERT statement carefully! :)
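For illustration, the kind of statement described above could look like this (table and values are reused from the question purely as an example):
-- A single multi-row INSERT that supplies the same key twice fails as a whole,
-- so afterwards no trace of it is found in the table:
INSERT INTO TZVDB.PRODUCTCOSTS (ID, STARTPRICE)
VALUES (5, 1.00), (5, 2.00);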
I want to write a trigger which fires on deletion of a record from a table and inserts a record into another table, using the details of the deleted record.
Database : Oracle 10g
My trigger looked like this
CREATE or REPLACE TRIGGER myTrigger
AFTER DELETE
ON myTable
REFERENCING NEW AS old_tab
FOR EACH ROW
BEGIN
INSERT INTO ACTIVITYLOG values ('ADMIN',:old_tab.tabletID,'MIGRATION','ERROR','TEST','T','NIL',sysdate)
END;
Here, in :old_tab.tabletID, tabletID is the column of the table myTable in which the deletion is done.
I want to save the ID and log that the record was deleted.
But when I try deleting a record I get the following error
Error code 4098, SQL state 42000: ORA-04098: trigger 'DB.MYTRIGGER' is
invalid and failed re-validation
P.S. Ran the trigger creation in NetBeans SQL Editor.
EDIT: Here is the table structure.
STRUCTURE OF myTable (Table deletion occurs)
tabletID varchar2(15) PRIMARY KEY
tabletName varchar2(100)
STRUCTURE OF ACTIVITYLOG
username varchar2(15)
tabletKey varchar2(15)
page_ref varchar2(100)
errors varchar2(100)
remarks varchar2(100)
operationcode char(2)
lastupdateip varchar2(20)
lastupdatedate date
Sorry, I don't have access to the SQL*Plus editor.
You should use the :OLD values rather than the :NEW values. The :NEW values in a DELETE trigger (whether BEFORE or AFTER) are blank. This makes sense, because if you think about it the record has logically ceased to exist at this point.
However that is not a source of compilation errors.
"still the same error shows up on deletion. "
I suppose we could spend all day guessing what's wrong so let's stop now. You can discover the compilation errors with this simple query:
select * from user_errors
where name = 'MYTRIGGER'
and type = 'TRIGGER'
"I changed the :NEW to :OLD, and added a semicolan and ran it on SQL
PLUS, and that did the trick"
For the benefit of future readers, here is a version of the trigger which will compile and which will correctly write the required values:
CREATE or REPLACE TRIGGER myTrigger
AFTER DELETE
ON myTable
REFERENCING OLD AS old_tab
FOR EACH ROW
BEGIN
INSERT INTO ACTIVITYLOG values ('ADMIN',:old_tab.tabletID,'MIGRATION','ERROR','TEST','T','NIL',sysdate);
END;
/
The problem is this:
REFERENCING NEW AS old_tab
You've redefined the NEW values with the label "old_tab". This is somewhat like adding #define FALSE TRUE to the top of a program.
Add a semicolon after the insert statement
Because you're using an AFTER DELETE trigger, you only need to access the :OLD values, e.g.:
CREATE or REPLACE TRIGGER myTrigger
AFTER DELETE
ON myTable
FOR EACH ROW
BEGIN
INSERT INTO ACTIVITYLOG values ('ADMIN',:OLD.tabletID,'MIGRATION','ERROR','TEST','T','NIL',sysdate);
END;