Is there a way to create a SQL*Loader control file in such a way that it checks the ID number before inserting data into another table, and, if the ID does not match, the loader throws the record into the discard file?
In the original import file there is a column ID, which contains an 8-digit number.
Then we have an ID table where these 8-digit IDs are found.
Now I need to check against this ID table first, whether the ID in the file matches or not. Matched rows should be inserted into the table SETS, and mismatches should end up in the sets.dsc file.
Can I use a WHEN clause, or should I put this selection right into the ID field expression inside quotation marks?
We use Oracle 11.
Why wouldn't you rather create a referential integrity constraint? If the ID doesn't exist in the "ID table" (which is the "parent"), then such a row won't be loaded from the input file. Oracle will raise the
ORA-02291: integrity constraint violated - parent key not found
error.
I mean, why reinvent the wheel?
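For example, a minimal sketch, assuming the parent table is named ID_TABLE and the target table is SETS, each with an ID column (names taken from the question, so adjust to your actual schema):

-- The parent table needs a primary or unique key on ID for the FK to reference.
ALTER TABLE sets
  ADD CONSTRAINT sets_id_fk FOREIGN KEY (id)
  REFERENCES id_table (id);

Note that with a conventional-path load, rows the database rejects (including ORA-02291 violations) go to the bad file (sets.bad), not the discard file; the discard file only receives rows filtered out by a WHEN clause.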
I am working on a Hive table where I need to change a column name, as below. It works as expected and changes the column name, but the underlying values of this column become NULL.
ALTER TABLE db.tbl CHANGE hdfs_loaddate hdfs_load_date String;
Here the changed column name is hdfs_load_date, and the values become NULL after renaming the column.
Does anyone have an idea how to fix this? Thanks in advance!
@Ajay_SK Referencing this article: Hive Alter table change Column Name
There is a comment:
Note that the column change will not change any underlying data if it is a parquet table. That is, if you have data in the table already, renaming a column will not make the data in that column accessible under the new name:
select a from test_change;               -- 1
alter table test_change change a a1 int;
select a1 from test_change;              -- null
He is specific to Parquet, but the scenario you describe is similar: you have successfully changed the name, but Hive still thinks the original data lives under the original column name.
A better approach to solve your issue would be to create a new table with the schema you want, including the renamed column, and then perform an INSERT INTO the new table with a SELECT * FROM the old table, as sketched below.
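A minimal HiveQL sketch of that approach; the table name and the other columns are hypothetical, and it assumes you run it before the ALTER (or after renaming the column back), so the data is still readable under hdfs_loaddate:

-- New table with the desired column name; other columns are placeholders.
CREATE TABLE db.tbl_new (
  id             INT,
  payload        STRING,
  hdfs_load_date STRING
);

-- Copy the data, mapping the old column onto the new name.
INSERT INTO TABLE db.tbl_new
SELECT id, payload, hdfs_loaddate AS hdfs_load_date
FROM db.tbl;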
I'm trying to add a key from my customer table to my reservation table in Oracle.
However, I keep getting an error message when I try to run my SQL commands, which states 'Customer_ID is an invalid identifier'.
What I am trying to do is first use an ALTER statement to alter the reservation table.
Then I am adding a foreign key, which is called 'Customer_ID'.
Then I enter a REFERENCES clause, which tells it that I am getting the CUSTOMER_ID attribute from the customer table. However, to SQL this doesn't make sense at all.
To me it makes sense logically; I don't see anything wrong with the syntax or structure of the statements. Any sharp eyes/minds to help me on this matter would be greatly appreciated.
The statements used are:
ALTER TABLE reservation
ADD FOREIGN KEY (Customer_ID)
REFERENCES Customer(Customer_ID);
There's nothing wrong with your syntax; I was able to create simple one-column tables with the appropriate names then execute exactly the statement you posted. So I suspect the column CUSTOMER_ID does not exist in one or the other table. Describe the two tables and double-check the column names. Keep in mind that normally column names in Oracle are case-insensitive, but they can be case-sensitive if enclosed in double quotes; this can be a reason for a non-obvious column name mismatch.
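One quick way to double-check is to query the data dictionary for the exact column names Oracle has recorded:

SELECT table_name, column_name
FROM   user_tab_columns
WHERE  table_name IN ('RESERVATION', 'CUSTOMER')
ORDER  BY table_name, column_id;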
I'm trying to create a new row in a table. There are two constraints on the table: one is on the key field (DB_ID); the other constrains the field ENV to be one of several values. When I do an insert, I do not include the key field as one of the fields I'm trying to insert, yet I'm getting this error:
unique constraint (N390.PK_DB_ID) violated
Here's the SQL that causes the error:
insert into cmdb_db
(narrative_name, db_name, db_type, schema, node, env, server_id, state, path)
values
('Test Database', 'DB', 'TYPE', 'SCH', '', 'SB01', 381, 'TEST', '')
The only thing I've been able to turn up is the possibility that Oracle might be trying to assign an already in-use DB_ID if rows were inserted manually. The data in this database was somehow restored/moved from a production database, but I don't have the details as to how that was done.
Any thoughts?
Presumably, since you're not providing a value for the DB_ID column, that value is being populated by a row-level before insert trigger defined on the table. That trigger, presumably, is selecting the value from a sequence.
Since the data was moved (presumably recently) from the production database, my wager would be that when the data was copied, the sequence was not adjusted as well. I would guess that the sequence is generating values much lower than the largest DB_ID currently in the table, leading to the error.
You could confirm this suspicion by looking at the trigger to determine which sequence is being used and doing a
SELECT <<sequence name>>.nextval
FROM dual
and comparing that to
SELECT MAX(db_id)
FROM cmdb_db
If, as I suspect, the sequence is generating values that already exist in the database, you could increment the sequence until it was generating unused values or you could alter it to set the INCREMENT to something very large, get the nextval once, and set the INCREMENT back to 1.
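A sketch of that last approach; the sequence name db_id_seq is an assumption, so substitute whatever the trigger actually references. Suppose MAX(db_id) is 5000 and the sequence last issued 500:

ALTER SEQUENCE db_id_seq INCREMENT BY 4500;  -- temporarily use a large step
SELECT db_id_seq.NEXTVAL FROM dual;          -- one fetch jumps the sequence past 5000
ALTER SEQUENCE db_id_seq INCREMENT BY 1;     -- restore normal behaviour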
Your error looks like you are duplicating an already existing primary key in your DB. You should modify your SQL code to generate its own primary key, using something like the IDENTITY keyword. (Note that IDENTITY as shown below is SQL Server syntax; the Oracle equivalent is a sequence, or an identity column in 12c and later.)
CREATE TABLE [DB] (
    [DBId] bigint NOT NULL IDENTITY,
    ...
    CONSTRAINT [DB_PK] PRIMARY KEY ([DBId] ASC)
);
It looks like you are not providing a value for the primary key field DB_ID. If that is a primary key, you must provide a unique value for that column. The only way not to provide it would be to create a database trigger that, on insert, would provide a value, most likely derived from a sequence.
If this is a restoration from another database and there is a sequence on this new instance, it might be trying to reuse a value. If the old data had unique keys from 1 - 1000 and your current sequence is at 500, it would be generating values that already exist. If a sequence does exist for this table and it is trying to use it, you would need to reconcile the values in your table with the current value of the sequence.
You can use SEQUENCE_NAME.CURRVAL to see the current value of the sequence (if it exists, of course; note that CURRVAL is only defined in a session after NEXTVAL has been referenced at least once).
I have a table in which a constraint has been set on a field called LoginID. While inserting a new row I am getting an error on the constraint associated with this field (LoginID), stating the error below.
The insert command is below:
Type 1, with a sequence:
insert into TemplateModule
(LoginID, MTtype, Startdate, TypeId, TypeCase, MsgType, MsgLog, FileName, UserName, CrID, RegionaltypeId)
values
(MODS_SEQ.NEXTVAL, 3434, 2843, 2453, 2392, 435, 2390, 'pension.txt', 'rereee', 454545, 3434);
This failed with the error below.
Type 2, without a sequence, using a hardcoded value:
insert into TemplateModule
(LoginID, MTtype, Startdate, TypeId, TypeCase, MsgType, MsgLog, FileName, UserName, CrID, RegionaltypeId)
values
(3453, 3434, 2843, 2453, 2392, 435, 2390, 'pension.txt', 'rereee', 454545, 3434);
I cross-checked many times for duplicates but found nothing. What could be the root cause?
ORA-00001: unique constraint (LGN_INDEX) violated
First, do a describe on LGN_INDEX on that table to make absolutely certain you are looking at the right column. Is LGN_INDEX a constraint plus its index, or just an index? Try rebuilding your index to make sure it isn't corrupt. Make sure you don't have any other constraints that might be interfering.
Second, perform a SELECT MAX(LOGINID) FROM TEMPLATEMODULE and compare that to the next sequence value to make sure your sequence isn't set lower than the maximum ID you are working with.
Third, check if you have any triggers on that table.
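These checks, sketched as queries using the names from the question:

-- Compare the highest ID in the table with the next sequence value.
SELECT MAX(loginid) FROM templatemodule;
SELECT mods_seq.NEXTVAL FROM dual;

-- List any triggers defined on the table.
SELECT trigger_name, triggering_event, status
FROM   user_triggers
WHERE  table_name = 'TEMPLATEMODULE';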
If none of these things work, try re-creating the table from just its schema, cross-load the data (for example, CREATE TABLE my_temp AS SELECT * FROM templatemodule), and try again. There might be a configuration setting on that table that is causing the issue.
I encountered the same problem.
An INSERT statement was populating an integer value (not already in the table) into the primary key column.
The problem was a BEFORE INSERT trigger tied to a sequence: the sequence's NEXTVAL was already present in the table.
The trigger fires, grabs the sequence number, and fails with a primary key violation.
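In other words, something like this hypothetical trigger (a sketch, not the actual code):

CREATE OR REPLACE TRIGGER templatemodule_bi
BEFORE INSERT ON templatemodule
FOR EACH ROW
BEGIN
  -- Overwrites whatever LoginID the INSERT supplied; if the sequence lags
  -- behind the data already in the table, this raises ORA-00001.
  SELECT mods_seq.NEXTVAL INTO :new.loginid FROM dual;
END;
/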
I encountered this same issue while importing from an Excel file. I thought the file was free of duplicates until I tried removing duplicates in Excel.
To find and remove duplicates in Excel:
1. Select the data. Ctrl + A should work.
2. Click Data -> Remove Duplicates.
3. Select the fields that have the constraints in your database and click OK.
Excel should remove any duplicate records based on the fields selected at step 3 above.
You should now be able to import the records from the file into your DB.
I have some large tables (millions of rows). I constantly receive files containing new rows to add in to those tables - up to 50 million rows per day. Around 0.1% of the rows I receive are duplicates of rows I have already loaded (or are duplicates within the files). I would like to prevent those rows being loaded in to the table.
I currently use SQL*Loader in order to have sufficient performance to cope with my large data volume. If I take the obvious step and add a unique index on the columns which govern whether or not a row is a duplicate, SQL*Loader will start to fail the entire file which contains the duplicate row, whereas I only want to prevent the duplicate row itself from being loaded.
I know that in SQL Server and Sybase I can create a unique index with the 'Ignore Duplicates' property and that if I then use BCP the duplicate rows (as defined by that index) will simply not be loaded.
Is there some way to achieve the same effect in Oracle?
I do not want to remove the duplicate rows once they have been loaded - it's important to me that they should never be loaded in the first place.
What do you mean by "duplicate"? If you have a column which defines a unique row you should set up a unique constraint against that column. Declaring the constraint will automatically create a supporting unique index.
EDIT:
Yes, as commented below, you should set up a "bad" file for SQL*Loader to capture invalid rows. But I think that establishing the unique constraint is probably a good idea from a data-integrity standpoint.
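A sketch of the constraint, assuming the duplicate-defining columns are key_col1 and key_col2 on a table called big_table (all names hypothetical):

ALTER TABLE big_table
  ADD CONSTRAINT big_table_uk UNIQUE (key_col1, key_col2);

Then name a BADFILE in the SQL*Loader control file so the rejected duplicates are captured there, and raise the ERRORS option, since SQL*Loader aborts after 50 rejected rows by default.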
Use the Oracle MERGE statement. Some explanations are here.
You didn't say which release of Oracle you have; have a look there for the MERGE command.
Basically like this (assuming the incoming rows have first been loaded into a staging table, here called temp_emp_rec):
MERGE INTO hr.employees e
USING temp_emp_rec t
ON (e.emp_id = t.emp_id)
WHEN MATCHED THEN
  -- you can update
  UPDATE SET first_name = t.first_name,
             last_name  = t.last_name
WHEN NOT MATCHED THEN
  -- insert into the table
  INSERT (emp_id, first_name, last_name)
  VALUES (t.emp_id, t.first_name, t.last_name);
I would use integrity constraints defined on the appropriate table columns.
This page from the Oracle concepts manual gives an overview; if you scroll down, you will also see what types of constraints are available.
Use the OPTIONS clause below. ERRORS sets how many rejected rows SQL*Loader tolerates before it terminates, so with ERRORS=9999999 the load keeps going past the duplicates; DIRECT=FALSE forces a conventional-path load, so each duplicate row is rejected individually by the unique constraint.
OPTIONS (ERRORS=9999999, DIRECT=FALSE)
LOAD DATA
You will get the duplicate records in the bad file:
sqlldr user/password@schema CONTROL=file.ctl LOG=file.log BAD=file.bad