Difference Between Insert and Append statement in SQL Loader? - oracle

Can anyone tell me the difference between the INSERT and APPEND statements in SQL*Loader? Consider the example below:
Here is my control file
load_1.ctl
load data
infile 'load_1.dat' "str '\r\n'"
insert into table sql_loader_1  -- or: append into table sql_loader_1
(
load_time sysdate,
field_2 position( 1:10),
field_1 position(11:20)
)
Here is my data file
load_1.dat
0123456789abcdefghij
**********##########
foo bar
here comes a very long line
and the next is
short

The documentation is fairly clear: use INSERT when you're loading into an empty table, and APPEND when adding rows to a table that (might) contain data you want to keep.
APPEND will still work if your table is empty. INSERT might be safer if you're expecting the table to be empty, as it errors out if that isn't true, which can save you from unexpected results (particularly if you don't notice and don't get other errors, such as unique index constraint violations) and/or a post-load data cleanse.
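For instance, a quick way to see the difference (the table definition here is only an assumption, inferred from the positions in the control file above):
create table sql_loader_1 (
  load_time date,
  field_2   varchar2(10),
  field_1   varchar2(10)
);
-- first run:  sqlldr <user>/<password> control=load_1.ctl log=load_1.log
-- second run of the same command:
--   with APPEND, the rows from load_1.dat are simply added again;
--   with INSERT, the load is rejected with something like
--   SQL*Loader-601: For INSERT option, table must be empty.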

The difference is clear on two points:
APPEND will only add the record at the end of the table.
INSERT will insert wherever you want, i.e. if your table has 10 columns you can insert into only 5 of them, but with APPEND you can't.
With APPEND both your data and the table should have the same columns, meaning data is inserted at row level rather than at column level.
It's also true that you cannot use INSERT if your table already has data; only when it's empty can you use INSERT.
Hope it helps.

Related

Replace specific junk characters from column in hive

I have an issue where one of the columns loaded into a Hive table contains a junk character ("~) suffixed to the actual value (ABC). So the value that's visible for this column is (ABC"~).
This column can have either ABC (or any such string) or NULL. The table is huge and UPDATE is not an option here.
I've thought of a solution: create a temp table where this column contains either the string (ABC) or NULL, removing the junk character ("~) completely while copying the data from the original table to the temp table.
Any help on how I can remove this junk? I tried using the regexp function, but had no success. Any suggestions?
I was not using regexp properly; my fault.
The data loaded initially in the table had the extra characters attached to a column's values. For example, if the column's actual value was Adf452, then the data contained in the cell was Adf452"~.
So I loaded the data to a temp table like this:
insert overwrite table tempTable select colA, colB, colC, regexp_replace(colC,"\"~",""), partitionedCol from origTable;
This simply loaded the data in tempTable without those junk characters.
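If it helps, the replacement can be sanity-checked with a plain SELECT before running the insert overwrite (the table and column names are just the ones from the example above):
select colC, regexp_replace(colC, '"~', '') as cleaned
from origTable
limit 10;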

Concat_ws not working in insert statement in hive

Using Hive, I'm trying to concatenate columns from one table and insert them into another table using the query
insert into table temp_error
select * from (Select 'temp_test','abcd','abcd','abcd',
from_unixtime(unix_timestamp()),concat_ws('|',sno,name,age)
from temp_test_string)c;
I get the required output when I just run the SELECT *. But as soon as I try to insert it into the table, it does not give the concatenated output; it gives only the value of sno instead of the whole concatenated string.
Thanks guys.
I found why it was behaving that way. It's because, while creating the table, I specified the fields to be separated by '|'. So what I was trying to insert as a single string, Hive was interpreting as separate columns.
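A minimal sketch of the underlying issue, with hypothetical column names: because temp_error was created with its fields terminated by '|', the '|' characters that concat_ws writes into the string are read back as column boundaries. Declaring the table with a delimiter that cannot occur in the data (tab here, purely as an example) avoids the clash:
create table temp_error (
  source_name  string,
  col1         string,
  col2         string,
  col3         string,
  load_time    string,
  concatenated string
)
row format delimited
fields terminated by '\t';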

Redirect duplicate rows to update while insert

I have an insert statement on table "test", with a PK on column x in that table.
Now, while inserting, if a duplicate row comes in then the existing row should get updated instead of inserted.
How can I achieve this?
Is it possible with dup_val_on_index?
Please help.
First create a copy of the table above without any key columns, then follow these steps:
Step 1: truncate the copy table whenever a batch of insert statements comes in
Step 2: INSERT the incoming rows into the truncated copy table
Step 3: Execute the MERGE statement like below
MERGE INTO TABLE_MAIN M
USING TABLE_MAIN_COPY C
ON (m.id = c.id)
WHEN MATCHED THEN UPDATE SET M.somecol = c.somecol
WHEN NOT MATCHED THEN INSERT (m.id, m.somecol)
VALUES (c.id, c.somecol);
You may hit the error ORA-30926: unable to get a stable set of rows in the source tables during the merge when two or more source rows try to update the same target row.
You can avoid that by grouping the source rows on id (or a similar aggregation); see also ORA-30926: unable to get a stable set of rows in the source tables.
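A minimal sketch of that idea, keeping the tables from the answer and picking MAX arbitrarily as the aggregate, so each id appears only once in the source:
MERGE INTO table_main m
USING (
  SELECT id, MAX(somecol) AS somecol
  FROM table_main_copy
  GROUP BY id
) c
ON (m.id = c.id)
WHEN MATCHED THEN UPDATE SET m.somecol = c.somecol
WHEN NOT MATCHED THEN INSERT (m.id, m.somecol)
VALUES (c.id, c.somecol);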

update table based on concatenated column value

I have a table with only 4 columns
First column - the concatenated column values for each row from another table. The columns are concatenated based on the column id from the metadata table, in the same order as the column ids.
Second column - the comma separated primary key columns.
Now, based on the primary keys in the second column, I need to update the third column, which should hold the values for those primary key columns extracted from the first, concatenated column.
Fourth column - it has the table name.
I am using a cursor and string functions and it works perfectly fine, but when I tested it against huge volumes (millions of rows) it failed and the performance is very poor.
Could anyone please give me a single update query for the same?
There is a comparison tool which compares the data between 2 tables in different databases but with the same data structure, and it dumps the mismatched rows into a table with all the columns concatenated (pipe separated). The columns are in the same order as the column ids, and I know the primary keys for that table (also concatenated, pipe separated). So, based on this data, I need to extract the primary key values for which there is a data mismatch.
I need to do something like
Update column4 (primary key values, pipe separated, extracted from column2)
Check this LINK, it may be useful. With that query you could concatenate values with whatever character you need (this works for version 11gR2; for earlier versions use the xmlagg, xmlelement, extract method).
CREATE TABLE TEST(
FIELD INT);
INSERT INTO TEST VALUES(1);
INSERT INTO TEST VALUES(2);
INSERT INTO TEST VALUES(3);
INSERT INTO TEST VALUES(4);
SELECT listagg(FIELD,',' ) WITHIN GROUP (ORDER BY FIELD)
FROM TEST
Returns '1,2,3,4'
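For going the other way - pulling a single value back out of a pipe-separated string, as the question needs for the primary key columns - regexp_substr can be used; a small sketch with made-up data:
SELECT regexp_substr('ABC|123|2017-01-01', '[^|]+', 1, 2) FROM dual;
-- returns '123', the second pipe-delimited field
-- (note: the [^|]+ pattern skips empty fields, so a different pattern is needed if NULL fields matter)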

tracking changes with trigger - alternate field names

Using the trigger noted below, I am tracking changes to a production table in an audit, or change-log, table. My problem is that the field names in the tracking table are different from the ones in table1. The values are the same, but the names of the columns are different.
The question then is: how must the syntax change in the trigger to take the value of one field name and insert it into a field with a different name in the tracking table?
Thank you for any and all help or suggestions.
CREATE OR REPLACE TRIGGER track_change_trg
AFTER INSERT OR UPDATE OR DELETE
ON table1
FOR EACH ROW
BEGIN
 
IF INSERTING THEN
INSERT INTO tracking table VALUES
(:new.pname, :new.p_id, :new.p_type, :new.t1name,
'INSERTED', SYSDATE);
 
ELSIF UPDATING THEN
INSERT INTO tracking table VALUES
(:new.pname, :new.p_id, :new.p_type, :new.t1name,
'UPDATED', SYSDATE);
 
ELSIF DELETING THEN
INSERT INTO tracking table VALUES
(:old.pname, :old.p_id, :old.p_type, :old.t1name,
'DELETED', SYSDATE);
 
END IF;
END;
/
It makes no difference if the column names are different in the main and audit tables. I'm not sure why you think that's a problem - showing any error you get would have helped clarify your issue, along with the table definitions. The only error I can immediately see being thrown - assuming the space in tracking table is a mistake in transcription - is if the column order doesn't match and you're putting the wrong type or size of data into a column. It's hard to guess what you're seeing, though.
You've omitted the optional column list in the insert statement that would normally name the target columns. Without an explicit list of the column names, the values are assigned based on the order of the columns in the target table, as shown in user_tab_columns.column_id or with describe. It would be better to list the columns to avoid ambiguity, and so you don't have problems if the table definition changes (e.g. a column is added, so you no longer supply enough values) or the column order is different in another environment (which arguably shouldn't happen under decent source control). It's easier to spot trivial mistakes, too.
Anyway, just list the column names from the table you're inserting into:
INSERT INTO tracking_table (x_name, x_id, x_type, x_t1name, x_action, x_when)
VALUES (:new.pname, :new.p_id, :new.p_type, :new.t1name, 'INSERTED', SYSDATE);
... replacing x_name etc. with the actual column names from tracking_table.
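Putting that together, the whole trigger might look something like this, where x_name, x_id, x_type, x_t1name, x_action and x_when are placeholders for whatever the tracking table's columns are really called:
CREATE OR REPLACE TRIGGER track_change_trg
AFTER INSERT OR UPDATE OR DELETE
ON table1
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    INSERT INTO tracking_table (x_name, x_id, x_type, x_t1name, x_action, x_when)
    VALUES (:new.pname, :new.p_id, :new.p_type, :new.t1name, 'INSERTED', SYSDATE);
  ELSIF UPDATING THEN
    INSERT INTO tracking_table (x_name, x_id, x_type, x_t1name, x_action, x_when)
    VALUES (:new.pname, :new.p_id, :new.p_type, :new.t1name, 'UPDATED', SYSDATE);
  ELSIF DELETING THEN
    INSERT INTO tracking_table (x_name, x_id, x_type, x_t1name, x_action, x_when)
    VALUES (:old.pname, :old.p_id, :old.p_type, :old.t1name, 'DELETED', SYSDATE);
  END IF;
END;
/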
