Missing right parenthesis while importing data from flat files to a table - Oracle

I have a SQL*Loader control file (load data infile ...) for one flat file, and I want to load its data into table tab. I also want to put a few fixed values like 'ab', 'cd', 'ef' into column col6 of the table. I wrote the control file like this:
load data infile <source-path>
into table tab
fields terminated by ','
(
col1 "TRIM(:col1)" ,
............
...........
col6 "('ab','cd','ef')",
..........)
But when I load the file into the table, I get an error: ORA-00907: missing right parenthesis. How can I resolve this error so that I can insert the values 'ab', 'cd', 'ef' into col6 of table tab?

You can use the SQL*Loader equivalent of a multi-table insert, with three INTO TABLE clauses loading the same table:
load data infile <source-path>
into table tab
fields terminated by ','
(
col1 "TRIM(:col1)" ,
............
...........
col6 CONSTANT 'ab',
..........)
into table tab
fields terminated by ','
(
col1 POSITION(1) "TRIM(:col1)" ,
............
...........
col6 CONSTANT 'cd',
..........)
into table tab
fields terminated by ','
(
col1 POSITION(1) "TRIM(:col1)" ,
............
...........
col6 CONSTANT 'ef',
..........)
The POSITION(1) resets to the start of the record, so each INTO TABLE clause sees the same values from the source record again for each insert.
Alternatively, you could insert into a staging table, with a single row for each record in your file, excluding the constant-value col6 completely - which you could do with SQL*Loader:
load data infile <source-path>
into table staging_tab
fields terminated by ','
(
col1 "TRIM(:col1)" ,
............
...........
col5 ...
col7 ...
..........)
... or as an external table; and then insert into your real table by querying the staging table and cross-joining with a CTE containing the constant values:
insert into tab (col1, col2, ..., col6, ...)
with constants (col6) as (
select 'ab' from dual
union all select 'cd' from dual
union all select 'ef' from dual
)
select st.col1, st.col2, ..., c.col6, ...
from staging_tab st
cross join constants c;
For each row in the staging table you'll get three rows in the real table, one for each of the dummy rows in the CTE. You could do the same with a collection instead of a CTE:
insert into tab (col1, col2, col6)
select st.col1, st.col2, c.column_value
from staging_tab st
cross join table(sys.odcivarchar2list('ab', 'cd', 'ef')) c;
This time you get one row for each element in the collection - which is expanded into multiple rows by the table collection clause. The result is the same.
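For completeness, the external table route mentioned above might look something like this. It is only a sketch; the directory object, file name, and column definitions are assumptions you would adapt to your own file layout:
-- One-time setup: a directory object pointing at the folder holding the flat file
create directory data_dir as '/path/to/flat/files';
create table staging_ext
(
col1 varchar2(100),
col2 varchar2(100)
-- ... remaining columns matching the file layout, excluding col6
)
organization external
(
type oracle_loader
default directory data_dir
access parameters
(
records delimited by newline
fields terminated by ','
)
location ('source_file.dat')
);
The same cross-join insert then works with staging_ext in place of staging_tab, without a separate SQL*Loader run.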

Related

How to copy data from flex table

I have a huge CSV file that I loaded into a flex table; the CSV contains more columns than required.
Now I would like to copy the data from the flex table to my regular table (including mapping the columns).
I tried INSERT ... SELECT but I got an error regarding casting, so I tried to run INSERT IGNORE, which is not supported in Vertica.
In my case I don't mind losing the records that fail.
I thought about writing a COPY with a rejected table, but I can't find the right syntax.
Thanks
You need to materialize the columns you want in the flex table. Then, when you run the COPY command, only values that match the declared data types will be loaded.
Assuming your data looks like:
col1,col2,col3,col4
1.2,2019-07-01 10:00:00,1,string 2
1.2,2019-07-01 10:00:00,string 1,string 2
And suppose you only care about col1, col2, and col3, where col3 contains mixed int and string values.
Create the flex table and load the csv:
CREATE FLEX TABLE flex_table
(
col1 float,
col2 timestamp,
col3 int
);
COPY public.flex_table FROM '/data/csv/data_june7_15.csv' PARSER fcsvparser();
Then, you can insert the data into your regular table from your flex table (no need for the view):
CREATE TABLE regular_table
(
col1 float,
col2 timestamp,
col3 int
);
INSERT INTO regular_table (col1, col2, col3) SELECT col1, col2, col3 FROM flex_table;
SELECT * FROM regular_table;
 col1 | col2                | col3
------+---------------------+------
  1.2 | 2019-07-01 10:00:00 |
  1.2 | 2019-07-01 10:00:00 |    1
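As for the COPY with a rejected table that you asked about: Vertica's COPY supports a REJECTED DATA AS TABLE clause, so rows that fail to parse are diverted to a table instead of failing the load. A minimal sketch, assuming you load the regular table straight from the CSV (the target and file names are reused from above; the rejections table name and the FILLER type are illustrative):
-- SKIP 1 skips the header row; FILLER discards the unwanted col4.
COPY regular_table (col1, col2, col3, col4 FILLER varchar(100))
FROM '/data/csv/data_june7_15.csv' DELIMITER ',' SKIP 1
REJECTED DATA AS TABLE regular_table_rejects;
-- Inspect why rows were rejected:
SELECT rejected_reason, rejected_data FROM regular_table_rejects;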

Fetch name based on comma-separated ids

I have two tables, customers and products.
products:
productid  name
1          pro1
2          pro2
3          pro3
customers:
id  name   productid
1   cust1  1,2
2   cust2  1,3
3   cust3
I want the following result from a SELECT statement:
id  name   productid
1   cust1  pro1,pro2
2   cust2  pro1,pro3
3   cust3
I have 300+ records in both tables, and I am a beginner at back-end coding. Any help?
Definitely a poor database design, but the bad thing is that you have to live with it. Here is a solution I created using a hierarchical (CONNECT BY) query. I don't see the use of the products table, though, since your expected names can be rebuilt by prefixing 'pro' to each id.
with
--Expanding each row seperated by comma
tab(col1,col2,col3) as (
Select distinct c.id,c.prdname,regexp_substr(c.productid,'[^,]+',1,level)
from customers c
connect by regexp_substr(c.productid,'[^,]+',1,level) is not null
order by 1),
--Appending `Pro` to each value
tab_final as ( Select col1,col2, case when col3 is not null
then 'pro'||col3
else col3
end col3
from tab )
--Displaying result as expected
SELECT
col1,
col2,
LISTAGG(col3,',') WITHIN GROUP( ORDER BY col1,col2 ) col3
FROM
tab_final
GROUP BY
col1,
col2
Demo:
--Preparing dataset
With
customers(id,prdname,productid) as ( Select 1, 'cust1', '1,2' from dual
UNION ALL
Select 2, 'cust2','1,3' from dual
UNION ALL
Select 3, 'cust3','' from dual),
--Expanding each row seperated by comma
tab(col1,col2,col3) as (
Select distinct c.id,c.prdname,regexp_substr(c.productid,'[^,]+',1,level)
from customers c
connect by regexp_substr(c.productid,'[^,]+',1,level) is not null
order by 1),
--Appending `Pro` to each value
tab_final as ( Select col1,col2, case when col3 is not null
then 'pro'||col3
else col3
end col3
from tab )
--Displaying result as expected
SELECT
col1,
col2,
LISTAGG(col3,',') WITHIN GROUP( ORDER BY col1,col2 ) col3
FROM
tab_final
GROUP BY
col1,
col2
PS: Don't forget to substitute your actual table and column names when you use this; the ones in my example may differ.
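If your real product names don't follow the pro<n> pattern, you could join the split ids back to the products table instead of rebuilding the names with 'pro'||. A sketch, using the table and column names from the question:
with
--Expanding each row separated by comma
ids(id, name, productid) as (
Select distinct c.id, c.name, to_number(regexp_substr(c.productid,'[^,]+',1,level))
from customers c
connect by regexp_substr(c.productid,'[^,]+',1,level) is not null )
--Joining each id to its product name and re-aggregating
SELECT i.id, i.name,
LISTAGG(p.name,',') WITHIN GROUP( ORDER BY p.productid ) productnames
FROM ids i
LEFT JOIN products p ON p.productid = i.productid
GROUP BY i.id, i.name
ORDER BY i.id
The LEFT JOIN keeps customers with no products (like cust3) in the result with an empty list.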

Insert data listing columns with partitioning field in Hive

First of all let's setup a test environment:
CREATE TABLE IF NOT EXISTS source_table (
`col1` TIMESTAMP,
`col2` STRING
);
CREATE TABLE IF NOT EXISTS dest_table (
`col1` TIMESTAMP,
`col2` STRING,
`col3` STRING
)
PARTITIONED BY (day STRING)
STORED AS AVRO;
INSERT INTO TABLE source_table VALUES ('2018-03-21 17:08:04.401', 'test1'), ('2018-03-22 12:02:04.222', 'test2'), ('2018-03-22 07:21:04.111', 'test3');
How could I list the column names during insertion and put the partition value dynamically? The following command doesn't work:
INSERT INTO TABLE dest_table(col1, col2) PARTITION(day) SELECT col1, col2, date_format(col1, 'yyyy-MM-dd') FROM source_table;
By the way, without listing the columns of dest_table in the INSERT INTO command, when the two tables have the same number of columns everything works fine. But what if my dest_table has more fields than source_table?
Thank you for helping me.
P.S.
Ok, if I hardcode NULL this works. I leave the question opened because there might be better ways to achieve that.
INSERT INTO TABLE dest_table PARTITION(day) SELECT col1, col2, NULL, date_format(col1, 'yyyy-MM-dd') FROM source_table;
Anyway, isn't this method strictly bound to column order? In a real-life scenario, how could I handle lots of columns by specifying a mapping, to avoid mistakes?
The syntax for inserting into a partitioned table when you want to list specific columns is shown below. You don't need to put NULL for col3, since Hive supplies the default value NULL for any column that is not in the insert's column list.
INSERT INTO TABLE dest_table PARTITION (day)(col1, col2, day)
SELECT col1, col2, date_format(col1, 'yyyy-MM-dd') FROM source_table;
Result:
col1                     col2   col3  day
2018-03-22 12:02:04.222  test2  NULL  2018-03-22
2018-03-22 07:21:04.111  test3  NULL  2018-03-22
2018-03-21 17:08:04.401  test1  NULL  2018-03-21
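Note that a dynamic partition insert like this typically requires dynamic partitioning to be enabled for the session; these are standard Hive settings:
-- Allow dynamic partitions, and let every partition column (here: day)
-- be derived from the SELECT instead of being hard-coded in PARTITION.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;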

Error when inserting CTE table values into physical table

I have a complex query that creates a master CTE_Table from other CTE_Tables. I want to insert the results of the master CTE_Table into a physical table. I'm using Teradata version 15.10.04.03, and I get this error:
SELECT Failed. [3707] Syntax error, expected something like a 'SELECT' keyword or '(' or a 'TRANSACTIONTIME' keyword or a 'VALIDTIME' keyword between ')' and the 'INSERT' keyword.
DROP TABLE dbname.physicalTablename ;
CREATE MULTISET TABLE dbname.physicalTablename ,
NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
col1 INTEGER,
col2 INTEGER,
col3 INTEGER
)
NO PRIMARY INDEX ;
WITH
cteTable3 AS
( SELECT A.colA, A.colB, A.colC, B.col1, B.col2, B.col3
FROM cteTable1 A INNER JOIN cteTable2 B ON (blah blah blah) ),
cteTable2 AS
( SELECT col1, col2, col3 FROM SourceTableB ),
cteTable1 AS
( SELECT colA, colB, colC FROM SourceTableA )
INSERT INTO dbname.physicalTablename
( col1, col2, col3, col4, col5, col6 )
SELECT
(C3.colA, C3.colB, C3.colC, C3.col1, C3.col2, C3.col3)
FROM cteTable3 C3 ;
The issue is where the INSERT sits: in Teradata, the INSERT INTO has to come before the WITH clause, not after it. The correct format for using a CTE in an INSERT is:
INSERT INTO <tablename>
WITH <cte> AS (SELECT...)
SELECT <fields> FROM <cte>
Consider the following:
CREATE MULTISET VOLATILE TABLE tmp AS (SELECT 'bobby' as firstname) WITH DATA ON COMMIT PRESERVE ROWS;
INSERT INTO tmp
WITH cte AS (select 'carol' as firstname)
SELECT * FROM cte;
SELECT * FROM tmp;
DROP TABLE tmp;
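Applied to your query, that means moving the INSERT INTO in front of the WITH, defining each named query before it is referenced, and dropping the parentheses around the select list. Roughly as follows, keeping your placeholder join condition; the column list is omitted, so the select list maps to the table's columns positionally (which assumes dbname.physicalTablename is defined with six matching columns, unlike the three-column definition shown above):
INSERT INTO dbname.physicalTablename
WITH
cteTable1 AS
( SELECT colA, colB, colC FROM SourceTableA ),
cteTable2 AS
( SELECT col1, col2, col3 FROM SourceTableB ),
cteTable3 AS
( SELECT A.colA, A.colB, A.colC, B.col1, B.col2, B.col3
FROM cteTable1 A INNER JOIN cteTable2 B ON (blah blah blah) )
SELECT C3.colA, C3.colB, C3.colC, C3.col1, C3.col2, C3.col3
FROM cteTable3 C3 ;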

Distinct in XMLAGG function in Oracle SQL

I have a problem avoiding duplicates with the XMLAGG function.
A table has multiple records, where one column contains repetitive data.
Using the XMLAGG function in the following SQL:
select col1, col2, XMLAGG(XMLELEMENT(E, colname || ',')).EXTRACT('//text()')
from table
group by col1, col2
I get the following output:
col1     col2       col3
hareesh  apartment  residential, commercial, residential, residential
But I need the output for col3 to be: residential, commercial.
Can anyone help me?
Try using a subquery to remove duplicates:
SELECT col1, col2, XMLAGG(XMLELEMENT(E, colname || ',')).EXTRACT('//text()')
FROM (SELECT DISTINCT col1, col2, colname FROM table)
GROUP BY col1, col2
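Alternatively, if you are on Oracle 19c or later and don't specifically need XMLAGG, LISTAGG supports DISTINCT directly, which avoids the subquery. A sketch with the placeholder names from the question (note that table is a reserved word, so a real table name is assumed here):
SELECT col1, col2,
LISTAGG(DISTINCT colname, ', ') WITHIN GROUP (ORDER BY colname) AS col3
FROM your_table
GROUP BY col1, col2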
