Loading a CLOB with a KEY - oracle

I need to load a list of CLOBs into a table using SQL*Loader.
What I generally do is to have a file LIST.DAT on the server containing the list of CLOB files:
FILE1
FILE2
FILE3
and then use a control file (ctl) like this:
load data
infile "LIST.DAT" stream
badfile "LIST.BAD"
append
into table MYTABLE
fields terminated by ','
trailing nullcols
(id_seq sequence(max,1),
file_name char(150),
file_lob lobfile(file_name) terminated by eof
)
so as to get something like this into MYTABLE:
ID_SEQ CLOB
1 FILE1
2 FILE2
3 FILE3
Now I need to load every CLOB together with a KEY that arrives externally; that is, I would like MYTABLE to contain something like:
ID_SEQ KEY CLOB
1 34 FILE1
2 22 FILE2
3 78 FILE3
In this case, how should the KEY for each CLOB be delivered to the server, and how can I then load both of them together?
Oracle 10g
Thank you
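One common approach (a sketch, assuming whoever produces LIST.DAT can emit the key next to each file name) is to make each line of LIST.DAT carry both fields:

```
34,FILE1
22,FILE2
78,FILE3
```

and then read both fields in the ctl, feeding the file name to LOBFILE exactly as before:

```
load data
infile "LIST.DAT" stream
badfile "LIST.BAD"
append
into table MYTABLE
fields terminated by ','
trailing nullcols
(id_seq sequence(max,1),
doc_key char(10),
file_name char(150),
file_lob lobfile(file_name) terminated by eof
)
```

Here doc_key is a placeholder for your actual KEY column name and char(10) is an assumed width; adjust both to your schema. Since the fields are comma-terminated, the first field on each line lands in the key column and the second remains the file name used to locate the LOB.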

Related

Is there a way to merge two tables and keep only the matching values using linux bash?

I have two tables of string values, and the objective is to make a new table that only keeps the matching values from both parent tables.
Example:
TABLE1
AX-18000257
AX-18000500
AX-18000816
AX-18000945
AX-18001189
AX-18001512
AX-18001524
TABLE2
AX-18000257
AX-18000512
AX-18000816
AX-18000947
AX-18001589
AX-18001525
AX-18001524
Expected output would be:
AX-18000257
AX-18000816
AX-18001189
AX-18001524
It could be done with:
grep -Fxf file2 file1 > file3 #thanks David Ranieri
(-F matches fixed strings and -x matches whole lines; note that adding -v would invert the match and keep only the non-matching lines.)
Or with:
join -1 1 -2 1 file1 file2 > file3
join requires both inputs to be sorted; on unsorted files it prints the warnings join: file 1 is not in sorted order and join: file 2 is not in sorted order. The result may still come out correct on data like this, but it is safer to sort first: join <(sort file1) <(sort file2) > file3
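A minimal, self-contained sketch of the intersection using comm, which also requires sorted input (the file names and the shortened sample data here are illustrative):

```shell
#!/bin/bash
# Two sample lists sharing some values (illustrative data).
printf 'AX-18000257\nAX-18000500\nAX-18001524\n' > file1
printf 'AX-18000257\nAX-18000512\nAX-18001524\n' > file2

# comm -12 suppresses lines unique to each file, printing
# only lines common to both sorted inputs.
comm -12 <(sort file1) <(sort file2) > file3
cat file3
```

This prints AX-18000257 and AX-18001524, the two values present in both lists.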

Format table using column with columns containing spaced text

I have the below table, with columns separated by two spaces:
1  Data  2021-02-04
2  Data Two  2021-02-05
If I run it through column -t -s ' ' I get "Data Two" split into two columns:
1  Data  2021-02-04
2  Data  Two  2021-02-05
Any way I can format this as:
1  Data      2021-02-04
2  Data Two  2021-02-05
Or format by column number 3?
Or format by column number 3
You have to first convert the two spaces separating columns in the original into a single character before feeding it to column. Something like:
$ sed 's/  /|/g' input.txt | column -t -s '|'
1  Data      2021-02-04
2  Data Two  2021-02-05
Use a different character that doesn't appear in the input if | is present in it.
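The sed-then-column pipeline above can be tried end to end like this (input.txt recreated with the question's sample rows; exact padding may vary slightly between column implementations):

```shell
#!/bin/bash
# Illustrative input: two spaces separate the columns,
# but the middle column itself contains a single space.
printf '1  Data  2021-02-04\n2  Data Two  2021-02-05\n' > input.txt

# Turn the two-space separator into '|', then let column split
# on '|' only, so "Data Two" stays a single field.
sed 's/  /|/g' input.txt | column -t -s '|'
```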

How to replace a line in SQL statement using sed or any other command?

We have several .sql files and some of them have a where clause to get a selective data. I would like to parameterize values that will be passed onto a select statement.
For example:
file 1:
select eid,deptid,name from employee where deptid = 25;
file 2:
select studentname,studentid from student;
file 3:
select staff,deptid,name from staff,dept where dept.id = staff.deptid
and deptid =25
I need to do the following things:
Read all listed files. (i.e. file 1,2 and 3)
check if there is a filter condition on deptid
If exists, replace that variable using parameter passed as a input.
Say I execute the script below; then every deptid filter value should get replaced by 28:
readsqlfile.sh 28
INPUT=$1; for i in file1 file2 file3 fileX; do sed -i -E "s/(deptid *= *)[0-9]+/\1${INPUT}/g" "$i"; done
This replaces the numeric value of every deptid = <number> condition in file1 file2 file3 fileX with the value passed as the first argument. (Replacing deptid itself, as in s/deptid/${INPUT}/g, would also mangle the column name in the select list.)
You can do an in-file replacement:
ishan#shurima:/tmp$ cat 1
select eid,deptid,name from employee where deptid = 25;
ishan#shurima:/tmp$ sed -i 's/deptid = 25/deptid = 28/g' ./1
ishan#shurima:/tmp$ cat 1
select eid,deptid,name from employee where deptid = 28;
ishan#shurima:/tmp$
This only changes the value in the condition, not every occurrence of deptid.
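A minimal sketch of the parameterized replacement, using a sample file built from the question (the regex assumes the filter is written as deptid = <number>; on macOS, sed -i needs an explicit backup suffix, e.g. sed -i ''):

```shell
#!/bin/bash
# Recreate a sample SQL file (contents taken from the question).
cat > file1.sql <<'SQL'
select eid,deptid,name from employee where deptid = 25;
SQL

new_id=28   # in the real readsqlfile.sh this would be "$1"

# Replace only the numeric value of the deptid filter,
# leaving the deptid column name in the select list untouched.
sed -i -E "s/(deptid *= *)[0-9]+/\1${new_id}/g" file1.sql
cat file1.sql
```

After running this, the file reads: select eid,deptid,name from employee where deptid = 28;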

vertica copy command with based on the content of the csv

I'm trying to run a COPY command that populates the db based on a concatenation of CSV columns.
db columns names are:
col1,col2,col3
csv content is (just the numbers, names are the db column names):
1234,5678,5436
What I need is a way to insert data like this, based on my example:
col1, col2, col3
1234, 5678, "1234_XX_5678"
Should I use FILLERs? If so, what is the command?
my starting point is:
COPY SAMPLE.MYTABLE (col1,col2,col3)
FROM LOCAL
'c:\\1\\test.CSV'
UNCOMPRESSED DELIMITER ',' NULL AS 'NULL' ESCAPE AS '\' RECORD TERMINATOR '
' ENCLOSED BY '"' DIRECT STREAM NAME 'Identifier_0' EXCEPTIONS 'c:\\1\\test.exceptions'
REJECTED DATA 'c:\\1\\test.rejections' ABORT ON ERROR NO COMMIT;
can you help how to load those columns (basically col3)
thanks
There are different ways to do this.
1 - Pipe the data into vsql and edit the data on the fly using Linux tools.
Eg:
awk -F',' -v OFS=',' '{print $1, $2, $1"_XX_"$2}' file.csv \
|vsql -U user -w passwd -d dbname -c "COPY tbl FROM STDIN DELIMITER ',';"
2 - Use FILLERs:
copy tbl(
  v1 filler int,
  v2 filler int,
  v3 filler int,
  col1 as v1,
  col2 as v2,
  col3 as v1||'_XX_'||v2
) from '/tmp/file.csv' delimiter ',' direct;
dbadmin=> select * from tbl;
col1 | col2 | col3
------+------+--------------
1234 | 5678 | 1234_XX_5678
(1 row)
I hope this helps :)
You don't even have to make the two input columns - which you load as-is anyway - FILLERs. This will do:
COPY mytable (
  col1
, col2
, col3f FILLER int
, col3 AS col1::CHAR(4)||'_XX_'||col2::CHAR(4)
)
FROM LOCAL 'foo.txt'
DELIMITER ',';
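The on-the-fly rewrite from option 1 can be checked in isolation before piping it into vsql (file name illustrative; the third input field is dropped, mirroring what the unused FILLER does on the database side):

```shell
#!/bin/bash
# Sample CSV row from the question.
printf '1234,5678,5436\n' > test.csv

# Emit col1, col2, and col1_XX_col2 as the third field,
# comma-delimited so COPY ... DELIMITER ',' can read it.
awk -F',' -v OFS=',' '{print $1, $2, $1"_XX_"$2}' test.csv
```

This prints 1234,5678,1234_XX_5678, matching the row shown in the filler answer's select output.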

Shell script: grep a table of values from a file

I have a result file that looks like this:
data data data data data...
data data data data data...
data data data data data...
#0
data data is 2
#1
data data is 2
testing data ( )
n m
256 729.44
352 1555.07
448 2649.68
#2
data data is 2
#3
data data is 2
I need to grep only the table, which will always have two columns, n and m (it can get very long). So the output should be:
n m
256 729.44
352 1555.07
448 2649.68
I've tried using awk and grep but I can only get one line, not the whole table. Any help would be appreciated.
Assuming there are no empty lines inside the table (and that blank lines separate it from the surrounding blocks), one can use gawk in paragraph mode like this:
awk '$1 == "n" && $2 == "m"' RS= file
It will print the blank-line-separated block whose first two fields are n and m.
Using awk you would print all lines where the number of fields (NF) equals 2:
awk 'NF == 2' data.txt
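The NF == 2 approach can be checked against a slice of the sample data from the question (file name illustrative):

```shell
#!/bin/bash
# Recreate the relevant slice of the result file.
cat > result.txt <<'EOF'
#1
data data is 2
testing data ( )
n m
256 729.44
352 1555.07
448 2649.68
#2
data data is 2
EOF

# Keep only lines with exactly two whitespace-separated fields:
# the "n m" header and the numeric table rows survive, the
# 1-field markers and 4-field prose lines do not.
awk 'NF == 2' result.txt
```

Note this relies on no other line in the file having exactly two fields; if some do, the paragraph-mode answer above is the safer filter.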
