I am working with sqlldr (SQL*Loader) in Oracle 11g.
I am trying to concatenate 3 fields into a single field. Has anyone done this?
ex:
TABLE - "CELLINFO" where the fields are (mobile_no,service,longitude).
The data given is (+9198449844,idea,110,25,50) i.e. (mobile_no,service,grad,min,sec).
But while loading the data into the table I need to concatenate the last 3 fields (grad,min,sec) into the longitude field of the table.
I can't edit the file manually because there are thousands of records to load.
I also tried using ||, + and concat(), but I couldn't get it to work.
The control file may look like this:
load data
append
into table cellinfo
fields terminated by ","
(
mobile_no,
service,
grad BOUNDFILLER,
min BOUNDFILLER,
sec BOUNDFILLER,
latitude ":grad || :min || :sec"
)
supposing cellinfo(mobile_no, service, latitude).
There is some nice info on this on OraFAQ.
Alternatively, you can modify your input:
awk -F"," '{print $1","$2","$3":"$4":"$5}' inputfile > outputfile
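If awk is not available, the same preprocessing can be sketched in Python (a minimal sketch; file names and the 5-field layout are taken from the example above):

```python
def merge_coords(line):
    """Turn 'mobile_no,service,grad,min,sec' into 'mobile_no,service,grad:min:sec',
    i.e. join the last three comma-separated fields with ':'."""
    parts = line.rstrip("\n").split(",")
    return ",".join(parts[:2]) + "," + ":".join(parts[2:])
```

Applied line by line over the input file (read `inputfile`, write `outputfile`), this produces the same result as the awk one-liner.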
Related
I have data in Google Sheets where I need the total of the filtered date columns per row. The date columns are not fixed (they may increase over time; I already know how to handle the undefined number of columns). My current challenge is how to efficiently get a summary of totals per user based on the filtered date columns.
(My data, expected result, and current idea are shown in the sample spreadsheet below.)
Here is a sample spreadsheet for reference:
https://docs.google.com/spreadsheets/d/1_dByPabStGQvh94TabKxwFeUyVaRFnkBCRf4ioTY5jM/edit?usp=sharing
This is a method to unpivot the data so you can work with it
=ARRAYFORMULA(
QUERY(
IFERROR(
SPLIT(
FLATTEN(
IF(ISBLANK(A2:A),,A2:A&"|"&B1:G1&"|"&B2:G)),
"|")),
"select Col1, Sum(Col3)
where
Col2 >= "&DATE(2022,1,1)&" and
Col2 <= "&DATE(2022,1,15)&"
group by Col1
label
Col1 'Person',
Sum(Col3) 'Total'"))
Basically, it's creating an output like User1|44557|8; it then FLATTENs it all and splits by the pipe, which gives you three clean columns.
Run that through a QUERY to SUM per person between the dates and you get what you're after. If you want to use cell references for the dates, simply replace the DATE() calls with the cell references.
To expand the table, change B1:G1 and B2:G to match the width of the range.
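The unpivot-and-aggregate idea behind the formula can be sketched in Python with hypothetical data (dates held as ISO strings, which compare chronologically):

```python
from collections import defaultdict

# Wide table: one row per user, one column per date (hypothetical sample).
header = ["2022-01-03", "2022-01-10", "2022-01-20"]
rows = {"User1": [5, 3, 9], "User2": [2, 0, 4]}

def totals(start, end):
    """Flatten (user, date, value) triples and sum each user's values
    for dates in [start, end], like the QUERY formula does."""
    out = defaultdict(int)
    for user, values in rows.items():
        for day, value in zip(header, values):
            if start <= day <= end:
                out[user] += value
    return dict(out)
```

Here `totals("2022-01-01", "2022-01-15")` plays the role of the `where Col2 >= … and Col2 <= …` clause.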
INPUT:
I have an input file in which the first 10 characters of each line represent 2 fields - first 4 characters(field A) and the next 6 characters(field B). The file contains about 400K records.
I have a Mapping table which contains about 25M rows and looks like
Field A | Field B | SomeStringA | SomeStringB
1628    | 836791  | 1234        | 783901
afgd    | ahutwe  | 1278        | ashjkl
and so on.
Field A and Field B combined is the Primary Key for the table.
PROBLEM STATEMENT:
Replace:
Field A by SomeStringA
Field B by SomeStringB
in the input file. SomeStringA and SomeStringB are exactly the same width as Field A and B respectively.
Here's what I'm trying:
Approach 1:
Sort and Dump the mapping table into a file
spool dump_file
select * from mapping order by fieldA, fieldB;
spool off
exit;
Strip the input file and get the first 10 chars
cut -c1-10 input_file > input_file_stripped
Do something to find the lines that begin with the same string, and when they do, replace them in input_file with characters 10-20 of the spooled file. This is where I'm stuck.
Approach 2:
Take the input file and get the first 10 chars
cut -c1-10 input_file >input_file_stripped
Use sqlldr and load into a temp_table.
Select matching records from the mapping table and spool
spool matching_records
select m.* from mapping m, temp t where m.fieldA=t.fieldA and m.fieldB=t.fieldB;
spool off
exit;
Now how do I replace these in the original file ?
Given the high number of records to process, how can this be done and done fast ?
Notes:
Not a one-time activity; it has to be done daily, so scale is important
The mapping table is unlikely to change
I have Python, shell scripting and an Oracle database available. Any combination of these is fine.
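One way to sketch the daily rewrite in Python, assuming the spooled mapping dump fits in memory as a dict (25M short rows may need a few GB; otherwise a key-value store would replace the dict). Column positions follow the fixed widths stated above (4 + 6 characters, and replacements of exactly the same width):

```python
def load_mapping(dump_lines):
    """Build {fieldA+fieldB: someStringA+someStringB} from a whitespace-separated dump."""
    mapping = {}
    for line in dump_lines:
        a, b, sa, sb = line.split()
        mapping[a + b] = sa + sb
    return mapping

def rewrite(input_lines, mapping):
    """Replace the first 10 characters of each input line when they match a key;
    since the replacement has the same width, the rest of the line is untouched."""
    for line in input_lines:
        key = line[:10]
        yield mapping.get(key, key) + line[10:]
```

With the dict built once, each of the 400K input lines costs a single hash lookup, so the per-day work is dominated by loading the mapping.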
I have the below table:
(AddressID,ShortAddress,FullAddress).
Now, the FullAddress column normally contains addresses like this:
Bellvue East,204-Park Avenue,Zip-203345.
I need to write a script that extracts the part before the first ',' in the full address and inserts it into the ShortAddress column.
So, the Table Data before executing the script:
AddressID|ShortAddress|FullAddress
1 | NULL |Bellvue East,204-Park Avenue,Zip-203345,United Kingdom
2 | NULL |Salt Lake,Sector-50/A,Noida,UP,India
And after executing the script, it should be:
AddressID|ShortAddress|FullAddress
1 |Bellvue East|Bellvue East,204-Park Avenue,Zip-203345,United Kingdom
2 |Salt Lake|Salt Lake,Sector-50/A,Noida,UP,India
I need to write it in Oracle PL/SQL.
Any help will be highly appreciated.
Thanks in advance.
Try this UPDATE:
UPDATE yourTable
SET ShortAddress = COALESCE(SUBSTR(FullAddress, 1, INSTR(FullAddress, ',') - 1),
FullAddress)
This UPDATE assigns the first comma-separated term of the full address to the short address. If no comma is present, it assigns the entire full address.
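The SUBSTR/INSTR/COALESCE logic corresponds to this small Python sketch (SUBSTR with a negative length yields NULL, which COALESCE then replaces with the full address):

```python
def short_address(full_address):
    """Return the first comma-separated term, or the whole string when
    no comma is present, mirroring the UPDATE above."""
    head, sep, _rest = full_address.partition(",")
    return head if sep else full_address
```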
I am trying to fetch a query result into an XLS file with column headers in Oracle.
Below is the query:
select
'Student Name',
'Joining Date'
from dual
union all
select
student_name,
to_char(joining_date,'MM/DD/YYYY')
from student_details
The only problem is that once I get this into an XLS file, I am not able to sort the data by joining_date, since it has been converted to varchar/char.
I need a solution where I can include the headers and retain the data types (date, number) in the final XLS result. (I can format the cells in Excel to date/number, but I want it done through the database query itself.)
Please help!
Extract data as Oracle literals:
1) numbers - 123.354
2) strings - q'{This is a string}' / 'This is another string'
3) dates - DATE'2016-04-30'
4) dates with time/timestamps - TIMESTAMP'2016-04-30 11:13:00' / TIMESTAMP'2016-04-30 11:13:00.000000'
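A minimal Python sketch of rendering values as such literals (an assumption of this sketch: the q'{...}' quoting is only safe while the string contains no "}'" sequence):

```python
from datetime import date, datetime

def oracle_literal(value):
    """Render a Python value as an Oracle literal in the styles listed above."""
    # datetime is a subclass of date, so it must be checked first
    if isinstance(value, datetime):
        return f"TIMESTAMP'{value:%Y-%m-%d %H:%M:%S}'"
    if isinstance(value, date):
        return f"DATE'{value:%Y-%m-%d}'"
    if isinstance(value, (int, float)):
        return str(value)
    # q'{...}' quoting avoids doubling single quotes inside the string
    return "q'{" + str(value) + "}'"
```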
I am trying to create a table in Hive using complex data types.
One of my columns is an array of strings and the other is an array of maps.
After I have loaded the data into the table, when I try to query the data, I don't get the desired result in the third column which is an array of maps.
The following is my Hive query:
Step 1:
create table transactiondb2(
  order_id int,
  billtype array<string>,
  paymenttype array<map<string,int>>)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY '|'
  MAP KEYS TERMINATED BY '#';
Step 2:
load data local inpath '/home/xyz/data.txt' overwrite into table transactiondb2;
Step 3:
select * from transactiondb2;
And my output is as follows:
OK
1 ["A","B"] [{"credit":null,"10":null},{"cash":null,"25":null},{"emi":null,"30":null}]
2 ["C","D"] [{"credit":null,"157":null},{"cash":null,"45":null},{"emi":null,"35":null}]
3 ["X","Y"] [{"credit":null,"25":null},{"cash":null,"38":null},{"emi":null,"50":null}]
4 ["E","F"] [{"credit":null,"89":null},{"cash":null,"105":null},{"emi":null,"85":null}]
5 ["Z","A"] [{"credit":null,"7":null},{"cash":null,"79":null},{"emi":null,"105":null}]
6 ["D","Y"] [{"credit":null,"30":null},{"cash":null,"100":null},{"emi":null,"101":null}]
7 ["A","Z"] [{"credit":null,"50":null},{"cash":null,"9":null},{"emi":null,"85":null}]
8 ["B","Z"] [{"credit":null,"70":null},{"cash":null,"38":null},{"emi":null,"90":null}]
And my input file data is as follows:
1 A|B credit#10|cash#25|emi#30
2 C|D credit#157|cash#45|emi#35
3 X|Y credit#25|cash#38|emi#50
4 E|F credit#89|cash#105|emi#85
5 Z|A credit#7|cash#79|emi#105
6 D|Y credit#30|cash#100|emi#101
7 A|Z credit#50|cash#9|emi#85
8 B|Z credit#70|cash#38|emi#90
I solved it myself.
There is no need to declare an array of maps explicitly; by default a plain map column takes one key/value pair after another, separated by the collection delimiter.
Create the table as shown below and load the data, then you will get the desired output.
create table complex(
  id int,
  bill array<string>,
  paytype map<string,int>)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY '|'
  MAP KEYS TERMINATED BY '#';
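To see why a plain map is enough, here is a Python sketch of how the declared delimiters decompose one input line (tab between fields, '|' between collection items, '#' between a map key and its value):

```python
def parse_row(line):
    """Split one line of the input file the way the table DDL declares."""
    order_id, bill, pay = line.rstrip("\n").split("\t")
    billtype = bill.split("|")                  # COLLECTION ITEMS TERMINATED BY '|'
    paytype = {k: int(v)                        # MAP KEYS TERMINATED BY '#'
               for k, v in (item.split("#") for item in pay.split("|"))}
    return int(order_id), billtype, paytype
```

Each '#'-separated pair lands as one map entry, which is exactly what the earlier array-of-maps declaration prevented: there, every pair was treated as a separate single-entry map with null values.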