I have a table with a VARCHAR2(4000) column, where I get a problem inserting data:
ORA-12899: value too large for column "XXX"."YYY"."ZZZ" (actual: 2132, maximum: 2048)
When I run
select * from user_tab_columns where table_name = 'YYY'
I can see CHAR_LENGTH reported as 2048, but other than that I have no clue why it would cut off there.
CHARACTER_SET_NAME is CHAR_CS, and the content is mostly base64 encoded. Any clues how to overcome this problem?
Regards
Update:
Here's the full user_tab_columns row for the column:
TABLE_NAME            YYY
COLUMN_NAME           ZZZ
DATA_TYPE             VARCHAR2
DATA_TYPE_MOD         <null>
DATA_TYPE_OWNER       <null>
DATA_LENGTH           4,000
DATA_PRECISION        <null>
DATA_SCALE            <null>
NULLABLE              Y
COLUMN_ID             7
DEFAULT_LENGTH        <null>
DATA_DEFAULT          <null>
NUM_DISTINCT          15
LOW_VALUE             41 42 43 44 45 46 47
HIGH_VALUE            4d 49 49 46 75 7a 43 43 42 4b 4f 67 41 77 49 42 41 67 49 45 54 41 4c 4d 68 6a 41 4e 42 67 6b 71
DENSITY               0.06667
NUM_NULLS             662
NUM_BUCKETS           1
LAST_ANALYZED         2013-06-03
SAMPLE_SIZE           929
CHARACTER_SET_NAME    CHAR_CS
CHAR_COL_DECL_LENGTH  4,000
GLOBAL_STATS          NO
USER_STATS            NO
AVG_COL_LEN           1,394
CHAR_LENGTH           2,048
CHAR_USED             C
V80_FMT_IMAGE         NO
DATA_UPGRADED         YES
HISTOGRAM             NONE
The 2048 mark comes from the CHAR_LENGTH column, and CHAR_USED is C.
Update:
Managed to get the initial DDL
CREATE TABLE "XXX", "YYY"
(
...
"ZZZ" VARCHAR2 (2048 CHAR)
...
)
But I still have no clue how to adjust that figure.
Would a simple ALTER TABLE setting the column to VARCHAR2(3192 CHAR) help?
Your column is limited to both 2048 characters and 4000 bytes. Regardless of your character length semantics, ALL_TAB_COLUMNS.DATA_LENGTH is "Length of the column (in bytes)". AL32UTF8 can use up to 4 bytes per character, so DATA_LENGTH will be the number of characters * 4. Except it will never be larger than the Oracle limit of 4000.
For example:
create table test1(a varchar2(1 char));
create table test2(a varchar2(2 char));
create table test3(a varchar2(1000 char));
create table test4(a varchar2(4000 char));
select table_name, data_length
from all_tab_columns
where table_name like 'TEST_';
TABLE_NAME DATA_LENGTH
---------- -----------
TEST1 4
TEST2 8
TEST3 4000
TEST4 4000
You can fix your problem with alter table xxx.yyy modify zzz varchar2(4000 char);.
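If you want to double-check afterwards, you can re-query the same dictionary view the question used; after the ALTER, CHAR_LENGTH should report 4000 (this is just a verification sketch, run as the owner of the table):

select column_name, char_length, char_used, data_length
from   user_tab_columns
where  table_name = 'YYY'
and    column_name = 'ZZZ';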
Related
I have a table - Base_table
create table base_table (id number, factor_1 number, factor_2 number, factor_3 number, factor_4 number, total number, j_code varchar2(10));
insert into base_table values (1,10,52,5,32,140,'M1');
insert into base_table values (2,null,32,24,12,311,'M2');
insert into base_table values (3,12,null,53,null,110,'M3');
insert into base_table values (4,43,45,42,3,133,'M1');
insert into base_table values (5,432,24,null,68,581,'M2');
insert into base_table values (6,null,7,98,null,196,'M1');
ID  FACTOR_1  FACTOR_2  FACTOR_3  FACTOR_4  TOTAL  J_CODE
1   10        52        5         32        140    M1
2   null      32        24        12        311    M2
3   12        null      53        null      110    M3
4   43        45        42        3         133    M1
5   432       24        null      68        581    M2
6   null      7         98        null      196    M1
I need to insert this data into another table (FCT_T) based on certain criteria.
Also, I am trying to avoid using UNPIVOT, as there are several other columns that I need to insert and manage as part of the insert.
create table fct_t (id number, p_code varchar2(21), p_value number);
Logic to use -
The values below are not part of a table, but need to be used (hard-coded) in the logic/criteria (perhaps CASE statements) -
M_VAL  FACT_1_CODE  FACT_2_CODE  FACT_3_CODE  FACT_4_CODE
M1     R1           R2           R3           R4
M2     R21          R65          R6           R245
M3     R1           R01          R212         R365
What I need is something similar (or any better approach available) -
insert into fct_t
select id,
       case when factor_1 > 0 and j_code = 'M1' then 'R1' end,
       factor_1
from base_table;
So far I have not been able to figure out how to move the factor columns into rows, given that an ID can produce anywhere from 1 to 4 rows based on the criteria.
Appreciate any help here.
Partial final/expected output (FCT_T) -
ID  P_CODE  P_VALUE
1   R1      10
1   R2      52
1   R3      5
1   R4      32
2   R65     32
2   R6      24
2   R245    12
You can join the table to your codes and then UNPIVOT to convert columns into rows:
INSERT INTO fct_t (id, p_code, p_value)
WITH codes (M_VAL, FACT_1_CODE, FACT_2_CODE, FACT_3_CODE, FACT_4_CODE) AS (
SELECT 'M1', 'R1', 'R2', 'R3', 'R4' FROM DUAL UNION ALL
SELECT 'M2', 'R21', 'R65', 'R6', 'R245' FROM DUAL UNION ALL
SELECT 'M3', 'R1', 'R01', 'R212', 'R365' FROM DUAL
)
SELECT id, p_code, p_value
FROM base_table b
INNER JOIN codes c
ON (b.j_code = c.m_val)
UNPIVOT (
(p_code, p_value)
FOR factor IN (
(fact_1_code, factor_1) AS 1,
(fact_2_code, factor_2) AS 2,
(fact_3_code, factor_3) AS 3,
(fact_4_code, factor_4) AS 4
)
)
WHERE p_value IS NOT NULL;
db<>fiddle here
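Since the question mentions wanting to avoid UNPIVOT, roughly the same result can also be produced by cross joining to a small row generator and picking the code/value pairs with CASE. This is only a sketch against the sample DDL above (the codes CTE mirrors the hard-coded mapping; it is not a tested drop-in):

INSERT INTO fct_t (id, p_code, p_value)
WITH codes (m_val, fact_1_code, fact_2_code, fact_3_code, fact_4_code) AS (
  SELECT 'M1', 'R1',  'R2',  'R3',   'R4'   FROM dual UNION ALL
  SELECT 'M2', 'R21', 'R65', 'R6',   'R245' FROM dual UNION ALL
  SELECT 'M3', 'R1',  'R01', 'R212', 'R365' FROM dual
)
SELECT b.id,
       CASE n.rn WHEN 1 THEN c.fact_1_code WHEN 2 THEN c.fact_2_code
                 WHEN 3 THEN c.fact_3_code WHEN 4 THEN c.fact_4_code END AS p_code,
       CASE n.rn WHEN 1 THEN b.factor_1 WHEN 2 THEN b.factor_2
                 WHEN 3 THEN b.factor_3 WHEN 4 THEN b.factor_4 END AS p_value
FROM base_table b
JOIN codes c ON c.m_val = b.j_code
CROSS JOIN (SELECT level AS rn FROM dual CONNECT BY level <= 4) n
WHERE CASE n.rn WHEN 1 THEN b.factor_1 WHEN 2 THEN b.factor_2
                WHEN 3 THEN b.factor_3 WHEN 4 THEN b.factor_4 END IS NOT NULL;

The final WHERE clause keeps the same NULL filtering as the UNPIVOT version, so an ID only produces rows for the factors it actually has.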
I'm trying to write a PL/SQL block that prints random integers in two separate columns.
It will produce 1000 rows total (a random 1 - 50 in each column).
What I need to figure out is how, after this has been done, to replace the second column with either "yes" or "no" depending on whether it matches the first column.
Such as:
Col A Col B
10 NO(42)
32 NO(12)
25 YES(25)
And so on.
This is my code:
CREATE TABLE table
(random_num INTEGER NOT NULL,
match INTEGER NOT NULL);
Declare
CURSOR cur_ IS
(Select
random_num,
match
from table);
Begin
FOR rec_ IN 1..1000
LOOP
INSERT INTO "table" (random_num,match) VALUES (DBMS_RANDOM.VALUE(1,50),DBMS_RANDOM.VALUE(1,50));
END LOOP;
END;
Now this works and I get 1000 rows with random numbers in each column, but I need to implement this select:
SELECT random_num, CASE WHEN random_num = match THEN 'yes' ELSE 'no' END as match
FROM table
Into the loop somehow. Any takers on how I can do this?
There's something wrong in what you said: you can't put 'yes' (a string) into an INTEGER datatype column.
This makes more sense:
Sample table:
SQL> CREATE TABLE test
2 (
3 random_num_1 INTEGER NOT NULL,
4 random_num_2 INTEGER NOT NULL,
5 match VARCHAR2 (3) NOT NULL
6 );
Table created.
PL/SQL block: use local variables to store the random numbers; then it is easy to compare them.
SQL> DECLARE
2 val1 NUMBER;
3 val2 NUMBER;
4 BEGIN
5 FOR i IN 1 .. 10 --> change it to 1000
6 LOOP
7 val1 := DBMS_RANDOM.VALUE (1, 50);
8 val2 := DBMS_RANDOM.VALUE (1, 50);
9
10 INSERT INTO test (random_num_1, random_num_2, match)
11 VALUES (val1,
12 val2,
13 CASE WHEN val1 = val2 THEN 'yes' ELSE 'no' END);
14 END LOOP;
15 END;
16 /
PL/SQL procedure successfully completed.
Result:
SQL> SELECT * FROM test;
RANDOM_NUM_1 RANDOM_NUM_2 MAT
------------ ------------ ---
45 31 no
40 48 no
43 27 no
49 41 no
6 38 no
5 18 no
18 35 no
15 34 no
11 19 no
37 39 no
10 rows selected.
SQL>
First, if you want to produce random integers from 1 to 50 that are equally distributed, you must be careful.
DBMS_RANDOM.VALUE(1, 50) returns a decimal number greater than or equal to 1 and less than 50.
example
select DBMS_RANDOM.VALUE (1, 50) col from dual;
COL
----------
30,4901593
Casting the result to the INTEGER type performs rounding, so you will see all the numbers, but 1 and 50 will appear only half as frequently as the other numbers.
So a better way to get random integers 1 .. 50 is 1 + trunc(50*DBMS_RANDOM.VALUE)
VALUE without parameters returns [0,1)
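To see the skew for yourself, you can bucket a large sample of each expression and compare the counts (a throwaway check; 100000 is an arbitrary sample size):

-- counts per value when rounding: 1 and 50 show up roughly half as often
select val, count(*) cnt
from (select round(dbms_random.value(1, 50)) val
      from dual connect by level <= 100000)
group by val
order by val;

-- counts per value with the trunc formula: roughly uniform
select val, count(*) cnt
from (select 1 + trunc(50 * dbms_random.value) val
      from dual connect by level <= 100000)
group by val
order by val;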
Also, as a general rule, if you do not need PL/SQL, do not use it:
create table tab1 as
select 1 + trunc(50*DBMS_RANDOM.VALUE) col1, 1 + trunc(50*DBMS_RANDOM.VALUE) col2
from dual connect by level <= 10 /* increase to as many rows as needed */;
and add the MATCH column as virtual
alter table tab1
add (match varchar2(3) generated always as (
case when col1 = col2 then 'YES' else 'NO' end ) virtual);
COL1 COL2 MAT
---------- ---------- ---
33 6 NO
26 28 NO
35 22 NO
30 27 NO
17 45 NO
31 4 NO
11 21 NO
2 48 NO
35 25 NO
39 15 NO
I have a table with 12 columns:
table1:
1    2  3    4    5    6  7    8    9      10  11   12
abc  1  000  aaa  zzz  2  234  OOO  00001  01  123  214
def  2  023  bbb  yyy  4  345  PPP  00002  02  133  224
ghi  3  011  ccc  xxx  6  456  QQQ  00003  03  143  234
jkl  4  112  ddd  www  8  567  RRR  00004  04  153  244
I would like to use the 3rd column's data in a loop and fetch the 'best match' data from another table.
table2:
1   2    3    4
0   777  676  america
00  888  878  england
01  999  989  france
02  666  656  germany
3rd column data will be trimmed in the loop until a match in table2 is fetched.
first row:
iter 1: table1 row1 col3=000 -- no match in table
iter 2: table1 row1 col3=00 -- return england, replace table1 row1 col12=214 with 'england'
updated row: abc,1,000,aaa,zzz,2,234,OOO,00001,01,123,england
second row:
iter 1: table1 row2 col3=023 -- no match in table
iter 2: table1 row2 col3=02 -- return germany, replace table1 row2 col12=224 with 'germany'
updated row: def,2,023,bbb,yyy,4,345,PPP,00002,02,133,germany
What you will need to do is create a procedure, then within the procedure declare a cursor as well as a variable_c_row cursor_name%ROWTYPE.
Within the procedure, this will be the contents:
OPEN cursor_name;
FETCH cursor_name INTO variable_c_row;
WHILE cursor_name%FOUND LOOP
    -- i is a NUMBER variable; match and l_countryname are VARCHAR2 variables
    i := 0;
    l_countryname := NULL;
    match := variable_c_row.col3;
    WHILE length(match) > 0 AND l_countryname IS NULL LOOP
        BEGIN
            SELECT col4 INTO l_countryname FROM table2 WHERE col1 = match;
            -- assuming col1 identifies the row in table1 to update
            UPDATE table1 SET col12 = l_countryname WHERE col1 = variable_c_row.col1;
        EXCEPTION WHEN no_data_found THEN NULL;
        END;
        i := i + 1;
        match := substr(variable_c_row.col3, 1, length(variable_c_row.col3) - i);
    END LOOP;
    FETCH cursor_name INTO variable_c_row;
END LOOP;
CLOSE cursor_name;
Since the question had no DDL or DML, the most I can provide is a broad answer, which has not been tested.
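For reference, the declarations the answer refers to could look something like this (a sketch only; table1/table2 and their column names are taken from the question's description and may not match the real schema):

DECLARE
    CURSOR cursor_name IS
        SELECT * FROM table1;
    variable_c_row   cursor_name%ROWTYPE;
    i                NUMBER;
    match            VARCHAR2(4000);
    l_countryname    table2.col4%TYPE;
BEGIN
    -- loop body as shown above
    NULL;
END;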
We are migrating a DB from Oracle 11g to 19c and are facing an issue with an external table. The old and new DBs have exactly the same table definition and point to the same file (the DBs run on different hosts but point to the same qtree). The old DB can query the file without errors, but the new one rejects all rows with:
KUP-04023: field start is after end of record
Tables have below config:
CREATE TABLE TEST
(
AA VARCHAR2 (40 BYTE),
BB VARCHAR2 (2 BYTE),
CC VARCHAR2 (3 BYTE),
DD VARCHAR2 (12 BYTE)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TEST_DIRECTORY
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
BADFILE TEST_DIRECTORY : 'TEST.bad'
LOGFILE TEST_DIRECTORY : 'TEST.log'
FIELDS
TERMINATED BY '\t' LTRIM REJECT ROWS WITH ALL NULL FIELDS
(AA,
BB,
CC,
DD))
LOCATION (TEST_DIRECTORY:'TEST.dat'))
REJECT LIMIT UNLIMITED;
Test data (replace ^I with a tab character):
NAME1^I0^I ^IUK
NAME2^I0^I ^IUS
When I removed LTRIM, all the data is read on the new DB (but we need to keep LTRIM, as the input files contain unnecessary spaces). I've noticed that one field has a value of a single space and it looks to be causing the issue, but why only on the new database? Any ideas what the reason is or how to fix it easily?
NLS db/session parameters are the same on both databases... but maybe there is some global parameter which could cause this issue?
Manually updated test data which works on both DBs (the whitespace in the third column replaced with X):
NAME1^I0^IX^IUK
NAME2^I0^IX^IUS
DEMO:
The table below was created on both 11g and 19c:
CREATE TABLE TEST
(
AA VARCHAR2 (40 BYTE),
BB VARCHAR2 (2 BYTE),
CC VARCHAR2 (3 BYTE),
DD VARCHAR2 (12 BYTE)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TEST_DIRECTORY
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
BADFILE TEST_DIRECTORY : 'TEST.bad'
LOGFILE TEST_DIRECTORY : 'TEST.log'
FIELDS
TERMINATED BY '\t' LTRIM
REJECT ROWS WITH ALL NULL FIELDS
(AA,
BB,
CC ,
DD))
LOCATION (TEST_DIRECTORY:'TEST.dat'))
REJECT LIMIT UNLIMITED;
Both tables source the same file TEST.dat (data delimited by tab, which cat -A shows as the two characters ^I):
$ cat -A TEST.dat
NAME1^I0^I ^IUK$
NAME2^I0^I ^IUS$
Querying on 11g:
SQL> SELECT * FROM TEST;
AA BB CC DD
---------------------------------------- -- --- ------------
NAME1 0 UK
NAME2 0 US
SQL> SELECT dump(CC) FROM TEST;
DUMP(CC)
--------------------------------------------------------------------------------
NULL
NULL
Querying on 19c:
SQL> SELECT * FROM TEST;
no rows selected
TEST.log shows the following after running the query on 19c:
Bad File: TEST.bad
Field Definitions for table TEST
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Reject rows with all null fields
Fields in Data Source:
AA CHAR (255)
Terminated by " "
Trim whitespace from left
BB CHAR (255)
Terminated by " "
Trim whitespace from left
CC CHAR (255)
Terminated by " "
Trim whitespace from left
DD CHAR (255)
Terminated by " "
Trim whitespace from left
KUP-04021: field formatting error for field DD
KUP-04023: field start is after end of record
KUP-04101: record 1 rejected in file /home/fff/TEST.dat
KUP-04021: field formatting error for field DD
KUP-04023: field start is after end of record
KUP-04101: record 2 rejected in file /home/fff/TEST.dat
Then I recreated the tables on both DBs, just without LTRIM:
CREATE TABLE TEST
(
AA VARCHAR2 (40 BYTE),
BB VARCHAR2 (2 BYTE),
CC VARCHAR2 (3 BYTE),
DD VARCHAR2 (12 BYTE)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TEST_DIRECTORY
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
BADFILE TEST_DIRECTORY : 'TEST.bad'
LOGFILE TEST_DIRECTORY : 'TEST.log'
FIELDS
TERMINATED BY '\t'
REJECT ROWS WITH ALL NULL FIELDS
(AA,
BB,
CC ,
DD))
LOCATION (TEST_DIRECTORY:'TEST.dat'))
REJECT LIMIT UNLIMITED;
Querying on new table in 11g:
SQL> SELECT * FROM TEST;
AA BB CC DD
---------------------------------------- -- --- ------------
NAME1 0 UK
NAME2 0 US
SQL> SELECT dump(CC) FROM TEST;
DUMP(CC)
--------------------------------------------------------------------------------
Typ=1 Len=1: 32
Typ=1 Len=1: 32
Querying on new table in 19c:
SQL> SELECT * FROM TEST;
AA BB CC DD
---------------------------------------- -- --- ------------
NAME1 0 UK
NAME2 0 US
SQL> SELECT dump(CC) FROM TEST;
DUMP(CC)
--------------------------------------------------------------------------------
Typ=1 Len=1: 32
Typ=1 Len=1: 32
Let me try to reproduce your issue in my own environment.
Using Oracle 19c on Red Hat Linux 7.2
SQL> select version from v$instance ;
VERSION
-----------------
19.0.0.0.0
Demo
Update: delimiter is tab
Content of the file
$ cat -A TEST.dat
NAME1^I0^I ^IUK$
NAME2^I0^I ^IUS$
External Table
SQL> drop table TEST_EXTERNAL_TABLE ;
Table dropped.
SQL> CREATE TABLE TEST_EXTERNAL_TABLE
2 (
3 AA VARCHAR2 (40 BYTE),
4 BB VARCHAR2 (2 BYTE),
5 CC VARCHAR2 (3 BYTE),
6 DD VARCHAR2 (12 BYTE)
7 )
8 ORGANIZATION EXTERNAL
9 (
10 TYPE ORACLE_LOADER
11 DEFAULT DIRECTORY DIR_TEST
12 ACCESS PARAMETERS (
13 RECORDS DELIMITED BY NEWLINE
14 BADFILE DIR_TEST : 'TEST.bad'
15 LOGFILE DIR_TEST : 'TEST.log'
16 FIELDS TERMINATED BY '\t' NOTRIM
17 REJECT ROWS WITH ALL NULL FIELDS
18 (AA,
19 BB,
20 CC,
21 DD))
22* LOCATION (DIR_TEST:'TEST.dat'))
SQL> /
Table created.
SQL> select * from TEST_EXTERNAL_TABLE ;
AA BB CC DD
---------------------------------------- -- --- ------------
NAME1 0 UK
NAME2 0 US
SQL> select dump(cc) from TEST_EXTERNAL_TABLE ;
DUMP(CC)
--------------------------------------------------------------------------------
Typ=1 Len=1: 32
Typ=1 Len=1: 32
In my case I am able to load, but the blank spaces remain in the field, which is the expected behaviour of NOTRIM vs LDRTRIM.
LDRTRIM is used to provide compatibility with SQL*Loader trim
features. It is the same as NOTRIM except in the following cases:
If the field is not a delimited field, then spaces will be trimmed
from the right. If the field is a delimited field with OPTIONALLY
ENCLOSED BY specified, and the optional enclosures are missing for a
particular instance, then spaces will be trimmed from the left.
Doing the same with LDRTRIM
SQL> drop table TEST_eXTERNAL_TABLE;
Table dropped.
SQL> l
1 CREATE TABLE TEST_EXTERNAL_TABLE
2 (
3 AA VARCHAR2 (40 BYTE),
4 BB VARCHAR2 (2 BYTE),
5 CC VARCHAR2 (3 BYTE),
6 DD VARCHAR2 (12 BYTE)
7 )
8 ORGANIZATION EXTERNAL
9 (
10 TYPE ORACLE_LOADER
11 DEFAULT DIRECTORY DIR_TEST
12 ACCESS PARAMETERS (
13 RECORDS DELIMITED BY NEWLINE
14 BADFILE DIR_TEST : 'TEST.bad'
15 LOGFILE DIR_TEST : 'TEST.log'
16 FIELDS TERMINATED BY '\t' LDRTRIM
17 REJECT ROWS WITH ALL NULL FIELDS
18 (AA,
19 BB,
20 CC,
21 DD))
22* LOCATION (DIR_TEST:'TEST.dat'))
SQL> /
Table created.
SQL> select * from TEST_EXTERNAL_TABLE ;
AA BB CC DD
---------------------------------------- -- --- ------------
NAME1 0 UK
NAME2 0 US
SQL> select dump(cc) from TEST_EXTERNAL_TABLE ;
DUMP(CC)
--------------------------------------------------------------------------------
Typ=1 Len=1: 32
Typ=1 Len=1: 32
SQL>
If you use LTRIM it does not work, because the whitespace ends up on the right side of the field, as the field is otherwise empty. That is the default behaviour; at least since 12c, that is how it works and how it should work.
SQL> drop table TEST_EXTERNAL_TABLE ;
Table dropped.
SQL> CREATE TABLE TEST_EXTERNAL_TABLE
     (
       AA VARCHAR2 (40 BYTE),
       BB VARCHAR2 (2 BYTE),
       CC VARCHAR2 (3 BYTE),
       DD VARCHAR2 (12 BYTE)
     )
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY DIR_TEST
       ACCESS PARAMETERS (
         RECORDS DELIMITED BY NEWLINE
         BADFILE DIR_TEST : 'TEST.bad'
         LOGFILE DIR_TEST : 'TEST.log'
         FIELDS TERMINATED BY '\t' LTRIM
         REJECT ROWS WITH ALL NULL FIELDS
         (AA,
          BB,
          CC,
          DD))
       LOCATION (DIR_TEST:'TEST.dat'))
     REJECT LIMIT UNLIMITED;
Table created.
SQL> select * from TEST_EXTERNAL_TABLE ;
no rows selected
Now with RTRIM it works as expected, because the whitespace in the field is trimmed from right to left.
SQL> drop table TEST_EXTERNAL_TABLE ;
Table dropped.
SQL> CREATE TABLE TEST_EXTERNAL_TABLE
     (
       AA VARCHAR2 (40 BYTE),
       BB VARCHAR2 (2 BYTE),
       CC VARCHAR2 (3 BYTE),
       DD VARCHAR2 (12 BYTE)
     )
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY DIR_TEST
       ACCESS PARAMETERS (
         RECORDS DELIMITED BY NEWLINE
         BADFILE DIR_TEST : 'TEST.bad'
         LOGFILE DIR_TEST : 'TEST.log'
         FIELDS TERMINATED BY '\t' RTRIM
         REJECT ROWS WITH ALL NULL FIELDS
         (AA,
          BB,
          CC,
          DD))
       LOCATION (DIR_TEST:'TEST.dat'))
     REJECT LIMIT UNLIMITED;
Table created.
SQL> select * from TEST_EXTERNAL_TABLE ;
AA BB CC DD
---------------------------------------- -- --- ------------
NAME1 0 UK
NAME2 0 US
My advice: use LDRTRIM, or even better, avoid the whitespace altogether if that is an option. Regarding your test on 11g, that is quite an old version and the behaviour is probably the consequence of a bug, although I could not find any reported one explaining it.
It's not LTRIM, it's LDRTRIM.
SQL> create table et
2 ( c1 varchar2(16),
3 c2 varchar2(8),
4 c3 varchar2(8),
5 c4 varchar2(8),
6 c5 varchar2(8),
7 c6 varchar2(8),
8 c7 varchar2(8)
9 )
10 ORGANIZATION EXTERNAL
11 ( TYPE ORACLE_LOADER
12 DEFAULT DIRECTORY temp
13 ACCESS PARAMETERS
14 ( RECORDS DELIMITED BY NEWLINE
15 BADFILE temp: 'TEST_FILE.bad'
16 LOGFILE temp: 'TEST_FILE.log'
17 FIELDS TERMINATED BY X'20A7' LTRIM
18 REJECT ROWS WITH ALL NULL FIELDS
19 (
20 c1,c2,c3,c4,c5,c6,c7
21 ) )
22 LOCATION (temp:'TEST_FILE.dat')
23 )
24 REJECT LIMIT UNLIMITED;
Table created.
SQL>
SQL> select * from et;
C1 C2 C3 C4 C5 C6 C7
---------------- -------- -------- -------- -------- -------- --------
31234569999999 0 A X 0 Z GGGG
SQL>
SQL> drop table et;
Table dropped.
SQL>
SQL> create table et
2 ( c1 varchar2(16),
3 c2 varchar2(8),
4 c3 varchar2(8),
5 c4 varchar2(8),
6 c5 varchar2(8),
7 c6 varchar2(8),
8 c7 varchar2(8)
9 )
10 ORGANIZATION EXTERNAL
11 ( TYPE ORACLE_LOADER
12 DEFAULT DIRECTORY temp
13 ACCESS PARAMETERS
14 ( RECORDS DELIMITED BY NEWLINE
15 BADFILE temp: 'TEST_FILE.bad'
16 LOGFILE temp: 'TEST_FILE.log'
17 FIELDS TERMINATED BY X'20A7' LDRTRIM
18 REJECT ROWS WITH ALL NULL FIELDS
19 (
20 c1,c2,c3,c4,c5,c6,c7
21 ) )
22 LOCATION (temp:'TEST_FILE.dat')
23 )
24 REJECT LIMIT UNLIMITED;
Table created.
SQL>
SQL> select * from et;
C1 C2 C3 C4 C5 C6 C7
---------------- -------- -------- -------- -------- -------- --------
31234569999999 0 A X 0 GGGG
31234569999999 0 A X 0 Z GGGG
I have a table structure
ID Col_1 col_2 col_3 col_4
1 34 23 45 32
2 20 19 67 18
3 40 10 76 86
Here I want the max value from col_1, col_2, col_3, col_4, so my output looks like:
ID Col_1 col_2 col_3 col_4 max
1 34 23 45 32 45
2 20 19 67 18 67
3 40 10 76 86 86
Any help would be much appreciated.
Use a Modified Java Script Value step with the following code:
var max = Math.max(col_1,col_2,col_3,col_4);
You can use the Memory Group By or Group By steps in Pentaho, with "Maximum" as the aggregation method, based on your grouping id.
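If the data also lives in an Oracle table (as elsewhere in this thread), plain SQL can produce the same result with GREATEST; a sketch, assuming the table is called table1 and the factor columns are never NULL:

select id, col_1, col_2, col_3, col_4,
       greatest(col_1, col_2, col_3, col_4) as max_val
from   table1;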