I have a control file with the following options...
OPTIONS(DIRECT=TRUE,ROWS=100,BINDSIZE=209700000,readsize=209700000)
load data
infile 'd:\test.DH'
"str '\n'"
append
into table name
FIELDS TERMINATED by '!'
OPTIONALLY ENCLOSED by '"'
trailing nullcols
A sample of records; the field terminator is "!":
9334!376!15950!9109!0!29109!109!0!!10003!05.02.2015 03:51:27!05.02.2015 03:51:46!05.02.2015 03:51:27!0!0!0!S!00c08309ed178b3f!005683540!6829109!079015!0!0!!!0!F!299!!!0!0!!0!-1, 0, -1, 1423075906663, 0, 0, 0!{, 1, 24307, 3000-12-31 23:59:59, 0}!!{60200103, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0}!!!!!!!!!!!!!!!!=2, =0, =0, =0, =255, =829109, =510, , =1!1!00067!!!F,079015,,2993007,290009,5,02993007,005683540,6,6829109,,,,010006743081,0,10006743081,5,F,,,,290009,2079015,2079015,829109,93007,079015,2079015,829109,0,,0,07000,,,0,,,,,'00c08309ed178b3fH',,,,,,,0,0,0,0,0,0,0,299,0,,a2040000005b6424,7205,36899550,
338!8376!11230!333777!0!33777!333777!0!!10003!05.02.2015 03:51:04!05.02.2015 03:51:14!05.02.2015 03:51:04!0!0!0!S!6d!004382577!3333777!3407582!0!0!!!0!F!299!!!0!0!!0!-1, 0, -1, 1423075874285, 0, 0, 0!{, 1, 24927, 3000-12-31 23:59:59, 0}!!{60200103, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0}!!!!!!!!!!!!!!!!=2, =0, =0, =0, =255, =33777, =600, , =1!10595!02020!!!F,3407582,,993001,20000,5,993001,004382577,6,3333777,,,,010595,0,0202010595,5,F,,,,220000,407582,407582,33777,993001,,407582,03407582,3333777,0,,0,5874000,,,0,,,,,'6dH',,,,,,,0,0,0,0,0,0,0,299,0,_1281,a2820000005d213d,7205,36899550,
When I run this I get the following error:
Record 1: Rejected - Error on table name, column logs.
Field in data file exceeds maximum length
This field is the last column of the record. The column is 3000 bytes. I know it's not an issue with the length of the record, because I tried importing the same file with [b]Navicat[/b] and it loaded everything without any issue. There is something wrong with [b]str[/b], and it tries to load all the data into column [b]logs[/b].
I tried
"str '\t'"
"str '\r'"
"str '\n'"
and none of them helped me. Thanks for your time.
Thanks, I found the issue: I had to specify the number of characters for the field in the control file, i.e.:
OPTIONS(DIRECT=TRUE,ROWS=100,BINDSIZE=209700000,readsize=209700000)
load data
infile 'd:\test.DH'
"str '\n'"
append
into table name
FIELDS TERMINATED by '!'
OPTIONALLY ENCLOSED by '"'
trailing nullcols
(log char(1500))
Thanks everyone for reading :D
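For anyone hitting the same error: when a field in the control file has no datatype or length, SQL*Loader treats it as CHAR with a default maximum of 255 bytes, so any delimited value longer than that is rejected with "Field in data file exceeds maximum length" no matter how large the table column is. Declaring an explicit length raises that buffer. A minimal sketch of the field-list entry (the column name logs comes from the error message above; pick a length at least as large as the longest value, e.g. the 3000 bytes mentioned earlier):
(
..., -- other columns as before
logs CHAR(3000) -- explicit length; without it SQL*Loader assumes CHAR(255)
)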
Related
I am facing the "Field in data file exceeds maximum length" SQL*Loader error while loading data into Oracle.
//Below control file is used for sqlldr
// Control File: Product_Routing_26664.ctl
//Data File: phxcase1.pr
//Bad File: Product_Routing_26664.bad
LOAD DATA
APPEND INTO TABLE PEGASUSDB_SCHEMA.PRODUCT_ROUTING
FIELDS TERMINATED BY "^"
TRAILING NULLCOLS
(
OID_INST
,SEQ
,ROUTING_TYPE CHAR "(CASE WHEN trim(:ROUTING_TYPE) IS NULL THEN ' ' ELSE trim(:ROUTING_TYPE) END)"
,ODPD_KEY
,PROD_OFFSET
,EFF_DAYS_Z CHAR "(CASE WHEN trim(:EFF_DAYS_Z) IS NULL THEN ' ' ELSE trim(:EFF_DAYS_Z) END)"
,NETWORK_RTG_ID "substr(trim(:NETWORK_RTG_ID), 3, 26)"
,WT0
,WT1
,WT2
,WT3
,WT4
,WT5
,WT6
,WT7
,WT8
,WT9
,WT10
,WT11
,WT12
,WT13
,WT14
,WT15
,WT16
,WT17
,WT18
,WT19
,WT20
,WT21
,WT22
,WT23
,WT24
,WT25
,WT26
,WT27
,WT28
,WT29
,WT30
,WT31
,WT32
,WT33
,WT34
,WT35
,PCS0
,PCS1
,PCS2
,PCS3
,PCS4
,PCS5
,PCS6
,PCS7
,PCS8
,PCS9
,PCS10
,PCS11
,PCS12
,PCS13
,PCS14
,PCS15
,PCS16
,PCS17
,PCS18
,PCS19
,PCS20
,PCS21
,PCS22
,PCS23
,PCS24
,PCS25
,PCS26
,PCS27
,PCS28
,PCS29
,PCS30
,PCS31
,PCS32
,PCS33
,PCS34
,PCS35
,PR_TYPE CHAR "(CASE WHEN trim(:PR_TYPE) IS NULL THEN ' ' ELSE trim(:PR_TYPE) END)"
,PRODUCT_ROUTING_OID "PRODUCT_ROUTING_SQ.nextval"
,COMMON_CASE_OID CONSTANT "1"
,NETWORK_RTG_OID "(select NETWORK_RTG_OID from NETWORK_RTG where NETWORK_RTG_ID = substr(TRIM(:NETWORK_RTG_ID), 3, 26) and COMMON_CASE_OID = 1)"
)
Error: Record 2: Rejected - Error on table
PEGASUSDB_SCHEMA.PRODUCT_ROUTING, column OID_INST. Field in data file
exceeds maximum length
I have tried changing the OID_INST column to OID_INST CHAR(4000) but it shows the same error.
Please help me in resolving this.
I have an SQL file with strings like this:
(17, 14, '2015-01-20 10:38:40', 211, 'Just text\n\nFrom: Support <support#domain.com>\n Send: 20 Jan 2015 year. 10:33\n To: Admin\n Theme: [TST #0000014] Just text \n\nJust text: Text\n Test text test text\n\nJust text:\n Text\n\n-- \n Test\n Text.\n Many text words 0.84.2', 0, 2);
I want to remove all the text between the symbols \n\ and ', 0, 2);
I want to get this result:
(17, 14, '2015-01-20 10:38:40', 211, 'Just text', 0, 2);
How can I do it with sed?
I tried to use this example: cat file | sed 's/<b>.*</b>//g'. I changed <b> to \n\ and </b> to ', 0, 2); but it doesn't work, I get an error in the console.
Thanks in advance!
You can try this command
sed 's/\\n\\.*\('\'', 0, 2);\)/\1/g' FileName
Output :
(17, 14, '2015-01-20 10:38:40', 211, 'Just text', 0, 2);
You have to escape the single quotes (as '\'') as well as the backslashes (as \\). The \('\'', 0, 2);\) part captures the trailing ', 0, 2); so that the replacement \1 puts it back after everything from \n\ onwards has been removed.
If you can find it, you can replace it with nothing.
So, depending on what you mean by \n and what you need to escape, you want something like sed 's/\\n\\.*'\''//g' (the '\'' is how you get a literal single quote inside a single-quoted shell argument).
Obviously, take care that this is really what you want to replace on every line. It might be worth searching for the target \\n\\.*' first, to make sure it doesn't accidentally grab too much on an unexpected line.
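If it helps, a minimal way to preview what that pattern would touch before changing anything, assuming the same FileName as in the answer above (-n together with /p prints only the matching lines and edits nothing):
sed -n '/\\n\\.*'\''/p' FileName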
I have a txt file of records:
firstname lastname dob ssn status1 status2 status3 status4 firstname lastname dob ...
I can get this into an array:
tokens[0] = firstname
...
tokens[8] = firstname (of record 2).
tokens[9] = lastname (of record 2) and so on.
I want to iterate over the tokens array in steps so I can say:
record1 = tokens[index] + tokens[index+1] + tokens[index+2] etc.
and the step (in the above example 8) would handle the records:
record2, record3, and so on:
step 0: index is 0
step 1: index is 8 (the step size is 8)
etc.
I guess I should say these records are coming from a txt file that I called .split on:
file = File.open(ARGV[0], 'r')
line = ""
while !file.eof?
line = file.readline
end
##knowing a set is how many fields, do it over and over again.
tokens = line.split(" ")
Does this help?
tokens = (1..80).to_a #just an array
tokens.each_slice(8).with_index {|slice, index|p index; p slice}
#0
#[1, 2, 3, 4, 5, 6, 7, 8]
#1
#[9, 10, 11, 12, 13, 14, 15, 16]
#...
Using each_slice you could also assign variables to your fields inside the block:
tokens.each_slice(8) { |firstname, lastname, dob, ssn, status1, status2, status3, status4|
puts "firstname: #{firstname}"
}
I'm trying to copy data from a table on a Sybase server into the same table on an Oracle server (Oracle 11g).
I thought it would be easier to do it with ColdFusion, because the two database servers are different.
Unfortunately, I got the following error from Oracle. I don't think my syntax is wrong, because all the commas are there; there is no missing comma as the error claims. I think it may be due to the date column, which uses the DATE datatype.
Here is the error:
Error Executing Database Query.
[Macromedia][Oracle JDBC Driver][Oracle]ORA-00917: missing comma
The error occurred in C:\Inetpub\wwwroot\test.cfm: line 65
63 : #um_gs_dnrcnt_cfm_pp#,
64 : #um_gs_amt_cfm_pg_pp#,
65 : #um_gs_dnrcnt_cfm_pg_pp#)
66 : </cfquery>
67 : </cfoutput>
--------------------------------------------------------------------------------
SQLSTATE HY000
SQL INSERT INTO um_gift_sum (um_gs_fyr, um_gs_inst, um_gs_dept,
um_gs_dt_of_record, um_gs_fund_type,
um_gs_dnr_type,
um_gs_amt_cash, um_gs_dnrcnt_cash, um_gs_amt_pl,
um_gs_dnrcnt_pl, um_gs_amt_pp, um_gs_dnrcnt_pp,
um_gs_amt_pp_prior, um_gs_dnrcnt_pp_prior,
um_gs_amt_gik, um_gs_dnrcnt_gik,
um_gs_amt_pg_cash,
um_gs_dnrcnt_pg_cash, um_gs_amt_pg_pl,
um_gs_dnrcnt_pg_pl, um_gs_amt_pg_pp,
um_gs_dnrcnt_pg_pp, um_gs_amt_gft_mtch,
um_gs_dnrcnt_gft_mtch, um_gs_amt_cfm_pp,
um_gs_dnrcnt_cfm_pp, um_gs_amt_cfm_pg_pp,
um_gs_dnrcnt_cfm_pg_pp)
VALUES('1995', 'AB', 'MAA', 1995-01-31 00:00:00.0, '1', 'FR', 100.0000, 0,
0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0,
0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0)
Here is my insert statement:
<cfquery name="x" datasource="SybaseDB">
SELECT TOP 10 * FROM um_sum
</cfquery>
<cfoutput query="x">
<cfquery name="Y" datasource="OracleDB">
INSERT INTO um_sum (um_gs_fyr,
um_gs_inst,
um_gs_dept,
um_gs_dt_of_record,
um_gs_fund_type,
um_gs_dnr_type,
etc,
um_gs_dnrcnt_cfm_pp,
um_gs_amt_cfm_pg_pp,
um_gs_dnrcnt_cfm_pg_pp)
VALUES('#um_gs_fyr#',
'#um_gs_inst#',
'#um_gs_dept#',
#um_gs_dt_of_record#, <---- this is the DATE column; I suspect this may be the problem?
'#um_gs_fund_type#',
'#um_gs_dnr_type#',
#um_gs_amt_cash#,
#um_gs_dnrcnt_cash#,
#um_gs_amt_pl#,
#um_gs_dnrcnt_pl#,
#um_gs_amt_pp#,
#um_gs_dnrcnt_pp#,
#um_gs_amt_pp_prior#,
#um_gs_dnrcnt_pp_prior#,
#um_gs_amt_gik#,
#um_gs_dnrcnt_gik#,
#um_gs_amt_pg_cash#,
#um_gs_dnrcnt_pg_cash#,
#um_gs_amt_pg_pl#,
#um_gs_dnrcnt_pg_pl#,
#um_gs_amt_pg_pp#,
#um_gs_dnrcnt_pg_pp#,
#um_gs_amt_gft_mtch#,
#um_gs_dnrcnt_gft_mtch#,
#um_gs_amt_cfm_pp#,
#um_gs_dnrcnt_cfm_pp#,
#um_gs_amt_cfm_pg_pp#,
#um_gs_dnrcnt_cfm_pg_pp#)
</cfquery>
</cfoutput>
This part is not surrounded with single quotes properly:
AA', 1995-01-31 00:00:00.0, '1'
Edit (based on a comment):
If the single quotes alone don't fix it, you can explicitly declare the date format with a to_date() function.
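A minimal sketch of what that VALUES entry could look like (this assumes ColdFusion's DateFormat/TimeFormat can render the Sybase value, and the to_date format mask has to match what they produce):
to_date('#DateFormat(um_gs_dt_of_record, "yyyy-mm-dd")# #TimeFormat(um_gs_dt_of_record, "HH:mm:ss")#', 'YYYY-MM-DD HH24:MI:SS'),
Alternatively, binding the value with <cfqueryparam value="#um_gs_dt_of_record#" cfsqltype="cf_sql_timestamp"> lets the JDBC driver handle the conversion and sidesteps the quoting problem entirely.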
I have an import script that imports well over 2,000 products, including their images. I run this script via the CLI because I feel this is the best way to go speed-wise, even though the same import script is also available and executable in the Magento admin as an extension. The script runs pretty well. Almost perfect! However, sometimes addToImageGallery malfunctions, and the result is some products showing "No Image" as the default product image, with the only other image not selected as a default at all. How do I mass-update all products to set the first image in each product's media gallery as the default 'base', 'image' and 'thumbnail' image(s)?
I found a couple of tricks for doing this (and more) at this link:
http://www.magentocommerce.com/boards/viewthread/59440/ (Thanks transio!)
However, for Magento 1.6.2.0 (which I use), the first SQL trick there (Trick 1 - Auto-set default base, thumb, small image to first image) needs a bit of modification.
On the second-to-last line there is an AND ev.attribute_id IN (70, 71, 72) part. It points to attribute IDs that are probably no longer the right ones in Magento 1.6.2.0. To fix this, using any MySQL query tool (phpMyAdmin or MySQL Query Browser), I took a look at the catalog_product_entity_varchar table. There should be entries like:
value_id, entity_type_id, attribute_id, store_id, entity_id, value
..
146649, 4, 116, 0, 1, '2'
146650, 4, 76, 0, 1, ''
146651, 4, 78, 0, 1, ''
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
146655, 4, 96, 0, 1, ''
146656, 4, 100, 0, 1, ''
146657, 4, 102, 0, 1, 'container2'
..
My money was on the group of three image paths as the likely replacements. So the resulting SQL should now be:
UPDATE catalog_product_entity_media_gallery AS mg,
catalog_product_entity_media_gallery_value AS mgv,
catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
AND mgv.position = 1;
So I committed to it, ran it, and... presto! All fixed! You might also want to wrap this in a transaction (a quick sketch follows), but that is outside the scope of this question.
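For reference, a minimal sketch of that transaction wrapper (this assumes the catalog tables are InnoDB; MyISAM tables would ignore the transaction):
START TRANSACTION;
UPDATE catalog_product_entity_media_gallery AS mg,
catalog_product_entity_media_gallery_value AS mgv,
catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81) # the three image-path attribute IDs found above
AND mgv.position = 1;
COMMIT; -- or ROLLBACK; if the affected row count looks wrong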
Well, this is the fix that worked for me so far! If there are any more out there, please share!
There was:
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
So it should be:
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
instead of:
AND ev.attribute_id IN (78, 80, 81) # <-- attribute IDs updated here
In case anyone is looking for something similar:
UPDATE catalog_product_entity_media_gallery AS mg,
catalog_product_entity_media_gallery_value AS mgv,
catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
AND mgv.position = 1;