Unable to do simple insert with Oracle 11g - oracle

I'm trying to copy data from a table in a Sybase server into the same table in an Oracle server (Oracle 11g).
I thought it would be easiest to do it through my ColdFusion web application, since two different database servers are involved.
Unfortunately, I got the following error from Oracle. I don't think my syntax is wrong: all the commas are there, with none missing as the error claims. I suspect it is due to the column that is declared with the DATE datatype.
Here is the error:
Error Executing Database Query.
[Macromedia][Oracle JDBC Driver][Oracle]ORA-00917: missing comma
The error occurred in C:\Inetpub\wwwroot\test.cfm: line 65
63 : #um_gs_dnrcnt_cfm_pp#,
64 : #um_gs_amt_cfm_pg_pp#,
65 : #um_gs_dnrcnt_cfm_pg_pp#)
66 : </cfquery>
67 : </cfoutput>
--------------------------------------------------------------------------------
SQLSTATE HY000
SQL INSERT INTO um_gift_sum (um_gs_fyr, um_gs_inst, um_gs_dept,
um_gs_dt_of_record, um_gs_fund_type,
um_gs_dnr_type,
um_gs_amt_cash, um_gs_dnrcnt_cash, um_gs_amt_pl,
um_gs_dnrcnt_pl, um_gs_amt_pp, um_gs_dnrcnt_pp,
um_gs_amt_pp_prior, um_gs_dnrcnt_pp_prior,
um_gs_amt_gik, um_gs_dnrcnt_gik,
um_gs_amt_pg_cash,
um_gs_dnrcnt_pg_cash, um_gs_amt_pg_pl,
um_gs_dnrcnt_pg_pl, um_gs_amt_pg_pp,
um_gs_dnrcnt_pg_pp, um_gs_amt_gft_mtch,
um_gs_dnrcnt_gft_mtch, um_gs_amt_cfm_pp,
um_gs_dnrcnt_cfm_pp, um_gs_amt_cfm_pg_pp,
um_gs_dnrcnt_cfm_pg_pp)
VALUES('1995', 'AB', 'MAA', 1995-01-31 00:00:00.0, '1', 'FR', 100.0000, 0,
0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0,
0.0000, 0, 0.0000, 0, 0.0000, 0, 0.0000, 0)
Here is my insert statement:
<cfquery name="x" datasource="SybaseDB">
SELECT TOP 10 * FROM um_sum
</cfquery>
<cfoutput query="x">
<cfquery name="Y" datasource="OracleDB">
INSERT INTO um_sum (um_gs_fyr,
um_gs_inst,
um_gs_dept,
um_gs_dt_of_record,
um_gs_fund_type,
um_gs_dnr_type,
etc,
um_gs_dnrcnt_cfm_pp,
um_gs_amt_cfm_pg_pp,
um_gs_dnrcnt_cfm_pg_pp)
VALUES('#um_gs_fyr#',
'#um_gs_inst#',
'#um_gs_dept#',
#um_gs_dt_of_record#, <---- this is a DATE datatype; I suspect this may be the problem?
'#um_gs_fund_type#',
'#um_gs_dnr_type#',
#um_gs_amt_cash#,
#um_gs_dnrcnt_cash#,
#um_gs_amt_pl#,
#um_gs_dnrcnt_pl#,
#um_gs_amt_pp#,
#um_gs_dnrcnt_pp#,
#um_gs_amt_pp_prior#,
#um_gs_dnrcnt_pp_prior#,
#um_gs_amt_gik#,
#um_gs_dnrcnt_gik#,
#um_gs_amt_pg_cash#,
#um_gs_dnrcnt_pg_cash#,
#um_gs_amt_pg_pl#,
#um_gs_dnrcnt_pg_pl#,
#um_gs_amt_pg_pp#,
#um_gs_dnrcnt_pg_pp#,
#um_gs_amt_gft_mtch#,
#um_gs_dnrcnt_gft_mtch#,
#um_gs_amt_cfm_pp#,
#um_gs_dnrcnt_cfm_pp#,
#um_gs_amt_cfm_pg_pp#,
#um_gs_dnrcnt_cfm_pg_pp#)
</cfquery>
</cfoutput>

This part is not surrounded with single quotes properly:
AA', 1995-01-31 00:00:00.0, '1'
The date is passed as a bare, unquoted literal; Oracle parses 1995-01-31 as arithmetic and then expects a comma when it reaches the time portion, hence ORA-00917.
edit (based on comment)
If the single quotes don't fix it, you can explicitly declare the date format with the to_date() function.
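A minimal sketch of the to_date() approach, assuming the value arrives as a string like '1995-01-31 00:00:00.0' (Oracle's DATE type stores no fractional seconds, so they are dropped; only two columns are shown for brevity):
INSERT INTO um_gift_sum (um_gs_fyr, um_gs_dt_of_record)
VALUES ('1995',
        TO_DATE('1995-01-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS'));
In the ColdFusion query, the #um_gs_dt_of_record# value would be formatted into that string before being wrapped in to_date(); alternatively, a bind parameter (e.g. cfqueryparam with cfsqltype="CF_SQL_TIMESTAMP") sidesteps the string formatting entirely.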

Related

How to extract a particular number from a long string of data using Oracle SQL?

I need the numbers extracted from the string of data contained in a column of a table.
Example string :
<strong>Customer Name</strong>: Hit - julaifnaf afbafbaf Caraballo Pichardo vs PICHARDO ALBERTO<br />
<strong>Address</strong>: NA - abdcinfainaf 42982542542 vs xx<br />
<strong>Country of citizenship</strong>: NA<br />
<strong>Country of residency</strong>: NA<br />
<strong>Date of birth</strong>: NA - xx vs Nov-72<br />
<strong>Place of birth</strong>: NA<br />
<strong>Identification Number</strong>: **1**<br />
<strong>emailDetails</strong>: <br/>
<b>Subject: </b>abcdejnfanfa <br/>
<b>Sent To: </b>abced@test.com<br/>
In the example string above, the number I need extracted is 1.
The length of the strings and the position of the record vary,
but the number to be extracted always comes after Identification Number</strong>: and before <br /><strong>.
What function can I use to extract this data?
SELECT TO_NUMBER(
         REGEXP_SUBSTR(
           column_name,
           '<strong>Identification Number</strong>:.*?(\d+).*?<br />',
           1,    -- start position
           1,    -- occurrence
           NULL, -- match parameter
           1     -- capture group to return
         )
       ) AS id_number
FROM table_name;
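A quick sanity check against the sample text, run from dual so no table is assumed (the capture-group argument of REGEXP_SUBSTR requires Oracle 11g or later):
SELECT TO_NUMBER(
         REGEXP_SUBSTR(
           '<strong>Identification Number</strong>: **1**<br />',
           '<strong>Identification Number</strong>:.*?(\d+).*?<br />',
           1, 1, NULL, 1)) AS id_number
FROM dual;
-- returns 1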
Try this:
select
regexp_replace(column_name,'.*<strong>Identification Number</strong>:[^>\d]*(\d+)[^>\d]*<br\s*/>.*', '\1', 1, 0, 'inm') as id
from html;
P.S. It's not a very reliable solution, though, because you can't robustly parse HTML with regular expressions.
Output:
ID
-----------
1

Oracle calculations within Crystal report

I have this code in a Crystal report. It uses two fields, st.pass_total and st.fail_total, to calculate the pass ratio. I'd like to replace this Crystal code with PL/SQL code that returns just the pass_ratio:
if isnull({st.PASS_TOTAL})
and isnull({st.FAIL_TOTAL}) then pass_ratio:=""
else if (not isnull({st.PASS_TOTAL}))
and isnull({st.FAIL_TOTAL}) then pass_ratio:="100%"
else if (isnull({st.PASS_TOTAL})
or {st.PASS_TOTAL}=0)
and (not isnull({st.FAIL_TOTAL})) then pass_ratio:=""
else pass_ratio:=totext({st.PASS_TOTAL}/({st.PASS_TOTAL}+{st.FAIL_TOTAL})*100)+"%";
This is what I have in PL/SQL; is it correct?
decode((is_null(st.pass_total) AND is_null(st.fail_total)), "",
(not is_null(st.pass_total) AND not is_null(st.fail_total)), "100%",
((is_null(st.pass_total) OR st.pass_total=0) && not is_null(st.fail_total)), "",
(st.pass_total/(st.pass_total+st.fail_total)*100)||"%"))
I also have one that "calculates" the Cutoff value:
if {e.eve_cutoff}=0
or isnull({e.eve_cutoff}) then event_cutoff:="140"
else if {e.eve_cutoff}>0 then event_cutoff:=totext({e.eve_cutoff},0);
This is what I have in PL/SQL; is it correct?
decode(e.eve_cutoff, 0, "140",
e.eve_cutoff, NULL, "140",
eve_cutoff)
Your decode statements have several issues: Oracle string literals take single quotes, there is no is_null() function or && operator in Oracle SQL, and decode() matches values rather than evaluating boolean conditions. The logic can be greatly simplified with the nvl() function:
select
case
when nvl(st.pass_total, 0) = 0 then ''
else 100 * st.pass_total / (st.pass_total + nvl(st.fail_total, 0)) ||'%'
end ratio
from st
and:
select decode(nvl(eve_cutoff, 0), 0, '140', eve_cutoff) cutoff from e
[SQLFiddle1] [SQLFiddle2]
For the first select you may also want to round the values with the round() function, as I did in the SQLFiddle; if you do not, you may get an overflow error in the report.
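A rounded variant of the ratio query, for reference (a sketch against the same st table, keeping two decimal places):
select
case
when nvl(st.pass_total, 0) = 0 then ''
else round(100 * st.pass_total / (st.pass_total + nvl(st.fail_total, 0)), 2) || '%'
end ratio
from st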

Oracle Control file issue exceeding column

I have a control file with the following options...
OPTIONS(DIRECT=TRUE,ROWS=100,BINDSIZE=209700000,readsize=209700000)
load data
infile 'd:\test.DH'
"str '\n'"
append
into table name
FIELDS TERMINATED by '!'
OPTIONALLY ENCLOSED by '"'
trailing nullcols
Sample records; the field terminator is "!":
9334!376!15950!9109!0!29109!109!0!!10003!05.02.2015 03:51:27!05.02.2015 03:51:46!05.02.2015 03:51:27!0!0!0!S!00c08309ed178b3f!005683540!6829109!079015!0!0!!!0!F!299!!!0!0!!0!-1, 0, -1, 1423075906663, 0, 0, 0!{, 1, 24307, 3000-12-31 23:59:59, 0}!!{60200103, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0}!!!!!!!!!!!!!!!!=2, =0, =0, =0, =255, =829109, =510, , =1!1!00067!!!F,079015,,2993007,290009,5,02993007,005683540,6,6829109,,,,010006743081,0,10006743081,5,F,,,,290009,2079015,2079015,829109,93007,079015,2079015,829109,0,,0,07000,,,0,,,,,'00c08309ed178b3fH',,,,,,,0,0,0,0,0,0,0,299,0,,a2040000005b6424,7205,36899550,
338!8376!11230!333777!0!33777!333777!0!!10003!05.02.2015 03:51:04!05.02.2015 03:51:14!05.02.2015 03:51:04!0!0!0!S!6d!004382577!3333777!3407582!0!0!!!0!F!299!!!0!0!!0!-1, 0, -1, 1423075874285, 0, 0, 0!{, 1, 24927, 3000-12-31 23:59:59, 0}!!{60200103, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0}!!!!!!!!!!!!!!!!=2, =0, =0, =0, =255, =33777, =600, , =1!10595!02020!!!F,3407582,,993001,20000,5,993001,004382577,6,3333777,,,,010595,0,0202010595,5,F,,,,220000,407582,407582,33777,993001,,407582,03407582,3333777,0,,0,5874000,,,0,,,,,'6dH',,,,,,,0,0,0,0,0,0,0,299,0,_1281,a2820000005d213d,7205,36899550,
When I run this, I get the following rejection:
Record 1: Rejected - Error on table name, column logs.
Field in data file exceeds maximum length
This field is the last column of the record, and the column is 3000 bytes. I know it's not an issue with the record length, because I imported the same file with Navicat and it loaded everything without any issue. There seems to be something wrong with str, and it tries to load all the data into the logs column.
I tried
"str '\t'"
"str '\r'"
"str '\n'"
and none of them helped. Thanks for your time.
Thanks, I found the issue: I had to declare a character length for the field in the control file, i.e.
OPTIONS(DIRECT=TRUE,ROWS=100,BINDSIZE=209700000,readsize=209700000)
load data
infile 'd:\test.DH'
"str '\n'"
append
into table name
FIELDS TERMINATED by '!'
OPTIONALLY ENCLOSED by '"'
trailing nullcols
(log char(1500))
Thanks everyone for reading. :D
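For reference, SQL*Loader treats character fields as CHAR(255) by default unless a length is declared, which is why a wide field triggers "Field in data file exceeds maximum length" even when the table column itself is big enough. Declaring the field at the column's full width would also work (a sketch, assuming the 3000-byte column named logs in the error message):
(logs CHAR(3000))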

magento - Update product default image to first image in image gallery

I have an import script that imports well over 2000 products, including their images. I run this script via the CLI because I feel that this is the best way to go speed-wise, even though I have the same import script available and executable in the Magento admin as an extension. The script runs pretty well. Almost perfect! However, sometimes addToImageGallery somehow malfunctions, resulting in some products having "No Image" as the default product image while their only other image is not selected as a default at all. How do I mass-update all products to set the first image in the product's media gallery as the default 'base', 'image' and 'thumbnail' image(s)?
I found a couple of tricks on doing this (and more) on this link:
http://www.magentocommerce.com/boards/viewthread/59440/ (Thanks transio!)
Although, for Magento 1.6.2.0 (which I use), the first SQL trick there (Trick 1 - Auto-set default base, thumb, small image to first image.) needs a bit of modification.
On the second-to-last line there is an AND ev.attribute_id IN (70, 71, 72) clause. It references attribute IDs that are probably no longer correct in Magento 1.6.2.0. To fix this, using any MySQL query tool (phpMyAdmin or MySQL Query Browser), I took a look at the catalog_product_entity_varchar table. There should be entries like:
value_id, entity_type_id, attribute_id, store_id, entity_id, value
..
146649, 4, 116, 0, 1, '2'
146650, 4, 76, 0, 1, ''
146651, 4, 78, 0, 1, ''
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
146655, 4, 96, 0, 1, ''
146656, 4, 100, 0, 1, ''
146657, 4, 102, 0, 1, 'container2'
..
My money was on the group of three image paths as the likely replacements. So the resulting SQL should now be:
UPDATE catalog_product_entity_media_gallery AS mg,
catalog_product_entity_media_gallery_value AS mgv,
catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
AND mgv.position = 1;
So I committed to it, ran it, and... presto! All fixed! You might also want to wrap this in a transaction, but that is outside this question's scope.
Well, this is the fix that worked for me so far! If there are any more out there, please share!
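Before running the UPDATE, a quick way to preview the rows it will touch is to run the same joins as a SELECT (a sketch using the same tables and attribute IDs):
SELECT mg.entity_id,
       ev.attribute_id,
       ev.value AS current_value,
       mg.value AS first_gallery_image
FROM catalog_product_entity_media_gallery AS mg,
     catalog_product_entity_media_gallery_value AS mgv,
     catalog_product_entity_varchar AS ev
WHERE mg.value_id = mgv.value_id
  AND mg.entity_id = ev.entity_id
  AND ev.attribute_id IN (79, 80, 81)
  AND mgv.position = 1;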
There was:
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
So it should be:
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
instead of:
AND ev.attribute_id IN (78, 80, 81) # <-- attribute IDs updated here

In SQLite, do prepared statements really improve performance?

I have heard that prepared statements with SQLite should improve performance. I wrote some code to test that and did not see any difference in performance from using them, so I thought maybe my code was incorrect. Please let me know if you see any errors in how I'm doing this...
[self testPrep:NO dbConn:dbConn];
[self testPrep:YES dbConn:dbConn];
reuse=0
recs=2000
2009-11-09 10:39:18 -0800
processing...
2009-11-09 10:39:32 -0800
reuse=1
recs=2000
2009-11-09 10:39:32 -0800
processing...
2009-11-09 10:39:46 -0800
-(void)testPrep:(BOOL)reuse dbConn:(sqlite3*)dbConn{
int recs = 2000;
NSString *sql;
sqlite3_stmt *stmt;
sql = @"DROP TABLE test";
sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
sql = @"CREATE TABLE test (id INT,field1 INT, field2 INT,field3 INT,field4 INT,field5 INT,field6 INT,field7 INT,field8 INT,field9 INT,field10 INT)";
sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
for(int i=0;i<recs;i++){
sql = @"INSERT INTO test (id,field1,field2,field3,field4,field5,field6,field7,field8,field9,field10) VALUES (%d,1,2,3,4,5,6,7,8,9,10)";
sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
}
sql = @"BEGIN";
sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
if (reuse){
sql = @"select * from test where field1=?1 and field2=?2 and field3=?3 and field4=?4 and field5=?5 and field6=?6 and field6=?6 and field8=?8 and field9=?9 and field10=?10 and id=?11";
sqlite3_prepare_v2(dbConn, [sql UTF8String], -1, &stmt, NULL);
}
NSLog(@"reuse=%d",reuse);
NSLog(@"recs=%d",recs);
NSDate *before = [NSDate date];
NSLog([before description]);
NSLog(@"processing...");
for(int i=0;i<recs;i++){
if (!reuse){
sql = @"select * from test where field1=?1 and field2=?2 and field3=?3 and field4=?4 and field5=?5 and field6=?6 and field6=?6 and field8=?8 and field9=?9 and field10=?10 and id=?11";
sqlite3_prepare_v2(dbConn, [sql UTF8String], -1, &stmt, NULL);
}
sqlite3_bind_int(stmt, 1, 1);
sqlite3_bind_int(stmt, 2, 2);
sqlite3_bind_int(stmt, 3, 3);
sqlite3_bind_int(stmt, 4, 4);
sqlite3_bind_int(stmt, 5, 5);
sqlite3_bind_int(stmt, 6, 6);
sqlite3_bind_int(stmt, 7, 7);
sqlite3_bind_int(stmt, 8, 8);
sqlite3_bind_int(stmt, 9, 9);
sqlite3_bind_int(stmt, 10, 10);
sqlite3_bind_int(stmt, 11, i);
while(sqlite3_step(stmt) == SQLITE_ROW) {
}
sqlite3_reset(stmt);
}
sql = @"BEGIN";
sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
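// note: this second BEGIN was presumably meant to be COMMIT; as written, the transaction opened above is never committed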
NSDate *after = [NSDate date];
NSLog([after description]);
}
Prepared statements improve performance by caching the execution plan for a query after the query optimizer has found the best plan.
If the query you're using doesn't have a complicated plan (such as simple selects/inserts with no joins), then prepared statements won't give you a big improvement since the optimizer will quickly find the best plan.
However, if you ran the same test with a query that had a few joins and used some indexes, you would see the performance difference, since the optimizer wouldn't have to be run every time the query is executed.
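One way to check how much plan work a query involves is SQLite's EXPLAIN QUERY PLAN (a sketch in the sqlite3 shell, against the test table from the question; the output text varies by SQLite version):
EXPLAIN QUERY PLAN
SELECT * FROM test WHERE field1 = 1 AND id = 5;
-- typical output: SCAN test  (a full table scan; there is no index, so the plan is trivial)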
Yes, it makes a huge difference whether you're using sqlite3_exec() vs. sqlite3_prepare_v2() / sqlite3_bind_xxx() / sqlite3_step() for bulk inserts.
sqlite3_exec() is only a convenience method; internally it just calls the same sequence of sqlite3_prepare_v2() and sqlite3_step(). Your example code is calling sqlite3_exec() over and over on a literal string:
for(int i=0;i<recs;i++){
sql = @"INSERT INTO test (id,field1,field2,field3,field4,field5,field6,field7,field8,field9,field10) VALUES (%d,1,2,3,4,5,6,7,8,9,10)";
sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL);
}
I don't know the inner workings of the SQLite parser, but perhaps it is smart enough to recognize that you are using the same literal string and skips re-parsing/re-compiling on every iteration.
If you try the same experiment with values that actually change, you'll see a much bigger difference in performance.
Using prepare + step instead of exec, huge performance improvements are possible.
In some cases the speed-up in execution time exceeds 100%.
