I have a WebObjects app and an OpenBase database, and I'm getting a never-before-seen exception when doing a raw rows (non-ORM) query during a batch operation. It looks like the JDBC adaptor is throwing on a date value in the db and is unable to coerce the raw data into the proper type. It kills the app outright and ends the export process. Here are the relevant lines from the top of the trace:
java.lang.IllegalArgumentException
at java.sql.Date.valueOf(Date.java:138)
at com.openbase.jdbc.f.getDate(Unknown Source)
I've tried changing the column type from date to datetime to timestamp, adjusting the EO model accordingly, but the exception remains. I'm wondering what I can do to resolve this; specifically, does anybody know a more sophisticated query mechanism I can employ to identify the possibly bad rows? OpenBase's documentation is pretty sparse, and I'm hoping somebody knows how to use patterns to identify possible bad values in OpenBase SQL, or some other means of identifying the issue. Thanks.
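For concreteness, the kind of thing I'm hoping exists, written here as generic SQL since I don't know what OpenBase's dialect actually supports (table and column names are placeholders):

-- Bracket the primary-key range and halve it until the failing fetch
-- is pinned down to a single row:
SELECT pk, date_col FROM export_table WHERE pk BETWEEN 1 AND 1000;

-- Then look for values outside any plausible range, which would
-- suggest garbage bytes in the column:
SELECT pk, date_col FROM export_table
WHERE date_col < '1900-01-01' OR date_col > '2100-12-31';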
Turns out the problem was due to a version mismatch between the OpenBase version and the Java version. Unfortunately, I had no choice but to rewrite the dump routine to use OpenBase's bulk-save function and then parse the resulting CSV. Interestingly, the same dates that were causing problems printed just fine, which enabled saving a lot more rows. Summary: stick with the open-source databases; unless you're going high-end, there's no advantage to solutions like OpenBase anymore.
I'm wondering if there's any way to manually corrupt CLOB data for testing purposes.
I can find the steps for intentional block corruption, but can't find anything for individual data in a table. Can anyone help me with this?
Below is what I'm trying to do, and I need help with step 1:
1. Prepare the corrupted CLOB data
2. Run expdp and get the ORA-01555 error
3. Test whether my troubleshooting procedure works OK
Some background:
DB: Oracle 12.2.0.1 SE2
OS: Windows Server 2016
The app we're using (from a third party) seems to occasionally corrupt the CLOB data when a certain type of data gets inserted into a table. We don't know what triggers it. The corruption doesn't affect the app's function, but leaving it unfixed gives the following error when running expdp for the daily backup:
ORA-01555: snapshot too old: rollback segment number
The CLOB consists of a mix of alphanumeric characters and line breaks. It gets inserted by the app; no manual inserts take place.
Fixing/replacing the app isn't an option, so we have a fix-up procedure on hand.
I took over this from another engineer (who has since left), but the app has been working happily since then and no problem has occurred so far. I want to test-run the fixing procedure in the DEV environment, but the app doesn't reproduce the problem for me.
So I thought I'd see whether I can manually prepare a "broken" CLOB for testing purposes.
So this looks like it is caused by a known bug:
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364607910994084&parent=DOCUMENT&sourceId=833635.1&id=787004.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_200
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364470181721910&id=846079.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_53
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364481844925661&id=833635.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_102
The main point here is that the corruption isn't caused by anything inherent in the data; it's more likely caused by something like concurrent access to the LOB by multiple updates (application or end-user behavior), or just by apparently random chance. As such, I doubt there's any way for you to easily force this condition in order to validate your test for it.
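What you can do, though, is scan for corruption that is already present, so you find it before the nightly expdp does. A rough PL/SQL sketch (mytable and clob_col are placeholder names, adjust to your schema): it walks every row, tries to read the whole CLOB, and reports any row whose LOB throws.

SET SERVEROUTPUT ON
DECLARE
  v_len   NUMBER;
  v_pos   NUMBER;
  v_chunk VARCHAR2(4000);
BEGIN
  FOR r IN (SELECT ROWID AS rid, clob_col FROM mytable) LOOP
    BEGIN
      v_len := DBMS_LOB.GETLENGTH(r.clob_col);
      v_pos := 1;
      -- Read the whole LOB in 4000-character chunks; a damaged LOB
      -- raises an error (e.g. ORA-01555 or ORA-22922) somewhere in here.
      WHILE v_pos <= v_len LOOP
        v_chunk := DBMS_LOB.SUBSTR(r.clob_col, 4000, v_pos);
        v_pos := v_pos + 4000;
      END LOOP;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Unreadable CLOB at rowid ' ||
                             ROWIDTOCHAR(r.rid) || ': ' || SQLERRM);
    END;
  END LOOP;
END;
/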
After running 'generateChangelog' on an Oracle database, the changelog file has the wrong type (or rather, simply a bad value) for some fields, independently of which driver is used.
More specifically, some of the RAW columns are translated to STRING (which sounds okay), but values like "E01005C6842100020200000E10000000" come out as "[B@433defed", which is Java's default toString() of a byte array rather than the column's hex value. These are also the only data-related differences between the original database content and the backup.
When I try to restore the DB with 'update', these columns fail with "Unexpected error running Liquibase: *****: invalid hex number".
Is there any way to force Liquibase to save the problem columns as-is, or anything else to overcome this situation? Or is it a bug?
I think more information is needed to be able to diagnose this. Ideally, if you suspect something may be a bug, you provide three things:
1. what steps you took (this would include the versions of things being used, relevant configuration, commands issued, etc.)
2. what the actual results were
3. what the expected results were
Right now we have some idea of 1 (ran generateChangelog on Oracle, then tried to run update), but we are missing things like the structure of the Oracle database, the versions of Oracle and Liquibase, and the actual command issued. We have some idea of the actual results (columns that are of type RAW in Oracle are converted to STRING in the changelog, and it may also be converting the data in those columns to different values than you expect) and some idea of the expected results (you expected the RAW data to be saved in the changelog and then to be able to re-deploy that change).
That being said, using Liquibase to back up and restore a database (especially one that has columns of type RAW/CLOB/BLOB) is probably not a good idea.
Liquibase is primarily aimed at helping manage change to the structure of a database and not so much with the data contained within it.
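That said, if the immediate need is just to move the RAW values, one workaround is to export them as hex text yourself and re-load them on the other side. A hedged sketch (mytable, id, and raw_col are placeholder names; RAWTOHEX and HEXTORAW are standard Oracle functions; the hex literal is the example value from the question):

-- Dump RAW values as hex strings (e.g. into a script or CSV):
SELECT id, RAWTOHEX(raw_col) AS raw_hex FROM mytable;

-- Re-insert on the target database:
INSERT INTO mytable (id, raw_col)
VALUES (1, HEXTORAW('E01005C6842100020200000E10000000'));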
I hope this was not asked here before (I did search around here and did google for an answer, but could not find one).
The problem is: I'm using MS Access 2010 to select records from a linked table (there are millions of records in the table). If I specify criteria (e.g. a date) directly, for example date=#1/1/2013#, the query returns in an instant. If I use parameters (add a parameter of type Date/Time and provide the value 1/1/2013 when prompted, or a date in some different format, or reference a control on a form), the query takes minutes to load.
Please let me know if you have any ideas on what could be causing this. I do feel bad about asking such a question and possibly wasting someone's time...
Here's a potential answer; I didn't know this myself and did a little digging.
If performance is important, it may be necessary to prefer dynamic SQL even where parameter queries would otherwise be suitable, because of how queries are optimized. Generally, Access creates a plan for a new query upon saving. When a query contains a parameter, Access cannot know what value the parameter may contain and has to make a "good guess"; depending on which actual values are later supplied, the plan may be okay or poor, resulting in sub-optimal performance. In contrast, dynamic SQL sidesteps this because the "parameters" are hard-coded into the temporary string, so a new plan is compiled with that value, guaranteeing an optimal execution plan. Since compiling a new plan at runtime is very fast, dynamic SQL can outperform parameter queries.
Source: http://www.utteraccess.com/wiki/index.php/Parameter_Query#Performance
Also, if I had to guess: with the parameter query, Access is requesting the ENTIRE table from Oracle and then filtering it down locally with your WHERE clause, but when the criteria value is specified in the SQL itself, it loads just the matching records and can possibly make use of indexes.
As far as a solution, I would build your query string in VBA and then execute it. It opens you up to injection, but you can handle that. So, instead of using a saved parameter query object in Access, try something like this:
Dim qr As String
Dim rs As DAO.Recordset
' Hard-coding the date makes the engine compile a fresh plan; Format with
' escaped slashes keeps the literal in mm/dd/yyyy form regardless of locale.
qr = "SELECT * FROM myTable WHERE myDate = #" & Format(Me.dateControl, "mm\/dd\/yyyy") & "#;"
Set rs = CurrentDb.OpenRecordset(qr)
(CurrentDb.Execute qr, dbFailOnError and DoCmd.RunSQL qr are the equivalents for action queries; for a SELECT like this, as you replied, CurrentDb.OpenRecordset is the call to use.)
This forces the engine to build an execution plan at runtime rather than using a saved, potentially suboptimal one. Let me know if this works out for you; I'd be interested to see.
Of course, the above reference about using parameters with Access (JET/ACE) ONLY applies to Access back ends, not ODBC ones like SQL Server or Oracle. Since you pointed out that you're using Oracle here, creating a view or using a pass-through query should resolve this performance issue. However, one does NOT want to use Access/JET parameters with data coming from a server-based system; at minimum send the server SQL strings, but much better is to use a pass-through query. If the result set requires editing, note that pass-through queries are read-only; in that case create a view and link to that view, as sketched below.
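A minimal sketch of that setup, reusing the hypothetical myTable/myDate names from the VBA example above (standard Oracle date-literal syntax; adjust names to your schema):

-- Text of an Access pass-through query: it is sent to Oracle verbatim,
-- so Oracle's optimizer and indexes handle the filtering (results are read-only):
SELECT * FROM myTable WHERE myDate = DATE '2013-01-01';

-- If the rows must be editable, create a server-side view instead and
-- link Access to it:
CREATE OR REPLACE VIEW myTable_filtered AS
SELECT * FROM myTable WHERE myDate = DATE '2013-01-01';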
I'm using LINQ to SQL with SQL Server 2005. I'm parsing a large fixed-width file and importing the data into the database via custom entities that I have mapped to the tables using property attributes.
The program runs for about 240 records before throwing this error. I've checked the columns (all four of them) and the data it's trying to put in them, and it shouldn't be throwing this error. I've even gone so far as to change the columns from varchar to text, and it still throws the error. When I manually insert the same values, they insert fine.
Is there a known bug or anything like that in LINQ to SQL? I'm calling context.SubmitChanges() on every loop iteration to insert. I've read that .NET 3.5 SP1 gives better error messages from SQL Server, but I'm still not seeing anything.
Thanks for any help.
Is it possible that you've changed your schema since you built the LINQ to SQL classes? The designer entities won't be updated when you change your SQL schema unless you delete/recreate the class in the designer or hand-edit the properties of the designer-generated class. I know that it keeps track of the column width for string (varchar) columns in the class, though I don't know if it actually checks it before submitting or just keeps it as a reference for any validation that you would do. I have seen similar problems with things like autogenerated IDs, though, that were solved by updating the class in the designer.
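One quick way to check whether the widths the designer recorded still match the database is to query SQL Server's catalog and compare against the column attributes in the generated class (the table name below is hypothetical):

-- Compare these against the DbType/length in the designer-generated mapping:
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyImportTable';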
Well, I found the bug, and it was definitely not a problem with LINQ or with SQL Server. I was chopping up and changing properties on another object, not realizing that it was now attached to the database via LINQ, and it was that object throwing the errors.
Lesson learned: do not alter properties directly on an object unless you really want to change them, especially if it's attached to the data context.
I ported a Delphi 6 application to Delphi 2007; it uses the BDE to connect to an Oracle 9i database. I am getting an
ORA-01426: numeric overflow
exception when I execute a stored procedure. This happens randomly, and if I re-run the stored procedure through the application with the same parameters, the exception does not occur.
The old Delphi 6 application works just fine.
Ideas anybody?
Showing a code example could make this easier, but here are a couple of hunches:
Are the data coming from another source (like Excel) that does not have explicit data types? Mixed or ambiguous data may be causing the BDE to assign the wrong data type to a field that is then incompatible with the database field.
It could be a numeric formatting issue (some U.S.-centric components do not handle localization properly). Is your localization set to something other than English (U.S.)? If so, does changing it to English (U.S.) fix the problem?
If these completely miss, more details might help.
Does the D6 version of the app use the same version of BDE, Oracle, and the database? If so, then it's probably something about the data being passed (either content or mechanism).
Not knowing what those data are, nor how they are passed, makes it pretty hard to diagnose.