liquibase 'generateChangelog' generates wrong schema (bad data) - oracle

After running 'generateChangelog' on an Oracle database, the changelog file has the wrong type (or worse, simply a bad value) for some fields, regardless of the driver used.
More precisely, some of the RAW columns are translated to STRING (which sounds okay), but values like "E01005C6842100020200000E10000000" are translated to "[B#433defed", which seems to be some blob-like entity. These are the only data-related differences between the original database content and the backup.
When I try to restore the DB with 'update', these columns cause errors: "Unexpected error running Liquibase: *****: invalid hex number".
Is there any way to force Liquibase to save the problem columns "as-is", or anything else I can do to overcome this situation? Or is it a bug?

I think more information is needed to be able to diagnose this. Ideally, if you suspect something may be a bug, you provide three things:
1. what steps you took (including the versions of everything being used, relevant configuration, commands issued, etc.)
2. what the actual results were
3. what the expected results were
Right now we have some idea of 1 (you ran generateChangelog on Oracle, then tried to run update), but we are missing things like the structure of the Oracle database, the versions of Oracle and Liquibase, and the actual commands issued. We have some idea of the actual results (columns of type RAW in Oracle are converted to STRING in the changelog, and the data in those columns may also be converted to values you don't expect) and some idea of the expected results (you expected the RAW data to be saved in the changelog and to then be able to re-deploy that change).
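As an aside, the "[B#433defed" value itself is a strong hint: "[B@<hex>" is how Java renders a byte[] whose default toString() was called instead of hex-encoding its contents, which suggests the array reference was serialized rather than the bytes themselves. A minimal standalone sketch of the difference (the toHex helper and sample bytes are illustrative, not Liquibase code):

```java
public class RawToStringDemo {
    // Hex-encode a byte[] the way a RAW value is normally displayed
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] raw = {(byte) 0xE0, 0x10, 0x05, (byte) 0xC6};
        // Default toString(): "[B@" plus an identity hash, not the contents
        System.out.println(raw);
        // Hex encoding recovers a value shaped like what a RAW column shows
        System.out.println(toHex(raw));
    }
}
```

Re-deploying the "[B@…" string as a hex literal would naturally fail with something like "invalid hex number", since it contains characters that are not hex digits.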
That being said, using Liquibase to back up and restore a database (especially one that has columns of type RAW/CLOB/BLOB) is probably not a good idea.
Liquibase is primarily aimed at helping manage change to the structure of a database and not so much with the data contained within it.

Related

How to manually corrupt the Oracle CLOB data

I'm wondering if there's any way to manually corrupt CLOB data for testing purposes.
I can find steps for intentional block corruption, but can't find anything for corrupting individual data in a table. Can anyone help me with this?
Below is what I'm trying to do and I need help for step 1:
1. Prepare the corrupted CLOB data
2. Run expdp and get the ORA-01555 error
3. Test whether my troubleshooting procedure works
Some background:
DB: Oracle 12.2.0.1 SE2
OS: Windows Server 2016
The third-party app we're using seems to occasionally corrupt CLOB data when a certain type of data gets inserted into a table. We don't know what triggers it. The corruption doesn't affect the app's functionality, but leaving it unfixed produces the following error when running expdp for the daily backup:
ORA-01555: snapshot too old: rollback segment number
The CLOB consists of a mix of alphanumeric characters and line breaks. It gets inserted by the app; no manual inserts take place.
Fixing or replacing the app isn't an option, so we have a fix-up procedure.
I took over this from another engineer (who has since left), but the app has been working happily and no problem has occurred so far. I want to test-run the fix-up procedure in the DEV environment, but the app won't reproduce the problem for me.
So I thought I could manually prepare a "broken" CLOB for testing purposes.
So this looks like it is caused by a known bug:
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364607910994084&parent=DOCUMENT&sourceId=833635.1&id=787004.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_200
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364470181721910&id=846079.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_53
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364481844925661&id=833635.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_102
The main point here is that the corruption isn't caused by anything inherent in the data; it is more likely caused by something like concurrent access to the LOB by multiple updates (application or end-user behavior), or by apparently random chance. As such, I doubt there's any way for you to easily force this condition in order to validate your test for it.

Oracle SYSTPH* Type

We've noticed some odd types in the TOAD Schema Browser that seem to pop up randomly throughout the day on our database. We found them using the Schema Browser in TOAD under Types -> Collection Types. They have names like these:
SYSTPHYP5bsxIC47gU0Z4MApeAw==
SYSTPHYP8cBHQYUDgU0Z4MApvyA==
SYSTPHYPwYo541RfgU0Z4MAqeTQ==
They seem to have randomly generated names, and we're pretty sure our application is not creating them. They are all TABLE OF NUMBER(20).
Does anyone have an explanation of what these types are for?
These are most likely related to use of the COLLECT aggregate function. You can find some info on them here:
http://orasql.org/2012/04/28/a-funny-fact-about-collect/
It looks like there was a bug in the past (Bug 4033868, fixed in 11g) where these types did not clean up after themselves.

Linq query will change often- how can I change it without recompiling app?

My application will be querying a database using Entity Framework. The problem is that the database table structure changes fairly often (a few times a year).
Back in the SQL days, we would store SQL queries in resource files (.resx), and when any database change occurred, we could just edit the one resource file without having to edit any code in the app, recompile, etc.
Are there any good ways to do this with Linq-to-SQL?
LINQ to SQL is innately code-based. If your schema is going to change, then the code will need to change.
The only way I can see around this, while still getting some of the benefits of LINQ, is to write everything as stored procedures, which you can then add as methods on the LINQ DataContext.
Then, as long as the name, input parameters, and output columns remain the same, you can change what the stored procedure does in the database and the code can stay the same.
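The resource-file pattern from the question generalizes to any language: keep the query text in an external file keyed by name, so editing the file changes the query without recompiling. A minimal Java analogue (the key name, query text, and in-memory "file" are made up for illustration; a real app would load the Properties from a file on disk or the classpath):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ExternalQueries {
    // Look up a named query from properties-style text; in a real app the
    // Properties object would be loaded from a file shipped with the binary
    static String queryFor(String key, String propertiesText) {
        Properties queries = new Properties();
        try {
            queries.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new IllegalStateException(e); // cannot happen for a StringReader
        }
        return queries.getProperty(key);
    }

    public static void main(String[] args) {
        String resource = "findActiveUsers=SELECT id, name FROM users WHERE active = 1\n";
        // Editing the resource text changes the SQL without touching the code
        System.out.println(queryFor("findActiveUsers", resource));
    }
}
```

The stored-procedure approach above achieves the same decoupling one level deeper: the "file" being edited is the procedure body in the database, and the stable name/parameters/columns play the role of the lookup key.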

DataSet and Insert statements

I'm having some trouble with Visual Studio and the creation of DataSets from a database.
Whenever I create a new TableAdapter, the Insert method's parameters are, let's just say, messed up.
The database is an MS Access 2000 database file. If I create a new TableAdapter, everything works just fine. I select the option to create DBDirect methods, and it all goes through without errors.
Then I look at the statements. All perfectly fine. But then I check the Insert method's parameters and I see this:
Parameter List http://img243.imageshack.us/img243/3175/paramlist.png
All the parameters are set to default Strings with no name. I have to rename them and define all of their types over again.
Interestingly, this never affects the last parameter (as you can see, Comment is not renamed, etc.), and it only happens to the Insert method. When I check the Update method (which uses exactly the same parameters), they are all correctly named and their types match those in the database.
Parameter list http://img816.imageshack.us/img816/853/paramlistnormal.png
Is this a known bug? Did I do something wrong when creating the TableAdapter?
You see, it's not that big an issue; I just can't understand why it works with every other method but not the Insert, and it's quite a fuss to rename and retype all of the parameters if you create a TableAdapter for a table that has significantly more fields than the 12 I showed you.
It looks like at least one other person has had a similar problem. Although this post doesn't specifically mention Access, the symptoms seem to be the same as what you've seen.
Unfortunately, there wasn't a clear solution listed there. The OP only says that he was able to call the automatically generated Insert command rather than creating his own Insert query, so he did not need to resolve his original issue.
Also, he mentions that everything seems to work fine with all of the other tables in his database, and that this happens with only one table. That may mean that it's not an Access-specific issue, but rather that the tables in your database have something in common with the table in this post, and that common factor is what is preventing the TableAdapter from working as it should.

IllegalArgumentException with Date value in jdbc; Openbase sql

I have a WebObjects app and an Openbase db, and I'm getting a never-before-seen exception when doing a raw-rows (non-ORM) query during a batch operation. It looks like the JDBC adaptor is throwing on a date value in the db and is unable to coerce the raw data into the proper type. It literally kills the app and ends the export process. Here are the relevant lines from the top of the trace:
java.lang.IllegalArgumentException
at java.sql.Date.valueOf(Date.java:138)
at com.openbase.jdbc.f.getDate(Unknown Source)
I've tried changing the column type from date to datetime to timestamp, adjusting the EO model accordingly, but the exception remains. I'm wondering what I can do to resolve this; specifically, does anybody know a more sophisticated query mechanism I could employ to identify the possibly bad rows? Openbase's documentation is pretty sparse, and I'm hoping somebody knows how to use patterns to identify possibly bad values using Openbase SQL, or some other means of identifying the issue. Thanks.
It turns out the problem was due to a version mismatch between the Openbase version and the Java version. Unfortunately, I had no choice but to rewrite the dump routine to use Openbase's bulk save function and then parse the resulting CSV. Interestingly, the same dates that were causing problems printed just fine, which enabled saving a lot more rows. Summary: stick with the open-source DBs; unless you're going high end, there's no advantage to solutions like Openbase anymore.
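For reference, java.sql.Date.valueOf (the first frame in the trace) accepts only strings in strict yyyy-[m]m-[d]d form and throws IllegalArgumentException for anything else, which matches the failure seen when a driver hands it a date string it did not normalize first. A standalone demonstration (the sample date strings are invented):

```java
import java.sql.Date;

public class DateValueOfDemo {
    // Return true if java.sql.Date.valueOf will accept the string
    static boolean parses(String s) {
        try {
            Date.valueOf(s);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(parses("2010-03-15"));   // JDBC date escape form: accepted
        System.out.println(parses("15-MAR-2010"));  // any other format: rejected
    }
}
```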
