I am using an Oracle database as the back end of one of my projects.
I have a table with an ADDRESS_TYPE column defined as nvarchar(3).
In some scenarios the system tries to insert the text 'Business' into the ADDRESS_TYPE column. When I try this locally it throws an error saying the value is too large for the column, which is the expected result. But with the same code deployed to QA and Production, the data gets inserted as 'Bus': the text 'Business' is truncated and only 'Bus' is stored. My local instance and the QA instance point to the same database.
var data = new MYTABLE();
data.ADDRESS_TYPE = "Business"; // this value is truncated to "Bus" in QA/Production but rejected locally
context.MYTABLE.AddObject(data);
context.SaveChanges();
Note: I am using Entity Framework.
I have tried to insert/update the data directly in the database, and I get the error that the value is too large for the column. The truncation behavior only happens on the hosted sites. I am planning to add logging to the code to get more information, but before that I thought I would ask here, in case anyone has faced the same situation and can help.
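By "directly" I mean something like the following in SQL*Plus / SQL Developer (a rough sketch; the table and column names are taken from the code above, and the NVARCHAR2(3) definition is an assumption):

-- Check how the column is actually defined in this particular database
SELECT data_type, data_length, char_length, char_used
  FROM all_tab_columns
 WHERE table_name = 'MYTABLE'
   AND column_name = 'ADDRESS_TYPE';

-- Against an NVARCHAR2(3) column this insert is expected to fail with
-- an error like ORA-12899: value too large for column
INSERT INTO mytable (address_type) VALUES (N'Business');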
I have also tried the same operation locally in Release mode, and I still get the exception.
Does anyone have any ideas or suggestions on how I can investigate this?
I am developing dashboards in Tableau Desktop, for which we retrieve data via a custom SQL query (live connection to an Oracle database).
I am able to load my data. For now, we are building tables to display the data.
I unfortunately cannot provide images, but the tables have several dimensions on the rows shelf (for example, name of the product, code for this product, country where it is produced, ...), and we then have Measure Names (KPIs) on the columns shelf. The layout of the table is fixed as it is legally defined.
However, when I drag and drop fields to build the view, I encounter this error at some point:
Error "ORA-01406: fetched column value was truncated".
When I am developing the reports in the acceptance environment of my database, it doesn't happen. But as soon as I switch to production data, the error appears.
The reports will need to be published to Tableau Server, which cannot be done with this error.
The tables for which it happens are quite large, but we were able to build larger tables without this issue.
Do you have any idea on how to solve this issue?
Thanks in advance!
I'm wondering if there's any way to manually corrupt CLOB data for testing purposes.
I can find the steps for intentional block corruption, but I can't find anything for corrupting individual data in a table. Can anyone help me with this?
Below is what I'm trying to do, and I need help with step 1:
1. Prepare the corrupted CLOB data
2. Run expdp and get the ORA-01555 error
3. Test whether my troubleshooting procedure works (a rough sketch of the read-check I'd use for this is below the list)
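To make step 3 concrete, the kind of scan I would use to confirm whether any rows are actually "broken" is something like this: read every CLOB in full and report the rows that raise an error while being read (rows with a damaged LOB typically throw ORA-01555 or ORA-22924 at that point). This is only a sketch; MY_TABLE, ID and DOC are made-up names:

-- Requires SET SERVEROUTPUT ON in SQL*Plus to see the output
DECLARE
  l_chunk   VARCHAR2(32767);
  l_offset  PLS_INTEGER;
  l_length  PLS_INTEGER;
BEGIN
  FOR r IN (SELECT id, doc FROM my_table) LOOP
    BEGIN
      l_offset := 1;
      l_length := DBMS_LOB.GETLENGTH(r.doc);
      -- Read the whole CLOB in chunks; a corrupt LOB raises an error here
      WHILE l_offset <= l_length LOOP
        l_chunk  := DBMS_LOB.SUBSTR(r.doc, 8000, l_offset);
        l_offset := l_offset + 8000;
      END LOOP;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Problem reading LOB for id=' || r.id || ': ' || SQLERRM);
    END;
  END LOOP;
END;
/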
Some background:
DB: Oracle 12.2.0.1 SE2
OS: Windows Server 2016
The app we're using (from a third party) seems to occasionally corrupt CLOB data when a certain type of data gets inserted into a table. We don't know what triggers it. The corruption doesn't affect the app's functionality, but leaving it unfixed causes the following error when running expdp for the daily backup:
ORA-01555: snapshot too old: rollback segment number
The CLOB consists of a mix of alphanumeric characters and line breaks. It is inserted by the app; no manual inserts take place.
Fixing or replacing the app isn't an option, so we have a fixing procedure in place.
I took this over from another engineer (who has since left), but since then the app has been working happily and no problem has occurred. I want to test-run the fixing procedure in the DEV environment, but the app doesn't reproduce the problem for me.
So I thought I would ask whether I can manually prepare the "broken" CLOB for testing purposes.
So this looks like it is caused by a known bug:
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364607910994084&parent=DOCUMENT&sourceId=833635.1&id=787004.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_200
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364470181721910&id=846079.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_53
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364481844925661&id=833635.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_102
The main point here is that the corruption isn't caused by anything inherent in the data; it is more likely caused by something like concurrent access to the LOB by multiple updates (application or end-user behavior), or simply by apparently random chance. As such, I doubt there is any way for you to easily force this condition in order to validate your test for it.
I am working on an EJB (3.0)/Hibernate (3) project with an Oracle 11g database.
First of all, for security reasons I am unable to share my code; I am really sorry about that.
The issue is:
In my application, the database is called from different places for retrieving, persisting, and merging records, and these calls deal with a number of tables.
But one particular retrieval query (a SELECT that fetches a single record by primary key in the WHERE clause) takes far too long (almost 4 minutes) to get a response from the database, even though the response itself is correct: a single record.
I can measure the time by debugging, from the point where the application calls the database to the point where the response comes back to the application.
So I want to know why fetching a single record takes so much time when other queries return within seconds or microseconds.
I also want to know how to capture the timestamp at which the query from the application actually hits the database (after going through the Hibernate layer), and what is going on inside the database for this flow.
Please share any advice or suggestions from your own experience if you have faced this kind of issue, and help me understand how to trace the whole flow:
Application <-> Hibernate Layer <-> Database
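On the database side of that flow, I assume something like the following would show what the statement is actually doing once it reaches Oracle (just a sketch; the LIKE filter text is only an example, and querying V$SQL needs the appropriate privileges):

-- Find the statement and its accumulated time inside the database
SELECT sql_id, child_number, executions,
       elapsed_time / 1e6 AS elapsed_seconds,
       buffer_gets, disk_reads
  FROM v$sql
 WHERE sql_text LIKE 'SELECT%MY_TABLE%'   -- example filter only
 ORDER BY elapsed_time DESC;

-- Show the execution plan that was actually used for that sql_id
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));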
Thanks in advance!!!
After running 'generateChangelog' on an Oracle database, the changelog file has the wrong type (or rather, simply a bad value) for some fields, independently of the driver used.
More precisely, some of the RAW columns are translated to STRING (which sounds okay), but values like "E01005C6842100020200000E10000000" are translated to "[B#433defed", which looks like the default string representation of a Java byte array rather than the hex value. These are also the only data-related differences between the original database content and the backup.
When I try to restore the DB with 'update', these columns cause the error "Unexpected error running Liquibase: *****: invalid hex number".
Is there any way to force Liquibase to save the problem columns "as-is", or anything else I can do to overcome this situation? Or is it a bug?
I think more information is needed to be able to diagnose this. Ideally, if you suspect something may be a bug, you should provide three things:
what steps you took (this would include the versions of things being used, relevant configuration, commands issued, etc.)
what the actual results were
what the expected results were
Right now we have some idea of the first (you ran generateChangelog on Oracle, then tried to run update), but we are missing things like the structure of the Oracle database, the versions of Oracle and Liquibase, and the actual command issued. We have some idea of the actual results (columns that are of type RAW in Oracle are converted to STRING in the changelog, and it may also be converting the data in those columns to different values than you expect) and some idea of the expected results (you expected the RAW data to be saved in the changelog and then to be able to re-deploy that change).
That being said, using Liquibase to back up and restore a database (especially one that has columns of type RAW/CLOB/BLOB) is probably not a good idea.
Liquibase is primarily aimed at helping manage change to the structure of a database and not so much with the data contained within it.
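To illustrate the kind of use it is aimed at: a typical changeSet describes a structural change, for example in Liquibase's SQL-formatted changelog (this is only a sketch; the author, id, table, and column names are made up):

--liquibase formatted sql

--changeset example.author:add-status-column
ALTER TABLE customer ADD status VARCHAR2(20);
--rollback ALTER TABLE customer DROP COLUMN status;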
I'm using Linq to SQL with SQL Server 2005. I'm parsing a large fixed-width file and importing the data into SQL Server via custom entities that I have mapped to the database using property attributes.
The program runs for about 240 records before throwing this error. I've checked the columns (all four of them) and the data it's trying to put in and it shouldn't be throwing this error. I've even gone so far as to change the columns from varchar to text, and it still throws the error. When I manually insert the same values, they insert fine.
Is there a known bug or anything in Linq to SQL? I'm calling context.submitall() on every loop to insert. I've read that .NET 3.5 SP1 gives better error messages from SQL, but I'm still not seeing anything.
Thanks for any help.
Is it possible that you've changed your schema since you built the Linq to SQL classes? The designer entities won't be updated when you change your SQL schema unless you delete/recreate the class in the designer or hand-edit the properties for the designer-generated class. I know that it keeps track of the column width for string (varchar) columns in the class -- though I don't know if it actually checks it before submitting or just keeps it as a reference for any validation that you would do. I have seen similar problems with things like autogenerated ids, though, that were solved by updating the class in the designer.
Well, I found the bug, and it was definitely not a problem with Linq or with SQL. I was chopping up and changing properties on another object, not realizing that it was now attached to SQL via Linq, and it was that object throwing the errors.
Lesson learned: Do not alter properties directly on an object unless you really want to change them, and especially if it's attached to the data context.