I'm using Oracle Database 12c (12.1.0.1.0, 64-bit). Some time ago I wrote a piece of PL/SQL software to import several XML files. This seemed to work very well, but then some problems occurred. Some files are 5 to 25 MB in size, so importing them takes one or two minutes. For some files, however, the import never ends and the importing process can't even be stopped; I have to restart the server to get rid of it.
I traced the problem to the following statement:
INSERT INTO SB_BUFFER_XML VALUES (XMLType(bfilename('XMLDATA', '840.xml'), nls_charset_id('AL32UTF8')));
The table SB_BUFFER_XML is an XMLType table and XMLDATA points to a local directory. The statement never finishes for the file 840.xml, but it does finish for the file 613.xml. Both are similar in size; 613.xml is even slightly bigger:
840.xml: 6.329 KB
613.xml: 6.905 KB
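(For completeness, the objects involved were created roughly as follows; the directory path here is an assumption, the object names match the ones above.)
CREATE OR REPLACE DIRECTORY XMLDATA AS '/path/to/xml/files';  -- path is an assumption
CREATE TABLE SB_BUFFER_XML OF XMLType;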
So I started to compare both files looking for the problem:
both files are UTF-8 without BOM
both contain the same structure, but different data
the XML syntax check finished successfully
even in a hex editor both files start and end with the same characters (so there's no hidden BOM or anything like that)
both files were created in the same system in the same version
So I simply started to delete content from 840.xml to reduce the complexity, and I saw that it doesn't matter what I delete: as soon as I remove a certain amount of data, even if it is only a comment, the import of the file works flawlessly.
The strange thing is that I have already imported XML files with the same structure from the same system with a file size of over 20 MB.
Do you have any idea what could cause this problem or what I could check next?
Oracle was able to reproduce our problem and pointed us to the following bug:
Bug 22843562 - IMPORT OF A XML FILE WITH A COMMENT AT THE END FAILS WITH ORA-27163
This bug is fixed in 12.2. In 12.1 you can try this workaround to activate the old parser:
ALTER SESSION SET EVENTS '31156 trace name context forever, level 0x400';
Although the bug title didn't describe our problem, the workaround nevertheless worked for us.
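The event is session-scoped, so we simply run the ALTER SESSION in the same session that performs the INSERT. If you want it applied automatically for the importing schema, a logon trigger along these lines should also work (the trigger name is made up; treat it as an untested sketch):
-- hypothetical trigger name; sets the event for every new session of this schema
CREATE OR REPLACE TRIGGER trg_set_xml_parser_event
  AFTER LOGON ON SCHEMA
BEGIN
  EXECUTE IMMEDIATE
    q'[ALTER SESSION SET EVENTS '31156 trace name context forever, level 0x400']';
END;
/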
I get ORA-00604, ORA-01578 and ORA-01110. Does anyone have a solution?
That doesn't sound good.
ORA-01578: ORACLE data block corrupted (file # string, block # string)
Cause : The data block indicated was corrupted, mostly due to software errors.
Action: Try to restore the segment containing the block indicated.
This may involve dropping the segment and recreating it.
If there is a trace file, report the errors in it to your ORACLE representative.
The system02 database file is corrupted; there's a possibility that the hard disk has failed, so you should check it for errors. As this is the database server, I'd say it would be safer to replace the disk rather than just repair it: once one data block gets corrupted, there's a good chance it will happen again (according to Murphy's law, at least).
Furthermore, it means that you'll have to restore the database. I hope you have a backup (one taken before the corruption happened).
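If you want to see which object the corrupted block belongs to before deciding how to restore, the usual starting point is a query like the one below, plugging in the file# and block# reported by ORA-01578 (shown here as a sketch with SQL*Plus substitution variables):
-- identify the segment containing the corrupted block
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = &file_no
   AND &block_no BETWEEN block_id AND block_id + blocks - 1;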
We have a number of databases at our company, among them an Oracle 12c (12.2.0.1.0, to be precise), but we have no (qualified) Oracle DBAs. Performance has deteriorated drastically over the last six months or so, and I now have the task of finding out why. My research suggested that I should increase some memory parameters in the initDBN.ora file. Here's what the original looked like:
DBN.__data_transfer_cache_size=0
DBN.__db_cache_size=50331648
DBN.__inmemory_ext_roarea=0
DBN.__inmemory_ext_rwarea=0
DBN.__java_pool_size=79691776
DBN.__large_pool_size=8388608
DBN.__oracle_base='/orabin/app/oracle'#ORACLE_BASE set from environment
DBN.__pga_aggregate_target=197132288
DBN.__sga_target=734003200
DBN.__shared_io_pool_size=12582912
DBN.__shared_pool_size=536870912
DBN.__streams_pool_size=4194304
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_16k_cache_size=8388608
*.db_32k_cache_size=8388608
*.db_4k_cache_size=8388608
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.memory_max_target=0
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=700m
*.shared_pool_size=536870912
*.streams_pool_size=4194304
*.undo_tablespace='UNDOTBS1'
Please don't blame me for this, because I did not write it. It certainly doesn't look like the sample init.ora and I am not at all certain where the syntax came from. The values I changed were:
DBN.__sga_target=1024m
*.sga_target=1024m
*.memory_max_target=1408m
DBN.__pga_aggregate_target=384m and *.pga_aggregate_target=384m
That's the order in which I made the changes. After each change I used SQL*Plus to first recreate the spfile with:
create spfile='spfileDBN.ora' from pfile='initDBN.ora';
This was followed by an attempt to start the database with startup nomount. In each case I got an error message which led me to make the next change.
Finally I got the error which is in the title of this post. When I tried to search for information on this, the findings were grim. Mostly the information dealt with other parameters and did not explain what this error actually meant. The only thing that gave any real background was this link from Burleson Consulting. It didn't really help me solve the problem, so I decided to revert the initDBN.ora file and do some more research. A slow database is generally better than no database.
But hey! I still get that same error, even after reverting to the original init file. I'm getting desperate now and have no idea how to fix this. From what I've read to date, setting "underscore variables" in your init file is a no-no.
Can anybody provide me with some helpful tips as to how to get rid of this error?
We don't know whether the apps running on this database need specific block sizes, but if the priority is getting the database open, you can shrink the init.ora down to the smallest, simplest set of parameters that gets you moving forward.
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1000m
*.undo_tablespace='UNDOTBS1'
should get you an open database. Notice I have bumped sga_target up to 1000m, which is about the minimum you need to get a database started. The true values for sga_target and pga_aggregate_target really need to be set based on your expected usage and the server configuration, but the init.ora above should get your database running.
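Once the file is trimmed, something like the following (run as SYSDBA) should bring the instance up and then let you persist the working settings as an spfile; the pfile path below is an assumption, so point it at wherever your initDBN.ora actually lives:
-- pfile path is an assumption; use your actual initDBN.ora location
startup pfile='/orabin/app/oracle/dbs/initDBN.ora'
create spfile from pfile='/orabin/app/oracle/dbs/initDBN.ora';
shutdown immediate
startup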
I am not sure that this really qualifies as a "solution", but it does fix the initial problem. As mentioned in my reply to Connor McDonald, I set the parameter _shared_pool_reserved_min_alloc to 3000 in the initDBN.ora file, which I copied from Connor's example (thanks for that). After recreating the spfile and trying to restart the database, I got the following error:
ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 11953766
That got me thinking that the value 0 in the original error was probably a stand-in value which really means "the maximum allowed". By actually setting the parameter, I have apparently managed to generate an error message which is more meaningful.
The value of _shared_pool_reserved_min_alloc is now set to 4200, which is a value I recall reading in one of the less helpful posts. (No, that post did not say that this is a value that should be used, just that it could be used.) This time, after re-creating the spfile I was able to start the database.
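For anyone who ends up in the same place: you can check what a hidden parameter is currently set to by querying the underlying X$ views as SYS. This is a commonly used query, shown here only as a sketch:
-- run as SYS; ksppinm is the parameter name, ksppstvl the current value
SELECT i.ksppinm, v.ksppstvl
  FROM x$ksppi i
  JOIN x$ksppcv v ON v.indx = i.indx
 WHERE i.ksppinm = '_shared_pool_reserved_min_alloc';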
Before I do any more fiddling with parameters, I will do a bit more research... or maybe a lot more.
I'm experiencing some strange behavior when I use the Quick Export - File function in Toad Datapoint to export the result of a large Oracle SQL query to Excel. I'm using Toad Datapoint 4.3 (I know it's not the latest version, but this is what my employer provides).
Has anyone else experienced this?
I see two background processes: Writing Data and Exporting Data. The Writing Data process completes pretty quickly and the progress bar advances, but the Exporting Data process just sits there with no progress, sometimes for 30-45 minutes. Then, for no particular reason that I'm aware of, I get a message that the export is complete. Even before that happens, I can see the exported file in Windows Explorer.
So basically 2 questions:
1) Why does this happen, and how can I make it stop?
2) Once the "Writing Data" process completes, is it OK to open the Excel file, or will it somehow be incomplete or get corrupted if I don't wait for the Exporting Data process to complete?
Note, I usually just copy and paste from the results grid into Excel whenever possible, but for large data sets I sometimes have to use the Export function.
Thanks in advance!
Well, I still don't know why it happens, but I guess it's specific to my environment.
Anyway, I have a pretty good workaround. Instead of exporting to Excel, I export to CSV and then import the results into an Excel workbook. The Writing Data process completes immediately that way, and the whole thing is much faster.
Thanks for reading
I got a partial answer to my problem before, and now I want to solve it fully. The final line of my /Program Files/GNU/vim/_vimrc is
source /homedir/vimsession_file
The filenames that I edit do not change; only their content changes. But once in a while I create a new session file before I exit Vim, using
:mks! /homedir/vimsession_file
Every time I start gVim, I get a message box listing all the files (which I load into the multiple tabs that I have) with a line number and character count for each. More detail can be found in my original post here.
Currently, I am not using the solution proposed in the link given above. The solution I got there was to replace the final line of /Program Files/GNU/vim/_vimrc with the following line:
autocmd VimEnter * source /homedir/vimsession_file
The reason I stopped using the above solution is that my buffers were all getting wiped out (as described in the original post linked above). So I was forced to rebuild my buffers every once in a while when I restarted gVim.
I did search and read in order to solve this on my own, but the closest solution I found was here on Stack Overflow, and that did not work for me either, despite playing with the shortmess option as suggested there. How can I stop this annoying message box that pops up with an OK button before gVim starts? I want to suppress the message box, because the only info I get from it is the line and character count for each file. (NOTE: I looked into /homedir/vimsession_file and it is about 3500 lines long. I noticed that the file names occur with badd followed by an edit command. For example, I have line 96 and line 164 as below:
Line 96 : badd +16 \Program\ Files\GNU\vim\_vimrc
........
Line 164: edit \Program\ Files\GNU\vim\_vimrc
This pattern repeats for all the other files that get loaded into multiple windows/tabs.)
I wanted to post the answer here because there seem to be very few Vim experts who regularly look at Stack Overflow, and I didn't have the patience to wait many days for an answer. I found the answer to my own question after reading the Vim help file called "starting.txt", which explains Vim's startup/initialization process and the different initialization files involved. The following steps removed the annoying pop-up message box for me, and also made Vim start much faster than before thanks to the simplification.
Following what is suggested in the starting.txt help file, I separated my numerous tabs/windows/files into different sessions. I recommend reading through this help file (or at least browsing it) if you are a regular user of Vim.
Previously I was lumping numerous files (from different projects) into one single Vim session. This is messy and is not the recommended way to use a session file. Before creating the different session files (one per project), I first saved my old session file so that I could reuse it while building the new ones. The process becomes clear if you look at the help file.
I cleaned up my startup procedure further by setting up a new viminfo file: I added the "-i" parameter to the vim.exe shortcut (icon) used for starting, pointing it at a new directory, which gave a fresh start.
The main init files used by Vim are the viminfo file, the vimrc and the session file; each serves a different purpose. My problem was caused by source /homedir/vimsession_file being the final line of my vimrc. So I removed that line and now issue that command manually after Vim starts up (with an empty window). I can source different session files in order to load the different sets of files which belong together. On my machine this command takes about a second to load the many tabs/windows/files belonging to a single project or sub-project.
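To cut down on typing, a small helper command in the vimrc can wrap the manual :source step; the command name and directory below are just my own convention, so adjust them to taste:
" hypothetical helper: load a named session file from /homedir on demand
command! -nargs=1 LoadSession execute 'source /homedir/' . <q-args>
" usage:  :LoadSession vimsession_file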
As pointed out in the original post linked in my question, there may be another way to resolve this by creating and examining a Vim log file, but I didn't want to bother with that tedious process. The way I am set up now makes more sense to me, because I have various subprojects which properly belong in different Vim sessions.
I maintain a program that is responsible for collecting data from a data acquisition system and appending that data to a very large (size > 4GB) binary file. Before appending data, the program must validate the header of this file in order to ensure that the meta-data in the file matches that which has been collected. In order to do this, I open the file as follows:
data_file = fopen(file_name, "rb+");
I then seek to the beginning of the file in order to validate the header. When this is done, I seek to the end of the file as follows:
_fseeki64(data_file, _filelengthi64(_fileno(data_file)), SEEK_SET);
At this point, I write the data that has been collected using fwrite(). I am careful to check the return values from all I/O functions.
One of the computers (Windows 7, 64-bit) on which we have been testing this program intermittently shows a condition where the data appears to have been written to the file, yet neither the file's last-modified time nor its size changes. If any of the calls to fopen(), fseek(), or fwrite() fail, my program throws an exception, which aborts the data collection process and logs the error. On this machine, none of these failures seem to be occurring. Something that makes the matter even more mysterious is that, if a restore point is set on the host file system, the problem goes away, only to re-appear intermittently at some future time.
We have tried to reproduce this problem on other machines (a Vista 32-bit system) but have had no success in replicating the issue (which doesn't necessarily mean anything, since the problem is so intermittent in the first place).
Has anyone else encountered anything similar to this? Is there a potential remedy?
Further Information
I have now found that the failure occurs when fflush() is called on the file and that the Win32 error returned by GetLastError() is 665 (ERROR_FILE_SYSTEM_LIMITATION). Searching Google for this error turns up a bunch of reports related to "extents" for SQL Server files. I suspect that the file system is running out of some sort of journaling resource, and that this is because we are growing a large file by repeatedly opening it, appending a chunk of data, and closing it. I am now looking for a better understanding of this particular error, in the hope of coming up with a valid remedy.
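For context, here is a simplified sketch of the append path where the failure shows up; the buffer handling and error reporting are stripped down, and the function name is just illustrative:
/* Simplified sketch of the append path: write a chunk, flush, and surface the
 * Win32 error; in the failing case GetLastError() returns 665,
 * ERROR_FILE_SYSTEM_LIMITATION. */
#include <stdio.h>
#include <windows.h>

static int append_chunk(FILE *data_file, const void *buf, size_t len)
{
    if (fwrite(buf, 1, len, data_file) != len)
        return -1;                        /* short write */
    if (fflush(data_file) != 0) {         /* this is the call that fails here */
        DWORD err = GetLastError();       /* observed value: 665 */
        fprintf(stderr, "fflush failed, Win32 error %lu\n", (unsigned long)err);
        return -1;
    }
    return 0;
}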
The file append is failing because of a file system fragmentation limit. The question was answered in What factors can lead to Win32 error 665 (file system limitation)?