Oracle 12c startup error: ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0

We have a number of databases at our company, among them an Oracle 12c (12.2.0.1.0, to be precise), but we have no (qualified) Oracle DBAs. Performance has deteriorated drastically in the last six months or so, and I now have the task of finding out why. My research suggested that I should increase some memory parameters in the initDBN.ora file. Here's what the original looked like:
DBN.__data_transfer_cache_size=0
DBN.__db_cache_size=50331648
DBN.__inmemory_ext_roarea=0
DBN.__inmemory_ext_rwarea=0
DBN.__java_pool_size=79691776
DBN.__large_pool_size=8388608
DBN.__oracle_base='/orabin/app/oracle'#ORACLE_BASE set from environment
DBN.__pga_aggregate_target=197132288
DBN.__sga_target=734003200
DBN.__shared_io_pool_size=12582912
DBN.__shared_pool_size=536870912
DBN.__streams_pool_size=4194304
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_16k_cache_size=8388608
*.db_32k_cache_size=8388608
*.db_4k_cache_size=8388608
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.memory_max_target=0
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=700m
*.shared_pool_size=536870912
*.streams_pool_size=4194304
*.undo_tablespace='UNDOTBS1'
Please don't blame me for this, because I did not write it. It certainly doesn't look like the sample init.ora and I am not at all certain where the syntax came from. The values I changed were:
DBN.__sga_target=1024m
*.sga_target=1024m
*.memory_max_target=1408m
DBN.__pga_aggregate_target=384m
*.pga_aggregate_target=384m
That's the order in which I made the changes. After each change I used sqlplus to first recreate the spfile with:
create spfile='spfileDBN.ora' from pfile='initDBN.ora';
This was followed by an attempt to start the database with startup nomount. In each case I got an error message which led me to make the next change.
Finally I got the error which is in the title of this post. When I tried to search for information on this, the findings were grim. Mostly the information dealt with other parameters and did not explain what this error actually meant. The only thing that gave any real background was this link from Burleson Consulting. It didn't really help me solve the problem, so I decided to revert the initDBN.ora file and do some more research. A slow database is generally better than no database.
But hey! I still get that same error, even after reverting to the original init file. I'm getting desperate now. I have no idea how to fix this. From what I've read to date, setting "underscore variables" in your init file is a "NO NO".
Can anybody provide me with some helpful tips as to how to get rid of this error?

We don't know if the apps running on this database need specific block sizes, but if the priority is getting the database open, you can shrink the init.ora down to the smallest, simplest set of parameters that gets you moving forward.
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1000m
*.undo_tablespace='UNDOTBS1'
should get you an open database. Notice I have bumped sga_target up to 1000m, which is about the minimum you need to get a database started. The true values for sga_target and pga_aggregate_target really need to be set based on your expected usage and the server configuration, but the init.ora above should get your database running.
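After swapping in the trimmed file, the same sequence from the question applies; something like this should do it (file names as in the original post, default file locations assumed):
create spfile='spfileDBN.ora' from pfile='initDBN.ora';
startup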

I am not sure that this really qualifies as a "solution", but it does fix the initial problem. As mentioned in my reply to Connor McDonald, I set the parameter _shared_pool_reserved_min_alloc to 3000 in the initDBN.ora file, which I copied from Connor's example (thanks for that). After recreating the spfile and trying to restart the database, I got the following error:
ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 11953766
That got me thinking that the value 0 in the original error was probably a stand-in value which really means "the maximum allowed". By actually setting the parameter, I apparently managed to generate an error which is more meaningful.
The value of _shared_pool_reserved_min_alloc is now set to 4200, which is a value I recall reading in one of the less helpful posts. (No, that post did not say that this is a value that should be used, just that it could be used.) This time, after re-creating the spfile I was able to start the database.
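For reference, the line as it now stands in my initDBN.ora looks like this (I wrapped the hidden parameter in double quotes, which seems to be the usual convention, though I can't say whether the quotes are strictly required):
*."_shared_pool_reserved_min_alloc"=4200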
Before I do any more fiddling with parameters, I will do a bit more research... or maybe a lot more.

Related

How can I C-STORE DICOM pixel data from a C-MOVE properly?

I have one server handling C-MOVE, two servers handling C-STORE, and a remote PACS server (GEPACS).
When I issue a C-MOVE command from the remote PACS to the C-STORE handlers, one server (py-netdicom) builds and saves the files properly and the other (go-netdicom) does not.
There were a couple of problems in go-netdicom.
I fixed the code so it can handle hexadecimals, which go-netdicom did not originally support.
This fixed almost every problem in my case, but it still cannot store pixel data properly.
For example, I received 9117252 bytes of original signal from the remote PACS and saved the data as-is, but it actually needs to be 18000000 bytes (I got an error). Even CT images come up short by about a factor of three (I got approximately 180000 bytes, but need 524288).
I think the problem might be caused by the encapsulation of the pixel data, but I'm not sure.
Is there any tip or help available?
Thank you.
EDIT 4: I've got a clue. Link here.
Somehow the C-STORE command has a kind of transfer syntax.
This tells the SCP what type of data (compressed or not) it will get from the SCU.
But I still have no idea which part of go-netdicom has to be changed.
I'll delete the "python" tag because this is not related to Python anymore.
I found the solution.
Somehow, GEPACS sends a certain transfer syntax for JPEG compression.
If go-netdicom doesn't have the TransferSyntaxUID, it picks GEPACS's first transfer syntax, and that one was for JPEG compression.
I just put in big endian and explicit VR (the GEPACS default) for when the transfer syntax is empty.
The change is in contextmanager.go, at line 101 (in AssociateRequest) and line 127.
Hope this result helps someone.
Thank you.
The problem here is that go-netdicom uses the first PresentationContext sent in the A_ASSOCIATE_RQ (as you can see in the last image). So it accepts "2.16.840.1.113709.1.2.2", which is a private transfer syntax that is not in the DICOM standard, so nobody can manage the C-STORE at the end.
If you are reading this: maybe you do not use go-netdicom, but the problem could be the same if the error involves the transfer syntax "2.16.840.1.113709.1.2.2". The Centricity PACS documentation says: "It is expected that other vendors' applications will ignore all Presentation Context proposed with the GE Private Compress Express Transfer Syntax".
And that is what we are supposed to do. I see a list of open PRs in go-netdicom, so I suppose it is not maintained; I will post the change for go-netdicom here. I made these changes in contextmanager.go and it works like a charm:
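The original snippet did not survive with this post, so the following Go sketch is only an illustrative reconstruction of the kind of fallback described, with invented names rather than go-netdicom's actual identifiers:

// Explicit VR Big Endian, the GEPACS default mentioned above.
const explicitVRBigEndian = "1.2.840.10008.1.2.2"

// pickTransferSyntax returns the first proposed transfer syntax we support,
// and falls back to a standard default when nothing usable was proposed
// (e.g. only the GE private "2.16.840.1.113709.1.2.2" was offered).
func pickTransferSyntax(proposed []string, supported map[string]bool) string {
    for _, uid := range proposed {
        if supported[uid] {
            return uid
        }
    }
    return explicitVRBigEndian
}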

Visual Studio, database save error "There was an error parsing the query. [ Token line number = 1, Token line offset = 13, Token in error = , ]"

OK, so I recently started getting into programming simple programs, and I have taught myself everything using guides on the internet thus far. I have been doing really well and things have been working great, until now. I feel that I have done a lot, A LOT, of searching on this question, and I can't seem to find an answer or even anyone else out there who is having the same problem as me. I'm going to try to make this as simple as possible, so here it goes.
I have made a program in Visual Studio that is simply supposed to allow me to make entries in fields such as first name, last name, and so on. It works like a charm and saves data like normal; I can close it out and reopen it and the data is there. BUT
While messing with the program and trying to test it, I found that if I DELETED a row of data that I had previously inserted, then added ANY new data, then clicked the SAVE button, it would crash every single time.
First it would say there was no proper delete command. So I gave it one, and it stopped giving me that error message. Then it said it was the update command. So I gave it one, and that stopped showing up. But now it gives me the error message "There was an error parsing the query. [ Token line number = 1, Token line offset = 13, Token in error = , ]", and it gives me this error every time I try to save after following the above steps. I honestly don't know what to make of it.
I know I'm going to need to post a sample of my code, but I don't know what part of it to post, so I figured I would wait and see, so I don't post 1,500+ lines on here.
This is the only thing preventing my program from working correctly, and I wouldn't ask if I wasn't completely stumped. And I'm roughly 300% stumped. So any help would be greatly appreciated.

Problems storing Oracle sqlplus query output in a shell script

I have a RHEL 5 system and have been trying to get the output of a batch of 3-4 scripts, each into a separate variable, and it is not working as expected.
I tried using the following syntax, which I saw a lot of people on this site use and which should really work, though for some reason it doesn't work for me.
Syntax -
testvar=`sqlplus -s foo/bar@SCHM <<EOF
set pages 0
set head off
set feed off
@test.sql
exit
EOF`
Now, I have access to the sqlplus command from the Oracle bin folder and have also exported the ORACLE_HOME and ORACLE_PATH variables (I echoed them just to ensure they are actually set).
However, I keep getting an error saying "you need to set EXPORT for ORACLE_HOME", even though I have confirmed with everyone that I am indeed using the correct path.
Another question I have: once I get the script output (which is a numeric value in bits or bytes) into a variable (there will be 4-5 variables, as there are 4-5 scripts), how do I convert it into human-readable output in either MBs or GBs?
Please guide me on this, and I assure you that I will post everything here so that if someone gets stuck on the same issue in the future, they don't have to waste time. (And your precious time won't go to waste either...)
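For the unit conversion I was thinking of something along these lines, assuming the variable holds a plain byte count, though I don't know if it's the right approach:
echo "$testvar" | awk '{ printf "%.2f MB (%.2f GB)\n", $1/1024/1024, $1/1024/1024/1024 }'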
Thanks in advance,
Brian

The bizarre case of the file that both is and isn’t there

In .Net 3.5, I have the following code.
If File.Exists(sFilePath & IndexFileName & ".NX") Then
    Kill(sFilePath & IndexFileName & ".NX")
End If
At runtime, on one client's machine, I get the following exception, over and over, when this code executes:
Source: Microsoft.VisualBasic
TargetSite: Microsoft.VisualBasic.FileSystem.Kill
Message: No files found matching 'I:\RPG\HGIAPVXD.NX'.
StackTrace:
at Microsoft.VisualBasic.FileSystem.Kill(String PathName)
(More trace that identifies the exact line of code.)
There are two people on different machines running this code, but only one of them is getting the exception. The exception does not happen every time, but it is happening regularly. (Multiple times every hour.) The code is not in a loop, nor does it run continuously, more like once every couple of minutes or so.
On the surface, this looks like a race condition, but given how infrequently this code is run and how often the error is happening I think there must be something else going on.
I would appreciate any suggestions on how I can track down what is really going on here. A solution to keep the error from happening would be even better.
I guess the first question to ask is "IS the file really there or not?", and if so, does it have any special attributes (is it Read-only, Hidden, or System, or is it a directory)?
Note that Microsoft.VisualBasic.FileSystem.Kill specifically looks for, and silently skips, any file marked "System" or "Hidden". For pretty much any other problem you would have gotten a different exception.
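A quick way to check on the affected machine is something like this sketch, reusing the variables from the question:
Dim fi As New System.IO.FileInfo(sFilePath & IndexFileName & ".NX")
If fi.Exists Then
    MsgBox(fi.Attributes.ToString()) ' watch for Hidden, System or ReadOnly here
Else
    MsgBox("No file found at the moment of the check")
End If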
As James pointed out, the Kill function checks whether the file in question is system or hidden; you are better off using System.IO.File.Delete() instead:
Try
    System.IO.File.Delete(sFilePath & IndexFileName & ".NX")
Catch ex As System.Exception
    ...
End Try
Using File.Exists is not necessary, because File.Delete() checks this by itself.
Is there any chance that the I: drive is a network drive? It could be some network issue... or maybe a race condition after all.

Windows 7 file problem

I am using VB6 SP6
This code has worked correctly for years, but I am now having a problem on a Win7-to-Win7 network. It still works correctly on an XP-to-Win7 network.
Open file For Random As ChannelNum Len = 90
'the file is on the other computer on the network
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
'(MyAcFile is a UDT that is less than 90 bytes long)
'.......... other code that does not reference the file or RecNum - then
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
Close ChannelNum
The second record overwrites the first.
We had a similar problem in the past with opportunistic locking, so we turn that off at install, along with some other keys that cause data errors on Windows networks.
However, we have had no problems like this for years, so I think MS has some new "better" option that they think will "improve" networking.
Thanks for your help
I doubt there is any "bug" here except in your approach. The file metadata that LOF() interrogates is not meant to be updated immediately by simple writes. A delay seems like a silly idea, prone to occasional failure unless a very long delay is used and sapping performance at best. Even close/reopen can be iffy: VB6's Close statement is an async operation. That's why the Reset statement exists.
This is also why things like FlushFileBuffers() and SetEndOfFile() exist at the API level. They are also relatively expensive operations from a performance standpoint.
Track your records yourself. Only rely on LOF() if necessary after you first open the file.
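A sketch of that idea, reusing the variable names from the question (FilePath is illustrative):
Dim RecNum As Long
Open FilePath For Random As ChannelNum Len = 90
RecNum = (LOF(ChannelNum) \ 90) + 2  ' trust LOF() only here, right after Open
Put ChannelNum, RecNum, MyAcFile
RecNum = RecNum + 1                  ' advance the counter yourself
Put ChannelNum, RecNum, MyAcFile
Close ChannelNum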
Hmmm... is the file (as per the Open statement at the top of the code sample) a UNC filename, or something like x:\ where x is a mapped drive? Are you not incrementing RecNum? Judging by the code, RecNum is unchanged and hence appears to overwrite the first record... Sorry for sounding, umm, no pun intended, basic... It would help to show some more code here.
Hope this helps,
Best regards,
Tom.
It could be just a timing issue. In some runs your LOF() call returns more up-to-date information than in others. The file system API is asynchronous; for example, when a write function is called, the increased size will not be reflected immediately.
In short: your code has exposed an old bug which is just easier to reproduce on Windows 7.
The cheapest way to fix the bug: you may decide to add a delay (it may need to be a significant delay, of say 5 seconds).
A more elaborate fix is to force the size update by closing and reopening the file.
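A sketch of that approach, again with the question's names (FilePath is illustrative):
Close ChannelNum
Reset   ' flushes all file buffers to disk; Close alone can be lazy
Open FilePath For Random As ChannelNum Len = 90
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile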
