InitialLOBFetchSize = -1 crashes app pool - Oracle

We have a few Oracle customers and we noticed a good performance increase by setting InitialLOBFetchSize to -1. It works really well on one customer's database, but on another it causes the app pool to crash. If I take out that line and leave the setting at its default, the app pool does not crash. Both databases run in the same environment, so I am wondering what could cause one database to crash while the other is fine with this setting?
Is there some kind of parameter within the problematic customer's database that could be causing the issue?
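For context, here is roughly how we apply the setting; the connection string, query, and column names below are placeholders rather than our actual code:

    using Oracle.DataAccess.Client; // ODP.NET

    class LobFetchExample
    {
        static void Main()
        {
            // Placeholder connection string and query.
            using (var conn = new OracleConnection("User Id=app;Password=secret;Data Source=CUSTDB"))
            using (var cmd = new OracleCommand("SELECT id, doc_body FROM documents", conn))
            {
                // -1 asks ODP.NET to fetch the entire LOB inline with the row,
                // avoiding the extra round trips of the default deferred LOB read.
                cmd.InitialLOBFetchSize = -1;

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // LOB content is already in the row buffer at this point.
                        string body = reader.GetString(1);
                    }
                }
            }
        }
    }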

Had the same problem. Check your Oracle.DataAccess.dll version. Mine was 4.112.2.0 and then I upgraded it to 4.112.3.0 and that fixed the issue.
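If you want to confirm which build the worker process is actually loading (policy redirects in the GAC can substitute a different version than the one referenced at compile time), a quick check along these lines works; purely a sketch:

    using System;
    using Oracle.DataAccess.Client;

    class OdpVersionCheck
    {
        static void Main()
        {
            // Report the Oracle.DataAccess assembly the runtime actually loaded,
            // which is what matters if an older build is still in the GAC.
            var name = typeof(OracleConnection).Assembly.GetName();
            Console.WriteLine("{0} {1}", name.Name, name.Version);
        }
    }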

Related

Timeout expired when Starting without debugging, but works fine while debugging

I have a simple C# Windows Forms application I'm trying to create. On Form1_Load I query my database with a simple request to fill a DataGridView.
If I debug the application, the query executes immediately without issue.
If I "Start without debugging," the connection to the database times out every time with SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
This is affecting multiple different projects which used to work flawlessly before today, so I'm pretty certain it's an issue with Visual Studio and not my code. Here's hoping someone knows what could be causing this.
After a fresh install of VS2008 on a new machine, I'm still experiencing the issue. I was able to run the project without debugging once, and the first connection to the database worked fine, but now it always fails as described above. Possibly some kind of weird connection pooling issue in VS?
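For reference, the load handler is nothing exotic; it's roughly along these lines (the connection string, table name, and grid setup are placeholders, and the explicit Connect Timeout just marks where the default connection timeout can be raised):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Windows.Forms;

    public class Form1 : Form
    {
        private readonly DataGridView grid = new DataGridView { Dock = DockStyle.Fill };

        public Form1()
        {
            Controls.Add(grid);
            Load += Form1_Load;
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            // Placeholder connection string; Connect Timeout controls how long
            // SqlConnection.Open() waits before throwing "Timeout expired".
            const string connStr =
                "Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=True;Connect Timeout=30";

            using (var conn = new SqlConnection(connStr))
            using (var adapter = new SqlDataAdapter("SELECT * FROM SomeTable", conn))
            {
                var table = new DataTable();
                adapter.Fill(table);      // opens and closes the connection itself
                grid.DataSource = table;
            }
        }
    }

    static class Program
    {
        [STAThread]
        static void Main() { Application.Run(new Form1()); }
    }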
Check your antivirus software.
My antivirus was flagging every new application I tried to run without debugging, but I had its alerts turned off, so I had no idea.
I turned alerts back on, and the next time I ran my app I got a notice from the antivirus. I allowed the application permanently and now everything runs smoothly.

Repeated loss of session variables in web application after moving to a new server

I have an old web application which formerly ran on a Windows 2003 server. When I moved it to a new Windows 2008 server, I started receiving an error I had never seen before. The app uses a Windows login: upon accessing the app, the user is asked for their login, and after that they are free to use the application. The issue is that after using it for some time, the user is booted out and asked to log in again. The system is also much slower than it was previously. It runs on IIS 7. It seems to me that session variables are being lost, but I am unsure why that would be the case.
Interestingly, when the user logs in again, they can generally use the application for a longer period before being booted out and asked to log in again. It is also worth mentioning that the more users there are on the server, the less prominent the issue seems to be.
It is also worth mentioning that I tried moving the application to another 2008 server, and it worked perfectly fine there. This leads me to believe that the issue lies somewhere in the server settings. I compared the settings of the two 2008 servers side by side and noted the differences, but could not find one that would cause this sort of error. One difference that might be worth noting is that the server which does not work properly is 32-bit, whereas the server which does work is 64-bit. I don't see how that difference could cause the application to lose session variables while otherwise working, though.
Additional information:
The code in the application on each server is identical, so that leads me to believe that the error is on the server level and not within the application itself.
Given that the code is identical, I do not believe this to be a result of Session.Abandon() being called from anywhere.
I do not believe this is due to a session timeout.
I have read that other people experience a loss of session variables due to app pool recycling, and that the recycling is often triggered by the config files being touched (whether by a user or by something like anti-virus software). I have no reason to believe that is the case here, because all servers are under the same anti-virus and the application works fine on the others; see the diagnostic sketch below for how I intend to confirm whether recycling is involved.
On the server which works, the IIS authentication settings have Windows authentication disabled and anonymous authentication enabled, whereas on the other server the opposite is true.
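One thing I intend to try, to confirm or rule out recycling, is logging why the application domain shuts down; a rough sketch of a Global.asax.cs handler (the log path is just a placeholder):

    using System;
    using System.IO;
    using System.Web;
    using System.Web.Hosting;

    public class Global : HttpApplication
    {
        protected void Application_End(object sender, EventArgs e)
        {
            // HostingEnvironment.ShutdownReason records why ASP.NET tore down the
            // app domain (ConfigurationChange, IdleTimeout, etc.). Every app-domain
            // restart also wipes in-process (InProc) session state.
            File.AppendAllText(@"C:\temp\app_end.log",
                DateTime.Now + "  " + HostingEnvironment.ShutdownReason + Environment.NewLine);
        }
    }

If recycling does turn out to be the trigger, moving session state out of process (StateServer or SQL Server mode) is, as I understand it, the usual way to keep sessions alive across recycles.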
Any help with this issue would be appreciated.
Thank you.
Make sure your app pool is running under the .NET Framework 4.0, and also check your application pool identity. When you're using IIS 7, make sure you use Integrated pipeline mode.

Oracle error ORA-25188 spontaneously appearing for unknown reasons

A remote installation of some software using Oracle 11.2 is reporting the following errors:
ORA-25188: cannot drop/disable/defer the primary key constraint for index-organized tables or sorted hash cluster
These errors seem to be happening everywhere, even for basic inserts. As far as I can tell, I don't drop or alter any indexes on my IOT tables, and I've never had any issues like this in the past few years (the software hasn't changed much). The other thing I should mention is that these errors only started popping up after several days of error-free activity.
So I got a dump of the entire schema, imported it into my own Oracle system, and connected my application to it. Everything works fine! I don't have easy access to the actual system, so I am not able to investigate it directly.
Is there any indirect reason why the system could be generating ORA-25188, or some bad state that could cause this error?
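In case it helps, the indirect check I'm planning to ask the remote site to run is a look at how the primary key constraints on the index-organized tables currently stand, since ORA-25188 is raised when something tries to drop, disable, or defer an IOT's primary key. A rough sketch (ODP.NET here is purely for illustration and the connection details are placeholders; the data dictionary query is the relevant part):

    using System;
    using Oracle.DataAccess.Client;

    class IotConstraintCheck
    {
        static void Main()
        {
            // Placeholder connection details for the remote schema.
            using (var conn = new OracleConnection("User Id=app;Password=secret;Data Source=REMOTE"))
            using (var cmd = new OracleCommand(
                // List index-organized tables and the status of their primary keys.
                @"SELECT t.table_name, c.constraint_name, c.status, c.deferrable, c.deferred
                    FROM user_tables t
                    LEFT JOIN user_constraints c
                      ON c.table_name = t.table_name AND c.constraint_type = 'P'
                   WHERE t.iot_type = 'IOT'", conn))
            {
                conn.Open();
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine("{0}  {1}  {2}  {3}  {4}",
                            rdr.GetString(0),
                            rdr.IsDBNull(1) ? "(no PK)" : rdr.GetString(1),
                            rdr.IsDBNull(2) ? "-" : rdr.GetString(2),
                            rdr.IsDBNull(3) ? "-" : rdr.GetString(3),
                            rdr.IsDBNull(4) ? "-" : rdr.GetString(4));
            }
        }
    }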
Try PURGE RECYCLEBIN or PURGE DBA_RECYCLEBIN; this can help in some situations.
For the second variant you must log on as SYSDBA.

PostgreSQL client overflow

I keep getting the error:
FATAL: sorry, too many clients already
I never had the problem before, but I recently upgraded to version 9.0.3 and also to the OS X Lion preview (release 2). It doesn't seem to matter how idle the connections are; they never die, and I have to restart Postgres every half hour or so. I'm using Postgres via Rails 3 and Navicat, never had a problem with either before now, and stopping both clients does nothing to solve the problem.
Any ideas or settings I'm missing? I'm not sure which settings I should post here for my setup, but everything should be at the defaults. Postgres was installed using Homebrew.
Most likely you should go into postgresql.conf, check max_connections, and post the existing value. To see how many connections you have and what they are doing, run SELECT * FROM pg_stat_activity;. Maybe you're leaking connections?
See this link:
http://www.postgresql.org/docs/9.0/interactive/runtime-config-connection.html

Network problem, suggestions sought

The LAN has about a half dozen Windows XP Professional PCs and one Windows 7 Professional PC.
A Jet/Access '97 database file is acting as the database.
Access is via DAO (DAO350.dll) and the front-end app is written in VB6.
When an instance is created it immediately opens a global database object which it keeps open for the duration of its lifetime.
The Windows 7 machine has been acting as the file server for the last few months without any glitches.
Within the last week, instances of the app will work for a while (say 30 minutes) on the XP machines and then fail on database operations, reporting connection errors (e.g. disk or network error, or unable to find such-and-such a table).
Instances on the Windows 7 machine work normally.
Moving the database file to one of the XP machines has the effect that the app works fine on ALL the XP machines, but the error then occurs on the Windows 7 machine instead.
Just before the problem became apparent, a newer version of the app was installed.
Uninstalling it and reinstalling the previous version did not solve the problem.
No other network changes that I know of were made, although I am not entirely sure about this, as the hardware guy did apparently visit around the same time the problems arose, perhaps to do something concerning online backup of data (there is data storage on more than one computer). Apparently he did not go near the Windows 7 machine.
Finally I know not very much about networks so please forgive me if the information I provide here is superfluous or deficient.
I have tried turning off the antivirus on the Windows 7 machine, restarting, etc., but nothing seems to work.
We plan to move the database from Jet to SQL Server Express in the future.
I need some suggestions as to the possible causes of this so that I can investigate further. Any suggestions would be greatly appreciated.
UPDATE 08/02/2011
The issue has been resolved by the hardware guy who visited the client today. The problem was that on this particular LAN the IP addresses were allocated dynamically except for the Win 7 machine which had a static IP address.
The static address happened to lie within the range from which the dynamic addresses were being selected. This wasn't a problem until last week when a dynamic address was generated that matched the static one and gave rise to the problems I described above.
Thanks to everyone for their input and thanks for not closing the question.
Having smart knowledgeable people to call on is a great help when you're under pressure from an unhappy customer and the gaps in your own knowledge mean that you can't confidently state that your software is definitely not to blame.
I'd try:
Validate that the same DAO and ODBC drivers are used on both the XP and Windows 7 machines.
Is the LAN a single broadcast domain? If not, rewire. (If routers are required, make sure WINS is working.)
Upgrade to MS SQL Server. It could be just a day of work, and well worth it. ;-)
regards,
//t
