Though there are related questions on SO, I couldn't find a workable solution in any of them. I observe a lot of ORA-00020: maximum number of processes (X) exceeded errors in my application logs, and they are triggering false alerts.
(where X = 200 for this application)
There are no application-level issues that I can observe, but the above errors keep appearing. The application user is assigned the Oracle profile APPUSER, with the resource limit parameters below:
APPUSER IDLE_TIME UNLIMITED
APPUSER CONNECT_TIME UNLIMITED
Is there an ideal setting for these two parameters that would resolve the issue? Please help me understand whether I'm completely off track in trying to resolve these errors and the frequent DB session drops.
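As a minimal sketch of how actual process usage can be checked against the limit, assuming SELECT privilege on V$RESOURCE_LIMIT (the connection details are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ProcessLimitCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521/ORCL", "system", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT resource_name, current_utilization, " +
                 "       max_utilization, limit_value " +
                 "FROM v$resource_limit " +
                 "WHERE resource_name IN ('processes', 'sessions')")) {
            while (rs.next()) {
                // If MAX_UTILIZATION is close to LIMIT_VALUE, the instance
                // really is exhausting processes and the alerts are not false.
                System.out.printf("%s: current=%d high-water=%d limit=%s%n",
                        rs.getString("resource_name"),
                        rs.getInt("current_utilization"),
                        rs.getInt("max_utilization"),
                        rs.getString("limit_value"));
            }
        }
    }
}
```

Note that IDLE_TIME and CONNECT_TIME only govern how long a session may stay idle or connected; they do not raise the processes limit itself.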
In addition, I learned from a question posted on the Oracle forums that if the paging space (on RHEL 5.x) isn't allocated properly, the system may randomly kill sessions to free up resources.
Can anyone shed some light on this as well? I'd appreciate any pointers or suggestions that might lead to a solution.
Related
We have an issue with our Oracle Applications 12.2.4 environment whereby a user gets logged into the application after entering an incorrect password 50 times. Basically, you enter any username and an invalid password, then click Login 50 times. A notification pops up saying you have reached the maximum number of login attempts, and when you click OK it does not log you out but instead logs you in.
Needless to say, this is a high-priority issue, since employees' sensitive data (salaries, performance ratings, medical information, bonuses, performance reviews, share allocations, incentives, grievances, sexual harassment cases, etc.) is exposed by this security bug. So far we have had no response to our P1 service request with Oracle Support.
Is anyone else experiencing this issue? Is there a patch that fixes it? Is there a profile option that controls or fixes this behavior?
Oracle has provided patch 26663218, which fixed the issue.
We have for some time now been experiencing problems with data being saved in our SQL database.
Sometimes records are saved with data that does not match the rest of the row, making it seem as though, at some point, data is being 'swapped' for something else, perhaps another user's data, before being passed to the database.
We use TransactionScopes throughout, with an isolation level of ReadCommitted, which makes me think the data integrity issue lies within the application rather than at the database level.
We use the session extensively, and we are starting to think that the corrupt data appears at around the same times that we deploy updates to the system during the day.
We use the aspnet_state service to persist sessions across application restarts.
Our users rely on terminal sessions, so multiple users log into the same server and launch the system via a browser.
In the past we noticed users logging in with the same domain credentials, but we are now relatively confident that each user logs in with a unique account.
99.9% of the data is correct, but we have been struggling to understand what could be causing this intermittent data integrity issue.
We are now limiting our deploys to outside working hours on pain of death, but this is not always possible.
Can anyone shed light on why/how this might be happening?
EDIT: We have now isolated this to the DAL; see SQL query returns incorrect value in multi user environment
I have recently been fighting this too, and had a similar problem to yours: around 95% of the data written back was correct. I looked at various possible causes; the main culprit was that some users on the network had downloaded Chrome and were opening the record in Chrome, which broke our session IDs, as Chrome ignored our sessions.
The other cause was users either not closing the browser or not logging off the application, allowing either the same user or a completely different user to pick up and reuse the session ID.
After introducing a browser check to reject Chrome (sketched below), educating the users to make sure they log off, and moving any updates outside busy periods, the problem was just about gone.
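The answer doesn't show the actual check, and the poster's stack is ASP.NET, but here is a minimal illustration of the idea as a Java servlet filter (the class name and the exact User-Agent test are hypothetical):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BrowserCheckFilter implements Filter {
    @Override
    public void init(FilterConfig config) {}

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        String userAgent = ((HttpServletRequest) request).getHeader("User-Agent");
        // Chrome's User-Agent contains "Chrome", but so do other
        // Chromium-based browsers, so a production check needs more care.
        if (userAgent != null && userAgent.contains("Chrome")) {
            ((HttpServletResponse) response).sendError(
                    HttpServletResponse.SC_FORBIDDEN,
                    "Please use a supported browser.");
            return;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {}
}
```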
I forgot to mention: on your IIS it's also best to turn off caching under Output Caching; set both the user-mode and kernel-mode options to prevent caching.
I have a portlet deployed on IBM WebSphere Portal Server, and at busy times, when there are a lot of users, the Portal server shows "this portlet is unavailable" when you hit its URL.
The logs show the following exception:
ServletWrappe E SRVE0068E: Could not invoke the service() method on servlet MyCystomPortlet. Exception thrown : javax.servlet.ServletException:
Session Object Internals:
id : overflowed-session
After doing some research on Google, I believe what is happening is that there are too many concurrent sessions. First of all, can someone confirm that this understanding is correct?
Secondly, I believe there are WebSphere settings that control this, such as the maximum in-memory session count. At the moment it's set to 1000. I would like to simply increase it to 1500, but I am unsure how to work out whether that is too high and would risk the server falling over. Can someone please advise me on this?
Lastly, is reducing the session timeout in my portlet another effective way to try and fix this?
Thanks
A shorter timeout will help if users are abandoning sessions without logging out; it's usually a good idea to shorten it from the default of 30 minutes.
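As a minimal sketch of one way to do that, assuming a standard servlet/portlet environment (the 10-minute value is illustrative), the timeout can be lowered per session in code:

```java
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionTimeoutExample extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Create the session only when it is actually needed; every new
        // session counts against the in-memory session limit.
        HttpSession session = req.getSession(true);

        // Evict idle sessions after 10 minutes instead of the 30-minute
        // default, so abandoned sessions are reclaimed sooner.
        session.setMaxInactiveInterval(10 * 60);
    }
}
```

The same timeout can also be set declaratively for the whole application in the session-config element of web.xml.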
You can increase the maximum number of sessions held in memory, but then you should also increase the maximum heap size. Make sure the operating system has enough memory to handle the larger heap, because if the system starts to swap, performance will be very poor.
Try to change this only for the failing application (you can override session settings per application); do not change the global web container settings, as they apply to all applications by default.
We have a PHP application to load test. The application team wants to know how many concurrent users the application can withstand without crashing.
How should we go about load testing it? Please help us.
Thanks in advance
It's a boardroom question: an application's capacity depends on the application design and the server where the application is hosted. Ultimately it depends on the purpose of the application (public or private) and customer requirements (the number of users).
You can find notes on load test strategy in the MSDN article Real-World Load Testing Tips to Avoid Bottlenecks When Your Web App Goes Live. My suggestion is that the application should be able to handle at least 10% of the maximum expected users simultaneously (until it gets popular...).
Try recording and running the test in JMeter.
Use the Summary Report, Summary Error Report, and View Results Tree listeners to gauge your server's health.
Keep increasing the thread count until your thread group assertions start failing.
My application includes complex data retrieval from RDC using XML. If errors are less than 2% of the total test, I consider it healthy. My app can easily handle 50 concurrent threads. Try yours. Good luck.
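If you want a rough standalone way to step up concurrency outside JMeter, here is a minimal Java sketch of the same idea (the target URL, step sizes, and 2% threshold are assumptions to adapt):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class StepLoadTest {
    // Hypothetical target; point this at the page under test.
    static final URI TARGET = URI.create("http://localhost/app/index.php");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Step the simulated user count up until the error rate passes 2%.
        for (int users = 10; users <= 200; users += 10) {
            AtomicInteger errors = new AtomicInteger();
            ExecutorService pool = Executors.newFixedThreadPool(users);
            CountDownLatch done = new CountDownLatch(users);
            for (int i = 0; i < users; i++) {
                pool.submit(() -> {
                    try {
                        HttpRequest req = HttpRequest.newBuilder(TARGET).build();
                        HttpResponse<Void> resp = client.send(
                                req, HttpResponse.BodyHandlers.discarding());
                        if (resp.statusCode() >= 400) errors.incrementAndGet();
                    } catch (Exception e) {
                        errors.incrementAndGet();
                    } finally {
                        done.countDown();
                    }
                });
            }
            done.await();
            pool.shutdown();
            double errorRate = 100.0 * errors.get() / users;
            System.out.printf("%d users -> %.1f%% errors%n", users, errorRate);
            if (errorRate > 2.0) break; // rough capacity reached
        }
    }
}
```

This only fires one request per simulated user per step, so it understates sustained load; JMeter remains the better tool for realistic scenarios.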
You can identify the maximum number of users experimentally.
First, make the virtual users behave like real users. Follow this link to see how.
Then increase the number of users and observe the application's behavior. To schedule the ramp-up in the number of users, you can use the Throughput Shaping Timer.
We have a JPA -> Hibernate -> Oracle setup in which we can only reach 22 transactions per second (two reads and one write per transaction). CPU, disk, and network are not the bottleneck.
Is there something I am missing? Could there be some Oracle-imposed limit that the DBAs have applied?
The network is not the problem: when I do raw reads on the table, I can do 2000 reads per second. The problem is clearly the writes.
CPU is not the problem on the app server; it is basically idle.
Disk is not the problem on the app server; the data is completely loaded into memory before processing starts.
It might be worth comparing performance with a different client technology (or even just a simple test using SQL*Plus) to see whether you can beat this figure at all; it may simply be an under-resourced or misconfigured database.
I'd also compare the results of SQL*Plus running directly on the DB server against running it on whatever machine your Java code runs on (communicating over SQL*Net). This would confirm whether the problem is below your Java tier.
To be honest, there are so many layers between your JPA code and the database itself that diagnosing the cause is going to be fun... I recall one mysterious DB performance problem that turned out to be a misconfigured network card; the DBAs were rightly insistent that the database wasn't showing any bottlenecks.
It sounds like the application is completing a transaction in a bit less than 0.05 seconds. If you extract the SELECT and UPDATE statements from the app and run them by themselves, using SQL*Plus or some other tool, how long do they take, and do their times add up to nearly 0.05 seconds? Where does the data used in the queries, and eventually in the UPDATE, come from? It's entirely possible that the slowdown is not in the database but somewhere else in the app, such as the data acquisition phase. Something like a profiler could be used to find out where the app is spending its time.
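As a minimal sketch of timing the statements in isolation over plain JDBC, assuming a hypothetical table APP_DATA with columns ID and VAL (the connection details are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StatementTiming {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521/ORCL", "appuser", "secret")) {
            conn.setAutoCommit(false);

            long start = System.nanoTime();
            for (int i = 0; i < 100; i++) {
                // Hypothetical statements standing in for the app's real ones.
                try (PreparedStatement sel = conn.prepareStatement(
                        "SELECT val FROM app_data WHERE id = ?")) {
                    sel.setInt(1, i);
                    try (ResultSet rs = sel.executeQuery()) {
                        rs.next();
                    }
                }
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE app_data SET val = val + 1 WHERE id = ?")) {
                    upd.setInt(1, i);
                    upd.executeUpdate();
                }
                // One commit per transaction, mirroring the app; each commit
                // waits for Oracle to flush the redo log to disk.
                conn.commit();
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("100 transactions in " + elapsedMs + " ms");
        }
    }
}
```

If this raw JDBC loop hits the same ~22 per second ceiling, commit latency between the app server and the database is a likely suspect; if it runs much faster, the time is being lost above the JDBC layer.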
Share and enjoy.