We have a JPA -> Hibernate -> Oracle setup, where we are only able to crank up to 22 transactions per second (two reads and one write per transaction). CPU, disk, and network are not the bottleneck.
Is there something I am missing? I wonder if there could be some sort of Oracle-imposed limit that the DBAs have applied?
Network is not the problem: when I do raw reads on the table, I can do 2,000 reads per second. The problem is clearly the writes.
CPU is not the problem on the app server; it is basically idling.
Disk is not the problem on the app server; the data is completely loaded into memory before the processing starts.
Might be worth comparing performance with a different client technology (or even just a simple test using SQL*Plus) to see if you can beat this performance - it may simply be an under-resourced or misconfigured database.
I'd also compare the results for SQL*Plus running directly on the database server with it running locally on whatever machine your Java code is running on (where it is communicating over SQL*Net). This would confirm whether the problem is below your Java tier.
To be honest, there are so many layers between your JPA code and the database itself that diagnosing the cause is going to be fun... I recall one mysterious database performance problem that turned out to be a misconfigured network card - the DBAs were rightly insistent that the database wasn't showing any bottlenecks.
It sounds like the application is doing a transaction in a bit less than 0.05 seconds. If you extract the SELECT and UPDATE statements from the app and run them by themselves, using SQL*Plus or some other tool, how long do they take, and do the statement times add up to something near 0.05? Where does the data come from that is used in the queries, and which eventually gets used in the UPDATE? It's entirely possible that the slowdown is not the database but somewhere else in the app, such as the data acquisition phase. Perhaps something like a profiler could be used to find out where the app is spending its time.
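For instance, a rough SQL*Plus harness - the table, columns, and literal values below are placeholders for whatever the app actually executes:
SET TIMING ON
SELECT col1 FROM your_table WHERE id = 42;  -- stand-in for the two reads
SELECT col2 FROM your_table WHERE id = 42;
UPDATE your_table SET col1 = 'x' WHERE id = 42;  -- stand-in for the write
COMMIT;
If the three statements plus the COMMIT account for most of the 0.05 seconds, the database is the place to look; if not, the time is being spent in the app tier.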
Share and enjoy.
I have an Aurora MySQL database with a writer and 3 readers.
My server's SQL calls are always made using the Aurora reader endpoint (except for writes, of course).
For some reason, too often during my app's peak hours, one of the readers will reach 100% CPU usage while the other readers sit around 10-20% CPU usage. This scenario is causing me serious issues: if I try to make some calls from my app (or anyone else does, for that matter), I will not get a response, because that reader will not serve me (which is weird, since the other two are available). I'm not sure if there is something I'm missing regarding the distribution of queries and connections between the different readers. Is there any other action required to create a better distribution between the readers?
As far as I know, the moment you create multiple readers, the reader endpoint should automatically act as a load balancer for exactly this purpose.
If it's of any help, the 3 readers are all db.r3.xlarge.
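Building on that: the reader endpoint balances connections via DNS round robin, so the spreading happens at connect time, not per query. Long-lived or pooled connections (or a client that caches the DNS lookup) can therefore all end up pinned to the same instance. A simple check, assuming you can run a statement on each pooled connection, is to ask every session which instance it landed on:
SELECT @@aurora_server_id;
If most connections report the same instance, recycling connections periodically (or lowering the client-side DNS cache TTL) should give you a better spread.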
I found that it's possible to automatically extend Liferay's session, so that the session doesn't expire until you close your browser. Are there any limitations or disadvantages to such an approach? Any performance degradation or load issues?
As with any abstract question about hypothetical performance impact (or premature optimization), this question is basically unanswerable - but here are some criteria:
Naturally, pinging the server in order to extend a session will incur some extra load - if that results in a performance decrease, you'll most likely have a highly congested installation in the first place. If your server is bored all day, the extra ping won't bring it down.
You may or may not have custom applications running in your installation that store data in the user's session. If those store a few bytes (like Liferay does, e.g. the currently logged-in user's information), there's probably no degradation. If you store 1 MB of information per session (in your own custom apps - Liferay doesn't do this), things might differ: just multiply your session storage size by the number of concurrent users that you expect - e.g. 1 MB per session times 2,000 concurrent sessions is roughly 2 GB of heap for session state alone. In case this use of memory indicates a problem: make your custom apps use the session less - it's bad style anyway.
Will your particular installation suffer from any degradation? Measure. There's no way around this.
From a system maintenance point of view: if you're running a cluster and want to take individual machines out of the load balancer, artificially extending sessions might indicate that a machine still has sessions open, even though they're mostly on unattended browsers - you'll get inflated numbers, and it takes longer to bring machines down when you have to wait for the session count to come close to zero.
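For reference, the auto-extension the question mentions is usually toggled with a portal property - a minimal sketch, assuming a Liferay 6.x-style portal-ext.properties (check your version's portal.properties for the exact key):
session.timeout.auto.extend=true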
My VB6 program uses ADODB to do a lot of CRUD against SQL Server 2000.
Sometimes the network connection between the remote clients and the data center somehow "drops", making it impossible to establish new connections (so users launching the program can't use it).
The issue is the following:
Anyone who is using the program at the moment of the "drop" can continue using it with no issues whatsoever: they can perform every operation, update data, read data, and everything seems to be working normally.
User then proceeds to fire up a "sum-up" report which lists everything that was done (before or after the "drop").
If we then check the database, all data regarding whatever was done after the network drop is not there. User goes back into the program and everything is as it was before the network drop.
It seems like all queries were somehow performed in memory? I'm at a loss about how to even approach the issue (I'm familiar enough with VB6 to work with the source code, but I don't know a lot about ADODB).
I haven't yet tried to replicate the behavior, due to the customer's limited availability (the development environment is housed in their offices); I'll try starting up the program from the IDE and then ripping out the network cable.
Provided I can replicate the issue, how do I fix this? Is there some setting I'm not aware of?
On a side note: the issue is sporadic (it happened a handful of times during the last year, and the software is used heavily, on a daily basis, by multiple concurrent users).
After reading up on disconnected recordsets, it seems that's what's behind the odd behavior I'm experiencing: with a client-side cursor, ADO works against an in-memory copy of the data, so reads and updates keep appearing to succeed even after the connection is gone, and the changes only reach the database when they are flushed back over a live connection.
This is not something that can simply be "turned off".
I recently migrated my Postgres database from Windows to CentOS 6.7.
On Windows the database never used much CPU, but on Linux I see it using a constant ~30% CPU (as reported by top; the machine has 4 cores).
Does anyone know if this is normal, or why it would be doing this?
The application seems to run fine, and as fast or faster than Windows.
Note that it is a big installation: 100 GB+ of data and 1,000+ databases.
I tried using pgAdmin to monitor the server status, but the server status view hangs and fails to run, with the error "the log_filename parameter must be equal".
With 1,000 databases, I expect the autovacuum workers and the stats collector to spend a lot of time checking what needs maintenance.
I suggest you do two things (a sketch of both follows below):
raise the autovacuum_naptime parameter to reduce the frequency of checks
put the stats_temp_directory on a ramdisk
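A minimal sketch of both changes - the interval and the ramdisk path are only examples, and ALTER SYSTEM requires PostgreSQL 9.4+ (on older versions, edit postgresql.conf directly):
SHOW autovacuum_naptime;        -- default is 1min
SHOW stats_temp_directory;
ALTER SYSTEM SET autovacuum_naptime = '5min';                      -- check for work less often
ALTER SYSTEM SET stats_temp_directory = '/var/run/pg_stats_tmp';   -- e.g. a tmpfs mount
SELECT pg_reload_conf();        -- both settings only need a reload, not a restart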
You probably also set a high max_connections limit to allow your clients to use that high number of databases, and this is another probable source of CPU load, due to the high number of 'slots' that have to be checked every time a backend synchronizes with the others.
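To see how far the configured ceiling is from what is actually in use, a couple of harmless catalog queries:
SHOW max_connections;
SELECT count(*) AS open_backends FROM pg_stat_activity;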
There could be multiple reasons for increased server load.
If you are looking for query-level load on the server, you can match a specific Postgres backend to a system process ID using the pg_stat_activity system view.
SELECT pid, datname, usename, query FROM pg_stat_activity;
Once you know what queries are running you can investigate further (EXPLAIN/EXPLAIN ANALYZE; check locks, etc.)
You may have lock contention issues, probably due to a very high max_connections. Consider lowering max_connections and using a connection pooler if this is the case. But note that pooling can increase turnaround time for client connections.
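A quick way to spot blocked sessions, assuming PostgreSQL 9.2+ column names:
SELECT l.pid, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;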
It might be that Windows was blocking connections and not letting them use the system, and now Linux is allowing its connections to use the CPU and perform faster. :P
Also worth reading:
How to monitor PostgreSQL
Monitoring CPU and memory usage from Postgres
I have created a site that uses MongoDB as the database engine, and at the moment it is still under construction, so it is not getting much traffic. This means that there are periods of no requests and, therefore, no queries to the database.
When I do eventually hit the site pages that use the database, MongoDB seems to take 4 or 5 seconds to come back, but from that request on it is very fast.
I can't find any information on there being a timeout or anything like that. Is it just that the database's memory is being paged out and it takes a few seconds to page it back in? It is running on a Windows Server 2008 VM, and I am running it as a Windows service.
Any help would be appreciated.
MongoDB lets the OS kernel decide what is kept in memory (the current "working set"). Even if nothing is happening, the system can still page data out of RAM to the page/swap file, even if RAM capacity is not being taxed.
One way around this would be to monitor for idleness and send queries in the background, or even have a background process cat the data files on disk. This is especially helpful for pre-warming databases after startup, and likewise if your usage follows cyclical patterns.
Like most kinds of databases, recent query results can be cached, and some databases store execution plans, but it doesn't seem like MongoDB keeps a query cache.
Also, to improve performance, make sure your indexes are well designed, so you don't mistakenly trigger full collection scans and queries can leverage some form of index. Use the explain command to see your query execution plan (http://docs.mongodb.org/manual/reference/method/cursor.explain/).
From the MongoDB FAQ (http://docs.mongodb.org/manual/faq/fundamentals/):
Does MongoDB handle caching?
Yes. MongoDB keeps all of the most recently used data in RAM. If you have created indexes for your queries and your working data set fits in RAM, MongoDB serves all queries from memory.
MongoDB does not implement a query cache: MongoDB serves all queries directly from the indexes and/or data files.