JMeter JDBC Pool configuration - Oracle

Is there a practical use for JMeter's JDBC Pool configuration?
I tried using Max Number of Connections = 10 and it caused issues with Oracle's maximum connection limit being reached.
It seems from the documentation below that its usage is discouraged, so I still wonder whether there are scenarios in which it could be useful.
Max Number of Connections Maximum number of connections allowed in the
pool. In most cases, set this to zero (0). This means that each thread
will get its own pool with a single connection in it, i.e. the
connections are not shared between threads. If you really want to use
shared pooling (why?), then set the max count to the same as the
number of threads to ensure threads don't wait on each other.
Note: in the code I can see that it uses org.apache.commons.dbcp2.BasicDataSource for the connection pool.

The practical use is that you should start with a JDBC Connection Configuration that replicates your production JDBC pool configuration, in order to have realistic load pattern(s).
If you detect a database performance problem, you can play with the pool settings (number of connections, transaction isolation, etc.) to determine the best-performing configuration. Once you have evidence that this or that pool setting provides better performance, you can report it to the developers or DevOps and amend your application's database connectivity settings according to your findings. Check out Using JDBC Sampler in JMeter for an explanation of JMeter's connection pool settings.
From the Oracle perspective, I believe Connection Pooling and Caching and High-Performance Oracle JDBC Programming will help a lot.
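As a sketch, a JDBC Connection Configuration element mirroring a hypothetical production DBCP pool might look like this (the host, service name, and all values are illustrative, not recommendations):

```
JDBC Connection Configuration
  Variable Name for created pool:  oracleDB
  Max Number of Connections:       10         (match the production pool size)
  Max Wait (ms):                   10000
  Auto Commit:                     True
  Transaction Isolation:           DEFAULT
  Database URL:                    jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1
  JDBC Driver class:               oracle.jdbc.OracleDriver
```

JDBC Request samplers then reference the pool through the variable name (oracleDB here), so every thread draws connections from the same shared pool.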

Related

Spring Boot 2 Hikari connection not available when concurrent API calls exceed 10000 in AWS

I am using Spring Boot 2 for APIs, hosted on AWS ECS Fargate, and the database is PostgreSQL 10.6 on RDS with 16 GB RAM and 4 CPUs.
My Hikari configuration is as follows:
spring.datasource.testWhileIdle = true
spring.datasource.validationQuery = SELECT 1
spring.datasource.hikari.maximum-pool-size=100
spring.datasource.hikari.minimum-idle=80
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.idle-timeout=500000
spring.datasource.hikari.max-lifetime=1800000
Now generally this works perfectly, but when load hits the server, say around 5000 concurrent API requests (which is not that huge either), my application crashes.
I have enabled debug logging for Hikari, so I am getting the messages below:
hikaripool-1 - pool stats (total=100 active=100 idle=0 waiting=100)
Exception message says connection is not available:
HikariPool-1 - Connection is not available, request timed out after 30000ms.
org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC
At the same time, when I look at RDS PostgreSQL Performance Insights, the maximum query execution time is < 0.03 seconds, and CPU utilization is also under 50%. So there is no issue with the database server.
I am using EntityManager and JPA only, not running any queries by opening connections manually, so unclosed connections or a connection leak should not be the issue. But after enabling leak detection:
spring.datasource.hikari.leakDetectionThreshold=2000
I get warnings in the logs saying an apparent connection leak was detected.
When I check the method this error points to, it's just a JPA findById() method.
So what could be the root cause of "connection is not available" and request timeouts for just 10k API requests with a pool size of 100? Why is it not releasing any connections once active connections reach 100 and 100 are waiting? My ECS application server restarts automatically with this error and is only accessible again 5-7 minutes afterwards.
HikariCP recommends removing minimumIdle when there are spike demands like the ones you are testing:
for maximum performance and responsiveness to spike demands, we recommend not setting this value and instead allowing HikariCP to act as a fixed size connection
And if you remove it, idle-timeout becomes irrelevant as well.
See also configure HikariCP for PostgreSQL
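Following that advice, a fixed-size pool configuration might look like the sketch below (the pool size of 10 is illustrative, not a recommendation for your hardware):

```
spring.datasource.hikari.maximum-pool-size=10
# minimum-idle deliberately not set: HikariCP then behaves as a fixed-size pool
# idle-timeout is irrelevant for a fixed-size pool, so it is omitted as well
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.max-lifetime=1800000
```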
It is likely that your application is throttling itself into timeouts because of a wrong connection pool size in your configuration. A pool size of 100 is 10 times too large, and this will affect performance and stability.
HikariCP's pool size formula can be found in their wiki, and it looks like this:
((core_count * 2) + effective_spindle_count). Core count should not include
HT threads, even if hyperthreading is enabled.
If you have 4 cores then your connection pool size can be left at the default size of 10.
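Plugging illustrative numbers into the formula makes the point concrete (the core and spindle counts below are made-up examples, not measurements of any particular server):

```java
// pool size = (core_count * 2) + effective_spindle_count
// core_count excludes hyper-threading threads; effective_spindle_count is
// roughly the number of disks that can be busy at once (1 for a single SSD).
public class PoolSizeFormula {
    static int poolSize(int physicalCores, int effectiveSpindles) {
        return physicalCores * 2 + effectiveSpindles;
    }

    public static void main(String[] args) {
        // 4 physical cores and a single disk, as in the answer above
        System.out.println(poolSize(4, 1)); // prints 9, close to Hikari's default of 10
    }
}
```

So with 4 cores the formula lands right around the default pool size of 10, which is why the answer suggests leaving it alone.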
In case this might help: I was facing this issue recently and it gave me a tough time.
The server accepts too many requests that the Hikari pool is not able to handle, so Hikari tries to obtain extra connections to serve this spike in demand.
E.g. for Tomcat with its default of 200 threads, if your maxPoolSize = 10, on spike demand your server would try to serve 200 threads at the same time. If the connections in the pool are busy, Hikari will try to obtain 190 more connections, and this is what you see in the waiting count.
Here is how I am managing it.
I made sure that the Tomcat threads do not exceed the Hikari maxPoolSize. That way, there is no need to ask for more connections during a spike.
In Spring Boot, this is the config I used:
server.tomcat.threads.max = 50
spring.datasource.hikari.maximumPoolSize = 50
Note: 50 is a value you should vary based on your server capacity.

Database bottleneck identification using jmeter

I have created a JDBC Test Plan and I am using a MySQL database with 'Max Number of Connections: 2' in the connection pool configuration, but when I change it to 10 the average response time increases.
My question is: does 'Max Number of Connections: 2' in the connection pool configuration refer to the number of users?
I do not have any idea about database performance testing. What should a database performance test plan look like? I am assuming that I should increase the number of users in the Thread Group and report the response time. Can someone guide me with a sample database test plan, as I do not know which components should be modified when testing for performance?
Number of connections != number of users.
This "Max Number of Connections" applies to the JDBC connection pool. JDBC connections are very "expensive" to create, hence the normal practice is to create a pool so the connections can be established once and then used/reused by the threads (users).
Ideally you should configure the JDBC parameters to match your application settings. If you figure out that the database responds slowly due to a lack of available connections, you can test a new configuration and suggest that the DBAs change the settings.
The number of users should be set at the Thread Group level.
See The Real Secret to Building a Database Test Plan With JMeter article to learn more about concepts of database load testing using JMeter

Oracle: Difference between non-pooled connections and DRCP

I am currently reading the Oracle cx_Oracle tutorial.
There I came across non-pooled connections and DRCP. Basically, I am not a DBA, so I searched with Google but couldn't find anything.
Could somebody help me understand what they are and how they differ from each other?
Thank you.
Web tier and mid-tier applications typically have many threads of execution, which take turns using RDBMS resources. Currently, multi-threaded applications can share connections to the database efficiently, allowing great mid-tier scalability. Starting with Oracle 11g, application developers and administrators and DBAs can use Database Resident Connection Pooling to achieve such scalability by sharing connections among multi-process as well as multi-threaded applications that can span across mid-tier systems.
DRCP provides a connection pool in the database server for typical Web application usage scenarios where the application acquires a database connection, works on it for a relatively short duration, and then releases it. DRCP pools "dedicated" servers. A pooled server is the equivalent of a server foreground process and a database session combined.
DRCP complements middle-tier connection pools that share connections between threads in a middle-tier process. In addition, DRCP enables sharing of database connections across middle-tier processes on the same middle-tier host and even across middle-tier hosts. This results in significant reduction in key database resources needed to support a large number of client connections, thereby reducing the database tier memory footprint and boosting the scalability of both middle-tier and database tiers. Having a pool of readily available servers also has the additional benefit of reducing the cost of creating and tearing down client connections.
DRCP is especially relevant for architectures with multi-process single threaded application servers (such as PHP/Apache) that cannot perform middle-tier connection pooling. The database can still scale to tens of thousands of simultaneous connections with DRCP.
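In practice, a client opts into DRCP through its connect string, requesting a pooled server instead of a dedicated one. Hypothetical examples (the host and service names are made up):

```
# EZConnect syntax: append :pooled to the service name
dbhost.example.com/orclpdb1:pooled

# Or, in a tnsnames.ora connect descriptor, request a pooled server:
mydb_pooled =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=dbhost.example.com)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=orclpdb1)(SERVER=POOLED)))
```

Without the :pooled suffix or (SERVER=POOLED), the same application gets an ordinary non-pooled (dedicated) server connection.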
DRCP stands for Database Resident Connection Pooling, as opposed to "non-pooled" connections.
In short, with DRCP, Oracle will cache all the connections opened, making a pool out of them, and will use the connections in the pool for future requests.
The aim of this is to avoid opening new connections when some of the existing connections are available/free, and thus to save database resources and gain time (the time needed to open a new connection).
If all connections in the pool are being used, then a new connection is automatically created (by Oracle) and added to the pool.
With non-pooled connections, a connection is created and (in theory) closed by the application querying the database.
For instance, on a static PHP page querying the database, you always have the same scheme:
Open DB connection
Queries on the DB
Close the DB connection
And you know what your scheme will be.
Now suppose you have a dynamic PHP page (with AJAX or something) that will query the database only if the user performs certain actions; the scheme becomes unpredictable. There DRCP can become beneficial for your database, especially if you have a lot of users and possible requests.
This quote from the official doc summarizes the concept fairly well, along with when it should be used:
Database Resident Connection Pool (DRCP) is a connection pool in the
server that is shared across many clients. You should use DRCP in
connection pools where the number of active connections is fairly less
than the number of open connections. As the number of instances of
connection pools that can share the connections from DRCP pool
increases, the benefits derived from using DRCP increases. DRCP
increases Database server scalability and resolves the resource
wastage issue that is associated with middle-tier connection pooling.
DRCP increases the level of "centralization" of the pools:
Classic connection pools are managed within the client middleware. This means that if, for instance, you have several independent web servers, each one will likely have its own server-managed connection pool. There is a pool per server, and the server is responsible for managing it. For instance, you may have 3 separate pools with a limit of 50 connections per pool. Depending on usage patterns this may be wasteful, because you may use the total of 150 connections very seldom while hitting the individual limit of 50 connections very often.
DRCP is a single pool managed by the DB server, not by the client servers. This can lead to a more efficient distribution of connections. In the example above, the 3 servers may share the same database-managed pool of fewer than 150 connections, say 100 connections. And if two servers are idle, the third server can take up all 100 connections if needed.
See Oracle Database 11g: The Top New Features for DBAs and Developers for more details and About Database Resident Connection Pooling:
This results in significant reduction in key database resources needed to support a large number of client connections, thereby reducing the database tier memory footprint and boosting the scalability of both middle-tier and database tiers
In addition, DRCP compensates for the complete absence of middleware connection pools in certain technologies (quoted again from About Database Resident Connection Pooling):
DRCP is especially relevant for architectures with multi-process single threaded application servers (such as PHP/Apache) that cannot perform middle-tier connection pooling. The database can still scale to tens of thousands of simultaneous connections with DRCP.
As a further reference, see for instance Connection pooling in PHP on Stack Overflow.

WAS PMI metrics - Connection Pool Allocate vs Create Count

I am not able to understand the values in WAS PMI for ConnectionPoolModule.
In one application I am monitoring I am getting perf metrics for "Allocate Count", and in another I am getting perf metrics for "Create Count".
In the case of AllocateCount, I can see that this value keeps increasing over time, and I am not sure what the effects of this are.
What are the differences between these count types?
What should I be looking for to review connection pools?
Why are these metrics not showing up at the same time?
Should I be bothered by the increase in AllocateCount, or should I correlate it with other metrics to review the application state?
Thanks!
With these metrics, an allocate is an application request for a connection, e.g. a DataSource.getConnection(). The WebSphere pool manager either satisfies the request with an already-pooled connection, or creates a new one, and in the latter case the create count gets incremented. So if your allocate and create counts were the same, you'd be doing no pooling, probably a bad thing!
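The relationship between the two counters can be sketched with a toy pool. This is purely illustrative code, not WebSphere's implementation: it just shows why AllocateCount grows on every request while CreateCount grows only on pool misses.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy pool: allocateCount counts every getConnection() call; createCount
// counts only the calls that could not be satisfied from the idle list.
public class ToyPool {
    private final Deque<Object> idle = new ArrayDeque<>();
    int allocateCount = 0;
    int createCount = 0;

    Object getConnection() {
        allocateCount++;
        if (idle.isEmpty()) {
            createCount++;
            return new Object(); // stand-in for a new physical connection
        }
        return idle.pop();       // reuse: allocate increments, create does not
    }

    void release(Object conn) {
        idle.push(conn);
    }

    public static void main(String[] args) {
        ToyPool pool = new ToyPool();
        Object c = pool.getConnection();  // allocate=1, create=1
        pool.release(c);
        pool.getConnection();             // allocate=2, create=1 (reused)
        System.out.println(pool.allocateCount + " / " + pool.createCount); // prints 2 / 1
    }
}
```

A healthy pool shows AllocateCount growing much faster than CreateCount; if the two track each other, every request is paying the full connection-creation cost.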
But that's not necessarily the best thing to monitor. Metrics like the average wait time can be a better starting point.
Let me refer you to some other links to help you monitor WebSphere JDBC connection pool data:
JDBC chapter in WebSphere Application Server performance cookbook
WebSphere Application Server Performance Tuning Toolkit with video
Older but still relevant: some slides specifically detailing connection pool monitoring techniques.

How to release database connections in Pentaho BI server?

I am using a Pentaho BI server installation in my web application as a third-party installation. I am using its Saiku analytics and reporting files by embedding their specific links in an iframe of my application. The problem is that I don't understand how it creates database connections, in terms of numbers, because it often throws an error saying 'No connection is available in pool'. I know there are properties like max available connections, max idle connections, wait time, and SQL validation. But how do I release connections? And if Pentaho handles this in its own way, then how? Because increasing the number of max available connections will put load on the database server when many users are using my BI server.
One solution I found is to just restart my BI server, but that's not a valid solution for a production environment. The other solution I can think of is a scheduler, but I have no clue about it and am not finding proper info on the net.
The defaults for max connections are incredibly low. This is standard Tomcat connection pooling stuff; I would definitely try increasing the defaults and see if that helps. You can monitor concurrent connections on the DB side - just because you have 100 connections to the DB, it doesn't necessarily mean they'll all be used at once.
Also: are you using MySQL? You should try the C3P0 pooling driver; it handles timeouts and similar things better than the standard driver, so you shouldn't ever get dead connections sitting in the pool.
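In a Tomcat-based Pentaho install, those pool knobs typically live in a context.xml Resource definition. A sketch, with every attribute value illustrative (tune them to your own load):

```xml
<!-- Hypothetical Tomcat DBCP datasource for the Pentaho web application -->
<Resource name="jdbc/Hibernate" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/hibernate"
          username="hibuser" password="password"
          maxActive="50" maxIdle="5" maxWait="10000"
          validationQuery="SELECT 1"/>
```

The validationQuery lets the pool test connections before handing them out, which helps evict dead connections without a server restart.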
