I am using a Pentaho BI Server installation in my web application as a third-party component. I use its Saiku analytics and reporting files by embedding their specific links in an iframe of my application. The problem is that I don't understand how it creates database connections, in terms of how many it opens, because it often throws the error 'No connection is available in pool'. I know there are properties like max active connections, max idle connections, max wait and SQL validation query. But how do I release connections? And if Pentaho handles this in its own way, then how? Simply increasing the maximum number of available connections will put load on the database server when many users are using my BI server.
One solution I found is simply to restart my BI server, but that's not a valid solution for a production environment. Another solution I am considering is a scheduler, but I have no clue about it and can't find proper information on the net.
The defaults for max connections are incredibly low. This is standard Tomcat connection pooling, so I would definitely try increasing the defaults and see if that helps. You can monitor concurrent connections on the database side: just because you have 100 connections to the database doesn't necessarily mean they will all be used at once.
Also: are you using MySQL? You should try the c3p0 pooling library; it handles timeouts better than the standard driver, so you shouldn't ever get dead connections sitting in the pool.
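To make the tuning concrete, here is a minimal sketch of a Tomcat JDBC pool configured programmatically, assuming the standard tomcat-jdbc pool is what sits underneath; the URL, credentials and limits are illustrative. The removeAbandoned settings are what actually "release" connections that application code forgot to close:

```java
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolConfigSketch {
    public static DataSource buildPool() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/pentaho"); // illustrative URL
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("bi_user");                        // hypothetical credentials
        p.setPassword("secret");

        p.setMaxActive(50);   // hard cap on open connections
        p.setMaxIdle(10);     // idle connections kept ready in the pool
        p.setMinIdle(5);
        p.setMaxWait(10000);  // ms to wait for a free connection before erroring

        p.setValidationQuery("SELECT 1"); // check a connection before handing it out
        p.setTestOnBorrow(true);

        // Reclaim ("release") connections that were borrowed but never closed:
        p.setRemoveAbandoned(true);
        p.setRemoveAbandonedTimeout(60);  // seconds a connection may be held
        p.setLogAbandoned(true);          // log the stack trace of the leaking caller

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}
```

If your pool is defined declaratively instead (e.g., as a Resource in Tomcat's context.xml), the same property names apply as XML attributes.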
Related
I need some software architecture insight on this. Which of the following is more efficient in terms of resources (CPU, memory, database)?
Having a single database connection in one flow? (Close the connection only after everything is done, including business logic.)
Having multiple database connections in one flow? (Open, then close, a database connection immediately after each query is executed.)
By business logic, I mean the stage where the data returned from the query is sanitized or manipulated according to business rules.
Attached is a diagram for visual representation.
UPDATE:
Programming language: PHP (Laravel for web app, Lumen for API)
Database: MySQL
Host: AWS
Opening a new connection between your runtime and your database requires the OS to create a new socket (if the runtime and the database are on the same system, this is probably a Unix domain socket; otherwise it is a TCP socket).
That socket creation has overhead of its own, so I don't suggest opening and closing connections after each database usage.
But there are specific conditions under which you would want to do that. For example, if your database has a limited number of concurrent connections and you have thousands of long-running processes holding them, then the second approach may make sense.
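The question is about PHP, but the trade-off is language-agnostic. Here is a minimal JDBC sketch of the first approach, opening one connection per flow and reusing it; the DSN, table and query are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SingleConnectionFlow {
    public static void main(String[] args) throws SQLException {
        // Approach 1: open one connection, run the whole flow on it,
        // and close it only after all queries and business logic are done.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/app", "app_user", "secret")) {

            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, name FROM customers WHERE active = ?")) {
                ps.setBoolean(1, true);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Business logic on the row: sanitize / transform.
                        String name = rs.getString("name").trim();
                        System.out.println(rs.getLong("id") + ": " + name);
                    }
                }
            }
            // Further queries in the same flow reuse the same connection,
            // paying the socket/handshake cost only once.
        } // the connection is closed exactly once, here
    }
}
```

In the second approach, the try-with-resources around the Connection would instead wrap each individual query, re-paying the connection setup cost every time.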
Reading this article: http://go-database-sql.org/accessing.html
It says that the sql.DB object is designed to be long-lived and that we should not Open() and Close() databases frequently. But what should I do if I have 10 different MySQL servers and have sharded them so that there are 511 databases on each server, for example the way Pinterest shards their data with MySQL?
https://medium.com/@Pinterest_Engineering/sharding-pinterest-how-we-scaled-our-mysql-fleet-3f341e96ca6f
Wouldn't I then need to constantly access new nodes with new databases all the time? As I understand it, I would have to open and close the database connection all the time, depending on which node and database I have to access.
It also says that:
If you don’t treat the sql.DB as a long-lived object, you could experience problems such as poor reuse and sharing of connections, running out of available network resources, or sporadic failures due to a lot of TCP connections remaining in TIME_WAIT status. Such problems are signs that you’re not using database/sql as it was designed.
Will this be a problem? How should I solve this issue then?
I am also interested in this question. I guess a solution could look like this:
Minimize the number of idle connections in the pool with db.SetMaxIdleConns(N).
Keep a map[serverID]*sql.DB. When there is no entry for a server yet, add it to the map (a sketch of this pattern follows the list).
Make data more local, so that backends usually go to “their” databases. However, Pinterest does not seem to do this.
Increase the limits on sockets and open files on the backend machines so they can keep more connections open.
Set a reasonable idle timeout so that very old, unused connections get closed.
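The question concerns Go's database/sql, but the per-server pool map is the same idea in any runtime. A minimal sketch, written in Java for consistency with the other examples here; the host naming scheme, credentials and limits are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class ShardPools {
    // One long-lived pool per shard *server* (not per database), created
    // lazily on first use and then reused, mirroring map[serverID]*sql.DB.
    private final Map<Integer, DataSource> pools = new ConcurrentHashMap<>();

    public DataSource poolFor(int serverId) {
        return pools.computeIfAbsent(serverId, id -> {
            PoolProperties p = new PoolProperties();
            // Hypothetical naming scheme: db0.example.com, db1.example.com, ...
            p.setUrl("jdbc:mysql://db" + id + ".example.com:3306/");
            p.setDriverClassName("com.mysql.jdbc.Driver");
            p.setUsername("app");     // hypothetical credentials
            p.setPassword("secret");
            p.setMaxActive(20);       // cap connections per server
            p.setMaxIdle(2);          // keep the idle footprint small
            p.setMinEvictableIdleTimeMillis(60_000); // close long-unused connections
            org.apache.tomcat.jdbc.pool.DataSource ds =
                    new org.apache.tomcat.jdbc.pool.DataSource();
            ds.setPoolProperties(p);
            return ds;
        });
    }
}
```

With the Pinterest scheme, the individual database on the server is then selected per query (e.g., by qualifying table names with the shard's schema), so you only need one pool per server, not one per database.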
I'm doing a load test on a web application, and with a minimum of 14-15 users I am getting this connection reset issue, even though I have ensured the following on my end:
Request retries has been set to 1 in the user.properties file
Stale check is set to true
Test data and LAN connectivity are good
The number of users is small, so JMeter won't need more RAM
Hence, could this be concluded to be an issue in the application design and not an issue with JMeter?
To avoid a long trail of comments, I'll try to summarize and answer.
This issue looks like it comes from the application deployment setup.
JMeter ---------------> ( Web server <-> App server <-> DB )
Find out which area the bottleneck is in using profilers.
The issue could be in any one of the layers below.
Web Server:
If the web server is the bottleneck, try tuning it to handle more load: a bigger thread pool, longer timeouts, larger buffers and queues (see the sketch after this list).
Application Server:
If the app server is the bottleneck, tune your application server. Again, check its configuration and any settings specific to handling more load; if required, the code should be improved as well.
Database Server:
If the DB is the bottleneck, check queries, indexes and statistics, and optimize them for your needs. Configuration settings also help sometimes.
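If the web tier happens to be Tomcat, the thread pool and queue settings mentioned above look like this; the snippet uses an embedded server purely for illustration, and the values are made up:

```java
import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class TunedServer {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        Connector c = tomcat.getConnector();         // default HTTP/1.1 connector
        c.setPort(8080);
        c.setProperty("maxThreads", "400");          // worker thread pool size
        c.setProperty("acceptCount", "200");         // backlog before refusing connections
        c.setProperty("connectionTimeout", "20000"); // ms before dropping an idle socket
        tomcat.start();
        tomcat.getServer().await();
    }
}
```

In a standalone Tomcat, the same attributes go on the Connector element in server.xml.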
For all layers, check server resource utilization. If it is not high, there is room for performance improvement; otherwise, vertical or horizontal scaling of the servers is required.
You are saying the problem is that some IDs were not generated in the DB, so you can start with the DB layer when looking for bottlenecks.
Hope this helps :)
I have a servlet which connects to an Oracle DB using JDBC (ojdbc6.jar) and BoneCP. I now need to port my BoneCP-using code to something which will work in WebLogic out of the box, without having BoneCP in the package.
What would be the recommended approach? Which WebLogic feature can I use, specifically, to get an equivalent of BoneCP's:
Performance
Ability to log failed SQL statements
Auto-resume from lost DB connection
Thanks in advance.
The best approach would be to create a standard Oracle JDBC connection pool pointing to your database. Tune it according to your needs (number of connections, etc.). Next, refactor out of your code any explicit reference to your former connection pool implementation. If you have been working with the java.sql.* interfaces in your code, there should be few to no references at all.
Once all that is refactored, you will have only a bit of code (or a config file) telling your app to retrieve something implementing javax.sql.DataSource from a given JNDI name and to get Connections out of it. The rest should be the same: just do whatever you need and close your ResultSets, Statements and Connections as you must have been doing until now.
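For reference, a minimal sketch of that lookup; the JNDI name, table and column names are placeholders, and depending on your deployment the name may live under java:comp/env:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class JndiDataSourceSketch {
    public void query() throws Exception {
        // Look up the container-managed pool by the JNDI name you configured.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/myOracleDS");

        // Borrow a connection from the pool; closing it returns it to the pool.
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, name FROM employees WHERE dept = ?")) {
            ps.setString(1, "SALES");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}
```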
About your questions: you will find extensive information on how to monitor your connection pool and on its failure recovery policies here (it depends on your app server version; I paste the one I have used):
http://docs.oracle.com/cd/E15051_01/wls/docs103/jdbc_admin/jdbc_datasources.html
About performance, I have no accurate data or benchmarks comparing the two implementations. For your peace of mind, I can say that I have never found a database performance problem caused by the connection pool implementation. That does not mean it cannot happen, but it is the last place I would look ;)
What database connection pool could be used to load-balance connections from a Tomcat web container to one of several Oracle database servers without using RAC clustering?
I'm assuming these are read-only databases, or that you're not concerned about connections getting different data. If you want the data to be the same, you can use Streams replication, which doesn't require RAC.
The connection load balancing and failover happen in the listener. There's a lot of flexibility in how this works, and this should get you started:
http://download.oracle.com/docs/cd/E11882_01/network.112/e10836/advcfg.htm#sthref858
The first part shows simple client-side load balancing, which essentially picks a connection at random. Farther down, it shows how to load balance based on actual server load.
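As an illustration, the client-side variant can be expressed directly in the JDBC URL with a multi-address connect descriptor, so any Tomcat pool built on top of the driver inherits the balancing; the hosts and service name below are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ListenerLoadBalanceSketch {
    public static void main(String[] args) throws Exception {
        // The Oracle thin driver accepts a full connect descriptor.
        // LOAD_BALANCE=on makes the client pick one ADDRESS at random;
        // FAILOVER=on makes it try the next ADDRESS if one is unreachable.
        String url = "jdbc:oracle:thin:@(DESCRIPTION="
                + "(ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)"
                + "(ADDRESS=(PROTOCOL=TCP)(HOST=db1.example.com)(PORT=1521))"
                + "(ADDRESS=(PROTOCOL=TCP)(HOST=db2.example.com)(PORT=1521)))"
                + "(CONNECT_DATA=(SERVICE_NAME=orcl)))";

        try (Connection conn = DriverManager.getConnection(url, "app", "secret")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```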
Look into DRCP (Database Resident Connection Pooling) if you are using 11g.