Why does Querydsl always require connections to be transactional? - spring

Recently I tried a new tool for DB access called Querydsl in my Spring Boot app; here is how I configure the context in a @Configuration class:
@Bean
public com.querydsl.sql.Configuration querydslConfiguration() {
    SQLTemplates templates = OracleTemplates.builder().build();
    com.querydsl.sql.Configuration configuration = new com.querydsl.sql.Configuration(templates);
    configuration.setExceptionTranslator(new SpringExceptionTranslator());
    return configuration;
}

@Bean
public SQLQueryFactory queryFactory(DataSource dataSource) {
    Provider<Connection> provider = new SpringConnectionProvider(dataSource);
    return new SQLQueryFactory(querydslConfiguration(), provider);
}
My query is quite a simple select:
fun detailedEntityByIds(ids: Set<String>): List<DetailedEntity> {
    val qDetails = QTContainerDetails.tContainerDetails
    return sqlQueryFactory.select(qDetails).from(qDetails)
        .where(qDetails.id.`in`(ids))
        .fetch().map { mapper.qDslEntToModel(it) }
}
Then I was faced with the following exception:
java.lang.IllegalStateException: Connection is not transactional
I quickly found this question: [QueryDSL/Spring] java.lang.IllegalStateException: Connection is not transactional, with advice to use @Transactional to solve the problem.
Why does Querydsl require connections to be transactional? I used to put @Transactional on the service-layer methods where I really need it. Now Querydsl 'forces' me to put it on the whole DAO class, because it looks like it is required for every Querydsl query.

From the Javadoc
/**
 * {@code SpringConnectionProvider} is a Provider implementation which provides a transactionally bound connection
 *
 * <p>Usage example</p>
 * <pre>
 * {@code
 * Provider<Connection> provider = new SpringConnectionProvider(dataSource());
 * SQLQueryFactory queryFactory = SQLQueryFactory(configuration, provider);
 * }
 * </pre>
 */
The reason is resource management. You don't have access to the underlying JDBC implementation, so the transaction is what closes the ResultSets, Statements, Connections, and so on. Without a transaction, every connection would be left open, the connection pool would saturate, the database would run out of resources, etc.
If you want to manage your own resources, you could write your own Provider<Connection> and pass in the DataSource, e.g.:
private static Provider<Connection> getConnection(DataSource dataSource) {
    return () -> org.springframework.jdbc.datasource.DataSourceUtils.getConnection(dataSource);
}
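For completeness, a minimal sketch (not from the original answer) of wiring SQLQueryFactory with such a custom provider, reusing the querydslConfiguration() bean from the question; outside a Spring transaction, releasing the connections then becomes your responsibility:
@Bean
public SQLQueryFactory queryFactory(DataSource dataSource) {
    // getConnection(dataSource) is the helper shown above. Outside a transaction,
    // DataSourceUtils.getConnection() hands out a fresh connection each time,
    // so the caller must release it (e.g. via DataSourceUtils.releaseConnection)
    // once the query results have been fetched.
    return new SQLQueryFactory(querydslConfiguration(), getConnection(dataSource));
}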

Related

Getting "too many connections for role" when using DataSource

I have a REST service, and when it gets called it has to do some inserts and updates against almost 25 databases. When I tried the code below, it was working on my localhost, but when I deployed it to my staging server I was getting FATAL: too many connections for role "user123"
List<String> databaseUrls = null;
databaseUrls.forEach(databaseUrl -> {
    DataSource dataSource = DataSourceBuilder.create()
        .driverClassName("org.postgresql.Driver")
        .url(databaseUrl)
        .username("user123")
        .password("some-password")
        .build();
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.update("Some...Update...Query");
});
As per my understanding, a DataSource need not be closed because it is never opened.
Note:
A DataSource implementation need not be closed, because it is never
“opened”. A DataSource is not a resource, is not connected to the
database, so it is not holding networking connections nor resources on
the database server. A DataSource is simply information needed when
making a connection to the database, with the database server's
network name or address, the user name, user password, and various
options you want specified when a connection is eventually made.
Can someone tell me why I am getting this issue?
The problem is in DataSourceBuilder: it actually creates one of these connection pools, which spawns a number of connections and keeps them running:
private static final String[] DATA_SOURCE_TYPE_NAMES = new String[] {
    "org.apache.tomcat.jdbc.pool.DataSource",
    "com.zaxxer.hikari.HikariDataSource",
    "org.apache.commons.dbcp.BasicDataSource" };
Javadoc says:
/**
 * Convenience class for building a {@link DataSource} with common implementations and
 * properties. If Tomcat, HikariCP or Commons DBCP are on the classpath one of them will
 * be selected (in that order with Tomcat first). In the interest of a uniform interface,
 * and so that there can be a fallback to an embedded database if one can be detected on
 * the classpath, only a small set of common configuration properties are supported. To
 * inject additional properties into the result you can downcast it, or use
 * <code>@ConfigurationProperties</code>.
 */
Try using e.g. SingleConnectionDataSource instead; then your problem will be gone:
List<String> databaseUrls = null;
Class.forName("org.postgresql.Driver");
databaseUrls.forEach(databaseUrl -> {
    SingleConnectionDataSource dataSource = null;
    try {
        dataSource = new SingleConnectionDataSource(
            databaseUrl, "user123", "some-password", true /*suppressClose*/);
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.update("Some...Update...Query");
    } catch (Exception e) {
        log.error("Failed to run queries for {}", databaseUrl, e);
    } finally {
        // release resources
        if (dataSource != null) {
            dataSource.destroy();
        }
    }
});
First of all, it is a very bad architectural decision to have a single application managing that many databases. Anyway, instead of creating a DataSource inside the loop, you should make use of the Factory design pattern to create a DataSource for each DB. You should also add some connection pooling mechanism to your system; HikariCP and Tomcat JDBC Pool are the most widely used. Analyse the logs of the failing thread for any further issues.
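As a rough illustration of the factory-plus-pooling suggestion (class name, pool size and credentials below are placeholders, not from the question), one could cache a small HikariCP pool per database URL instead of building a new DataSource on every request:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PooledDataSourceFactory {

    private final Map<String, DataSource> cache = new ConcurrentHashMap<>();

    public DataSource forUrl(String databaseUrl) {
        // One pool per database, created lazily and reused across requests.
        return cache.computeIfAbsent(databaseUrl, url -> {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(url);
            config.setUsername("user123");
            config.setPassword("some-password");
            config.setMaximumPoolSize(2); // keep the per-database footprint small
            return new HikariDataSource(config);
        });
    }
}
With 25 databases and a cap of 2 connections each, the worst case is 50 connections in total, which is easy to reason about against the server's per-role connection limit.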

Spring Boot with CXF Client Race Condition/Connection Timeout

I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);
    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);
    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);
    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);
    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);

    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If this is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution kind of like the one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and use that to access proxies from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
    this::buildConsumerSupportServiceClient,
    getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
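For reference, the borrow/return step looks roughly like this (searchConsumer() and request are hypothetical placeholders; borrowObject()/returnObject() come from the underlying GenericObjectPool). Returning the proxy in a finally block keeps the pool from leaking:
// borrowObject() declares checked exceptions; handle or propagate as appropriate.
ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
try {
    // call the service through the borrowed proxy
    proxy.searchConsumer(request);
} finally {
    // always hand the proxy back, even if the call throws
    consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
}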
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.

Why is Ebean ORM throwing java.sql.SQLException: IJ031021: You cannot rollback during a managed transaction

I have a JAX-RS REST service that uses Ebean to query the database. On any query I make, this exception is thrown.
For example:
User currentUser = new QUser().where().id.eq(currentUserID).findUnique();
Logs
ERROR [io.ebeaninternal.server.transaction.JdbcTransaction] (default task-10) Error when ending a query only transaction via ROLLBACK: java.sql.SQLException: IJ031021: You cannot rollback during a managed transaction
Now, the query returns the appropriate user and doesn't interfere with JAX-RS. But I can't ignore the large code smell, or the huge log that is created because the exception gets thrown on every query.
My Configuration
ServerConfig config = new ServerConfig();
config.setDataSource(ds);
config.setName("db");
config.setAutoCommitMode(false);
config.setDatabasePlatform(new PostgresPlatform());
config.setRegister(true);
config.setDefaultServer(true);
config.setTransactionRollbackOnChecked(true);
config.addPackage(User.class.getPackage().getName());
EbeanServer es = EbeanServerFactory.create(config);
When using Ebean inside Java EE you need to configure the EbeanServer before it is used. A typical place to do it is in a @PostConstruct method of a @Startup @Singleton bean-managed-transaction EJB. And you need to configure it to use the JTA transaction manager so it doesn't try to begin/commit transactions on its own.
@Singleton
@Startup
@TransactionManagement(TransactionManagementType.BEAN)
public class AtStartup {

    @Resource(mappedName = "java:jboss/datasources/EbeanTestDS")
    private DataSource ds;

    @SneakyThrows
    @PostConstruct
    public void startup() {
        new MigrationRunner(new MigrationConfig()).run(ds); // begins/commits a transaction for the migration...

        ServerConfig config = new ServerConfig();
        config.setDataSource(ds);
        config.addPackage(Customer.class.getPackage().getName());
        config.setUseJtaTransactionManager(true); // This is important!
        config.setAutoCommitMode(false);
        EbeanServerFactory.create(config);
    }
}

Connecting to multiple MySQL db instances using jooq in spring boot application

I have a Spring Boot application which uses Gradle as its build tool and jOOQ for DAO class generation and the DB connection. Previously my application was connecting to a single MySQL instance. Below is the configuration we used for connecting to the single DB instance:
spring.datasource.username=user
spring.datasource.password=password
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.name=ds_name
spring.datasource.schema=ds_schema
spring.jooq.sql-dialect=MYSQL
The current project structure is:
a) The main application project MainApp, which has application.properties with the above key-value pairs.
b) A separate application project DBProject, which has jOOQ's generated DAO classes. MainApp includes DBProject as a jar.
I am using Gradle as the build tool for this.
Everything is working fine till here. But now I have to connect to one more MySQL instance, so I have created another DB project, DBProject2, which also contains DAO classes generated by jOOQ from another MySQL schema. I created DBProject2 exactly as DBProject was created.
Now, my question: if I include both DBProjects in MainApp as jars, then both will use the same DB configuration from application.properties. How can I make the separate DB jars point to their respective DB schemas? I googled a lot about this but couldn't find a helpful solution.
This is what I do to connect to multiple (additional) data sources in my Play app. I am not sure if it is the best approach, but it works great for me. I have changed names below to be generic.
// In my application.conf
// default data source
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost:3306/myDb?useSSL=false"
db.default.username=myuser
db.default.password="mypassword"
// additional data source
db.anothersource.driver=com.mysql.jdbc.Driver
db.anothersource.url="jdbc:mysql://localhost:3306/myothersource?useSSL=false"
db.anothersource.username=myuser
db.anothersource.password="mypassword"
// Then in Java, I create a JooqContextProvider class to expose both connections.
public class JooqContextProvider {

    @Inject
    Database db;

    @Inject
    play.Configuration config;

    public JooqContextProvider() {}

    /**
     * Creates a default database connection for data access.
     * @return DSLContext.
     */
    public DSLContext dsl() {
        return DSL.using(new JooqConnectionProvider(db), SQLDialect.MYSQL);
    }

    /**
     * Creates an anothersource database connection for data access.
     * @return DSLContext for anothersource.
     */
    public DSLContext anotherDsl() {
        return DSL.using(
            new JooqAnotherSourceConnectionProvider(
                config.getString("db.anothersource.url"),
                config.getString("db.anothersource.username"),
                config.getString("db.anothersource.password")),
            SQLDialect.MYSQL);
    }
}
// Then I needed to implement my JooqAnotherSourceConnectionProvider
public class JooqAnotherSourceConnectionProvider implements ConnectionProvider {

    private Connection connection = null;
    String url;
    String username;
    String password;

    public JooqAnotherSourceConnectionProvider(String url, String username, String password) {
        this.url = url;
        this.username = username;
        this.password = password;
    }

    @Override
    public Connection acquire() throws DataAccessException {
        try {
            connection = DriverManager.getConnection(url, username, password);
            return connection;
        }
        catch (java.sql.SQLException ex) {
            throw new DataAccessException("Error getting connection from data source", ex);
        }
    }

    @Override
    public void release(Connection releasedConnection) throws DataAccessException {
        if (connection != releasedConnection) {
            throw new IllegalArgumentException("Expected " + connection + " but got " + releasedConnection);
        }
        try {
            connection.close();
            connection = null;
        }
        catch (SQLException e) {
            throw new DataAccessException("Error closing connection " + connection, e);
        }
    }
}
// Then in Java code where I need to access one or the other data sources...
jooq.dsl().select().from().where()...
jooq.anotherDsl().select().from().where()...
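Since the question is actually about Spring Boot rather than Play, here is a rough sketch of the same idea with Java config. The property prefixes app.datasource.first/second, the bean names, and the config class are my own placeholders (exact property keys depend on the pool implementation, e.g. HikariCP expects jdbc-url, and the DataSourceBuilder import shown is the Spring Boot 2 package): define one DataSource bean per schema and expose one DSLContext per DataSource, then inject whichever DSLContext each set of DAOs needs.
import javax.sql.DataSource;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class JooqConfig {

    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.first")
    public DataSource firstDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("app.datasource.second")
    public DataSource secondDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public DSLContext firstDsl(DataSource firstDataSource) {
        // firstDataSource is the @Primary bean, so it is injected by default
        return DSL.using(firstDataSource, SQLDialect.MYSQL);
    }

    @Bean
    public DSLContext secondDsl(@Qualifier("secondDataSource") DataSource secondDataSource) {
        return DSL.using(secondDataSource, SQLDialect.MYSQL);
    }
}
The DAO classes from each jar would then inject their DSLContext by name, e.g. with @Qualifier("secondDsl"), since two DSLContext beans exist in the context.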

Camel: use datasource configured by spring-boot

I have a project in which I'm using spring-boot-starter-jdbc, and it automatically configures a DataSource for me.
Now I have added camel-spring-boot to the project, and I was able to successfully create routes from beans of type RouteBuilder.
But when I use the SQL component of Camel, it cannot find the datasource. Is there any simple way to add the Spring-configured DataSource to the CamelContext? In the samples of the Camel project they use Spring XML for the datasource configuration, but I'm looking for a way with Java config. This is what I tried:
@Configuration
public class SqlRouteBuilder extends RouteBuilder {

    @Bean
    public SqlComponent sqlComponent(DataSource dataSource) {
        SqlComponent sqlComponent = new SqlComponent();
        sqlComponent.setDataSource(dataSource);
        return sqlComponent;
    }

    @Override
    public void configure() throws Exception {
        from("sql:SELECT * FROM tasks WHERE STATUS NOT LIKE 'completed'")
            .to("mock:sql");
    }
}
I have to post this because, although the answer is in the comments, you might not notice it, and in my case such a configuration was necessary to get the process running.
The use of the SQL component should look like this:
from("timer://dbQueryTimer?period=10s")
.routeId("DATABASE_QUERY_TIMER_ROUTE")
.to("sql:SELECT * FROM event_queue?dataSource=#dataSource")
.process(xchg -> {
List<Map<String, Object>> row = xchg.getIn().getBody(List.class);
row.stream()
.map((x) -> {
EventQueue eventQueue = new EventQueue();
eventQueue.setId((Long)x.get("id"));
eventQueue.setData((String)x.get("data"));
return eventQueue;
}).collect(Collectors.toList());
})
.log(LoggingLevel.INFO,"******Database query executed - body:${body}******");
Note the use of ?dataSource=#dataSource. The dataSource name points to the DataSource object configured by Spring; it can be changed to another bean name, so different routes can use different DataSources.
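For example, a second route could reference a differently named DataSource bean (the bean name reportingDataSource below is just an illustration, assuming such a bean has been declared elsewhere in the Spring context):
from("timer://reportTimer?period=30s")
    // the SQL component looks up the bean named after the # in the registry
    .to("sql:SELECT count(*) FROM event_queue?dataSource=#reportingDataSource")
    .log("event_queue row count: ${body}");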
Here is the sample/example code (Java DSL). For this I used:
Spring boot
H2 embedded Database
Camel
On startup, Spring Boot creates the table and loads the data. Then the Camel route runs a "select" to pull the data.
Here is the code:
public void configure() throws Exception {
    from("timer://timer1?period=1000")
        .setBody(constant("select * from Employee"))
        .to("jdbc:dataSource")
        .split().simple("${body}")
        .log("process row ${body}");
}
The full example is on GitHub.
