How to handle a connection properly with jOOQ? (Oracle)

I am getting a lot of inactive sessions in the database, which not only take up all the resources but also cause the database to crash occasionally, requiring me to restart it.
I am using jOOQ with Kotlin, and this is how I establish a connection.
@Component
class EstDBConnection(private val cfg: DatabaseConfig, private val jooqExecuteListener: PromJooqExecuteListener) {
    init {
        cfg.migrateFlyway()
    }

    fun <T> acquire(f: (DSLContext) -> T): T {
        return DSL.using(DriverManager.getConnection(cfg.url, cfg.username, cfg.password), SQLDialect.ORACLE10G).use {
            jooqExecuteListener.attach(it)
            f(it)
        }
    }
}

You're never closing the connections that you're creating. Please use a connection pool (e.g. HikariCP) to manage your connections. Unless you're writing a simple batch script or some proof of concept, you should never resort to using DriverManager.getConnection directly.
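For illustration, a minimal sketch of what that could look like with HikariCP (the URL, credentials, and pool size below are placeholders, not from the original question): build the pool once at startup and hand jOOQ the DataSource, so connections are borrowed per statement and returned to the pool automatically.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

public DSLContext createDslContext() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:oracle:thin:@//host:1521/service"); // placeholder
    config.setUsername("user");                                 // placeholder
    config.setPassword("password");                             // placeholder
    config.setMaximumPoolSize(10);                              // placeholder
    HikariDataSource dataSource = new HikariDataSource(config);

    // jOOQ acquires a connection from the pool per query and releases it afterwards
    return DSL.using(dataSource, SQLDialect.ORACLE10G);
}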

Related

How can transactions be implemented in Spring WebFlux without an R2DBC driver?

General problem description
Due to compatibility issues with the provided database, I cannot use the provided R2DBC driver for it. The only possible option is using the standard JDBC driver, but I have faced some issues getting transactions to work in the Spring WebFlux / Project Reactor context.
Transactions with JDBC usually rely on the connection being thread-local. In Project Reactor's Flux/Mono it is not guaranteed that each step of the pipeline is executed on the same thread. What's more, I assume one of the major benefits of reactive programming is the ability to switch threads without having to worry about it. For this reason the standard Spring JDBC TransactionManager cannot be used, and a ReactiveTransactionManager is only implemented for R2DBC. As I am using JDBC in this case, I can neither use the JdbcTransactionManager, nor is a ReactiveTransactionManager available.
First of all: is there a simple solution to this problem?
"Hacky" solution
I will now elaborate on the steps I have already taken to solve this issue. My idea was to implement a custom ReactiveTransactionManager based on the provided JdbcTransactionManager. My assumption was that it would be possible to wrap a transaction around a Mono/Flux this way. The problem is that I did not take into account the issue described above: it currently works only in a ThreadLocal context, as the underlying JDBC transactions still rely on it. Because of this, the inner transactions are handled (commit, rollback) individually if the thread changes in between.
The following class is the implementation of my custom transaction manager to be included in a reactive stream.
public class JdbcReactiveTransactionManager implements ReactiveTransactionManager {

    // Jdbc or connection based transaction manager
    private final DataSourceTransactionManager transactionManager;

    public JdbcReactiveTransactionManager(DataSourceTransactionManager transactionManager) {
        this.transactionManager = transactionManager;
    }

    // ReactiveTransaction delegates everything to TransactionStatus.
    static class JdbcReactiveTransaction implements ReactiveTransaction {

        private final TransactionStatus transactionStatus;

        public JdbcReactiveTransaction(TransactionStatus transactionStatus) {
            this.transactionStatus = transactionStatus;
        }

        public TransactionStatus getTransactionStatus() {
            return transactionStatus;
        }

        // [...]
    }

    @Override
    public @NonNull Mono<ReactiveTransaction> getReactiveTransaction(TransactionDefinition definition)
            throws TransactionException {
        return Mono.just(transactionManager.getTransaction(definition)).map(JdbcReactiveTransaction::new);
    }

    @Override
    public @NonNull Mono<Void> commit(@NonNull ReactiveTransaction transaction) throws TransactionException {
        if (transaction instanceof JdbcReactiveTransaction t) {
            transactionManager.commit(t.getTransactionStatus());
            return Mono.empty();
        } else {
            return Mono.error(new IllegalTransactionStateException("Illegal ReactiveTransaction type used"));
        }
    }

    @Override
    public @NonNull Mono<Void> rollback(@NonNull ReactiveTransaction transaction) throws TransactionException {
        if (transaction instanceof JdbcReactiveTransaction t) {
            transactionManager.rollback(t.getTransactionStatus());
            return Mono.empty();
        } else {
            return Mono.error(new IllegalTransactionStateException("Illegal ReactiveTransaction type used"));
        }
    }
}
The implemented solution works in all scenarios where the thread does not change. But a fixed thread is not what one usually wants to achieve with a reactive approach. Therefore the thread must be pinned using publishOn and subscribeOn. This is all very hacky; I don't consider it a good solution myself, but I do not see a better alternative currently. As this is only required for one use case right now, I can probably live with it, but I would really like to find a better approach.
Pinning the Thread
The example below shows that I need both publishOn and subscribeOn to pin the thread. If I omit either one of these, some statements won't be executed on the same thread. My current assumption is that Netty performs the body parsing on a separate thread (or event loop), so the additional publishOn is required.
public Mono<ServerResponse> allocateFlows(ServerRequest request) {
    final val single = Schedulers.newSingle("AllocationService-allocateFlows");
    return request.bodyToMono(FlowsAllocation.class)
        .publishOn(single) // Why do I need this although I execute subscribeOn later?
        .flatMapMany(this::someProcessingLogic)
        .concatMapDelayError(this::someOtherProcessingLogic)
        .as(transactionalOperator::transactional)
        .subscribeOn(single, false)
        .then(ServerResponse.ok().build());
}
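One commonly suggested alternative to pinning the whole pipeline (a sketch, not from the original post; the injected templates and SQL are placeholders) is to confine the transactional unit of work to a single blocking call via Spring's TransactionTemplate and offload it to a scheduler meant for blocking work:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.support.TransactionTemplate;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// Sketch: the whole transaction runs inside one Callable, so the
// ThreadLocal-bound JDBC transaction never spans a thread switch.
public Mono<Integer> allocateInTransaction(TransactionTemplate transactionTemplate,
                                           JdbcTemplate jdbcTemplate) {
    return Mono.fromCallable(() ->
            transactionTemplate.execute(status ->
                jdbcTemplate.update("UPDATE flows SET allocated = 1"))) // placeholder SQL
        .subscribeOn(Schedulers.boundedElastic());
}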

Spring Reactor and consuming websocket messages

I'm creating a Spring Reactor application to consume messages from a WebSocket server, transform them, and later save them to Redis and some SQL database; saving to Redis and the SQL database is also reactive. Also, before being written to Redis and the SQL database, messages will be windowed (with different timespans) and aggregated.
I'm not sure whether the way I've accomplished what I want to achieve is properly reactive, meaning that I'm not losing the benefits of reactive (performance).
First, let me show you what I got:
@Service
class WebSocketsConsumer {

    public ConnectableFlux<String> webSocketFlux() {
        return Flux.<String>create(emitter -> {
            createWebSocketClient()
                .execute(URI.create("wss://some-url-goes-here.com"), session -> {
                    WebSocketMessage initialMessage = session.textMessage("SOME_MSG_HERE");
                    Flux<String> flux = session.send(Mono.just(initialMessage))
                        .thenMany(session.receive())
                        .map(WebSocketMessage::getPayloadAsText)
                        .doOnNext(emitter::next);
                    Flux<String> sessionStatus = session.closeStatus()
                        .switchIfEmpty(Mono.just(CloseStatus.GOING_AWAY))
                        .map(CloseStatus::toString)
                        .doOnNext(emitter::next)
                        .flatMapMany(Flux::just);
                    return flux
                        .mergeWith(sessionStatus)
                        .then();
                })
                .subscribe(); // 1: highlighted by IntelliJ IDEA: "Calling 'subscribe' in non-blocking context"
        })
        .publish();
    }

    private ReactorNettyWebSocketClient createWebSocketClient() {
        return new ReactorNettyWebSocketClient(
            HttpClient.create(),
            () -> WebsocketClientSpec.builder().maxFramePayloadLength(131072 * 100)
        );
    }
}
And
@Service
class WebSocketMessageDispatcher {

    private final WebSocketsConsumer webSocketsConsumer;
    private final Consumer<String> reactiveRedisConsumer;
    private final Consumer<String> reactiveJdbcConsumer;
    private Disposable webSocketsDisposable;

    WebSocketMessageDispatcher(WebSocketsConsumer webSocketsConsumer, Consumer<String> redisConsumer, Consumer<String> dbConsumer) {
        this.webSocketsConsumer = webSocketsConsumer;
        this.reactiveRedisConsumer = redisConsumer;
        this.reactiveJdbcConsumer = dbConsumer;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void onReady() {
        ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
        messages.subscribe(reactiveRedisConsumer);
        messages.subscribe(reactiveJdbcConsumer);
        webSocketsDisposable = messages.connect();
    }

    @PreDestroy
    public void onDestroy() {
        if (webSocketsDisposable != null) webSocketsDisposable.dispose();
    }
}
Questions:
1. Is this a proper use of reactive streams? Maybe the Redis and database writes should be done in flatMap; however, IMO they can't be, as I want them to happen in the background, and they will also aggregate messages over different time windows. Also note comment 1 in the code above, where IntelliJ IDEA flags my code; the code works, but I wonder what this lint may result in? Maybe I should use doOnNext not to call emitter::next but to invoke some message dispatcher there, with some function like doOnNext(dispatcher::dispatchMessage)? (See the sketch after these questions.)
2. I want the WebSocket client to start immediately after the application is ready and to stop consuming messages when the application shuts down. Are @EventListener(ApplicationReadyEvent.class) and @PreDestroy, used as shown above, a proper way to handle this scenario in the reactive world?
3. As I said, saving to Redis and the SQL database is also reactive, i.e. those saves also produce Mono<T>. Is subscribing to those Monos inside subscribe of the WebSocket flux OK, or should it be accomplished some other way (comments 2 and 3 in the code above)?
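Regarding question 1, a flatMap-based variant might look like the following sketch. It assumes hypothetical Mono-returning save methods (saveToRedis, saveToDatabase) and a placeholder window size; none of these names are from the original post. The point is that the saves become part of the pipeline, so errors and demand propagate:

import java.time.Duration;
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class MessagePipelineSketch {

    void dispatch(Flux<String> messages) {
        messages
            .window(Duration.ofSeconds(10))        // placeholder window size
            .flatMap(Flux::collectList)            // aggregate each window into a batch
            .flatMap(batch -> saveToRedis(batch)   // hypothetical reactive save
                .then(saveToDatabase(batch)))      // hypothetical reactive save
            .subscribe();
    }

    // Hypothetical signatures for the reactive saves:
    Mono<Void> saveToRedis(List<String> batch) { return Mono.empty(); }
    Mono<Void> saveToDatabase(List<String> batch) { return Mono.empty(); }
}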

jOOQ configuration per request

I'm struggling to find a way to define some settings on the DSLContext per request.
What I want to achieve is the following:
I've got a Spring Boot API and a database with multiple schemas that share the same structure.
Depending on some parameters of each request I want to connect to one specific schema; if no parameter is set, I want to connect to no schema and fail.
To not connect to any schema I wrote the following:
@Autowired
public DefaultConfiguration defaultConfiguration;

@PostConstruct
public void init() {
    Settings currentSettings = defaultConfiguration.settings();
    Settings newSettings = currentSettings.withRenderSchema(false);
    defaultConfiguration.setSettings(newSettings);
}
Which I think works fine.
Now I need a way to set the schema on the DSLContext per request, so that every time I use the DSLContext during a request I automatically get a connection to that schema, without affecting other requests.
My idea is to intercept the request, get the parameters, and do something like DSLContext.setSchema(), but in a way that applies to all usages of the DSLContext during the current request.
I tried to define a request-scoped bean of a custom ConnectionProvider as follows:
@Component
@RequestScope
public class ScopeConnectionProvider implements ConnectionProvider {

    // Field added for completeness; the original snippet uses dataSource without declaring it
    @Autowired
    private DataSource dataSource;

    @Override
    public Connection acquire() throws DataAccessException {
        try {
            Connection connection = dataSource.getConnection();
            String schemaName = getSchemaFromRequestContext();
            connection.setSchema(schemaName);
            return connection;
        } catch (SQLException e) {
            throw new DataAccessException("Error getting connection from data source " + dataSource, e);
        }
    }

    @Override
    public void release(Connection connection) throws DataAccessException {
        try {
            connection.setSchema(null);
            connection.close();
        } catch (SQLException e) {
            throw new DataAccessException("Error closing connection " + connection, e);
        }
    }
}
But this code only executes on the first request. Subsequent requests don't execute this code and hence use the schema of the first request.
Any tips on how this can be done?
Thank you
Seems like your request-scoped bean is getting injected into a singleton.
You're already using @RequestScope, which is good, but you may have forgotten to add @EnableAspectJAutoProxy on your Spring configuration class.
@Configuration
@EnableAspectJAutoProxy
class Config {
}
This will make your bean run behind a proxy inside the singleton, and therefore change per request.
Never mind, it seems that the problem I was having was caused by unexpected behaviour of a cacheable function I defined. The function returns a value from the cache although the input is different, which is why no new connection is acquired. I still need to figure out what causes this unexpected behaviour, though.
For now, I'll stick with this approach, since it seems fine at a conceptual level, although I expect there is a better way to do this.
*** UPDATE ***
I found out that the problem I had with the cache was this: Does java spring caching break reflection?
*** UPDATE 2 ***
It seems that setting the schema on the underlying datasource's connections is ignored. I'm currently trying this other approach I just found: https://github.com/LinkedList/spring-jooq-multitenancy
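Note (not from the original post): jOOQ also supports schema mapping via Settings, which avoids a stateful ConnectionProvider altogether. A sketch, assuming the code was generated against a common schema name (here "COMMON"; both that name and requestSchema are placeholders):

import org.jooq.Configuration;
import org.jooq.DSLContext;
import org.jooq.conf.MappedSchema;
import org.jooq.conf.RenderMapping;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

public DSLContext dslForRequest(Configuration sharedConfiguration, String requestSchema) {
    Settings settings = new Settings()
        .withRenderMapping(new RenderMapping()
            .withSchemata(new MappedSchema()
                .withInput("COMMON")           // schema the jOOQ code was generated against
                .withOutput(requestSchema)));  // schema to render for this request
    // derive() creates a new Configuration; the shared one is left untouched
    return DSL.using(sharedConfiguration.derive(settings));
}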

Determining when FileWrittenEvent has completed writing the entire file

This is my first question here, so please bear with me. In a recent release of Spring 5.2, some extremely helpful components were added to Spring Integration, as seen in this link: https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-server-events
Apache MINA was integrated with a new listener, ApacheMinaSftpEventListener, which
listens for certain Apache Mina SFTP server events and publishes them as ApplicationEvents
So far my application can capture the application events as noted in the documentation from the link provided, but I can't seem to figure out when the event finishes... if that makes sense (probably not). The process flow is this: the application starts up and activates as an SFTP server on a specified port, and I can use the username and password to connect and "put" a file on the system, which initiates the transfer.
When I sign on, I can capture the SessionOpenedEvent.
When I transfer a file, I can capture the FileWrittenEvent.
When I sign off or break the connection, I can capture the SessionClosedEvent.
When the file is larger, I can capture ALL of the FileWrittenEvent events, which tells me the transfer occurs as a stream with a predetermined or calculated buffer size.
What I'm trying to determine is: "How can I find out when that stream is finished?" This will help me answer: "As an SFTP server accepting a file, when can I access the completed file?"
My listener bean (which is attached to Apache MINA on startup via the SftpSubsystemFactory):
@Configuration
public class SftpConfiguration {

    @Bean
    public ApacheMinaSftpEventListener apacheMinaSftpEventListener() {
        return new ApacheMinaSftpEventListener();
    }
}
SftpSubsystemFactory subSystem = new SftpSubsystemFactory();
subSystem.addSftpEventListener(listener);
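For context, here is a minimal sketch of how such a factory is typically wired into an embedded Apache MINA SSHD server; the port, host key path, and omitted authenticator setup are placeholders rather than details from the original post:

import java.nio.file.Paths;
import java.util.Collections;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;

SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(2222); // placeholder port
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser"))); // placeholder path
sshd.setSubsystemFactories(Collections.singletonList(subSystem)); // the factory from above
// (password/public-key authenticator setup omitted; start() throws IOException)
sshd.start();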
My event listener: this is here so I can see some output in a logger, which is when I realized that, on a file of a few GB, the FileWrittenEvent went a little crazy.
@Async
@EventListener
public void sftpEventListener(ApacheMinaSftpEvent sftpEvent) {
    log.info("Capturing Event: {}", sftpEvent.getClass().getSimpleName());
    log.info("Event Details: {}", sftpEvent.toString());
}
These few pieces were all I really needed to start capturing the events. I was thinking that I would need to override a method to help me capture when the stream finishes so I can move on with my business logic, but I'm not sure which one. I seem to be able to access the file (read/write) before the stream is done, so I can't use logic that attempts to "move" the file and waits for it to throw an error, though that approach seemed like bad practice to me anyway. Any guidance would be greatly appreciated, thank you.
Versioning Information
Spring 5.2.3
Spring Boot 2.2.3
Apache Mina 2.1.3
Java 1.8
This may not be helpful for others, but I've found a way around my initial problem by integrating a related solution, combined with the new Apache MINA classes, found in this answer: https://stackoverflow.com/a/45513680/12806809
My solution:
Create a class that extends the new ApacheMinaSftpEventListener while overriding the open and close methods, so that my SFTP server business logic knows when a file is done writing.
public class WatcherSftpEventListener extends ApacheMinaSftpEventListener {
    ...
    ...
    @Override
    public void open(ServerSession session, String remoteHandle, Handle localHandle) throws IOException {
        File file = localHandle.getFile().toFile();
        if (file.isFile() && file.exists()) {
            log.debug("File Open: {}", file.toString());
        }
        // Keep around the super call for now
        super.open(session, remoteHandle, localHandle);
    }

    @Override
    public void close(ServerSession session, String remoteHandle, Handle localHandle) {
        File file = localHandle.getFile().toFile();
        if (file.isFile() && file.exists()) {
            log.debug("RemoteHandle: {}", remoteHandle);
            log.debug("File Closed: {}", file.toString());
            for (SftpFileUploadCompleteListener listener : fileReadyListeners) {
                try {
                    listener.onFileReady(file);
                } catch (Exception e) {
                    String msg = String.format("File '%s' caused an error in processing '%s'", file.getName(), e.getMessage());
                    log.error(msg);
                    try {
                        session.disconnect(0, msg);
                    } catch (IOException io) {
                        log.error("Could not properly disconnect from session {}; closing future state", session);
                        session.close(false);
                    }
                }
            }
        }
        // Keep around the super call for now
        super.close(session, remoteHandle, localHandle);
    }
}
When I start the SSHD server, I add my new listener bean to the SftpSubsystemFactory, which uses a customized event handler class to apply my business logic to the incoming files.
watcherSftpEventListener.addFileReadyListener(new SftpFileUploadCompleteListener() {
    @Override
    public void onFileReady(File file) throws Exception {
        new WatcherSftpEventHandler(file, properties.getSftphost());
    }
});
subSystem.addSftpEventListener(watcherSftpEventListener);
There was a bit more to this solution, but since this question isn't getting much traffic and it's now more for my own reference and learning than anything else, I won't provide more unless asked.

Spring Boot with CXF Client Race Condition/Connection Timeout

I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);
    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);
    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);
    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);
    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);

    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So it seems like there's some sort of race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both service calls return a response very quickly, but the second call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If this is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution similar to the one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and use it to access them from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
        this::buildConsumerSupportServiceClient,
        getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
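For completeness, the borrow/return cycle with commons-pool2 looks something like the following sketch (the searchConsumer call and searchRequest are placeholders, not from the original post):

ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject(); // throws Exception
try {
    // Each thread gets exclusive use of one proxy for the duration of the call
    proxy.searchConsumer(searchRequest); // placeholder operation and request
} finally {
    // Always return the proxy so the pool doesn't leak instances
    consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
}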
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.
