We have a Spring application which integrates with a DB2 (LUW) database.
In a specific flow we have a method annotated with @Transactional(timeout = 60).
Under heavy database load we have observed that the above timeout of 60 seconds fails to throw an
exception on time. The exception is thrown only when the database processing finishes, either successfully or
with an error.
The failing messages are like the one below:
2020-02-21 18:45:32,463 ERROR ... Transaction timed out: deadline was Fri Feb 21 18:40:14 EET 2020
Notice that the exception was thrown only after the resources were released by the database (with a lock timeout error in this specific case), about 5 minutes later than I would expect given the configured transaction timeout.
I tried to reproduce this behaviour by manually causing a delay in the database. Specifically, I call
DB2's sleep procedure from my app for a period that is longer than the configured transaction timeout. My tests have the same result: the exception is thrown only after the sleep operation ends successfully.
I wanted to check a similar scenario with another database, so I created a simple Spring Boot project with 2 different profiles, one for DB2 and one for Postgres.
Running this example I observe similar behaviour for DB2, i.e. the transaction timeout does not cause any error, or it does so only after the sleep time configured for DB2 (30s), which is longer than the configured transaction timeout (10s), has elapsed.
In contrast, the Postgres behaviour is more or less what I would expect: the connection to the DB ends with an exception at the exact moment the transaction timeout (10s) is reached, without waiting for the sleep operation (30s) to finish.
The example project is here. A sample of what is described here follows:
package com.example.demo;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
@Service
public class DemoService {
@Autowired
private DemoRepository repository;
@Transactional(timeout = 10)
public void sleep() {
repository.sleep();
}
}
package com.example.demo;
public interface DemoRepository {
void sleep();
}
package com.example.demo;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
@Repository
@Profile("db2")
public class Db2DemoRepository implements DemoRepository {
@Autowired
private JdbcTemplate template;
@Override
public void sleep() {
template.execute("call SYSIBMADM.DBMS_ALERT.SLEEP(30)");
}
}
package com.example.demo;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
@Repository
@Profile("postgres")
public class PostgresDemoRepository implements DemoRepository {
@Autowired
private JdbcTemplate template;
@Override
public void sleep() {
template.execute("select pg_sleep(30);");
}
}
I guess that transaction timeout sets the query timeout in Postgres and fails to do so in DB2.
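If I understand correctly, Spring does propagate the remaining transaction time to the JDBC statement as a query timeout, and it is then up to the driver whether a statement that is already running gets interrupted. A rough sketch of that propagation, simplified for illustration only (this is my understanding, not actual Spring internals):
import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class TimeoutPropagationSketch {
    public void execute(DataSource dataSource, String sql) throws Exception {
        Connection con = DataSourceUtils.getConnection(dataSource);
        try (Statement stmt = con.createStatement()) {
            // Applies the transaction's remaining time (if any) via stmt.setQueryTimeout(...)
            DataSourceUtils.applyTransactionTimeout(stmt, dataSource);
            stmt.execute(sql); // whether this call is interrupted on timeout is up to the driver
        } finally {
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}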
Also I have tried with several values for the following DB2 configuration properties, without any luck: timerLevelForQueryTimeOut, interruptProcessingMode, queryTimeout.
So to my questions:
Does the way that I am trying to reproduce the problem and test the different databases make sense, or am I missing something?
This is more important: is there a way to make the DB2 connection fail at the exact moment the transaction timeout reaches its limit?
Since it has been a while since I posted the question and no one has posted an answer, I'll move my findings here from the edited section of my question:
The solution seems to be setting the DB2 config property queryTimeoutInterruptProcessingMode=2 (and not interruptProcessingMode, as I originally thought).
example: jdbc:db2://localhost:50000/demo:queryTimeoutInterruptProcessingMode=2;
With this modification, an exception is thrown when the transaction timeout expires. I would like to hear an opinion from the experts though. For example, is it safe to modify this specific property? What is the impact?
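For completeness, this is roughly how the property could be wired into a Spring configuration for the db2 profile. Only the JDBC URL with queryTimeoutInterruptProcessingMode=2 comes from the findings above; the configuration class, the DriverManagerDataSource choice and the credentials are illustrative placeholders:
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@Profile("db2")
public class Db2DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // The queryTimeoutInterruptProcessingMode=2 driver property is appended to the JDBC URL
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("com.ibm.db2.jcc.DB2Driver");
        ds.setUrl("jdbc:db2://localhost:50000/demo:queryTimeoutInterruptProcessingMode=2;");
        ds.setUsername("db2inst1"); // placeholder credentials
        ds.setPassword("secret");   // placeholder credentials
        return ds;
    }
}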
Related
When I read this tutorial about transactions, I noticed the timeout property, which I have never used before in any of the REST services I have developed.
For example, in this code:
@Service
@Transactional(
isolation = Isolation.READ_COMMITTED,
propagation = Propagation.SUPPORTS,
readOnly = false,
timeout = 30)
public class CarService {
@Autowired
private CarRepository carRepository;
@Transactional(
rollbackFor = IllegalArgumentException.class,
noRollbackFor = EntityExistsException.class,
rollbackForClassName = "IllegalArgumentException",
noRollbackForClassName = "EntityExistsException")
public Car save(Car car) {
return carRepository.save(car);
}
}
What is the benefit or advantage of using the timeout property? Is it good practice to use it? Can anyone tell me about use cases for the timeout property?
As Spring Docs explain:
Timeout enables client to control how long the transaction runs before timing out and being rolled back automatically by the underlying transaction infrastructure.
So the benefit is obvious: you control how long the transaction (and the queries under it) may last before it is rolled back.
Q: Why is controlling the transaction time useful?
A: If you are deliberately expecting your transaction not to take too long, it's a good idea to use this configuration; if you're expecting that your transaction might take longer than its default maximum time, it is, again, helpful to provide this configuration.
One use case is to stop records from being locked for too long and becoming unable to serve any other requests.
Let's say you are booking a ticket. On the final submission page it is taking too long; will your user wait forever? So you set an HTTP client timeout. But now that you have the HTTP client timeout, what happens if you don't have a transaction timeout? You display an error to the user saying the booking didn't succeed, but your transaction takes its time, since it has no timeout, and commits after your HTTP client has already timed out.
All of the above answers are correct, but something you should note is that:
this property exclusively designed for use with Propagation.REQUIRED
or Propagation.REQUIRES_NEW since it only applies to newly started
transactions.
as the documentation describes.
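To make that caveat concrete, here is a small illustrative sketch (the service, repository and method names are hypothetical): the timeout is honoured in the first method because REQUIRES_NEW starts a new transaction, while with SUPPORTS and no surrounding transaction the timeout attribute would have no effect.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

interface BookingRepository { // hypothetical repository abstraction
    void markConfirmed(long bookingId);
    void loadBooking(long bookingId);
}

@Service
public class BookingService {

    @Autowired
    private BookingRepository bookingRepository;

    // REQUIRES_NEW starts a fresh transaction, so the 5 second timeout applies here:
    // if the work takes longer, the transaction is rolled back with a TransactionTimedOutException.
    @Transactional(propagation = Propagation.REQUIRES_NEW, timeout = 5)
    public void confirmBooking(long bookingId) {
        bookingRepository.markConfirmed(bookingId);
    }

    // With SUPPORTS and no transaction already running, no transaction is started,
    // so the timeout attribute on this method is effectively ignored.
    @Transactional(propagation = Propagation.SUPPORTS, timeout = 5)
    public void previewBooking(long bookingId) {
        bookingRepository.loadBooking(bookingId);
    }
}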
I didn't see enough examples on the web of using Apache Camel with WebSphere MQ to send and receive messages. I had some example code but I got stuck in the middle of it. Could anyone help with this?
import org.apache.camel.CamelContext;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.ExchangePattern;
import org.apache.camel.Producer;
import org.apache.camel.util.IOHelper;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
/**
* Client that uses the Message Endpoint
* pattern to easily exchange messages with the Server.
* <p/>
* Notice this very same API can be used for all components in Camel, so if we were using TCP communication instead
* of JMS messaging we could just use <code>camel.getEndpoint("mina:tcp://someserver:port")</code>.
* <p/>
* Requires that the JMS broker is running, as well as CamelServer
*/
public final class CamelClientEndpoint {
private CamelClientEndpoint() {
//Helper class
}
// START SNIPPET: e1
public static void main(final String[] args) throws Exception {
System.out.println("Notice this client requires that the CamelServer is already running!");
AbstractApplicationContext context = new ClassPathXmlApplicationContext("camel-client.xml");
CamelContext camel = context.getBean("camel-client", CamelContext.class);
// get the endpoint from the camel context
Endpoint endpoint = camel.getEndpoint("jms:queue:numbers");
// create the exchange used for the communication
// we use the in out pattern for a synchronized exchange where we expect a response
Exchange exchange = endpoint.createExchange(ExchangePattern.InOut);
// set the input on the in body
// must be correct type to match the expected type of an Integer object
exchange.getIn().setBody(11);
// to send the exchange we need a producer to do it for us
Producer producer = endpoint.createProducer();
// start the producer so it can operate
producer.start();
// let the producer process the exchange where it does all the work in this one line of code
System.out.println("Invoking the multiply with 11");
producer.process(exchange);
// get the response from the out body and cast it to an integer
int response = exchange.getOut().getBody(Integer.class);
System.out.println("... the result is: " + response);
// stopping the JMS producer has the side effect of the "ReplyTo Queue" being properly
// closed, so this client does not try any further reads for replies from the server
producer.stop();
// we're done so let's properly close the application context
IOHelper.close(context);
}
}
I got stuck at this point of the code:
exchange.getIn()
Do I have to use exchange.getOut() to send the message? And how do I construct a message using a string and add headers to it?
Welcome to stackoverflow!
I am still not sure what exactly the problem is that you are stuck on, and that prevents me (and possibly others as well) from helping you resolve your roadblock.
Perhaps you need to familiarize yourself a bit more with what Camel is and how it works. Camel in Action is a great book to help you with that.
If you are unable to get a copy at this point, a preview of the first few chapters of the book is available online and should give you much better leverage. The source code repository for chapter 2 should give you some more ideas about how to process JMS messages.
In addition, please don't expect full blown solutions from Stack Overflow. You may read this page on how to ask a good question.
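That said, to address the specific question about exchange.getIn(): when sending a request you set the body and headers on the in message; getOut() is only used for reading the reply of an InOut exchange. A minimal sketch, reusing the camel-client.xml setup from the example above (the header name and values are illustrative assumptions):
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.util.IOHelper;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class CamelClientStringMessage {
    public static void main(String[] args) throws Exception {
        AbstractApplicationContext context = new ClassPathXmlApplicationContext("camel-client.xml");
        CamelContext camel = context.getBean("camel-client", CamelContext.class);
        ProducerTemplate template = camel.createProducerTemplate();
        // Fire-and-forget: a String body plus a header, both set on the "in" message under the covers
        template.sendBodyAndHeader("jms:queue:numbers", "hello from the client", "MyHeader", "some-value");
        // Request/reply (InOut): blocks until the server responds, then returns the reply body
        Object reply = template.requestBodyAndHeader("jms:queue:numbers", 11, "MyHeader", "some-value");
        System.out.println("reply: " + reply);
        template.stop();
        IOHelper.close(context);
    }
}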
Hi, I am using Camel to get messages from a JMS queue, process each message, store it in Hadoop using the FS Java API, and then transfer it to another queue.
Currently my JMS concurrency is 20, so in one shot the Camel JMS consumer consumes 20 messages at a time. For every message, I create the FS connection and perform an operation that creates a file and writes to it.
Here is the issue
What I see is that sometimes, while writing the content to the file, my namenode goes down for some reason; in this case, I want to switch to the active namenode.
Here is a log I get
[Camel (camel-1) thread #1 a failover occurred since the last call #169 ClientNamenodeProtocolTranslatorPD.getFileInfo over {name_node_address_allias/ip:port}
When I debug it, it shows Operation category WRITE is not supported in state standby.
At that point I want to switch to the active namenode.
Hadoop Java API Sample code
package org.myorg;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileStatus;
public class HdfsTest {
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
conf.addResource("path-of-core-site.xml");
conf.addResource("path-of-hdfs-site.xml");
conf.set("fs.defaultFS", "hdfs://cloudera:8020");
conf.set("hadoop.security.authentication", "kerberos");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("hdfs@CLOUDERA", "/etc/hadoop/conf/hdfs.keytab");
FileSystem fs = FileSystem.get(conf);
//logic to create a file and write
//close the file and the connection
}
}
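The usual way to get automatic failover to the active namenode is to point the client at the HA nameservice instead of a single namenode host, so the HDFS client's failover proxy provider retries against the other namenode when it hits the standby. A hedged sketch of that client configuration follows; the nameservice name, namenode ids and host names are placeholders, and in a real setup these values normally come from the cluster's core-site.xml/hdfs-site.xml rather than being set in code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsHaClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point fs.defaultFS at the logical nameservice, not at one namenode host
        conf.set("fs.defaultFS", "hdfs://mycluster");                      // placeholder nameservice
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");                 // placeholder namenode ids
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020"); // placeholder hosts
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020");
        // The failover proxy provider is what retries a call against the other namenode
        // when the current one answers with "Operation category WRITE is not supported in state standby"
        conf.set("dfs.client.failover.proxy.provider.mycluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("connected to " + fs.getUri());
        fs.close();
    }
}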
APPLICATION INFO:
The code below reads from an IBM MQ queue and then posts each message to a REST service
(note: reading from the MQ queue is fast and not an issue; rather, it is the performance of the post operation that I am having trouble improving)...
PROBLEM:
Unable to output/post more than 44-47 messages per second...
QUESTION:
How can I improve the performance of the JBoss Fuse (v6.3) DSL route code below? (What techniques are available that would make it faster?)
package aaa.bbb.ccc;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.ContextName;
@ContextName("rest-dsl")
public class Netty4HttpSlowRoutes extends RouteBuilder {
public Netty4HttpSlowRoutes() {
}
private final org.apache.camel.Processor proc1 = new Processor1();
@Override
public void configure() throws Exception {
org.apache.log4j.MDC.put("app.name", "netty4HttpSlow");
System.getProperties().list(System.out);
errorHandler(defaultErrorHandler().maximumRedeliveries(3).log("***FAILED_MESSAGE***"));
from("wmq:queue:mylocalqueue")
.log("inMessage=" + (null==body()?"":body().toString()))
.to("seda:node1?concurrentConsumers=20");
from("seda:node1")
.streamCaching()
.threads(20)
.setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http4.HttpMethods.POST))
.setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
.toD("netty4-http:http://localhost:7001/MyService/myServiceThing?textline\\=true");
}
}
Just a couple of thoughts. First things first: did you measure the slowness? How much time do you spend in Camel vs. how much time do you spend sending the HTTP request?
If the REST service is slow there's nothing you can do in Camel. Depending on what the service does, you could try reducing the number of threads.
Try disabling streamCaching, since it looks like you're not using it.
Then use a to instead of a toD to invoke the service, since the URL is always the same. In the docs of toD I read:
By default the Simple language is used to compute the endpoint.
There may be a little overhead while parsing the URI string each time you invoke the route.
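Putting those suggestions together, the second route could be reworked roughly like this; a sketch only, assuming the response body is not needed after the POST, with everything else unchanged from the route in the question:
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class Netty4HttpFasterRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // the wmq-to-seda route from the question stays as it is
        from("seda:node1")
            .threads(20) // streamCaching() removed, as suggested above
            .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http4.HttpMethods.POST))
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
            // the URI is static, so a plain to() avoids re-evaluating a dynamic endpoint per message
            .to("netty4-http:http://localhost:7001/MyService/myServiceThing?textline\\=true");
    }
}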
This might be a repeat, but I couldn't find a suitable post myself.
My question is: how does it really work (how do Spring/Hibernate support it) to manage a single transaction across multiple DAO classes?
Does it mean that the same JDBC connection is used across the multiple DAOs participating in a transaction? I would like to understand the fundamentals here.
Thanks in advance
Harinath
Using a simple example:
@Controller
@Transactional
@RequestMapping("/")
public class HomeController {
@Inject
private UserRepository userRepository;
@Inject
private TagRepository tagRepository;
...
@RequestMapping(value = "/user/{user_id}", method = RequestMethod.POST)
public @ResponseBody void operationX(@PathVariable("user_id") long userId) {
User user = userRepository.findById(userId);
List<Tags> tags = tagRepository.findTagsByUser(user);
...
}
...
}
In this example your controller holds the overarching transaction, so the entity manager will keep track of all operations in this operationX method and commit the transaction at the end of the method. Spring's @Transactional annotation creates a proxy for the annotated class which wraps its methods in a transaction when they are called. It achieves this through the use of AOP.
Regarding the connection to the database: it is generally obtained from a connection pool and used for the duration of the transaction, after which it is returned to the connection pool. A similar question is answered here: Does the Spring transaction manager bind a connection to a thread?
EDIT:
Furthermore, for the duration of the transaction the connection is bound to the thread. For subsequent database operations, the connection is obtained each time by looking up the connection mapped to the thread in question. I believe the TransactionSynchronizationManager is responsible for this. You can find its docs here: TransactionSynchronizationManager
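To illustrate the thread-bound connection, here is a small sketch (the class name is made up, and it assumes a DataSource-backed transaction manager such as DataSourceTransactionManager): inside one @Transactional method, repeated lookups through DataSourceUtils resolve to the same connection, which is exactly what lets multiple DAOs share one transaction.
import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ConnectionSharingDemo {

    @Autowired
    private DataSource dataSource;

    @Transactional
    public void showSameConnection() {
        // Both lookups happen inside the same transaction, so they return the
        // connection bound to the current thread rather than two pooled connections
        Connection first = DataSourceUtils.getConnection(dataSource);
        Connection second = DataSourceUtils.getConnection(dataSource);
        System.out.println("same connection: " + (first == second)); // true inside the transaction
        DataSourceUtils.releaseConnection(first, dataSource);  // does not close a transactional connection
        DataSourceUtils.releaseConnection(second, dataSource);
    }
}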