I am using Activiti 5.18.
Behind the scenes: a few tasks get routed through a workflow, and some of them are eligible for escalation. I have written my escalation listener as follows.
@Component
public class EscalationTimerListener implements ExecutionListener {

    @Autowired
    ExceptionWorkflowService exceptionWorkflowService;

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // Process the escalated tasks here
        this.exceptionWorkflowService.escalateWorkflowTask(execution);
    }
}
Now when I start my Tomcat server, the Activiti framework internally calls the listener even before my entire Spring context is loaded. Hence exceptionWorkflowService is null (since Spring hasn't injected it yet!) and my code breaks.
Note: this scenario only occurs if my server isn't running at the escalation time of the tasks and I start/restart the server after that time. If the server is already running at escalation time, the process runs smoothly, because the service was injected at startup and the listener is only triggered later.
I have tried delaying the Activiti configuration with the @DependsOn annotation, so that it loads after ExceptionWorkflowService is initialized, as below.
@Bean
@DependsOn({ "dataSource", "transactionManager", "exceptionWorkflowService" })
public SpringProcessEngineConfiguration getConfiguration() {
    final SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
    config.setAsyncExecutorActivate(true);
    config.setJobExecutorActivate(true);
    config.setDataSource(this.dataSource);
    config.setTransactionManager(this.transactionManager);
    config.setDatabaseSchemaUpdate(this.schemaUpdate);
    config.setHistory(this.history);
    config.setTransactionsExternallyManaged(this.transactionsExternallyManaged);
    config.setDatabaseType(this.dbType);
    // Async job executor
    final DefaultAsyncJobExecutor asyncExecutor = new DefaultAsyncJobExecutor();
    asyncExecutor.setCorePoolSize(2);
    asyncExecutor.setMaxPoolSize(50);
    asyncExecutor.setQueueSize(100);
    config.setAsyncExecutor(asyncExecutor);
    return config;
}
But this gives a circular reference error.
I have also tried adding a bean to SpringProcessEngineConfiguration as below.
Map<Object, Object> beanObjectMap = new HashMap<>();
beanObjectMap.put("exceptionWorkflowService", new ExceptionWorkflowServiceImpl());
config.setBeans(beanObjectMap);
and then accessing the same in my listener as:
Map<Object, Object> registeredBeans = Context.getProcessEngineConfiguration().getBeans();
ExceptionWorkflowService exceptionWorkflowService = (ExceptionWorkflowService) registeredBeans.get("exceptionWorkflowService");
exceptionWorkflowService.escalateWorkflowTask(execution);
This works, but the repository autowired into my service hasn't been initialized yet! So it again throws an error, this time in the service layer :)
So is there a way that I can trigger escalation listeners only after my entire Spring context is loaded?
Have you tried binding the class to ApplicationListener?
Not sure if it will work, but equally I'm not sure why your listener code is actually being executed on startup.
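If the root cause is that the job/async executor starts picking up due timer jobs while the engine is being created, one way to apply the ApplicationListener idea is to keep the executors deactivated in the engine configuration and only start them once the context is fully refreshed. A minimal sketch (assuming setAsyncExecutorActivate(false) in getConfiguration(), and that the AsyncExecutor's start() method is available, as it is in Activiti 5.18):
import org.activiti.spring.SpringProcessEngineConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class AsyncExecutorStarter implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private SpringProcessEngineConfiguration processEngineConfiguration;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        // The full Spring context is ready at this point, so any timer jobs
        // that fire from now on will find their dependencies injected.
        processEngineConfiguration.getAsyncExecutor().start();
    }
}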
Try to set the implementation type of the listeners using a Java class or a delegate expression, and then have the class implement JavaDelegate instead of ExecutionListener.
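A minimal sketch of that suggestion (the bean name escalationDelegate and the activiti:delegateExpression="${escalationDelegate}" wiring are assumptions for illustration; with a delegate expression, the bean is resolved from the Spring context when the job actually runs rather than when the engine boots):
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component("escalationDelegate")
public class EscalationDelegate implements JavaDelegate {

    @Autowired
    private ExceptionWorkflowService exceptionWorkflowService;

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Resolved via ${escalationDelegate} at execution time, so the
        // service is already injected by the time the job runs.
        exceptionWorkflowService.escalateWorkflowTask(execution);
    }
}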
I do integration tests using Spring Boot, Testcontainers, Redis and JUnit 5.
I am facing a weird behavior: when I run all the integration tests, I keep getting this log message:
Cannot reconnect to [localhost:55133]: Connection refused: localhost/127.0.0.1:55133
and this exception :
org.springframework.dao.QueryTimeoutException: Redis command timed out; nested exception is io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:70)
But when I run the tests individually, I don't have this behavior.
I am using a JUnit 5 extension to start and stop my Redis container:
public class RedisTestContainerExtension implements BeforeAllCallback, AfterAllCallback {

    private GenericContainer<?> redis;

    @Override
    public void beforeAll(ExtensionContext extensionContext) throws Exception {
        redis = new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
                .withCommand("redis-server", "--requirepass", "password")
                .waitingFor(Wait.forListeningPort())
                .withStartupTimeout(Duration.ofMinutes(2))
                .withExposedPorts(6379);
        redis.start();
        System.setProperty("APP_REDIS_CONVERSATIONS_HOST", redis.getHost());
        System.setProperty("APP_REDIS_CONVERSATIONS_PORT", redis.getFirstMappedPort().toString());
        System.setProperty("APP_REDIS_CONVERSATIONS_PASSWORD", "password");
        System.setProperty("APP_REDIS_CONVERSATIONS_TTL", "600m");
    }

    @Override
    public void afterAll(ExtensionContext extensionContext) throws Exception {
        if (redis != null) {
            redis.stop();
        }
    }
}
And I add this class as an extension to my integration test:
@ExtendWith({SpringExtension.class, RedisTestContainerExtension.class})
@SpringBootTest(classes = ConversationsApplication.class)
class MyIntegrationTest {
    ...
}
Can anyone help me fix this situation?
We had a similar issue. It occurred only when we executed all tests (or at least more than one specific test).
We have a different test setup - a base class that manages the Testcontainers - where the port mapping of the containers is applied by overriding the properties via DynamicPropertySource.
Our fix was to mark the base test class with @DirtiesContext so that Spring does not reuse the application context across test classes - see the documentation of DynamicPropertySource:
NOTE: if you use @DynamicPropertySource in a base class and discover that tests in subclasses fail because the dynamic properties change between subclasses, you may need to annotate your base class with @DirtiesContext to ensure that each subclass gets its own ApplicationContext with the correct dynamic properties.
Example:
@Slf4j
@SpringBootTest
@DirtiesContext
@Testcontainers
public abstract class AbstractContainerTest {

    @Container
    private static final ElasticsearchContainer elasticsearchContainer = new DealElasticsearchContainer();

    @Container
    private static final RedisCacheContainer redisCacheContainer = new RedisCacheContainer();

    @DynamicPropertySource
    static void databaseProperties(DynamicPropertyRegistry registry) {
        log.info("Override properties to connect to Testcontainers:");
        log.info("* Test-Container 'Elastic': spring.elasticsearch.rest.uris = {}",
                elasticsearchContainer.getHttpHostAddress());
        log.info("* Test-Container 'Redis': spring.redis.host = {} ; spring.redis.port = {}",
                redisCacheContainer.getHost(), redisCacheContainer.getMappedPort(6379));
        registry.add("spring.elasticsearch.rest.uris", elasticsearchContainer::getHttpHostAddress);
        registry.add("spring.redis.host", redisCacheContainer::getHost);
        registry.add("spring.redis.port", () -> redisCacheContainer.getMappedPort(6379));
    }
}
So maybe give @DirtiesContext a try, or switch to a setup which uses DynamicPropertySource to override the properties. It was built especially for this case:
Method-level annotation for integration tests that need to add properties with dynamic values to the Environment's set of PropertySources.
This annotation and its supporting infrastructure were originally designed to allow properties from Testcontainers based tests to be exposed easily to Spring integration tests. However, this feature may also be used with any form of external resource whose lifecycle is maintained outside the test's ApplicationContext.
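Applied to the Redis setup from the question, that could look roughly like the sketch below (property names copied from the question's System.setProperty calls; the container definition is unchanged, and @SpringBootTest already registers SpringExtension):
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
@SpringBootTest(classes = ConversationsApplication.class)
class MyIntegrationTest {

    // Same container as in the question's extension, now managed by @Testcontainers.
    @Container
    static GenericContainer<?> redis =
            new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
                    .withCommand("redis-server", "--requirepass", "password")
                    .withExposedPorts(6379);

    @DynamicPropertySource
    static void redisProperties(DynamicPropertyRegistry registry) {
        // Suppliers are resolved lazily, after the container has started.
        registry.add("APP_REDIS_CONVERSATIONS_HOST", redis::getHost);
        registry.add("APP_REDIS_CONVERSATIONS_PORT", redis::getFirstMappedPort);
        registry.add("APP_REDIS_CONVERSATIONS_PASSWORD", () -> "password");
        registry.add("APP_REDIS_CONVERSATIONS_TTL", () -> "600m");
    }

    // ...
}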
Problem with connection to a Neo4j test container using Spring Boot 2 and JUnit 5
The container starts successfully in the test context, but the spring.data.neo4j.uri property has the wrong default port 7687; I guess the URI must match what neo4jContainer.getBoltUrl() returns.
Everything works fine in this case:
@Testcontainers
public class ExampleTest {

    @Container
    private static Neo4jContainer neo4jContainer = new Neo4jContainer()
            .withAdminPassword(null); // Disable password

    @Test
    void testSomethingUsingBolt() {
        // Retrieve the Bolt URL from the container
        String boltUrl = neo4jContainer.getBoltUrl();
        try (
            Driver driver = GraphDatabase.driver(boltUrl, AuthTokens.none());
            Session session = driver.session()
        ) {
            long one = session.run("RETURN 1", Collections.emptyMap()).next().get(0).asLong();
            assertThat(one, is(1L));
        } catch (Exception e) {
            fail(e.getMessage());
        }
    }
}
But a SessionFactory is not created for the application using auto-configuration, following these recommendations - https://www.testcontainers.org/modules/databases/neo4j/
When I try to create my own primary SessionFactory bean in the test context, I get a message like this - "URI cannot be returned before the container is not loaded".
The application itself runs and works perfectly using auto-configuration with Neo4j started in a container; the same cannot be said about the test context.
You cannot rely 100% on Spring Boot's auto-configuration (for production) in this case, because it will read the application.properties or use the default values for the connection.
To achieve what you want, the key part is to create a custom (Neo4j-OGM) Configuration bean. The @DataNeo4jTest annotation is provided by the spring-boot-test-autoconfigure module.
@Testcontainers
@DataNeo4jTest
public class TestClass {

    // Note: this field was missing from the original snippet; the test needs
    // something like it for databaseServer to resolve.
    @Container
    private static Neo4jContainer<?> databaseServer = new Neo4jContainer<>();

    @TestConfiguration
    static class Config {

        @Bean
        public org.neo4j.ogm.config.Configuration configuration() {
            return new Configuration.Builder()
                    .uri(databaseServer.getBoltUrl())
                    .credentials("neo4j", databaseServer.getAdminPassword())
                    .build();
        }
    }

    // your tests
}
For a broader explanation, have a look at this blog post, esp. the section "Using with Neo4j-OGM and SDN".
I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);
    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);
    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);
    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);
    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);

    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If this is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution kind of like one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and I use that to access proxies from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
this::buildConsumerSupportServiceClient,
getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
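Borrowing and returning looks roughly like this (a sketch: searchConsumer and request are hypothetical, and borrowObject can throw, which real code needs to handle):
ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
try {
    proxy.searchConsumer(request); // hypothetical operation on the service
} finally {
    // Always return the proxy so a failed call cannot leak a pooled instance.
    consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
}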
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.
I have a JAX-RS REST service that uses Ebean to query the database. On any query I make, this exception is thrown.
For example.
User currentUser = new QUser().where().id.eq(currentUserID).findUnique();
Logs
ERROR [io.ebeaninternal.server.transaction.JdbcTransaction] (default task-10) Error when ending a query only transaction via ROLLBACK: java.sql.SQLException: IJ031021: You cannot rollback during a managed transaction
Now, the query returns the appropriate user and doesn't interfere with the JAX-RS service.
But I can't ignore the large code smell, and the huge log that is created because the exception is thrown on every query.
My Configuration
ServerConfig config = new ServerConfig();
config.setDataSource(ds);
config.setName("db");
config.setAutoCommitMode(false);
config.setDatabasePlatform(new PostgresPlatform());
config.setRegister(true);
config.setDefaultServer(true);
config.setTransactionRollbackOnChecked(true);
config.addPackage(User.class.getPackage().getName());
EbeanServer es = EbeanServerFactory.create(config);
When using Ebean inside Java EE you need to configure the EbeanServer before it is used. A typical place to do that is a @PostConstruct method in a @Startup @Singleton bean-managed-transaction EJB. You also need to configure it to use the JTA transaction manager so it doesn't try to begin/commit the transactions on its own.
@Singleton
@Startup
@TransactionManagement(TransactionManagementType.BEAN)
public class AtStartup {

    @Resource(mappedName = "java:jboss/datasources/EbeanTestDS")
    private DataSource ds;

    @SneakyThrows
    @PostConstruct
    public void startup() {
        new MigrationRunner(new MigrationConfig()).run(ds); // begins/commits a transaction for the migration...

        ServerConfig config = new ServerConfig();
        config.setDataSource(ds);
        config.addPackage(Customer.class.getPackage().getName());
        config.setUseJtaTransactionManager(true); // This is important!
        config.setAutoCommitMode(false);
        EbeanServerFactory.create(config);
    }
}
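Once the server is created like this (ServerConfig registers it as the default instance by default), other beans can simply look it up. A hedged usage sketch (CustomerService is hypothetical):
import java.util.List;
import javax.ejb.Stateless;
import io.ebean.Ebean;

@Stateless
public class CustomerService {

    public List<Customer> findAll() {
        // Looks up the server registered at startup; with
        // setUseJtaTransactionManager(true) it joins the active JTA
        // transaction instead of issuing its own commit/rollback.
        return Ebean.getDefaultServer().find(Customer.class).findList();
    }
}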
I am working on a proof of concept of Hazelcast's TransactionalMap. To accomplish this I am writing a Spring Boot app and using Atomikos as my JTA/XA implementation.
This app must update a transactional map and also update a database table by inserting a new row, all within the same transaction.
I am using JPA / SpringData / Hibernate to work with the database.
So the app has a component (a Java class annotated with @Component) with a method called agregar() ("add" in Spanish). This method is annotated with @Transactional (org.springframework.transaction.annotation.Transactional).
The method must perform two tasks as a unit: first, update a TransactionalMap retrieved from the Hazelcast instance and, second, update a database table using a repository extending JpaRepository (org.springframework.data.jpa.repository.JpaRepository).
This is the code I have written:
@Transactional
public void agregar() throws NotSupportedException, SystemException, IllegalStateException, RollbackException,
        SecurityException, HeuristicMixedException, HeuristicRollbackException, SQLException {

    logger.info("AGREGANDO AL MAPA ...");

    HazelcastXAResource xaResource = hazelcastInstance.getXAResource();
    UserTransactionManager tm = new UserTransactionManager();
    tm.begin();

    Transaction transaction = tm.getTransaction();
    transaction.enlistResource(xaResource);

    TransactionContext context = xaResource.getTransactionContext();
    TransactionalMap<TaskKey, TaskQueue> mapTareasDiferidas = context.getMap("TAREAS-DIFERIDAS");

    TaskKey taskKey = new TaskKey(1L);
    TaskQueue taskQueue = mapTareasDiferidas.get(taskKey);

    Integer numero = 4;
    Task<Integer> taskFactorial = new TaskImplFactorial(numero);
    taskQueue = new TaskQueue();
    taskQueue.getQueue().add(taskFactorial);

    mapTareasDiferidas.put(taskKey, taskQueue);
    transaction.delistResource(xaResource, XAResource.TMSUCCESS);
    tm.commit();

    logger.info("AGREGANDO A LA TABLA ...");
    PaisEntity paisEntity = new PaisEntity(100, "ARGENTINA", 10);
    paisRepository.save(paisEntity);
}
This code is working: if one of the tasks throws an exception, both are rolled back.
My questions are:
Is this code actually correct?
Why is @Transactional not taking care of committing the changes in the map, so that I must explicitly do it on my own?
The complete code of the project is available on GitHub: https://github.com/diegocairone/hazelcast-maps-poc
Thanks in advance
Finally I realized that I must inject the UserTransactionManager object and take the transaction from it.
It is also necessary to use a JTA/XA implementation. I have chosen Atomikos, and XA transactions must be enabled in MS SQL Server.
The working example is available on GitHub at https://github.com/diegocairone/hazelcast-maps-poc on branch atomikos-datasource-mssql
Starting with Hazelcast 3.7, you can get rid of the boilerplate code to begin, commit or roll back transactions by using HazelcastTransactionManager, which is a PlatformTransactionManager implementation to be used with the Spring Transaction API.
You can find an example here.
Also, Hazelcast can participate in XA transaction with Atomikos. Here's a doc
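A minimal sketch of how a service method could then look (TareaService is hypothetical; it assumes the HazelcastTransactionManager and ManagedTransactionalTaskContext beans from the Hazelcast Spring integration are registered, as in the follow-up below):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import com.hazelcast.core.TransactionalMap;
import com.hazelcast.spring.transaction.ManagedTransactionalTaskContext;

@Service
public class TareaService {

    @Autowired
    private ManagedTransactionalTaskContext transactionalContext;

    @Transactional
    public void agregar(TaskKey key, TaskQueue queue) {
        // The map joins the Spring-managed Hazelcast transaction; commit and
        // rollback are handled by HazelcastTransactionManager, not by manual
        // tm.begin()/tm.commit() calls.
        TransactionalMap<TaskKey, TaskQueue> map = transactionalContext.getMap("TAREAS-DIFERIDAS");
        map.put(key, queue);
    }
}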
Thank you
I have updated to Hazelcast 3.7.5 and added the following code to my HazelcastConfig class.
@Configuration
public class HazelcastConfig {
    ...

    @Bean
    public HazelcastInstance getHazelcastInstance() {
        ....
    }

    @Bean
    public HazelcastTransactionManager getTransactionManager() {
        HazelcastTransactionManager transactionManager = new HazelcastTransactionManager(getHazelcastInstance());
        return transactionManager;
    }

    @Bean
    public ManagedTransactionalTaskContext getTransactionalContext() {
        ManagedTransactionalTaskContext transactionalContext = new ManagedTransactionalTaskContext(getTransactionManager());
        return transactionalContext;
    }
}
When I run the app I get this exception:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'transactionManager' available: No matching PlatformTransactionManager bean found for qualifier 'transactionManager' - neither qualifier match nor bean name match!
The code is available on GitHub on a new branch: atomikos-datasource-mssql-hz37
Thanks in advance