NullPointerException error in AbstractStateMachine.acceptEvent method - spring-statemachine

Below is my one-class project, using spring-statemachine-core-1.0.2.RELEASE-sources.jar, which implements a simple transition from one state to another. It throws a NullPointerException on currentState in the AbstractStateMachine.acceptEvent method. I'd appreciate any help or thoughts.
java.lang.NullPointerException
at org.springframework.statemachine.support.AbstractStateMachine.acceptEvent(AbstractStateMachine.java:591)
import java.util.Arrays;
import java.util.HashSet;

import org.springframework.statemachine.StateMachine;
import org.springframework.statemachine.config.StateMachineBuilder;
import org.springframework.statemachine.config.StateMachineBuilder.Builder;

public class Processor {

    public static void main(String[] args) throws Exception {
        Builder<String, String> builder = StateMachineBuilder.builder();

        builder.configureStates()
                .withStates()
                .initial("INIT").end("END")
                .states(new HashSet<String>(Arrays.asList("INIT", "MIDDLE", "END")));

        builder.configureTransitions()
                .withExternal()
                .source("INIT").target("MIDDLE").event("START")
                .and()
                .withExternal()
                .source("MIDDLE").target("END");

        builder.configureConfiguration().withConfiguration().autoStartup(true);

        StateMachine<String, String> stateMachine = builder.build();
        stateMachine.start();
        stateMachine.sendEvent("START");
        stateMachine.stop();
    }
}

Yes, with the manual builder the machine doesn't get a default taskExecutor. It's already fixed in master and the 1.0.x branch, but we haven't released 1.0.3 yet. The workaround is to set it manually:
builder
    .configureConfiguration()
    .withConfiguration()
    .taskExecutor(new SyncTaskExecutor())
    .autoStartup(true);
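For reference, here is a sketch of the full class with the workaround applied; SyncTaskExecutor comes from org.springframework.core.task, and everything else is unchanged from the question. This just combines the two snippets above, so treat it as a sketch rather than a verified fix for every 1.0.x release.

import java.util.Arrays;
import java.util.HashSet;

import org.springframework.core.task.SyncTaskExecutor;
import org.springframework.statemachine.StateMachine;
import org.springframework.statemachine.config.StateMachineBuilder;
import org.springframework.statemachine.config.StateMachineBuilder.Builder;

public class Processor {

    public static void main(String[] args) throws Exception {
        Builder<String, String> builder = StateMachineBuilder.builder();

        builder.configureStates()
                .withStates()
                .initial("INIT").end("END")
                .states(new HashSet<String>(Arrays.asList("INIT", "MIDDLE", "END")));

        builder.configureTransitions()
                .withExternal()
                .source("INIT").target("MIDDLE").event("START")
                .and()
                .withExternal()
                .source("MIDDLE").target("END");

        builder.configureConfiguration()
                .withConfiguration()
                // Manually built machines in 1.0.2 get no default task executor,
                // which leads to the NPE in acceptEvent; provide one explicitly.
                .taskExecutor(new SyncTaskExecutor())
                .autoStartup(true);

        StateMachine<String, String> stateMachine = builder.build();
        stateMachine.start();
        stateMachine.sendEvent("START");
        stateMachine.stop();
    }
}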

Related

Running a quarkus main (command line like) from an AWS lambda handler method

I have a quarkus-camel batch application that needs to run under AWS Lambda. This works fine with plain Java and Spring Boot.
I need to be able to start the Quarkus application from the AWS Lambda handler method.
Running as a batch job works fine, but under Lambda I get the following error:
Caused by: io.quarkus.bootstrap.BootstrapException: Failed to determine the Maven artifact associated with the application /var/task
This is the main Java class. I need to know what to do in the handleRequest method to start the Quarkus (Camel) application.
package com.example;

import javax.inject.Inject;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.quarkus.core.CamelRuntime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

import io.quarkus.arc.Arc;
import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;

@QuarkusMain
public class Main {

    private static final Logger logger = LoggerFactory.getLogger(Main.class);

    Gson gson = new GsonBuilder().setPrettyPrinting().create();

    public static void main(String... args) {
        Quarkus.run(CamelApp.class, args);
    }

    public static class CamelApp implements QuarkusApplication {

        @Inject
        ProducerTemplate camelProducer;

        @Inject
        CamelContext camelContext;

        @Override
        public int run(String... args) throws Exception {
            System.out.println("Hello Camel");
            CamelRuntime runtime = Arc.container().instance(CamelRuntime.class).get();
            runtime.start(args);
            camelProducer.sendBody("direct:lambda", "how about this?");
            return runtime.waitForExit();
        }
    }

    public Object handleRequest(final Object input, final Context context) {
        logger.info("input: {}", gson.toJson(input));
        logger.info("context: {}", gson.toJson(context));
        Quarkus.run(CamelApp.class);
        // CamelRuntime runtime = Arc.container().instance(CamelRuntime.class).get();
        // runtime.start(new String[] {"A","B","C"});
        // camelProducer.sendBody("direct:lambda", "how about this?");
        // runtime.waitForExit();
        return input;
    }
}

Need to send Json to JMS using Apache Camel Spring Boot

I am using Spring Boot with Apache Camel and I am able to send messages from one queue to another queue.
Below is the code:
import com.google.gson.Gson;

import org.apache.camel.Exchange;
import org.apache.camel.LoggingLevel;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Component
public class JmsRoute extends RouteBuilder {

    static final Logger log = LoggerFactory.getLogger(JmsRoute.class);

    @Override
    public void configure() throws Exception {
        from("{{inbound.endpoint}}")
            .transacted()
            .log(LoggingLevel.INFO, log, "Received Message")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    Student student = new Student();
                    Gson gson = new Gson();
                    String json = gson.toJson(student);
                    log.info("Exchange: {}", exchange.getMessage().getBody());
                    log.info("**********:{}", exchange.getMessage());
                }
            })
            .loop()
            .simple("{{outbound.loop.count}}")
            .to("{{outbound.endpoint}}")
            .log(LoggingLevel.INFO, log, "Message Sent")
            .end();
    }
}
I need to convert an object to JSON (which I can do using Gson) and then send it over the queue.
I am new to Camel and tried to find a solution for this online, but couldn't get any help.
Can anyone please help here?
You are not setting the JSON on the exchange body:
public void process(Exchange exchange) throws Exception {
    Student student = new Student();
    Gson gson = new Gson();
    String json = gson.toJson(student);
    exchange.getIn().setBody(json); // the processor does not do this automatically
    log.info("Exchange: {}", exchange.getMessage().getBody());
    log.info("**********:{}", exchange.getMessage());
}
I recommend checking out the new documentation pages for Apache Camel; they are great, especially if you are just starting to use the framework. See https://camel.apache.org/manual/latest/getting-started.html
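Not from the answer above, but worth noting as an alternative: Camel can do the JSON marshalling itself with its JSON data format, assuming a matching data-format module such as camel-gson (or camel-jackson) is on the classpath. A rough sketch, reusing the Student class from the question and a hypothetical JmsJsonRoute class name:

import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;
import org.springframework.stereotype.Component;

@Component
public class JmsJsonRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("{{inbound.endpoint}}")
            .transacted()
            .log(LoggingLevel.INFO, "Received message")
            // Build the payload object, then let Camel marshal it to JSON.
            .process(exchange -> exchange.getIn().setBody(new Student()))
            .marshal().json(JsonLibrary.Gson) // or JsonLibrary.Jackson
            .to("{{outbound.endpoint}}")
            .log(LoggingLevel.INFO, "Message sent");
    }
}

With this approach the processor only builds the payload object, and the .marshal().json(...) step produces the JSON string that is sent to the outbound queue.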

Multiple RedisConnectionFactories in Spring Boot Application

My application uses one "main" redis instance for things like session storage and cache but needs to talk to a separate "external" instance for other reasons. I am trying to determine the "best" ("most idiomatic"? "simplest"?) way to configure this in my Spring Boot application.
Ideally I'd just like to use the default auto-configuration for the main instance, but as soon as I register a connection factory for the external instance, the @ConditionalOnMissingBean({RedisConnectionFactory.class}) condition in LettuceConnectionConfiguration becomes false and the default instance isn't created. Looking at what else goes on in LettuceConnectionConfiguration etc., I'd rather not configure it manually if I don't need to.
I could just not expose the "external" connection factory as a bean and only use it internally to create the beans that depend on it but, while that would be ok in my specific case, I'd like to understand if there's a better solution where both factories can be exposed.
Is there some way I can expose the second RedisConnectionFactory without disabling the default one provided by auto configuration? Is there a clear "right way" to do this sort of thing?
You must implement a BeanDefinitionRegistryPostProcessor to adjust the RedisConnectionFactory registration order, for example:
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.BeanDefinitionRegistryPostProcessor;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.stereotype.Component;

@Component
public class MultipleRedisConnectionFactoryRegistrar implements BeanDefinitionRegistryPostProcessor {

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        // Only register the external factory once the auto-configured one is present.
        // (getBeanDefinition would throw rather than return null for a missing bean.)
        if (registry.containsBeanDefinition("redisConnectionFactory")) {
            BeanDefinition bd = new RootBeanDefinition(ExternalRedisConnectionFactoryBean.class);
            registry.registerBeanDefinition("externalRedisConnectionFactory", bd);
        }
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
    }
}
In ExternalRedisConnectionFactoryBean you can create your own RedisConnectionFactory:
import org.springframework.beans.factory.FactoryBean;
import org.springframework.data.redis.connection.RedisConnectionFactory;

public class ExternalRedisConnectionFactoryBean implements FactoryBean<RedisConnectionFactory> {

    @Override
    public RedisConnectionFactory getObject() throws Exception {
        // you can manually create your external redis connection factory here
        return null;
    }

    @Override
    public Class<?> getObjectType() {
        return RedisConnectionFactory.class;
    }
}
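The answer leaves getObject() returning null. As an illustration only, here is a minimal sketch of what it could return, assuming a Lettuce-backed standalone connection and a placeholder host/port for the external instance:

import org.springframework.beans.factory.FactoryBean;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public class ExternalRedisConnectionFactoryBean implements FactoryBean<RedisConnectionFactory> {

    @Override
    public RedisConnectionFactory getObject() throws Exception {
        // Placeholder host/port: point this at the "external" Redis instance.
        RedisStandaloneConfiguration config =
                new RedisStandaloneConfiguration("external-redis.example.com", 6379);
        LettuceConnectionFactory factory = new LettuceConnectionFactory(config);
        // Created outside the container, so initialize it ourselves.
        factory.afterPropertiesSet();
        return factory;
    }

    @Override
    public Class<?> getObjectType() {
        return RedisConnectionFactory.class;
    }
}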
If you want to use multiple RedisConnectionFactory beans, @Qualifier is the right choice, for example:
@Autowired
@Qualifier("redisConnectionFactory")
private RedisConnectionFactory defaultRedisConnectionFactory;

@Autowired
@Qualifier("externalRedisConnectionFactory")
private RedisConnectionFactory externalRedisConnectionFactory;

Register beforeCommit callback for all Spring managed transactions

I am using the ChainedTransactionManager to implement Best Effort 1PC across ActiveMQ and MySQL; in my case the database transaction commits first. To reduce the window of failure, I want to check whether the ActiveMQConnection has failed just before committing the database transaction. I can do this once a transaction has started with TransactionSynchronizationManager.registerSynchronization, but what I want is to register a block of code that runs for every transaction without having to do it explicitly in my own code.
I could just subclass ChainedTransactionManager, but that does not seem the cleanest approach. Is there a better way to do this?
EDIT: Looks like subclassing ChainedTransactionManager is not a good idea as it relies on MultiTransactionStatus which is not public. Creating a new PlatformTransactionManager that delegates to a ChainedTransactionManager is an alternative.
This is how I have decided to implement it:
import org.apache.activemq.ActiveMQConnection;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jms.connection.JmsResourceHolder;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionException;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class JmsTransportFailureHandlingTransactionManager implements PlatformTransactionManager {

    private final ChainedTransactionManager chainedTransactionManager;

    public JmsTransportFailureHandlingTransactionManager(ChainedTransactionManager chainedTransactionManager) {
        this.chainedTransactionManager = chainedTransactionManager;
    }

    @Override
    public TransactionStatus getTransaction(TransactionDefinition definition) throws TransactionException {
        TransactionStatus transaction = chainedTransactionManager.getTransaction(definition);
        // Registered for every transaction: fail fast before commit if the ActiveMQ transport has dropped.
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
            @Override
            public void beforeCommit(boolean readOnly) {
                for (Object resource : TransactionSynchronizationManager.getResourceMap().values()) {
                    if (resource instanceof JmsResourceHolder) {
                        ActiveMQConnection connection = (ActiveMQConnection) ((JmsResourceHolder) resource).getConnection();
                        if (connection.isTransportFailed()) {
                            throw new IllegalStateException("ActiveMQ transport failed.");
                        }
                    }
                }
            }
        });
        return transaction;
    }

    @Override
    public void commit(TransactionStatus status) throws TransactionException {
        chainedTransactionManager.commit(status);
    }

    @Override
    public void rollback(TransactionStatus status) throws TransactionException {
        chainedTransactionManager.rollback(status);
    }
}
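Not part of the original post, but for context, a sketch of how this wrapper might be wired up, assuming hypothetical jmsTransactionManager and dataSourceTransactionManager beans. ChainedTransactionManager commits in the reverse of registration order, so listing the database manager last makes it commit first, as described in the question:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class TransactionConfig {

    @Bean
    public PlatformTransactionManager transactionManager(
            JmsTransactionManager jmsTransactionManager,
            DataSourceTransactionManager dataSourceTransactionManager) {
        // Transactions start in the order given and commit in reverse order,
        // so the database commits first, then JMS.
        ChainedTransactionManager chained =
                new ChainedTransactionManager(jmsTransactionManager, dataSourceTransactionManager);
        return new JmsTransportFailureHandlingTransactionManager(chained);
    }
}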

@Autowired not working on jersey resource

workflowService is null. The bean configuration is correct because manual injection works fine in other portions of the application.
Here's my resource:
#Path("/workflowProcess")
#Consumes({MediaType.APPLICATION_JSON})
#Produces({MediaType.APPLICATION_JSON})
public class WorkflowProcessResource {
#Autowired
WorkflowService workflowService;
#Autowired
WorkflowProcessService workflowProcessService;
#GET
#Path ("/getWorkflowProcesses/{uuid}")
public Collection<WorkflowProcessEntity> getWorkflows (#PathParam("uuid") String uuid) {
WorkflowEntity workflowEntity = workflowService.findByUUID(uuid);
return workflowEntity.getWorkflowProcesses();
}
}
From what I keep finding on Google on sites like http://www.mkyong.com/webservices/jax-rs/jersey-spring-integration-example/, it looks like ContextLoaderListener is the key. But I've already added that to the application context.
import com.sun.jersey.spi.container.servlet.ServletContainer;
import com.sun.jersey.spi.spring.container.servlet.SpringServlet;
import org.atmosphere.cpr.AtmosphereFramework;
import org.atmosphere.cpr.AtmosphereServlet;
import org.atmosphere.handler.ReflectorServletProcessor;
import org.glassfish.grizzly.servlet.ServletRegistration;
import org.glassfish.grizzly.servlet.WebappContext;
import org.glassfish.grizzly.websockets.WebSocketAddOn;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;

import java.io.IOException;
import java.util.logging.Logger;

public class Main {

    protected static final Logger logger = Logger.getLogger(Main.class.getName());

    public static void main(String[] args) throws IOException {
        logger.info("Starting server...");
        final HttpServer server = HttpServer.createSimpleServer(".", 8181);
        WebappContext ctx = new WebappContext("Socket", "/");

        // enable annotation configuration
        ctx.addContextInitParameter("contextClass", "org.springframework.web.context.support.AnnotationConfigWebApplicationContext");
        ctx.addContextInitParameter("contextConfigLocation", "com.production");

        // allow Spring to do all of its stuff
        ctx.addListener("org.springframework.web.context.ContextLoaderListener");

        // add jersey servlet support
        ServletRegistration jerseyServletRegistration = ctx.addServlet("JerseyServlet", new SpringServlet());
        jerseyServletRegistration.setInitParameter("com.sun.jersey.config.property.packages", "com.production.resource");
        jerseyServletRegistration.setInitParameter("com.sun.jersey.spi.container.ContainerResponseFilters", "com.production.resource.ResponseCorsFilter");
        jerseyServletRegistration.setInitParameter("com.sun.jersey.api.json.POJOMappingFeature", "true");
        jerseyServletRegistration.setLoadOnStartup(1);
        jerseyServletRegistration.addMapping("/api/*");
        // ... (rest of the server startup omitted in the original snippet)
    }
}
What you need here, I think, is @InjectParam instead of @Autowired.
@InjectParam worked fine instead of @Autowired, with a slight change:
@InjectParam cannot be applied to the constructor itself, so it has to be applied to the constructor's arguments.
public OrderService(@InjectParam OrderValidationService service,
                    @InjectParam OrderCampaignService campaignService) {
    this.service = service;
    this.campaignService = campaignService;
}
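Applied to the resource from the question, field injection would then look roughly like the sketch below. This assumes the Jersey 1.x @InjectParam annotation (com.sun.jersey.api.core.InjectParam; the exact package may differ by version) and the same WorkflowService/WorkflowProcessService types as above:

import java.util.Collection;

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import com.sun.jersey.api.core.InjectParam;

@Path("/workflowProcess")
@Consumes({MediaType.APPLICATION_JSON})
@Produces({MediaType.APPLICATION_JSON})
public class WorkflowProcessResource {

    // Let Jersey resolve these from the Spring context instead of @Autowired.
    @InjectParam
    WorkflowService workflowService;

    @InjectParam
    WorkflowProcessService workflowProcessService;

    @GET
    @Path("/getWorkflowProcesses/{uuid}")
    public Collection<WorkflowProcessEntity> getWorkflows(@PathParam("uuid") String uuid) {
        WorkflowEntity workflowEntity = workflowService.findByUUID(uuid);
        return workflowEntity.getWorkflowProcesses();
    }
}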
