RedisTemplate not working with multiple threads: why, and how to fix it? - spring

I want to use RedisTemplate, more specifically StringRedisTemplate, from multiple threads in a JUnit test, but it fails. If I use only the test thread, it works. Why? Isn't StringRedisTemplate thread-safe? How can I fix this?
@Test
void multiThreadStringRedisTemplate(@Autowired RedisConnectionFactory redisConnectionFactory) {
    Runnable runnable = () -> {
        StringRedisTemplate stringRedisTemplate = new StringRedisTemplate(redisConnectionFactory);
        for (int i = 0; i < 50; ++i) {
            String value = stringRedisTemplate.opsForValue().get("ok");
            System.out.println(i + ":" + value);
        }
    };
    // multiple threads: does not work
    ExecutorService executorService = Executors.newCachedThreadPool();
    for (int i = 0; i < 1; ++i) {
        executorService.execute(runnable);
    }
    // runnable.run(); // running on the test thread works fine
}
I'm trying to use Spring's RedisTemplate from multiple threads, but it does not work: the test process just exits with code 0, without any exception info.
My pom dependencies and properties.yml are as follows:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
        <exclusions>
            <exclusion>
                <artifactId>lettuce-core</artifactId>
                <groupId>io.lettuce</groupId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
    </dependency>
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
    </dependency>
</dependencies>
I've enabled the connection pool with the default configuration.
spring:
  redis:
    host: ${romote_host}
    port: 6379
    password: ${password}
    database: 0
    connect-timeout: 3000ms
    jedis:
      pool:
        enabled: true

You need to wait for the other threads in your executor service to do their work; as it stands, the test method returns as soon as the tasks are submitted, and the JVM exits before they run. I think adding this snippet at the end of your test would suffice:
executorService.shutdown();
try {
    executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
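For reference, a minimal sketch of the corrected test under the same setup (the thread count of 4 is an arbitrary choice to actually exercise concurrency; the original loop only submitted one task):
@Test
void multiThreadStringRedisTemplate(@Autowired RedisConnectionFactory redisConnectionFactory) throws InterruptedException {
    Runnable runnable = () -> {
        // StringRedisTemplate is thread-safe once configured, so one instance
        // could also be built outside the lambda and shared by all threads
        StringRedisTemplate stringRedisTemplate = new StringRedisTemplate(redisConnectionFactory);
        for (int i = 0; i < 50; ++i) {
            System.out.println(i + ":" + stringRedisTemplate.opsForValue().get("ok"));
        }
    };
    ExecutorService executorService = Executors.newCachedThreadPool();
    for (int i = 0; i < 4; ++i) {
        executorService.execute(runnable);
    }
    executorService.shutdown(); // accept no new tasks; workers keep running
    // block the test thread until every submitted task has finished
    if (!executorService.awaitTermination(30, TimeUnit.SECONDS)) {
        executorService.shutdownNow();
    }
}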

Related

SSE in Jersey: Could not find a suitable constructor in javax.ws.rs.sse.Sse class

I am implementing Server-Sent Events with Jersey 2.28 and I can't get the basic examples to work.
Other answers mention a dependency problem; I've added all the necessary dependencies, to no avail:
<dependencies>
    <dependency>
        <groupId>org.glassfish.jersey.containers</groupId>
        <artifactId>jersey-container-grizzly2-http</artifactId>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.containers</groupId>
        <artifactId>jersey-container-grizzly2-servlet</artifactId>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.media</groupId>
        <artifactId>jersey-media-moxy</artifactId>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.media</groupId>
        <artifactId>jersey-media-json-jackson</artifactId>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.inject</groupId>
        <artifactId>jersey-hk2</artifactId>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.media</groupId>
        <artifactId>jersey-media-sse</artifactId>
    </dependency>
</dependencies>
I basically copy-pasted the example and ran it. I've also tried variations found in tutorials such as https://www.baeldung.com/java-ee-jax-rs-sse, but it just won't work:
@GET
@Path("/locations")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void getServerSentEvents(@Context SseEventSink eventSink, @Context Sse sse) {
    new Thread(() -> {
        for (int i = 0; i < 10; i++) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            final OutboundSseEvent event = sse.newEventBuilder().name("message-to-client")
                    .data(String.class, "Hello world " + i + "!").build();
            eventSink.send(event);
        }
    }).start();
}
Can anyone provide me with the correct implementation for basic SSE using Jersey? (I'm using the Grizzly implementation.)
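For completeness, a minimal JAX-RS 2.1 client sketch for consuming such an endpoint once it deploys (the base URI is an assumption; adjust it to wherever the Grizzly server is actually bound):
import java.util.concurrent.TimeUnit;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.sse.SseEventSource;

public class SseClientDemo {
    public static void main(String[] args) throws InterruptedException {
        Client client = ClientBuilder.newClient();
        // hypothetical base URI; must match the server's actual bind address
        WebTarget target = client.target("http://localhost:8080/locations");
        try (SseEventSource source = SseEventSource.target(target).build()) {
            // print each inbound event's name and payload as it arrives
            source.register(event -> System.out.println(event.getName() + ": " + event.readData()));
            source.open();
            // the resource above emits ten events, one per second
            TimeUnit.SECONDS.sleep(12);
        }
        client.close();
    }
}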

ClassNotFoundException: org.junit.Assert exception when using MockRestServiceServer

I'm trying to run the following test
import org.springframework.test.web.client.MockRestServiceServer;
....

@Test
void successPost() {
    MockRestServiceServer server = MockRestServiceServer.bindTo(restTemplate).build();
    server.expect(once(), requestTo("http://localhost:8080/test"))
            .andExpect(header("X-AUTH", "myToken"))
            .andRespond(withSuccess("{ \"msg\":\"hi\"}", MediaType.APPLICATION_JSON));
    WebClient<String, DummyResponse> client = new WebClient<>(restTemplate, retryTemplate, DummyResponse.class);
    DummyResponse result = client.post("http://localhost:8080", "/test", "myRequest", "myToken");
    // Verify all expectations met
    server.verify();
    assertThat(result.getMsg(), is("hi"));
}
and it fails with
ClassNotFoundException: org.junit.Assert
when I have .andExpect(header("X-AUTH", "myToken")). The project contains only JUnit 5 dependencies, plus:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <version>2.1.0.RELEASE</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
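A likely cause, offered as an assumption to verify against your spring-test version: before Spring Framework 5.2 (Boot 2.1.0 pulls in spring-test 5.1.x), the Hamcrest-based request matchers such as header(String, Matcher) still delegated to org.junit.Assert.assertThat, so excluding junit:junit removes a class spring-test needs at runtime. A sketch of the workaround is simply keeping JUnit 4 on the test classpath alongside JUnit 5:
<!-- assumed workaround: JUnit 4 restored for spring-test's Hamcrest matchers;
     it does not interfere with JUnit 5 tests -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>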

Spring Boot Turbine stream with SSL not working

We are trying to enable HTTPS on the Turbine stream, and we are facing the problem below:
ERROR 10644 --- [o-eventloop-3-1] r.n.p.h.s.ServerRequestResponseConverter : Invalid HTTP request recieved. Decoder error.
java.lang.IllegalArgumentException: invalid version format: ■\ᅦ:4'|"￀+￀/￀,￀0ᅩ로또ᅩ￀ ￀￀
at io.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:130) ~[netty-codec-http-4.0.27.Final.jar!/:4.0.27.Final]
at io.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:84) ~[netty-codec-http-4.0.27.Final.jar!/:4.0.27.Final]
Please provide any suggestions. Thank you.
UPDATE
Do we need to enable SSL on the message broker that is handling the stream?
Code:
@SpringBootApplication
@RestController
@EnableDiscoveryClient
@EnableTurbineStream
public class DemoHystrixApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoHystrixApplication.class, args);
    }

    @RequestMapping(value = "/test")
    public String helloHystrix() {
        ObjectMapper mapper = new ObjectMapper();
        String output = null;
        try {
            output = mapper.writeValueAsString("{ \"message\" : \"test : Welcome to test Notification Detail Page.\"}");
        } catch (JsonProcessingException e) {
            e.getMessage();
        }
        return output;
    }

    @RequestMapping(value = "/test2")
    public String testhello() {
        ObjectMapper mapper = new ObjectMapper();
        String output = null;
        try {
            output = mapper.writeValueAsString("{ \"message\" : \"test : Welcome to test1 Page.\"}");
        } catch (JsonProcessingException e) {
            e.getMessage();
        }
        return output;
    }
}
POM dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-consul-discovery</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-context</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-turbine-stream</artifactId>
        <version>1.1.5.BUILD-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-stream-kafka</artifactId>
    </dependency>
</dependencies>
application.properties
server.port=8083
spring.cloud.consul.host=localhost
spring.cloud.consul.port=8500
spring.application.name=testApp
security.basic.enabled=false
eureka.client.enabled=false
server.ssl.key-store=consul.pfx
server.ssl.key-password=changeit
server.ssl.key-store-type=PKCS12
server.ssl.trust-store=root.pfx
server.ssl.trust-store-password=changeit
server.ssl.trust-store-type=PKCS12
spring.cloud.consul.discovery.scheme=https

Spring Boot Spring Batch partitioned job not stopping after COMPLETION

I recently started writing a Spring Batch program using the Java config approach and the Spring Batch starter packages. I used a partitioned step and a task executor to do my work. The problem I am facing is that once the job is COMPLETED, the batch process won't stop: it keeps running in my Eclipse and on the Linux box, and I have to manually find and kill the job. Can you please help with this?
This works fine when I run the job without the partitioned step, in a single-threaded way.
My Job Config:
@Bean
@StepScope
public ItemReader<MediaAsset> metaDataExportReader(
        @Value("#{jobParameters[sourceSystemCode]}") String sourceSystemCode,
        @Value("#{jobParameters[assetType]}") String assetType,
        @Value("#{stepExecutionContext[startingMediaAssetId]}") long startingMediaAssetId,
        @Value("#{stepExecutionContext[endingMediaAssetId]}") long endingMediaAssetId,
        @Value("#{stepExecutionContext[threadName]}") String threadName) throws Exception {
    logger.debug("Reader is called...." + sourceSystemCode);
    logger.debug("page size---------->" + jobConfig.getPageOrChunkSizeMetaDataExport());
    logger.debug("startingMediaAssetId---------->" + startingMediaAssetId);
    logger.debug("endingMediaAssetId" + endingMediaAssetId);
    logger.debug("threadName" + threadName);
    final Map<String, Object> parameters = new HashMap<>();
    parameters.put("startingMediaAssetId", startingMediaAssetId);
    parameters.put("endingMediaAssetId", endingMediaAssetId);
    JdbcPagingItemReader<MediaAsset> jdbcPagingItemReader = getJdbcPagingItemReader(sourceSystemCode, assetType);
    jdbcPagingItemReader.setParameterValues(parameters);
    return jdbcPagingItemReader;
}

@Bean(destroyMethod = "close")
@StepScope
public ItemWriter<MediaAsset> metaDataExportWriter(
        @Value("#{jobParameters[sourceSystemCode]}") String sourceSystemCode,
        @Value("#{jobParameters[assetType]}") String assetType,
        @Value("#{stepExecutionContext[startingMediaAssetId]}") long startingMediaAssetId,
        @Value("#{stepExecutionContext[endingMediaAssetId]}") long endingMediaAssetId,
        @Value("#{stepExecutionContext[threadName]}") String threadName) throws Exception {
    logger.debug("Coming here Item Writer,..." + threadName);
    logger.debug("getItemsPerFile---------->" + jobConfig.getPageOrChunkSizeMetaDataExport());
    // for xml file creation
    StaxEventItemWriter<MediaAsset> staxEventItemWriter = new StaxEventItemWriter<>();
    staxEventItemWriter.setRootTagName(DL3ConstantUtil.EXPORT_ASSET_METADATA_BY_SOURCESYSTEM_CODE_ROOT_TAG);
    staxEventItemWriter.setMarshaller(marshaller);
    staxEventItemWriter.setOverwriteOutput(true);
    // for splitting the files into multiple files based on record size
    MultiResourceItemWriter<MediaAsset> multiResourceItemWriter = new MultiResourceItemWriter<>();
    multiResourceItemWriter.setItemCountLimitPerResource(jobConfig.getPageOrChunkSizeMetaDataExport());
    multiResourceItemWriter.setDelegate(staxEventItemWriter);
    multiResourceItemWriter.setResourceSuffixCreator(new ResourceSuffixCreator() {
        @Override
        public String getSuffix(int index) {
            return DL3ConstantUtil.UNDERSCORE + threadName + DL3ConstantUtil.UNDERSCORE + startingMediaAssetId
                    + DL3ConstantUtil.UNDERSCORE + endingMediaAssetId + DL3ConstantUtil.UNDERSCORE + index
                    + DL3ConstantUtil.EXPORT_ASSET_METADATA_BY_SOURCESYSTEM_CODE_FILE_NAME_SUFFIX;
        }
    });
    logger.debug("writer sourceSystemCode" + sourceSystemCode);
    switch (assetType) {
        case DL3ConstantUtil.IMAGE_ASSET:
            switch (sourceSystemCode) {
                case DL3ConstantUtil.LIGHTBOX:
                    multiResourceItemWriter.setResource(new FileSystemResource(
                            jobConfig.getTargetFileLocation() + jobConfig.getBackSlash() + "IA"
                                    + jobConfig.getBackSlash() + "DPL" + jobConfig.getBackSlash()
                                    + DL3ConstantUtil.EXPORT_ASSET_METADATA_BY_SOURCESYSTEM_CODE_LIGHT_BOX_FILE_NAME_PREFIX_NAME_IMG));
                    break;
                case DL3ConstantUtil.SOLAR:
                    multiResourceItemWriter.setResource(new FileSystemResource(
                            jobConfig.getTargetFileLocation() + jobConfig.getBackSlash() + "IA"
                                    + jobConfig.getBackSlash() + "SOLAR" + jobConfig.getBackSlash()
                                    + DL3ConstantUtil.EXPORT_ASSET_METADATA_BY_SOURCESYSTEM_CODE_SOLAR_BOX_FILE_NAME_PREFIX_NAME_IMG));
                    break;
                case DL3ConstantUtil.MANUAL_UPLOAD:
                    multiResourceItemWriter.setResource(new FileSystemResource(
                            jobConfig.getTargetFileLocation() + jobConfig.getBackSlash() + "IA"
                                    + jobConfig.getBackSlash() + "DDDS" + jobConfig.getBackSlash()
                                    + DL3ConstantUtil.EXPORT_ASSET_METADATA_BY_SOURCESYSTEM_CODE_DDDS_BOX_FILE_NAME_PREFIX_NAME_IMG));
                    break;
                default:
                    break;
            }
            break;
        case DL3ConstantUtil.DOCUMENT_ASSET:
            switch (sourceSystemCode) {
                case DL3ConstantUtil.SOLAR:
                    multiResourceItemWriter.setResource(new FileSystemResource(
                            jobConfig.getTargetFileLocation() + jobConfig.getBackSlash() + "DA"
                                    + jobConfig.getBackSlash() + "SOLAR" + jobConfig.getBackSlash()
                                    + DL3ConstantUtil.EXPORT_ASSET_METADATA_BY_SOURCESYSTEM_CODE_SOLAR_BOX_FILE_NAME_PREFIX_NAME_DOC));
                    break;
                case DL3ConstantUtil.MANUAL_UPLOAD:
                    multiResourceItemWriter.setResource(new FileSystemResource(
                            jobConfig.getTargetFileLocation() + jobConfig.getBackSlash() + "DA"
                                    + jobConfig.getBackSlash() + "DDDS" + jobConfig.getBackSlash()
                                    + DL3ConstantUtil.EXPORT_ASSET_METADATA_BY_SOURCESYSTEM_CODE_DDDS_BOX_FILE_NAME_PREFIX_NAME_DOC));
                    break;
                default:
                    break;
            }
            break;
        default:
            throw new Exception("no matching assetType ");
    }
    return multiResourceItemWriter;
}

@Bean(name = "GenerateXMLFilesMaster")
public Step generateXMLFilesMaster(ItemReader<MediaAsset> metaDataExportReader, ItemWriter<MediaAsset> metaDataExportWriter) {
    logger.debug("Master Step initialization...");
    return stepBuilderFactory.get("GenerateXMLFilesMaster")
            .partitioner(generateXMLFilesSlave(metaDataExportReader, metaDataExportWriter))
            .partitioner("GenerateXMLFilesSlave", metaDataExportPartioner(null, null, null))
            .partitionHandler(metaDataExportPartionHandler(metaDataExportReader, metaDataExportWriter))
            .build();
}

@Bean(name = "GenerateXMLFilesSlave")
public Step generateXMLFilesSlave(ItemReader<MediaAsset> metaDataExportReader, ItemWriter<MediaAsset> metaDataExportWriter) {
    return stepBuilderFactory.get("GenerateXMLFilesSlave")
            .<MediaAsset, MediaAsset> chunk(jobConfig.getPageOrChunkSizeMetaDataExport())
            .reader(metaDataExportReader)
            .writer(metaDataExportWriter)
            .build();
}

@Bean(name = "uploadTaskletMetaData")
@StepScope
public Tasklet uploadTaskletMetaData(
        @Value("#{jobParameters[sourceSystemCode]}") String sourceSystemCode,
        @Value("#{jobParameters[assetType]}") String assetType) {
    MetaDataUploadTasklet metaDataUploadTasklet = new MetaDataUploadTasklet();
    logger.debug("sourceSystemCode----->" + sourceSystemCode);
    logger.debug("assetType----->" + assetType);
    metaDataUploadTasklet.setTargetFolder(jobConfig.getTargetMetaDataRootPath());
    switch (assetType) {
        case DL3ConstantUtil.IMAGE_ASSET:
            switch (sourceSystemCode) {
                case DL3ConstantUtil.LIGHTBOX:
                    metaDataUploadTasklet.setSourceDirectory(jobConfig.getTargetFileLocation() + jobConfig.getBackSlash()
                            + "IA" + jobConfig.getBackSlash() + "DPL" + jobConfig.getBackSlash());
                    //metaDataUploadTasklet.setTargetFolder(jobConfig.getTargetMetaDataRootPath()+"/IA/DPL");
                    break;
                case DL3ConstantUtil.SOLAR:
                    metaDataUploadTasklet.setSourceDirectory(jobConfig.getTargetFileLocation() + jobConfig.getBackSlash()
                            + "IA" + jobConfig.getBackSlash() + "SOLAR" + jobConfig.getBackSlash());
                    //metaDataUploadTasklet.setTargetFolder(jobConfig.getTargetMetaDataRootPath()+"/IA/SOLAR");
                    break;
                case DL3ConstantUtil.MANUAL_UPLOAD:
                    metaDataUploadTasklet.setSourceDirectory(jobConfig.getTargetFileLocation() + jobConfig.getBackSlash()
                            + "IA" + jobConfig.getBackSlash() + "DDDS" + jobConfig.getBackSlash());
                    //metaDataUploadTasklet.setTargetFolder(jobConfig.getTargetMetaDataRootPath()+"/IA/DDDS");
                    break;
                default:
                    break;
            }
            break;
        case DL3ConstantUtil.DOCUMENT_ASSET:
            switch (sourceSystemCode) {
                case DL3ConstantUtil.SOLAR:
                    metaDataUploadTasklet.setSourceDirectory(jobConfig.getTargetFileLocation() + jobConfig.getBackSlash()
                            + "DA" + jobConfig.getBackSlash() + "SOLAR" + jobConfig.getBackSlash());
                    //metaDataUploadTasklet.setTargetFolder(jobConfig.getTargetMetaDataRootPath()+"/DA/SOLAR");
                    break;
                case DL3ConstantUtil.MANUAL_UPLOAD:
                    metaDataUploadTasklet.setSourceDirectory(jobConfig.getTargetFileLocation() + jobConfig.getBackSlash()
                            + "DA" + jobConfig.getBackSlash() + "DDDS" + jobConfig.getBackSlash());
                    //metaDataUploadTasklet.setTargetFolder(jobConfig.getTargetMetaDataRootPath()+"/DA/DDDS");
                    break;
                default:
                    break;
            }
            break;
        default:
            break;
    }
    return metaDataUploadTasklet;
}

@Bean(name = "UploadXMLFiles")
public Step uploadXMLFiles() {
    return stepBuilderFactory.get("UploadXMLFiles").tasklet(uploadTaskletMetaData(null, null)).build();
}

@Bean
@StepScope
public Partitioner metaDataExportPartioner(
        @Value("#{jobParameters[sourceSystemCode]}") String sourceSystemCode,
        @Value("#{jobParameters[assetType]}") String assetType,
        @Value("#{jobExecutionContext[totalCount]}") String totalCount) {
    logger.debug("source system code--->" + sourceSystemCode);
    logger.debug("assetType--->" + assetType);
    MetaDataExportPartioner metaDataExportPartioner = new MetaDataExportPartioner();
    metaDataExportPartioner.setSourceSystemCode(sourceSystemCode);
    metaDataExportPartioner.setAssetType(assetType);
    logger.debug("In the partioner initialization------>" + totalCount);
    metaDataExportPartioner.setTotalCount(StringUtils.isEmpty(totalCount) ? 0 : Integer.parseInt(totalCount));
    return metaDataExportPartioner;
}

@Bean
public PartitionHandler metaDataExportPartionHandler(ItemReader<MediaAsset> reader, ItemWriter<MediaAsset> writer) {
    logger.debug("Initializing partionHandler------>");
    TaskExecutorPartitionHandler partitionHandler = new TaskExecutorPartitionHandler();
    partitionHandler.setStep(generateXMLFilesSlave(reader, writer));
    partitionHandler.setGridSize(6);
    partitionHandler.setTaskExecutor(taskExecutor());
    return partitionHandler;
}

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setMaxPoolSize(10);
    taskExecutor.setCorePoolSize(10);
    taskExecutor.afterPropertiesSet();
    return taskExecutor;
}

@Bean
public JobExecutionListener metaDataExportJobExecutionListener() {
    JobExecutionListener jobExecutionListener = new MetaDataExportJobListener();
    return jobExecutionListener;
}

@Bean
public Job exportMetaDataJob(JobExecutionListener metaDataExportJobExecutionListener) throws Exception {
    return jobBuilderFactory.get("ExportMetaDataJob")
            .incrementer(new RunIdIncrementer())
            .listener(metaDataExportJobExecutionListener)
            .flow(generateXMLFilesMaster(metaDataExportReader(null, null, 0L, 0L, null),
                    metaDataExportWriter(null, null, 0L, 0L, null)))
            //.next(uploadXMLFiles())
            .end()
            .build();
}
My pom file entries:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.3.2.RELEASE</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <java.version>1.8</java.version>
    <spring-cloud-version>1.0.4.RELEASE</spring-cloud-version>
    <spring-batch-admin.version>1.3.0.RELEASE</spring-batch-admin.version>
</properties>
<dependencies>
    <!-- <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId>
        </dependency> -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-batch</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-mail</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- <dependency> <groupId>org.springframework.batch</groupId> <artifactId>spring-batch-admin-manager</artifactId>
        <version>${spring-batch-admin.version}</version> <exclusions> <exclusion>
        <artifactId>slf4j-log4j12</artifactId> <groupId>org.slf4j</groupId> </exclusion>
        <exclusion> <artifactId>slf4j-api</artifactId> <groupId>org.slf4j</groupId>
        </exclusion> </exclusions> </dependency> -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-aws-context</artifactId>
        <version>${spring-cloud-version}</version>
    </dependency>
    <dependency>
        <groupId>com.microsoft.sqlserver</groupId>
        <artifactId>sqljdbc4</artifactId>
        <version>4.0</version>
    </dependency>
    <dependency>
        <groupId>com.oracle</groupId>
        <artifactId>ojdbc14</artifactId>
        <version>10.2.0.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-oxm</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
    </dependency>
    <dependency>
        <groupId>commons-io</groupId>
        <artifactId>commons-io</artifactId>
        <version>2.3</version>
    </dependency>
    <!-- <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-dbcp2</artifactId>
        <version>2.0.1</version>
    </dependency> -->
    <!-- <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId>
        <version>2.0.1</version> </dependency> -->
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
JVMs shut down automatically when there are zero non-daemon threads running. In your non-partitioned case, no non-daemon threads are left when the job completes, so the JVM shuts down. In your partitioned case, however, something must still be waiting for work, preventing the application from shutting down. A thread dump would help diagnose the issue, but my bet is that the threads held by the ThreadPoolTaskExecutor are the culprit. If so, you may want to look at an option that doesn't keep a pool of threads alive (and is therefore not preventing the JVM from shutting down).
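As a hedged sketch of that suggestion, the taskExecutor() bean above could be swapped for a SimpleAsyncTaskExecutor, which starts a fresh thread per partition and keeps no pool alive afterwards (alternatively, keeping the pool but calling setDaemon(true) on the ThreadPoolTaskExecutor stops its idle threads from blocking JVM exit, at the cost of daemon threads being killed mid-work if the JVM exits early):
@Bean
public TaskExecutor taskExecutor() {
    // no pool is retained: each partition runs on its own thread, and once all
    // partitions complete there are no lingering non-daemon worker threads
    SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor("partition-");
    taskExecutor.setConcurrencyLimit(10); // cap concurrency like the old pool size
    return taskExecutor;
}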

Arquillian TomEE embedded test with web sockets

I'm trying to write a test of a server endpoint using WebSockets in an Arquillian test. I get an error saying:
Caused by: org.glassfish.tyrus.core.HandshakeException: Response code was not 101: 404.
When deploying, I get a warning in the TomEE log:
WARNING: Can't set TomEE ServerEndpointConfig$Configurator
java.lang.NoSuchFieldException: defaultImpl
at java.lang.Class.getDeclaredField(Class.java:2070)
at org.apache.tomee.catalina.TomcatWebAppBuilder.forceEEServerEndpointConfigurator(TomcatWebAppBuilder.java:338)
at org.apache.tomee.catalina.TomcatWebAppBuilder.<init>(TomcatWebAppBuilder.java:284)
at org.apache.tomee.catalina.TomcatLoader.initialize(TomcatLoader.java:222)
at org.apache.tomee.embedded.Container.start(Container.java:293)
....
The endpoints are defined like this:
@ServerEndpoint("/games")
public class GameEndPoint {

    @Inject
    GameManager gameManager;

    @Inject
    private GameSessionHandler sessionHandler;

    @OnOpen
    public void open(Session session) {
    }

    @OnClose
    public void close(Session session) {
    }

    @OnError
    public void onError(Throwable error) {
    }

    @OnMessage
    public void handleMessage(String payload, Session session) {
    }
}

@ClientEndpoint
public class SocketClient {

    @OnOpen
    public void onOpen(Session session) {
    }

    @OnMessage
    public void onMessage(String message, Session session) {
    }

    @OnClose
    public void onClose(Session session, CloseReason closeReason) {
        LOGGER.info(String.format("Session %s close because of %s", session.getId(), closeReason));
    }

    // URL is an injected Arquillian resource, where url is for an http connection
    public void openConnection(URL url) {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        try {
            URI uri = URI.create(url.toString().replace("http", "ws") + "games");
            container.connectToServer(this, uri);
        } catch (DeploymentException | IOException ex) {
            LOGGER.log(Level.SEVERE, null, ex);
            throw new RuntimeException(ex);
        }
    }
}
My test is written in Spock, and it fails in setup:
@RunWith(ArquillianSputnik)
class GameServiceSocketIT extends Specification {

    @Deployment
    public static Archive archive() {
        return createDeployment(); // shrinkwrap stuff
    }

    @ArquillianResource
    URL url;

    @Inject
    SocketClient client;

    def Game currentGame = null

    def setup() { // run before every feature method
        client.openConnection(url);
    }

    def 'init new game'() {
        given: 'blabla'
        blabla
        when: 'blalba'
        blabla
        then: 'blabla'
        blablabla...
    }
}
My pom dependencies:
<dependency>
    <groupId>javax.websocket</groupId>
    <artifactId>javax.websocket-api</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.arquillian.spock</groupId>
    <artifactId>arquillian-spock-container</artifactId>
    <version>1.0.0.Beta3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.spockframework</groupId>
    <artifactId>spock-core</artifactId>
    <version>0.7-groovy-2.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>2.1.8</version>
    <scope>test</scope>
</dependency>
<!-- For Arquillian Integration tests in TOMEE -->
<dependency>
    <groupId>org.apache.openejb</groupId>
    <artifactId>arquillian-tomee-embedded</artifactId>
    <version>1.7.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.openejb</groupId>
    <artifactId>tomee-embedded</artifactId>
    <version>1.7.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.glassfish.tyrus</groupId>
    <artifactId>tyrus-container-jdk-client</artifactId>
    <version>1.8.3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-web-api</artifactId>
    <version>6.0</version>
    <scope>provided</scope>
</dependency>
Edit:
My createDeployment method:
return ShrinkWrap.create(WebArchive.class, "test.war")
        .addAsWebInfResource(EmptyAsset.INSTANCE, ArchivePaths.create("beans.xml"))
        .addPackages(true, Filters.exclude(".*IT.class"), "engine")
        .addPackages(true, Filters.exclude(".*IT.class"), "socket")
        .addPackages(true, Filters.exclude(".*IT.class"), "persistence");
Replacing the dependency:
<dependency>
    <groupId>javax.websocket</groupId>
    <artifactId>javax.websocket-api</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>
with the dependency:
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat7-websocket</artifactId>
    <version>7.0.59</version>
    <scope>provided</scope>
</dependency>
fixes the problem, and the client now connects to my server. However, I'm now having problems with CDI injection in my ServerEndpoint; that seems to be another issue.