I'm configuring my Lettuce Redis Cluster connection pool. When I configure it according to the official documentation, the connection pool cannot be initialized properly and I can't get a connection. The official docs show:
RedisClusterClient clusterClient =
RedisClusterClient.create(RedisURI.create(host, port));
AsyncPool<StatefulRedisClusterConnection<String, String>> pool = AsyncConnectionPoolSupport.createBoundedObjectPool( () -> clusterClient.connectAsync(StringCodec.UTF8), BoundedPoolConfig.create());
// execute work
CompletableFuture<String> setResult = pool.acquire().thenCompose(connection -> {
RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();
async.set("key", "value");
return async.set("key2", "value2").whenComplete((s, throwable) -> pool.release(connection));
});
// terminating
pool.closeAsync();
// after pool completion
clusterClient.shutdownAsync();
This config does not work in my environment, so I added a minIdle setting:
final BoundedPoolConfig.Builder builder = BoundedPoolConfig.builder();
builder.minIdle(9);
It worked at the beginning, but when I loop over the connection pool and send commands multiple times, the following exception is thrown:
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: AsyncPool is closed
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
Here is all my code:
private String passwd = "xxxxx";
private String ip = "10.0.0.204";
@Bean
@Scope("singleton")
public ClientResources clientResources() {
final DefaultClientResources defaultClientResources = DefaultClientResources.builder()
.ioThreadPoolSize(4)
.computationThreadPoolSize(4)
.build();
return defaultClientResources;
}
@Bean(destroyMethod = "shutdown")
@Scope("singleton")
public RedisClusterClient clusterClient(ClientResources clientResources) {
final String ip = "10.0.0.204";
final String passwd = "dingXiang123";
final RedisURI redisURI1 = RedisURI.Builder.redis(ip, 7001).withPassword(passwd).build();
final RedisURI redisURI2 = RedisURI.Builder.redis(ip, 7002).withPassword(passwd).build();
final RedisURI redisURI3 = RedisURI.Builder.redis(ip, 7003).withPassword(passwd).build();
final RedisURI redisURI4 = RedisURI.Builder.redis(ip, 7004).withPassword(passwd).build();
final RedisURI redisURI5 = RedisURI.Builder.redis(ip, 7005).withPassword(passwd).build();
final RedisURI redisURI6 = RedisURI.Builder.redis(ip, 7006).withPassword(passwd).build();
RedisClusterClient clusterClient = null;
try {
final List<RedisURI> redisURIS = Arrays.asList(redisURI1, redisURI2, redisURI3, redisURI4, redisURI5, redisURI6);
clusterClient = RedisClusterClient.create(clientResources, redisURIS);
ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
.enableAdaptiveRefreshTrigger(ClusterTopologyRefreshOptions.RefreshTrigger.MOVED_REDIRECT, ClusterTopologyRefreshOptions.RefreshTrigger.PERSISTENT_RECONNECTS)
// adaptive topology refresh trigger timeout
.adaptiveRefreshTriggersTimeout(Duration.ofMinutes(3))
.build();
clusterClient.setOptions(ClusterClientOptions.builder()
.topologyRefreshOptions(topologyRefreshOptions)
.autoReconnect(true)
.pingBeforeActivateConnection(true)
.build());
final RedisAdvancedClusterAsyncCommands<String, String> async = clusterClient.connect().async();
final RedisFuture<String> set = async.set("aa", "aaaaa");
set.get();
log.info("客户端初始化成功");
return clusterClient;
} catch (Exception e) {
log.error("lettce客户端初始化失败,{}", e);
if (clusterClient != null) {
clusterClient.shutdown();
}
}
return null;
}
/**
* Initialize the asynchronous cluster-mode connection pool.
*
* @param clusterClient
* @return
*/
@Bean
@DependsOn("clusterClient")
@Scope("singleton")
public BoundedAsyncPool<StatefulRedisClusterConnection<String, String>> lettucePool(RedisClusterClient clusterClient) {
final BoundedPoolConfig.Builder builder = BoundedPoolConfig.builder();
builder.minIdle(9);
final BoundedPoolConfig boundedPoolConfig = builder.build();
final BoundedAsyncPool<StatefulRedisClusterConnection<String, String>> lettucePool = AsyncConnectionPoolSupport.createBoundedObjectPool(
() -> clusterClient.connectAsync(StringCodec.UTF8)
, boundedPoolConfig
);
log.info("连接池初始化成功");
return lettucePool;
}
/**
* Acquire a connection from the connection pool.
*
* @param lettucePool
*/
@Bean
@DependsOn("lettucePool")
public CompletableFuture<StatefulRedisClusterConnection<String, String>> clusterAsync(BoundedAsyncPool<StatefulRedisClusterConnection<String, String>> lettucePool) {
final CompletableFuture<StatefulRedisClusterConnection<String, String>> acquire = lettucePool.acquire();
return acquire;
}
Have you encountered this problem before, and how did you solve it?
The other point is that I don't really like using RedisTemplate to drive the Lettuce API, so I am looking for a configuration approach for the native Lettuce cluster pool.
Have you configured the native cluster pool or used its API before, or seen detailed demo documentation? If so, please recommend it to me (of course, I have also read the official documentation; I may need a demo application to study).
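For reference, here is a minimal sketch of acquiring and releasing a connection per operation, following the acquire/release pattern from the docs snippet quoted above (the helper method and the key/value parameters are just for illustration, not from my actual code):
public CompletableFuture<String> setAsync(BoundedAsyncPool<StatefulRedisClusterConnection<String, String>> lettucePool, String key, String value) {
    // acquire a connection for this one operation and always give it back to the pool
    return lettucePool.acquire().thenCompose(connection ->
            connection.async().set(key, value)
                    .whenComplete((result, throwable) -> lettucePool.release(connection))
                    .toCompletableFuture());
}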
Related
I want to load the values from json file upon the Spring Boot Application is started.
My code for the Configuration File is like the below:
@Configuration
@Getter
public class FedexAPIConfig {
private final static String JSON_FILE = "/static/config/fedex-api-credentials.json";
private final boolean IS_PRODUCTION = false;
private FedexAPICred apiCredentials;
public FedexAPIConfig() {
try (InputStream in = getClass().getResourceAsStream(JSON_FILE);
BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
JSONObject json = new JSONObject();
// this.apiCredentials = new JSONObject(new JSONTokener(reader));
if (IS_PRODUCTION) {
json = new JSONObject(new JSONTokener(reader)).getJSONObject("production");
} else {
json = new JSONObject(new JSONTokener(reader)).getJSONObject("test");
}
System.out.println(json.toString());
this.apiCredentials = FedexAPICred.builder()
.url(json.optString("url"))
.apiKey(json.optString("api_key"))
.secretKey(json.optString("secret_key"))
.build();
} catch (FileNotFoundException fnfe) {
fnfe.printStackTrace();
} catch (IOException ioe) {
ioe.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
}
}
And with this, while the application is starting up, the values are successfully printed to the console (see the startup console log).
When I try to read these values from another ordinary class, like the one below, it brings back nothing and just throws a NullPointerException... What am I doing wrong and what should I do?
public class FedexOAuthTokenManager extends OAuthToken {
private static final String VALIDATE_TOKEN_URL = "/oauth/token";
private static final String GRANT_TYPE_CLIENT = "client_credentials";
private static final String GRANT_TYPE_CSP = "csp_credentials";
@Autowired
private FedexAPIConfig fedexApiConfig;
@Autowired
private Token token;
@Override
public void validateToken() {
// This is the part where "fedexApiConfig" is null.
FedexAPICred fedexApiCred = fedexApiConfig.getApiCredentials();
Response response = null;
try {
RequestBody body = new FormBody.Builder()
.add("grant_type", GRANT_TYPE_CLIENT)
.add("client_id", fedexApiCred.getApiKey())
.add("client_secret", fedexApiCred.getSecretKey())
.build();
response = new HttpClient().post(fedexApiCred.getUrl() + VALIDATE_TOKEN_URL, body);
if (response.code() == 200) {
JSONObject json = new JSONObject(response.body().string());
token.setAccessToken(json.optString("access_token"));
token.setTokenType(json.optString("token_type"));
token.setExpiredIn(json.optInt("expires_in"));
token.setExpiredDateTime(LocalDateTime.now().plusSeconds(json.optInt("expires_in")));
token.setScope(json.optString("scope"));
}
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
}
fedexApiConfig is null even though I autowired it prior to the call.
And this FedexOAuthTokenManager is called from another @Component class via new FedexOAuthTokenManager().
Did you try it like below?
Step 1: Create a configuration class like the one below.
public class DemoConfig implements ApplicationListener<ApplicationPreparedEvent> {
@Override
public void onApplicationEvent(ApplicationPreparedEvent event) {
//Load the values from the JSON file and populate the application
//properties dynamically
ConfigurableEnvironment environment = event.getApplicationContext().getEnvironment();
Properties props = new Properties();
props.put("spring.datasource.url", "<my value>");
//Add more properties
environment.getPropertySources().addFirst(new PropertiesPropertySource("myProps", props));
}
}
To listen to a context event, a bean should implement the ApplicationListener interface, which has just one method, onApplicationEvent(). The ApplicationPreparedEvent is published very early in the lifecycle of the application.
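For example, the placeholder comment could be filled in roughly like this, a sketch that reuses the org.json parsing and the JSON file path from the question (the fedex.api.* property names are made up for illustration):
@Override
public void onApplicationEvent(ApplicationPreparedEvent event) {
    ConfigurableEnvironment environment = event.getApplicationContext().getEnvironment();
    Properties props = new Properties();
    // read the same JSON file the question loads and expose selected values as properties
    try (InputStream in = getClass().getResourceAsStream("/static/config/fedex-api-credentials.json");
         BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
        JSONObject json = new JSONObject(new JSONTokener(reader)).getJSONObject("test");
        props.put("fedex.api.url", json.optString("url"));
        props.put("fedex.api.key", json.optString("api_key"));
        props.put("fedex.api.secret", json.optString("secret_key"));
    } catch (IOException e) {
        throw new IllegalStateException("Could not read FedEx API credentials", e);
    }
    environment.getPropertySources().addFirst(new PropertiesPropertySource("fedexProps", props));
}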
Step 2: Register the listener in src/main/resources/META-INF/spring.factories:
org.springframework.context.ApplicationListener=com.example.demo.DemoConfig
Step 3: @Value in Spring Boot is commonly used to inject configuration values into the application. Access the properties as you wish.
@Value("${spring.datasource.url}")
private String valueFromJSon;
Try this sample on your local machine first and then adapt it to your needs.
Refer - https://www.baeldung.com/spring-value-annotation
Refer - https://www.knowledgefactory.net/2021/02/aws-secret-manager-service-as.html
I'm trying to follow a guide to WebSockets at https://www.devglan.com/spring-boot/spring-boot-angular-websocket
I'd like it to respond to ws://localhost:8448/wsb/softlayer-cost-file, but I'm sure I misunderstood something. I'd like to get it to receive a binary file and issue periodic updates as the file is being processed.
Questions are:
How come Spring does not respond to my requests, despite all the multiple URLs I have tried (see below)?
Does my RxJS call run once and then conclude, or does it keep running until some closure has happened? Sorry to ask what might be obvious to others.
On my Spring Boot Server start, I see no errors. After about 5-7 minutes of running, I saw the following log message:
INFO o.s.w.s.c.WebSocketMessageBrokerStats - WebSocketSession[0 current WS(0)-HttpStream(0)-HttpPoll(0), 0 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(0)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], sockJsScheduler[pool size = 6, active threads = 1, queued tasks = 0, completed tasks = 5]
I've pointed my browser at these URLs and can't get the Spring Boot server to show any reaction:
ws://localhost:8448/app/message
ws://localhost:8448/greeting/app/message
ws://localhost:8448/topic
ws://localhost:8448/queue
(I got the initial request formed in Firefox, then clicked edit/resend to try again).
WebSocketConfig.java
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {
@Autowired
CostFileUploadWebSocketHandler costFileUploadWebSocketHandler;
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(new SocketTextHandler(), "/wst");
registry.addHandler(costFileUploadWebSocketHandler, "/wsb/softlayer-cost-file");
}
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
config.enableSimpleBroker("/topic/", "/queue/");
config.setApplicationDestinationPrefixes("/app");
}
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/greeting").setAllowedOrigins("*");
// .withSockJS();
}
}
CostFileUploadWebSocketHandler.java
@Component
public class CostFileUploadWebSocketHandler extends BinaryWebSocketHandler {
private final Logger logger = LoggerFactory.getLogger(this.getClass());
private SoftLayerJobService softLayerJobService;
private SoftLayerService softLayerService;
private AuthenticationFacade authenticationFacade;
@Autowired
CostFileUploadWebSocketHandler(SoftLayerJobService softLayerJobService, SoftLayerService softLayerService,
AuthenticationFacade authenticationFacade) {
this.softLayerJobService = softLayerJobService;
this.softLayerService = softLayerService;
this.authenticationFacade = authenticationFacade;
}
Map<WebSocketSession, FileUploadInFlight> sessionToFileMap = new WeakHashMap<>();
@Override
public boolean supportsPartialMessages() {
return true;
}
class WebSocketProgressReporter implements ProgressReporter {
private WebSocketSession session;
public WebSocketProgressReporter(WebSocketSession session) {
this.session = session;
}
@Override
public void reportCurrentProgress(BatchStatus currentBatchStatus, long currentPercentage) {
try {
session.sendMessage(new TextMessage("BatchStatus "+currentBatchStatus));
session.sendMessage(new TextMessage("Percentage Complete "+currentPercentage));
} catch(IOException e) {
throw new RuntimeException(e);
}
}
}
@Override
protected void handleBinaryMessage(WebSocketSession session, BinaryMessage message) throws Exception {
ByteBuffer payload = message.getPayload();
FileUploadInFlight inflightUpload = sessionToFileMap.get(session);
if (inflightUpload == null) {
throw new IllegalStateException("This is not expected");
}
inflightUpload.append(payload);
if (message.isLast()) {
File fileNameSaved = save(inflightUpload.name, "websocket", inflightUpload.bos.toByteArray());
BatchStatus currentBatchStatus = BatchStatus.UNKNOWN;
long percentageComplete;
ProgressReporter progressReporter = new WebSocketProgressReporter(session);
SoftLayerCostFileJobExecutionThread softLayerCostFileJobExecutionThread =
new SoftLayerCostFileJobExecutionThread(softLayerService, softLayerJobService, fileNameSaved,progressReporter);
logger.info("In main thread about to begin separate thread");
ForkJoinPool.commonPool().submit(softLayerCostFileJobExecutionThread);
while(!softLayerCostFileJobExecutionThread.jobDone());
// softLayerCostFileJobExecutionThread.run();
// Wait for above to complete somehow
// StepExecution foundStepExecution = jobExplorer.getJobExecution(
// jobExecutionThread.getJobExecutionResult().getJobExecution().getId()
// ).getStepExecutions().stream().filter(stepExecution->stepExecution.getStepName().equals("softlayerUploadFile")).findFirst().orElseGet(null);
// if (!"COMPLETED".equals(jobExecutionResult.getExitStatus())) {
// throw new UploadFileException(file.getOriginalFilename() + " exit status: " + jobExecutionResult.getExitStatus());
// }
logger.info("In main thread after separate thread submitted");
session.sendMessage(new TextMessage("UPLOAD "+inflightUpload.name));
session.close();
sessionToFileMap.remove(session);
logger.info("Uploaded "+inflightUpload.name);
}
String response = "Upload Chunk: size "+ payload.array().length;
logger.debug(response);
}
private File save(String fileName, String prefix, byte[] data) throws IOException {
Path basePath = Paths.get(".", "uploads", prefix, UUID.randomUUID().toString());
logger.info("Saving incoming cost file "+fileName+" to "+basePath);
Files.createDirectories(basePath);
FileChannel channel = new FileOutputStream(Paths.get(basePath.toString(), fileName).toFile(), false).getChannel();
channel.write(ByteBuffer.wrap(data));
channel.close();
return new File(basePath.getFileName().toString());
}
@Override
public void afterConnectionEstablished(WebSocketSession session) throws Exception {
sessionToFileMap.put(session, new FileUploadInFlight(session));
}
static class FileUploadInFlight {
private final Logger logger = LoggerFactory.getLogger(this.getClass());
String name;
String uniqueUploadId;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
/**
* Fragile constructor - beware not prod ready
* @param session
*/
FileUploadInFlight(WebSocketSession session) {
String query = session.getUri().getQuery();
String uploadSessionIdBase64 = query.split("=")[1];
String uploadSessionId = new String(Base64Utils.decodeUrlSafe(uploadSessionIdBase64.getBytes()));
List<String> sessionIdentifiers = Splitter.on("\\").splitToList(uploadSessionId);
String uniqueUploadId = session.getRemoteAddress().toString()+sessionIdentifiers.get(0);
String fileName = sessionIdentifiers.get(1);
this.name = fileName;
this.uniqueUploadId = uniqueUploadId;
logger.info("Preparing upload for "+this.name+" uploadSessionId "+uploadSessionId);
}
public void append(ByteBuffer byteBuffer) throws IOException{
bos.write(byteBuffer.array());
}
}
}
Below is a snippet of Angular code where I make the call to the websocket. The service is intended to receive a file, then provide regular updates of percentage complete until the service is completed. Does this call need to be in a loop, or does the socket run until it's closed?
Angular Snippet of call to WebSocket:
this.softlayerService.uploadBlueReportFile(this.blueReportFile)
.subscribe(data => {
this.showLoaderBlueReport = false;
this.successBlueReport = true;
this.blueReportFileName = "No file selected";
this.responseBlueReport = 'File '.concat(data.fileName).concat(' ').concat('is ').concat(data.exitStatus);
this.blueReportSelected = false;
this.getCurrentUserFiles();
},
(error)=>{
if(error.status === 504){
this.showLoaderBlueReport = false;
this.stillProcessing = true;
}else{
this.showLoaderBlueReport = false;
this.displayUploadBlueReportsError(error, 'File upload failed');
}
});
}
I'm using the following versions of Spring Boot and Spring Integration:
spring.boot.version 2.3.4.RELEASE
spring-integration 5.3.2.RELEASE
My requirement is to create TCP client-server communication, and I'm using Spring Integration for it. The spike works fine for a single exchange between client and server and also works fine for exactly 5 concurrent client connections.
The moment I increase the concurrent client connections beyond 5 to any arbitrary number, it stops working and the TCP server accepts only 5 connections.
I have used the ThreadAffinityClientConnectionFactory mentioned by @Gary Russell in one of his earlier answers (for a similar requirement), but it still doesn't work.
Below is the code i have at the moment.
@Slf4j
@Configuration
@EnableIntegration
@IntegrationComponentScan
public class SocketConfig {
#Value("${socket.host}")
private String clientSocketHost;
#Value("${socket.port}")
private Integer clientSocketPort;
@Bean
public TcpOutboundGateway tcpOutGate(AbstractClientConnectionFactory connectionFactory) {
TcpOutboundGateway gate = new TcpOutboundGateway();
//connectionFactory.setTaskExecutor(taskExecutor());
gate.setConnectionFactory(clientCF());
return gate;
}
@Bean
public TcpInboundGateway tcpInGate(AbstractServerConnectionFactory connectionFactory) {
TcpInboundGateway inGate = new TcpInboundGateway();
inGate.setConnectionFactory(connectionFactory);
inGate.setRequestChannel(fromTcp());
return inGate;
}
@Bean
public MessageChannel fromTcp() {
return new DirectChannel();
}
// Outgoing requests
@Bean
public ThreadAffinityClientConnectionFactory clientCF() {
TcpNetClientConnectionFactory tcpNetClientConnectionFactory = new TcpNetClientConnectionFactory(clientSocketHost, serverCF().getPort());
tcpNetClientConnectionFactory.setSingleUse(true);
ThreadAffinityClientConnectionFactory threadAffinityClientConnectionFactory = new ThreadAffinityClientConnectionFactory(
tcpNetClientConnectionFactory);
// Tested with the below too.
// threadAffinityClientConnectionFactory.setTaskExecutor(taskExecutor());
return threadAffinityClientConnectionFactory;
}
// Incoming requests
@Bean
public AbstractServerConnectionFactory serverCF() {
log.info("Server Connection Factory");
TcpNetServerConnectionFactory tcpNetServerConnectionFactory = new TcpNetServerConnectionFactory(clientSocketPort);
tcpNetServerConnectionFactory.setSerializer(new CustomSerializer());
tcpNetServerConnectionFactory.setDeserializer(new CustomDeserializer());
tcpNetServerConnectionFactory.setSingleUse(true);
return tcpNetServerConnectionFactory;
}
@Bean
public TaskExecutor taskExecutor () {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(50);
executor.setMaxPoolSize(100);
executor.setQueueCapacity(50);
executor.setAllowCoreThreadTimeOut(true);
executor.setKeepAliveSeconds(120);
return executor;
}
}
Has anyone had the same issue with more than 5 concurrent TCP client connections?
Thanks
Client Code:
@Component
@Slf4j
@RequiredArgsConstructor
public class ScheduledTaskService {
// Timeout in milliseconds
private static final int SOCKET_TIME_OUT = 18000;
private static final int BUFFER_SIZE = 32000;
private static final int ETX = 0x03;
private static final String HEADER = "ABCDEF ";
private static final String data = "FIXED DARATA";
private final AtomicInteger atomicInteger = new AtomicInteger();
@Async
@Scheduled(fixedDelay = 100000)
public void sendDataMessage() throws IOException, InterruptedException {
int numberOfRequests = 10;
Callable<String> executeMultipleSuccessfulRequestTask = () -> socketSendNReceive();
final Collection<Callable<String>> callables = new ArrayList<>();
IntStream.rangeClosed(1, numberOfRequests).forEach(i-> {
callables.add(executeMultipleSuccessfulRequestTask);
});
ExecutorService executorService = Executors.newFixedThreadPool(numberOfRequests);
List<Future<String>> taskFutureList = executorService.invokeAll(callables);
List<String> strings = taskFutureList.stream().map(future -> {
try {
return future.get(20000, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
} catch (TimeoutException e) {
e.printStackTrace();
}
return "";
}).collect(Collectors.toList());
strings.forEach(string -> log.info("Message received from the server: {} ", string));
}
public String socketSendNReceive() throws IOException{
int requestCounter = atomicInteger.incrementAndGet();
String host = "localhost";
int port = 8000;
Socket socket = new Socket();
InetSocketAddress address = new InetSocketAddress(host, port);
socket.connect(address, SOCKET_TIME_OUT);
socket.setSoTimeout(SOCKET_TIME_OUT);
//Send the message to the server
OutputStream os = socket.getOutputStream();
BufferedOutputStream bos = new BufferedOutputStream(os);
bos.write(HEADER.getBytes());
bos.write(data.getBytes());
bos.write(ETX);
bos.flush();
// log.info("Message sent to the server : {} ", envio);
//Get the return message from the server
InputStream is = socket.getInputStream();
String response = receber(is);
log.info("Received response");
return response;
}
private String receber(InputStream in) throws IOException {
final StringBuffer stringBuffer = new StringBuffer();
int readLength;
byte[] buffer;
buffer = new byte[BUFFER_SIZE];
do {
if(Objects.nonNull(in)) {
log.info("Input Stream not null");
}
readLength = in.read(buffer);
log.info("readLength : {} ", readLength);
if(readLength > 0){
stringBuffer.append(new String(buffer),0,readLength);
log.info("String ******");
}
} while (buffer[readLength-1] != ETX);
buffer = null;
stringBuffer.deleteCharAt(stringBuffer.length()-1);
return stringBuffer.toString();
}
}
Since you are opening the connections all at the same time, you need to increase the backlog property on the server connection factory.
It defaults to 5.
/**
* The number of sockets in the connection backlog. Default 5;
* increase if you expect high connection rates.
* @param backlog The backlog to set.
*/
public void setBacklog(int backlog) {
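So in the serverCF() bean from the question, something along these lines should do it (1000 is just an illustrative value; size it to the connection burst you expect):
@Bean
public AbstractServerConnectionFactory serverCF() {
    TcpNetServerConnectionFactory tcpNetServerConnectionFactory = new TcpNetServerConnectionFactory(clientSocketPort);
    tcpNetServerConnectionFactory.setSerializer(new CustomSerializer());
    tcpNetServerConnectionFactory.setDeserializer(new CustomDeserializer());
    tcpNetServerConnectionFactory.setSingleUse(true);
    // allow more pending connection attempts than the default backlog of 5
    tcpNetServerConnectionFactory.setBacklog(1000);
    return tcpNetServerConnectionFactory;
}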
There seems to be an issue when I use AggregatingReplyingKafkaTemplate with template.setReturnPartialOnTimeout(true): it returns a timeout exception even when partial results are available from the consumers.
In the example below, I have 3 consumers replying to the request topic and I've set the reply timeout to 10 seconds. I've explicitly delayed the response of Consumer 3 to 11 seconds, so I expect the responses back from Consumers 1 and 2 and want to return those partial results. However, I am getting a KafkaReplyTimeoutException. I'd appreciate your input. Thanks.
I based my code on the unit test below.
[ReplyingKafkaTemplateTests][1]
I've provided the actual code below:
@RestController
public class SumController {
@Value("${kafka.bootstrap-servers}")
private String bootstrapServers;
public static final String D_REPLY = "dReply";
public static final String D_REQUEST = "dRequest";
@ResponseBody
@PostMapping(value="/sum")
public String sum(@RequestParam("message") String message) throws InterruptedException, ExecutionException {
AggregatingReplyingKafkaTemplate<Integer, String, String> template = aggregatingTemplate(
new TopicPartitionOffset(D_REPLY, 0), 3, new AtomicInteger());
String resultValue ="";
String currentValue ="";
try {
template.setDefaultReplyTimeout(Duration.ofSeconds(10));
template.setReturnPartialOnTimeout(true);
ProducerRecord<Integer, String> record = new ProducerRecord<>(D_REQUEST, null, null, null, message);
RequestReplyFuture<Integer, String, Collection<ConsumerRecord<Integer, String>>> future =
template.sendAndReceive(record);
future.getSendFuture().get(5, TimeUnit.SECONDS); // send ok
System.out.println("Send Completed Successfully");
ConsumerRecord<Integer, Collection<ConsumerRecord<Integer, String>>> consumerRecord = future.get(10, TimeUnit.SECONDS);
System.out.println("Consumer record size "+consumerRecord.value().size());
Iterator<ConsumerRecord<Integer, String>> iterator = consumerRecord.value().iterator();
while (iterator.hasNext()) {
currentValue = iterator.next().value();
System.out.println("response " + currentValue);
System.out.println("Record header " + consumerRecord.headers().toString());
resultValue = resultValue + currentValue + "\r\n";
}
} catch (Exception e) {
System.out.println("Error Message is "+e.getMessage());
}
return resultValue;
}
public AggregatingReplyingKafkaTemplate<Integer, String, String> aggregatingTemplate(
TopicPartitionOffset topic, int releaseSize, AtomicInteger releaseCount) {
//Create Container Properties
ContainerProperties containerProperties = new ContainerProperties(topic);
containerProperties.setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
//Set the consumer Config
//Create Consumer Factory with Consumer Config
DefaultKafkaConsumerFactory<Integer, Collection<ConsumerRecord<Integer, String>>> cf =
new DefaultKafkaConsumerFactory<>(consumerConfigs());
//Create Listener Container with Consumer Factory and Container Property
KafkaMessageListenerContainer<Integer, Collection<ConsumerRecord<Integer, String>>> container =
new KafkaMessageListenerContainer<>(cf, containerProperties);
// container.setBeanName(this.testName);
AggregatingReplyingKafkaTemplate<Integer, String, String> template =
new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
(list, timeout) -> {
releaseCount.incrementAndGet();
return list.size() == releaseSize;
});
template.setSharedReplyTopic(true);
template.start();
return template;
}
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,bootstrapServers);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test_id");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
return props;
}
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
// list of host:port pairs used for establishing the initial connections to the Kafka cluster
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
org.apache.kafka.common.serialization.StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
return props;
}
public ProducerFactory<Integer,String> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs());
}
#KafkaListener(id = "def1", topics = { D_REQUEST}, groupId = "D_REQUEST1")
#SendTo // default REPLY_TOPIC header
public String dListener1(String in) throws InterruptedException {
return "First Consumer : "+ in.toUpperCase();
}
#KafkaListener(id = "def2", topics = { D_REQUEST}, groupId = "D_REQUEST2")
#SendTo // default REPLY_TOPIC header
public String dListener2(String in) throws InterruptedException {
return "Second Consumer : "+ in.toLowerCase();
}
#KafkaListener(id = "def3", topics = { D_REQUEST}, groupId = "D_REQUEST3")
#SendTo // default REPLY_TOPIC header
public String dListener3(String in) throws InterruptedException {
Thread.sleep(11000);
return "Third Consumer : "+ in;
}
}
[1]: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/test/java/org/springframework/kafka/requestreply/ReplyingKafkaTemplateTests.java
template.setReturnPartialOnTimeout(true) simply means the template will consult the release strategy on timeout (with the timeout argument = true, to tell the strategy it's a timeout rather than a delivery call).
It must return true to release the partial result.
This is to allow you to look at (and possibly modify) the list to decide whether you want to release or discard.
Your strategy ignores the timeout parameter:
(list, timeout) -> {
releaseCount.incrementAndGet();
return list.size() == releaseSize;
});
You need return timeout ? true : { ... }.
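Applied to the aggregatingTemplate() above, the release strategy would look roughly like this (a sketch of the answer's suggestion; the rest of the template setup stays unchanged):
AggregatingReplyingKafkaTemplate<Integer, String, String> template =
        new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
                (list, timeout) -> {
                    releaseCount.incrementAndGet();
                    // on timeout, release whatever partial replies have arrived so far;
                    // otherwise wait until all expected replies are in
                    return timeout ? true : list.size() == releaseSize;
                });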
I'm trying to create a leftJoin in Kafka Streams which works fine for about 10 records and then crashes with an exception caused by a NullPointerException, using this code:
private static KafkaStreams getKafkaStreams() {
StreamsConfig streamsConfig = new StreamsConfig(getProperties());
KStreamBuilder builder = new KStreamBuilder();
KTable<String, Verkaeufer> umsatzTable = builder.table(Serdes.String(), EventstreamSerde.Verkaeufer(), CommonUtilsConstants.TOPIC_VERKAEUFER_STAMMDATEN);
KStream<String, String> verkaeuferStream = builder.stream(CommonUtilsConstants.TOPIC_ANZAHL_UMSATZ_PER_VERKAEUFER);
KStream<String, String> tuttiStream = verkaeuferStream.leftJoin(umsatzTable,
(tutti, verkaeufer) -> ("Vorname=" + verkaeufer.getVorname().toString() +",Nachname=" +verkaeufer.getNachname().toString() +"," +tutti.toString()), Serdes.String(), Serdes.String());
tuttiStream.to(Serdes.String(), Serdes.String(), CommonUtilsConstants.TOPIC_TUTTI);
return new KafkaStreams(builder, streamsConfig);
}
StreamsConfig looks like this:
private static Properties getProperties() {
Properties props = new Properties();
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, CommonUtilsConstants.BOOTSTRAP_SERVER_CONFIGURATION);
props.put(StreamsConfig.APPLICATION_ID_CONFIG, CommonUtilsConstants.GID_TUTTI);
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,Serdes.String().getClass());
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, "1000");
return props;
}
Full Stack Trace:
22:19:36.550 [gid-tutti-8fe6be58-d5c5-41ce-982d-88081b98004e-StreamThread-1] ERROR o.a.k.s.p.internals.StreamThread - stream-thread [gid-tutti-8fe6be58-d5c5-41ce-982d-88081b98004e-StreamThread-1] Failed to commit StreamTask 0_0 state: org.apache.kafka.streams.errors.ProcessorStateException: task [0_0] Failed to flush state store KTABLE-SOURCE-STATE-STORE-0000000000
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:262)
at org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:190)
at org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:282)
at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:264)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
at org.apache.kafka.streams.processor.internals.StreamTask.commitImpl(StreamTask.java:259)
at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:253)
at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:815)
at org.apache.kafka.streams.processor.internals.StreamThread.access$2800(StreamThread.java:73)
at org.apache.kafka.streams.processor.internals.StreamThread$2.apply(StreamThread.java:797)
at org.apache.kafka.streams.processor.internals.StreamThread.performOnStreamTasks(StreamThread.java:1448)
at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:789)
at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:778)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:567)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
Caused by: java.lang.NullPointerException: null
at java.lang.String.<init>(String.java:143)
at ch.wesr.eventstream.commonutils.serde.GsonDeserializer.deserialize(GsonDeserializer.java:38)
at org.apache.kafka.streams.state.StateSerdes.valueFrom(StateSerdes.java:163)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:90)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.access$000(CachingKeyValueStore.java:34)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore$1.apply(CachingKeyValueStore.java:78)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:103)
at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:97)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.flush(CachingKeyValueStore.java:107)
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:260)
... 14 common frames omitted
Update:
This is what the GsonDeserializer looks like:
public class GsonDeserializer<T> implements Deserializer<T>{
public static final String CONFIG_VALUE_CLASS = "default.value.deserializer.class";
public static final String CONFIG_KEY_CLASS = "default.key.deserializer.class";
private Class<T> deserializedClass;
private Gson gson = new GsonBuilder().create();
public GsonDeserializer() {}
@Override
public void configure(Map<String, ?> config, boolean isKey) {
String configKey = isKey ? CONFIG_KEY_CLASS : CONFIG_VALUE_CLASS;
String clsName = String.valueOf(config.get(configKey));
try {
if (deserializedClass == null) {
deserializedClass = (Class<T>) Class.forName(clsName);
}
} catch (ClassNotFoundException e) {
System.err.printf("Failed to configure GsonDeserializer. " +
"Did you forget to specify the '%s' property ?%n",
configKey);
System.out.println(e.getMessage());
}
}
@Override
public T deserialize(String s, byte[] bytes) {
return gson.fromJson(new String(bytes), deserializedClass);
}
@Override
public void close() {}
}
As long as the cache is not flushed, your deserializer is never called. That's why it doesn't fail in the beginning and you can increase the time until it fails via cache size parameter and commit interval (we flush on commit).
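As an illustration (these values are my own, not from the original answer), shrinking the cache and commit interval makes the flush, and therefore the failing deserialization, show up almost immediately:
// forces frequent flushes, so the deserialization failure surfaces on the first records
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, "0");
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100");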
Looking at your code for GsonDeserializer, it seems that new String(bytes) fails with NPE -- String constructor cannot take null as parameter -- your deserializer code must guard against bytes==null and should return null for this case directly.
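A guarded version of the deserialize method would look like this (only the null check is new; the rest is as in the question):
@Override
public T deserialize(String topic, byte[] bytes) {
    // Kafka passes null bytes for missing values / tombstones; new String(null) throws NPE
    if (bytes == null) {
        return null;
    }
    return gson.fromJson(new String(bytes), deserializedClass);
}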