gRPC throws DEADLINE_EXCEEDED with a negative number of seconds from now - spring-boot

The first calls are usually successful, but then I get exceptions with a message like this:
io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: ClientCall started after deadline exceeded: -175.597476157s from now
Why is the number of seconds negative? How do I fix it?
My gRPC config:
public class MyAppLibGrpcSenderConfig {

    @Value("${grpc.client.host:localhost}")
    private String host;

    @Value("${grpc.client.port:9090}")
    private int port;

    @Value("${grpc.client.negotiationType:PLAINTEXT}")
    private String negotiationType;

    @Value("${grpc.client.deadline:300000}")
    private long deadline;

    @Autowired
    private Tracer tracer;

    @Bean
    public ManagedChannel managedChannel() {
        ManagedChannelBuilder<?> builder = ManagedChannelBuilder.forAddress(host, port);
        if ("PLAINTEXT".equals(negotiationType)) {
            builder.usePlaintext();
        }
        return builder.build();
    }

    @Bean
    public TracingClientInterceptor tracingClientInterceptor(Tracer tracer) {
        return TracingClientInterceptor
                .newBuilder()
                .withTracer(this.tracer)
                .build();
    }

    @Bean
    public MyAppSenderServiceGrpc.MyAppSenderServiceBlockingStub myAppSenderServiceBlockingStub(
            TracingClientInterceptor tracingClientInterceptor,
            ManagedChannel managedChannel) {
        return MyAppSenderServiceGrpc
                .newBlockingStub(tracingClientInterceptor.intercept(managedChannel))
                .withDeadlineAfter(deadline, TimeUnit.MILLISECONDS);
    }

    @Bean
    public MyAppCodeLoaderServiceGrpc.MyAppCodeLoaderServiceBlockingStub myAppCodeLoaderServiceBlockingStub(
            TracingClientInterceptor tracingClientInterceptor,
            ManagedChannel managedChannel) {
        return MyAppCodeLoaderServiceGrpc
                .newBlockingStub(tracingClientInterceptor.intercept(managedChannel))
                .withDeadlineAfter(deadline, TimeUnit.MILLISECONDS);
    }
}
Client code:
@net.devh.boot.grpc.server.service.GrpcService
public class MyAppEventKafkaSender extends MyAppSenderServiceGrpc.MyAppSenderServiceImplBase {
    ...
    @SneakyThrows
    @Override
    public void sendMessage(ContextMyAppEventGrpc contextMyAppEventGrpc,
                            StreamObserver<Empty> responseObserver) {
        try {
            sendEvent(contextMyAppEventGrpc);
            Empty reply = Empty.newBuilder().build();
            responseObserver.onNext(reply);
            responseObserver.onCompleted();
        } catch (Exception e) {
            Status status = Status.INTERNAL.withDescription(e.getMessage());
            responseObserver.onError(status.asRuntimeException());
        }
    }
}

A deadline is an absolute point in time, and it is fixed the moment you create the stub (not necessarily when you execute the call); this is in contrast to a timeout, which is relative to the start of the call.
So a negative deadline means it had already expired by the time your stub was executed.
To fix the issue, set the deadline immediately before making each call.
var response = blockingStub.withDeadlineAfter(300000, TimeUnit.MILLISECONDS)
.yourRpcName();
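Applied to the config in the question, that means dropping withDeadlineAfter from the stub beans and deriving a fresh deadline from the shared stub on every call. A minimal sketch using the bean, field, and RPC names from the question (illustrative only, not a drop-in replacement):

@Bean
public MyAppSenderServiceGrpc.MyAppSenderServiceBlockingStub myAppSenderServiceBlockingStub(
        TracingClientInterceptor tracingClientInterceptor,
        ManagedChannel managedChannel) {
    // no deadline here - the stub bean may live much longer than any single call
    return MyAppSenderServiceGrpc
            .newBlockingStub(tracingClientInterceptor.intercept(managedChannel));
}

// at the call site, each invocation gets its own deadline relative to "now"
Empty reply = myAppSenderServiceBlockingStub
        .withDeadlineAfter(deadline, TimeUnit.MILLISECONDS)
        .sendMessage(contextMyAppEventGrpc);

withDeadlineAfter does not mutate the shared stub; it returns a new stub carrying the fresh deadline, so the bean can be reused safely.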
Read more about Deadline here

Related

Auto reconnect to Binance Websocket after 24 hours

I'm currently experimenting on Binance Websocket (https://binance-docs.github.io/apidocs/spot/en/#websocket-market-streams), streaming the candlestick data for processing.
As documented, the stream will randomly close after 24 hours. What's the best way to continue the session without interruption? I mean disconnect/reconnect after 23 hours so the program will continue without losing its state.
Here's what I did. I'm using the binance-java-api https://github.com/binance-exchange/binance-java-api.
And here's how I connect:
BinanceApiWebSocketClient client =
        BinanceApiClientFactory.newInstance(
                appConfig.getApiKey(),
                appConfig.getApiSecret(),
                appConfig.isUseTestNet(),
                appConfig.isUseTestNet())
        .newWebSocketClient();
client.onCandlestickEvent(cryptoPair.toLowerCase(), getCandlestickInterval(),
        new BinanceApiCallback<>() {
            @Override
            public void onResponse(final CandlestickEvent evt) {}
To solve this issue, I have used a scheduler/timer to reconnect the session every 12 hours. Since I'm using the Quarkus framework, it's readily available.
Solution:
SessionManager class:
@Singleton
@Slf4j
@RequiredArgsConstructor
public class SessionManagerScheduler {

    final BinanceEventHandler binanceEventHandler;

    @Scheduled(cron = "0 2 */12 * * ?")
    public void reconnectSession() {
        log.info("Keep-Alive: Binance Session Via WebSocket -------------------------");
        binanceEventHandler.timeout();
    }
}
The Binance event handler:
@ApplicationScoped
@Slf4j
@RequiredArgsConstructor
public class BinanceEventHandler {

    final AppConfig appConfig;
    final CandlestickAccumulator candlestickAccumulator;
    final CandlestickMapper candlestickMapper;

    private Closeable candleStream = null;

    public void start() {
        streamCandleEvent();
    }

    public void timeout() {
        try {
            candleStream.close();
            streamCandleEvent();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private void streamCandleEvent() {
        String cryptoPair = String.join(",", appConfig.getCryptoPairs());
        log.info("Start listening to cryptoPair={}", cryptoPair);
        candleStream = getClient().onCandlestickEvent(cryptoPair.toLowerCase(), getCandlestickInterval(),
                new BinanceApiCallback<>() {
                    @Override
                    public void onResponse(final CandlestickEvent evt) {
                        if (!evt.getBarFinal()) {
                            return;
                        }
                        log.debug("Processing cryptoPair={}, event={}", cryptoPair, evt);
                        Candlestick candlestick = candlestickMapper.asCandleStick(evt);
                        candlestickAccumulator.processCandlestickEvent(candlestick);
                    }

                    @Override
                    public void onFailure(final Throwable cause) {
                        Application.hasError = true;
                        log.error("Fail connecting to Binance API {}", cause.getMessage());
                    }
                }
        );
    }

    private BinanceApiWebSocketClient getClient() {
        return BinanceApiClientFactory.newInstance(
                appConfig.getApiKey(),
                appConfig.getApiSecret(),
                appConfig.isUseTestNet(),
                appConfig.isUseTestNet())
                .newWebSocketClient();
    }

    private CandlestickInterval getCandlestickInterval() {
        return CandlestickInterval.valueOf(appConfig.getCandlestickInterval());
    }
}
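The snippets above do not show who calls start() the first time. In Quarkus one option is a CDI startup observer; this bootstrap class is a hypothetical addition, not part of the original answer (javax imports shown, newer Quarkus versions use jakarta):

import io.quarkus.runtime.StartupEvent;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

@ApplicationScoped
public class BinanceStreamBootstrap {

    @Inject
    BinanceEventHandler binanceEventHandler;

    void onStart(@Observes StartupEvent event) {
        // open the first websocket stream; the scheduler above recycles it every 12 hours
        binanceEventHandler.start();
    }
}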

How to configure spring integration adapters of a merely connecting client and a server sending messages

I'm trying to implement the following scenario using Spring Integration:
I need a client to connect to a server via TCP/IP and wait to receive messages within 30 seconds.
I need a server to send 0 to n messages to the client which had connected.
I need a way to start and stop channel transfer without loss of messages.
I need to change the port the server is listening between stop and start.
This is my config so far:
@Configuration
public class TcpConfiguration {

    private static Logger LOG = LoggerFactory.getLogger(TcpConfiguration.class);

    @Value("${port}")
    private Integer port;

    @Value("${so-timeout}")
    private Integer soTimeout;

    @Value("${keep-alive}")
    private Boolean keepAlive;

    @Value("${send-timeout}")
    private Integer sendTimeout;

    @Bean
    public AbstractServerConnectionFactory getMyConnFactory() {
        LOG.debug("getMyConnFactory");
        TcpNetServerConnectionFactory factory = new TcpNetServerConnectionFactory(port);
        LOG.debug("getMyConnFactory port={}", port);
        factory.setSoTimeout(soTimeout);
        LOG.debug("getMyConnFactory soTimeout={}", soTimeout);
        factory.setSoKeepAlive(true);
        LOG.debug("getMyConnFactory keepAlive={}", keepAlive);
        return factory;
    }

    @Bean
    public AbstractEndpoint getMyChannelAdapter() {
        LOG.debug("getMyChannelAdapter");
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(getMyConnFactory());
        adapter.setOutputChannel(myChannelIn());
        adapter.setSendTimeout(sendTimeout);
        LOG.debug("getMyChannelAdapter adapter={}", adapter.getClass().getName());
        return adapter;
    }

    @Bean
    public MessageChannel myChannelIn() {
        LOG.debug("myChannelIn");
        return new DirectChannel();
    }

    @Bean
    @Transformer(inputChannel = "myChannelIn", outputChannel = "myServiceChannel")
    public ObjectToStringTransformer myTransformer() {
        LOG.debug("myTransformer");
        return new ObjectToStringTransformer();
    }

    @ServiceActivator(inputChannel = "myServiceChannel")
    public void service(String in) {
        LOG.debug("service received={}", in);
    }

    @Bean
    public MessageChannel myChannelOut() {
        LOG.debug("myChannelOut");
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow myOutbound() {
        LOG.debug("myOutbound");
        return IntegrationFlows.from(myChannelOut())
                .handle(mySender())
                .get();
    }

    @Bean
    public MessageHandler mySender() {
        LOG.debug("mySender");
        TcpSendingMessageHandler tcpSendingMessageHandler = new TcpSendingMessageHandler();
        tcpSendingMessageHandler.setConnectionFactory(getMyConnFactory());
        return tcpSendingMessageHandler;
    }
}
Please advise!
To change the server port, I would shut down the application context and restart it after configuring the new port in a remote configuration server.
Can I just close the application context without corrupting the current message transfer?
I don't know how to handle the connect-only client part.
Use dynamic flow registration; just get the connection to open it without sending.
@SpringBootApplication
public class So62867670Application {

    public static void main(String[] args) {
        SpringApplication.run(So62867670Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(DynamicTcpReceiver receiver) {
        return args -> { // Just a demo to show starting/stopping
            receiver.connectAndListen(1234);
            System.in.read();
            receiver.stop();
            System.in.read();
            receiver.connectAndListen(1235);
            System.in.read();
            receiver.stop();
        };
    }
}
@Component
class DynamicTcpReceiver {

    @Autowired
    private IntegrationFlowContext context;

    private IntegrationFlowRegistration registration;

    public void connectAndListen(int port) throws InterruptedException {
        TcpClientConnectionFactorySpec client = Tcp.netClient("localhost", port)
                .deserializer(TcpCodecs.lf());
        IntegrationFlow flow = IntegrationFlows.from(Tcp.inboundAdapter(client))
                .transform(Transformers.objectToString())
                .handle(System.out::println)
                .get();
        this.registration = context.registration(flow).register();
        client.get().getConnection(); // just open the single shared connection
    }

    public void stop() {
        if (this.registration != null) {
            this.registration.destroy();
            this.registration = null;
        }
    }
}
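The question also asks for the client to wait at most 30 seconds for messages. One way to approximate that with the sketch above is a socket timeout on the client connection factory; this is an assumption layered on top of the answer, not something the original code does:

TcpClientConnectionFactorySpec client = Tcp.netClient("localhost", port)
        .deserializer(TcpCodecs.lf())
        .soTimeout(30_000); // drop the connection if nothing is received for 30 seconds

With soTimeout set, an idle read closes the connection and the client can then decide whether to re-register the flow or stop.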
EDIT
And this is the server side...
@SpringBootApplication
@EnableScheduling
public class So62867670ServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(So62867670ServerApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(DynamicTcpServer receiver) {
        return args -> { // Just a demo to show starting/stopping
            receiver.tcpListen(1234);
            System.in.read();
            receiver.stop(1234);
            System.in.read();
            receiver.tcpListen(1235);
            System.in.read();
            receiver.stop(1235);
        };
    }
}
@Component
class DynamicTcpServer {

    private static final Logger LOG = LoggerFactory.getLogger(DynamicTcpServer.class);

    @Autowired
    private IntegrationFlowContext flowContext;

    @Autowired
    private ApplicationContext appContext;

    private final Map<Integer, IntegrationFlowRegistration> registrations = new HashMap<>();

    private final Map<String, Entry<Integer, AtomicInteger>> clients = new ConcurrentHashMap<>();

    public void tcpListen(int port) {
        TcpServerConnectionFactorySpec server = Tcp.netServer(port)
                .id("server-" + port)
                .serializer(TcpCodecs.lf());
        server.get().registerListener(msg -> false); // dummy listener so the accept thread doesn't exit
        IntegrationFlow flow = f -> f.handle(Tcp.outboundAdapter(server));
        this.registrations.put(port, flowContext.registration(flow).register());
    }

    public void stop(int port) {
        IntegrationFlowRegistration registration = this.registrations.remove(port);
        if (registration != null) {
            registration.destroy();
        }
    }

    @EventListener
    public void opened(TcpConnectionOpenEvent event) {
        LOG.info(event.toString());
        String connectionId = event.getConnectionId();
        String[] split = connectionId.split(":");
        int port = Integer.parseInt(split[2]);
        this.clients.put(connectionId, new AbstractMap.SimpleEntry<>(port, new AtomicInteger()));
    }

    @EventListener
    public void closed(TcpConnectionCloseEvent event) {
        LOG.info(event.toString());
        this.clients.remove(event.getConnectionId());
    }

    @EventListener
    public void listening(TcpConnectionServerListeningEvent event) {
        LOG.info(event.toString());
    }

    @Scheduled(fixedDelay = 5000)
    public void sender() {
        this.clients.forEach((connectionId, portAndCount) -> {
            IntegrationFlowRegistration registration = this.registrations.get(portAndCount.getKey());
            if (registration != null) {
                LOG.info("Sending to " + connectionId);
                registration.getMessagingTemplate().send(MessageBuilder.withPayload("foo")
                        .setHeader(IpHeaders.CONNECTION_ID, connectionId).build());
                if (portAndCount.getValue().incrementAndGet() > 9) {
                    this.appContext.getBean("server-" + portAndCount.getKey(), TcpNetServerConnectionFactory.class)
                            .closeConnection(connectionId);
                }
            }
        });
    }
}

How to build a nonblocking Consumer when using AsyncRabbitTemplate with Request/Reply Pattern

I'm new to RabbitMQ and currently trying to implement a non-blocking producer with a non-blocking consumer. I've built a test producer where I played around with TypeReference:
@Service
public class Producer {

    @Autowired
    private AsyncRabbitTemplate asyncRabbitTemplate;

    public <T extends RequestEvent<S>, S> RabbitConverterFuture<S> asyncSendEventAndReceive(final T event) {
        return asyncRabbitTemplate.convertSendAndReceiveAsType(QueueConfig.EXCHANGE_NAME, event.getRoutingKey(), event, event.getResponseTypeReference());
    }
}
And in some other place the test function that gets called in a RestController
@Autowired
Producer producer;

public void test() throws InterruptedException, ExecutionException {
    TestEvent requestEvent = new TestEvent("SOMEDATA");
    RabbitConverterFuture<TestResponse> reply = producer.asyncSendEventAndReceive(requestEvent);
    log.info("Hello! The Reply is: {}", reply.get());
}
This was pretty straightforward so far; where I'm stuck now is how to create a consumer that is non-blocking too. My current listener:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public TestResponse onReceive(TestEvent event) {
    Future<TestResponse> replyLater = proccessDataLater(event.getSomeData());
    return replyLater.get();
}
As far as I'm aware, when using @RabbitListener the listener runs in its own thread, and I could configure the listener container to use more than one thread for the active listeners. Because of that, blocking the listener thread with future.get() does not block the application itself. Still, there can be a case where all listener threads are blocked and new events are stuck in the queue when they don't need to be. What I would like to do is receive the event without having to return the result immediately, which is probably not possible with @RabbitListener. Something like:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public void onReceive(TestEvent event) {
    /*
     * Some fictional RabbitMQ API call where I get a ReplyContainer which contains
     * the CorrelationID for the event. I can call replyContainer.reply(testResponse) later
     * in the code without blocking the listener thread.
     */
    ReplyContainer replyContainer = AsyncRabbitTemplate.getReplyContainer();
    // proccessDataLater calls reply on the container when done with its action
    proccessDataLater(event.getSomeData(), replyContainer);
}
What is the best way to implement such behaviour with rabbitmq in spring?
EDIT Config Class:
@Configuration
@EnableRabbit
public class RabbitMQConfig implements RabbitListenerConfigurer {

    public static final String topicExchangeName = "exchange";

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("localhost");
        return connectionFactory;
    }

    @Bean
    public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(rabbitConnectionFactory());
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public AsyncRabbitTemplate asyncRabbitTemplate() {
        return new AsyncRabbitTemplate(rabbitTemplate());
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    Queue queue() {
        return new Queue("test", false);
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(queue()).to(exchange()).with("foo.#");
    }

    @Bean
    public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(rabbitConnectionFactory());
        factory.setMaxConcurrentConsumers(5);
        factory.setMessageConverter(producerJackson2MessageConverter());
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setContainerFactory(myRabbitListenerContainerFactory());
    }
}
I don't have time to test it right now, but something like this should work; presumably you don't want to lose messages so you need to set the ackMode to MANUAL and do the acks yourself (as shown).
UPDATE
@SpringBootApplication
public class So52173111Application {

    private final ExecutorService exec = Executors.newCachedThreadPool();

    @Autowired
    private RabbitTemplate template;

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        return args -> {
            RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo", "test");
            future.addCallback(r -> {
                System.out.println("Reply: " + r);
            }, t -> {
                t.printStackTrace();
            });
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate template) {
        return new AsyncRabbitTemplate(template);
    }

    @RabbitListener(queues = "foo")
    public void listen(String in, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag,
            @Header(AmqpHeaders.CORRELATION_ID) String correlationId,
            @Header(AmqpHeaders.REPLY_TO) String replyTo) {
        ListenableFuture<String> future = handleInput(in);
        future.addCallback(result -> {
            Address address = new Address(replyTo);
            this.template.convertAndSend(address.getExchangeName(), address.getRoutingKey(), result, m -> {
                m.getMessageProperties().setCorrelationId(correlationId);
                return m;
            });
            try {
                channel.basicAck(tag, false);
            }
            catch (IOException e) {
                e.printStackTrace();
            }
        }, t -> {
            t.printStackTrace();
        });
    }

    private ListenableFuture<String> handleInput(String in) {
        SettableListenableFuture<String> future = new SettableListenableFuture<String>();
        exec.execute(() -> {
            try {
                Thread.sleep(2000);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            future.set(in.toUpperCase());
        });
        return future;
    }

    public static void main(String[] args) {
        SpringApplication.run(So52173111Application.class, args);
    }
}
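As a side note, the answer above targets the Spring AMQP version used in the question. Newer Spring AMQP releases (2.1 and later) also accept asynchronous return types from @RabbitListener methods, in which case the container sends the reply and acknowledges the message when the future completes; the container factory still needs AcknowledgeMode.MANUAL. A hedged sketch reusing handleInput() from the code above:

@RabbitListener(queues = "foo")
public ListenableFuture<String> listenAsync(String in) {
    // the framework correlates the eventual result with replyTo/correlationId itself
    return handleInput(in);
}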

spring rabbitmq wait confirm timeout

spring-rabbit version: 1.7.4.RELEASE
This is my code:
@Configuration
public class RabbitmqConfiguration {

    public RabbitmqConfiguration(RabbitTemplate rabbitTemplate, ConfirmCallback confirmCallback) throws Exception {
        rabbitTemplate.setConfirmCallback(confirmCallback);
        ObjectMapper mapper = new ObjectMapper();
        mapper.setPropertyNamingStrategy(PropertyNamingStrategy.SNAKE_CASE);
        rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter(mapper));
    }
}

@Component
@Slf4j
public class OrderStatusChangeComponentImpl implements OrderStatusChangeComponent, ConfirmCallback {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Autowired
    private OrderMessageLogComponent orderMessageLogComponent;

    @Autowired
    private Gson gson;

    /*
     * (non-Javadoc)
     *
     * @see org.springframework.amqp.rabbit.core.RabbitTemplate.ConfirmCallback#
     * confirm(org.springframework.amqp.rabbit.support.CorrelationData, boolean,
     * java.lang.String)
     */
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        long nowTime = System.nanoTime();
        String uuid = correlationData.getId();
        if (ack) {
            orderMessageLogComponent.deleteOrderMessageLogByUUID(uuid);
        } else {
            log.error(cause, nowTime);
        }
    }
}
I load tested the RabbitMQ sends with JMeter, using about 512 threads and 1000 loops.
I see many errors like this in the log:
Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=406, reply-text=TIMEOUT WAITING FOR ACK, class-id=0, method-id=0)
Eventually my application can no longer connect to RabbitMQ.
By the way, my RabbitMQ server is healthy.
Try doing your send() within a RabbitTemplate.invoke() and invoke a template.waitForConfirmsOrDie() with a longer timeout.
If you are using invoke() and don't do that, the framework will only wait 5000ms for confirms.
If you are not using invoke(), it's not clear how you can get that close error.
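A minimal sketch of that pattern, assuming a Spring AMQP version that supports scoped operations (exchange, routing key, payload, and the 10-second timeout are placeholders, not values from the question):

rabbitTemplate.invoke(operations -> {
    operations.convertAndSend(exchange, routingKey, payload, correlationData);
    // block this publishing thread (only) until the broker confirms, up to 10 seconds
    operations.waitForConfirmsOrDie(10_000);
    return null;
});

Within the invoke() scope the confirms are tied to the dedicated channel used for those sends, so the wait applies only to the messages published inside the callback.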
public void confirm(CorrelationData correlationData, boolean ack, String cause) {
    long nowTime = System.nanoTime();
    String uuid = correlationData.getId();
    if (ack) {
        orderMessageLogComponent.deleteOrderMessageLogByUUID(uuid);
    } else {
        log.error("Message send failed, message UUID={}, cause={}, current time={}", uuid, cause, nowTime);
    }
}

@Override
@Async(MsgSendAsyncConfig.MSGSEND_SYNC_POOL)
public void deleteOrderMessageLogByUUID(String uuid) {
    orderMessageLogService.deleteOrderMessageLogByUUID(uuid);
}

How do I write a unit test to verify async behavior using Spring 4 and annotations?

How do I write a unit test to verify async behavior using Spring 4 and annotations?
Since I'm used to Spring's (old) XML style, it took me some time to figure this out, so I thought I'd answer my own question to help others.
First the service that exposes an async download method:
@Service
public class DownloadService {

    // note: placing this async method in its own dedicated bean was necessary
    // to circumvent inner bean calls
    @Async
    public Future<String> startDownloading(final URL url) throws IOException {
        return new AsyncResult<String>(getContentAsString(url));
    }

    private String getContentAsString(URL url) throws IOException {
        try {
            Thread.sleep(1000); // To demonstrate the effect of async
            InputStream input = url.openStream();
            return IOUtils.toString(input, StandardCharsets.UTF_8);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
Next the test:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class DownloadServiceTest {

    @Configuration
    @EnableAsync
    static class Config {

        @Bean
        public DownloadService downloadService() {
            return new DownloadService();
        }
    }

    @Autowired
    private DownloadService service;

    @Test
    public void testIndex() throws Exception {
        final URL url = new URL("http://spring.io/blog/2013/01/16/next-stop-spring-framework-4-0");
        Future<String> content = service.startDownloading(url);
        assertThat(false, equalTo(content.isDone()));
        final String str = content.get();
        assertThat(true, equalTo(content.isDone()));
        assertThat(str, JUnitMatchers.containsString("<html"));
    }
}
If you are using the same example in Java 8 you could also use the CompletableFuture class as follows:
@Service
public class DownloadService {

    @Async
    public CompletableFuture<String> startDownloading(final URL url) throws IOException {
        CompletableFuture<String> future = new CompletableFuture<>();
        Executors.newCachedThreadPool().submit(() -> {
            // complete the future with the downloaded content so callers can consume it
            future.complete(getContentAsString(url));
            return null;
        });
        return future;
    }

    private String getContentAsString(URL url) throws IOException {
        try {
            Thread.sleep(1000); // To demonstrate the effect of async
            InputStream input = url.openStream();
            return IOUtils.toString(input, StandardCharsets.UTF_8);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
Now the test:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class DownloadServiceTest {

    @Configuration
    @EnableAsync
    static class Config {

        @Bean
        public DownloadService downloadService() {
            return new DownloadService();
        }
    }

    @Autowired
    private DownloadService service;

    @Test
    public void testIndex() throws Exception {
        final URL url = new URL("http://spring.io/blog/2013/01/16/next-stop-spring-framework-4-0");
        CompletableFuture<String> content = service.startDownloading(url);
        content.thenAccept(str -> {
            assertThat(true, equalTo(content.isDone()));
            assertThat(str, JUnitMatchers.containsString("<html"));
        });
        // wait for completion
        content.get(10, TimeUnit.SECONDS);
    }
}
Please note that when the timeout is not specified and anything goes wrong, the test will run "forever" until the CI server (or you) shuts it down.
