Auto-reconnect to the Binance WebSocket after 24 hours

I'm currently experimenting with the Binance WebSocket API (https://binance-docs.github.io/apidocs/spot/en/#websocket-market-streams), streaming candlestick data for processing.
As documented, a single stream connection is only valid for 24 hours, after which it can be closed at any moment. What's the best way to continue the session without interruption? That is, disconnect and reconnect before the 24-hour mark (say, after 23 hours) so the program continues without losing its state.
Here's what I did, using the binance-java-api (https://github.com/binance-exchange/binance-java-api).
This is how I connect:
BinanceApiWebSocketClient client =
    BinanceApiClientFactory.newInstance(
            appConfig.getApiKey(),
            appConfig.getApiSecret(),
            appConfig.isUseTestNet(),
            appConfig.isUseTestNet())
        .newWebSocketClient();

client.onCandlestickEvent(cryptoPair.toLowerCase(), getCandlestickInterval(),
    new BinanceApiCallback<>() {
        @Override
        public void onResponse(final CandlestickEvent evt) {
            // process the candlestick event here
        }
    });

To solve this, I used a scheduler to reconnect the session every 12 hours. Since I'm using the Quarkus framework, a scheduler is readily available.
Solution:
The SessionManager class:

@Singleton
@Slf4j
@RequiredArgsConstructor
public class SessionManagerScheduler {

    final BinanceEventHandler binanceEventHandler;

    @Scheduled(cron = "0 2 */12 * * ?")
    public void reconnectSession() {
        log.info("Keep-Alive: Binance Session Via WebSocket -------------------------");
        binanceEventHandler.timeout();
    }
}
The Binance event handler:
@ApplicationScoped
@Slf4j
@RequiredArgsConstructor
public class BinanceEventHandler {

    final AppConfig appConfig;
    final CandlestickAccumulator candlestickAccumulator;
    final CandlestickMapper candlestickMapper;

    private Closeable candleStream = null;

    public void start() {
        streamCandleEvent();
    }

    public void timeout() {
        try {
            candleStream.close();
            streamCandleEvent();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private void streamCandleEvent() {
        String cryptoPair = String.join(",", appConfig.getCryptoPairs());
        log.info("Start listening to cryptoPair={}", cryptoPair);
        candleStream = getClient().onCandlestickEvent(cryptoPair.toLowerCase(), getCandlestickInterval(),
            new BinanceApiCallback<>() {
                @Override
                public void onResponse(final CandlestickEvent evt) {
                    if (!evt.getBarFinal()) {
                        return;
                    }
                    log.debug("Processing cryptoPair={}, event={}", cryptoPair, evt);
                    Candlestick candlestick = candlestickMapper.asCandleStick(evt);
                    candlestickAccumulator.processCandlestickEvent(candlestick);
                }

                @Override
                public void onFailure(final Throwable cause) {
                    Application.hasError = true;
                    log.error("Fail connecting to Binance API {}", cause.getMessage());
                }
            }
        );
    }

    private BinanceApiWebSocketClient getClient() {
        return BinanceApiClientFactory.newInstance(
                appConfig.getApiKey(),
                appConfig.getApiSecret(),
                appConfig.isUseTestNet(),
                appConfig.isUseTestNet())
            .newWebSocketClient();
    }

    private CandlestickInterval getCandlestickInterval() {
        return CandlestickInterval.valueOf(appConfig.getCandlestickInterval());
    }
}
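The handler's start() method still has to be invoked once at application startup. A minimal sketch of how that could be wired in Quarkus follows; the BinanceStreamStarter class is my addition for illustration, not part of the original code (adjust javax vs. jakarta imports to your Quarkus version):

import io.quarkus.runtime.StartupEvent;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

@ApplicationScoped
public class BinanceStreamStarter {

    final BinanceEventHandler binanceEventHandler;

    BinanceStreamStarter(BinanceEventHandler binanceEventHandler) {
        this.binanceEventHandler = binanceEventHandler;
    }

    // Opens the initial candlestick stream once the application is up;
    // the scheduler above then closes and reopens it every 12 hours.
    void onStart(@Observes StartupEvent event) {
        binanceEventHandler.start();
    }
}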

Related

gRPC throws DEADLINE_EXCEEDED after a negative number of seconds from now

The first calls are usually successful, but then I get exceptions with messages like this:
io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: ClientCall started after deadline exceeded: -175.597476157s from now
Why is the number of seconds negative? How do I fix it?
My gRPC config:
public class MyAppLibGrpcSenderConfig {

    @Value("${grpc.client.host:localhost}")
    private String host;

    @Value("${grpc.client.port:9090}")
    private int port;

    @Value("${grpc.client.negotiationType:PLAINTEXT}")
    private String negotiationType;

    @Value("${grpc.client.deadline:300000}")
    private long deadline;

    @Autowired
    private Tracer tracer;

    @Bean
    public ManagedChannel managedChannel() {
        ManagedChannelBuilder<?> builder = ManagedChannelBuilder.forAddress(host, port);
        if ("PLAINTEXT".equals(negotiationType)) {
            builder.usePlaintext();
        }
        return builder.build();
    }

    @Bean
    public TracingClientInterceptor tracingClientInterceptor(Tracer tracer) {
        return TracingClientInterceptor
            .newBuilder()
            .withTracer(this.tracer)
            .build();
    }

    @Bean
    public MyAppSenderServiceGrpc.MyAppSenderServiceBlockingStub myAppSenderServiceBlockingStub(
            TracingClientInterceptor tracingClientInterceptor,
            ManagedChannel managedChannel) {
        return MyAppSenderServiceGrpc
            .newBlockingStub(tracingClientInterceptor.intercept(managedChannel))
            .withDeadlineAfter(deadline, TimeUnit.MILLISECONDS);
    }

    @Bean
    public MyAppCodeLoaderServiceGrpc.MyAppCodeLoaderServiceBlockingStub myAppCodeLoaderServiceBlockingStub(
            TracingClientInterceptor tracingClientInterceptor,
            ManagedChannel managedChannel) {
        return MyAppCodeLoaderServiceGrpc
            .newBlockingStub(tracingClientInterceptor.intercept(managedChannel))
            .withDeadlineAfter(deadline, TimeUnit.MILLISECONDS);
    }
}
Server-side service code (the service the client calls):
@net.devh.boot.grpc.server.service.GrpcService
public class MyAppEventKafkaSender extends MyAppSenderServiceGrpc.MyAppSenderServiceImplBase {
    ...
    @SneakyThrows
    @Override
    public void sendMessage(ContextMyAppEventGrpc contextMyAppEventGrpc,
                            StreamObserver<Empty> responseObserver) {
        try {
            sendEvent(contextMyAppEventGrpc);
            Empty reply = Empty.newBuilder().build();
            responseObserver.onNext(reply);
            responseObserver.onCompleted();
        } catch (Exception e) {
            Status status = Status.INTERNAL.withDescription(e.getMessage());
            responseObserver.onError(status.asRuntimeException());
        }
    }
}
A deadline is an absolute point in time, and it is fixed at the moment you set it on your stub (not necessarily when you execute a call). Timeouts, by contrast, are relative to the start of each call.
A negative deadline therefore means that it had already expired before your call was started.
To fix the issue, set the deadline immediately before making each call:

var response = blockingStub.withDeadlineAfter(300000, TimeUnit.MILLISECONDS)
    .yourRpcName();

Read more about deadlines in the gRPC documentation.
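Applied to the configuration above, that means removing withDeadlineAfter from the stub bean definitions and deriving a short-lived stub per call. A minimal sketch, reusing the bean and field names from the question (the call site shown is illustrative):

@Bean
public MyAppSenderServiceGrpc.MyAppSenderServiceBlockingStub myAppSenderServiceBlockingStub(
        TracingClientInterceptor tracingClientInterceptor,
        ManagedChannel managedChannel) {
    // No withDeadlineAfter here: the bean is created once at startup,
    // so a deadline attached here starts expiring immediately.
    return MyAppSenderServiceGrpc
        .newBlockingStub(tracingClientInterceptor.intercept(managedChannel));
}

// At each call site, withDeadlineAfter returns a new stub whose
// deadline starts counting from now:
Empty reply = myAppSenderServiceBlockingStub
    .withDeadlineAfter(deadline, TimeUnit.MILLISECONDS)
    .sendMessage(contextMyAppEventGrpc);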

How to configure Spring Integration adapters for a client that merely connects and a server that sends messages

I'm trying to implement the following scenario using Spring Integration:
I need a client to connect to a server via TCP/IP and wait to receive messages within 30 seconds.
I need a server to send 0 to n messages to the client that has connected.
I need a way to start and stop channel transfer without loss of messages.
I need to change the port the server listens on between stop and start.
This is my config so far:
@Configuration
public class TcpConfiguration {

    private static Logger LOG = LoggerFactory.getLogger(TcpConfiguration.class);

    @Value("${port}")
    private Integer port;

    @Value("${so-timeout}")
    private Integer soTimeout;

    @Value("${keep-alive}")
    private Boolean keepAlive;

    @Value("${send-timeout}")
    private Integer sendTimeout;

    @Bean
    public AbstractServerConnectionFactory getMyConnFactory() {
        LOG.debug("getMyConnFactory");
        TcpNetServerConnectionFactory factory = new TcpNetServerConnectionFactory(port);
        LOG.debug("getMyConnFactory port={}", port);
        factory.setSoTimeout(soTimeout);
        LOG.debug("getMyConnFactory soTimeout={}", soTimeout);
        factory.setSoKeepAlive(keepAlive);
        LOG.debug("getMyConnFactory keepAlive={}", keepAlive);
        return factory;
    }

    @Bean
    public AbstractEndpoint getMyChannelAdapter() {
        LOG.debug("getMyChannelAdapter");
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(getMyConnFactory());
        adapter.setOutputChannel(myChannelIn());
        adapter.setSendTimeout(sendTimeout);
        LOG.debug("getMyChannelAdapter adapter={}", adapter.getClass().getName());
        return adapter;
    }

    @Bean
    public MessageChannel myChannelIn() {
        LOG.debug("myChannelIn");
        return new DirectChannel();
    }

    @Bean
    @Transformer(inputChannel = "myChannelIn", outputChannel = "myServiceChannel")
    public ObjectToStringTransformer myTransformer() {
        LOG.debug("myTransformer");
        return new ObjectToStringTransformer();
    }

    @ServiceActivator(inputChannel = "myServiceChannel")
    public void service(String in) {
        LOG.debug("service received={}", in);
    }

    @Bean
    public MessageChannel myChannelOut() {
        LOG.debug("myChannelOut");
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow myOutbound() {
        LOG.debug("myOutbound");
        return IntegrationFlows.from(myChannelOut())
            .handle(mySender())
            .get();
    }

    @Bean
    public MessageHandler mySender() {
        LOG.debug("mySender");
        TcpSendingMessageHandler tcpSendingMessageHandler = new TcpSendingMessageHandler();
        tcpSendingMessageHandler.setConnectionFactory(getMyConnFactory());
        return tcpSendingMessageHandler;
    }
}
Please advise!
To change the server port, I would shut down the application context and restart it after configuring the new port in a remote configuration server. Can I just close the application context without corrupting the current message transfer?
I also don't know how to handle the connect-only client requirement.
Use dynamic flow registration; just get the connection to open it without sending.
@SpringBootApplication
public class So62867670Application {

    public static void main(String[] args) {
        SpringApplication.run(So62867670Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(DynamicTcpReceiver receiver) {
        return args -> { // Just a demo to show starting/stopping
            receiver.connectAndListen(1234);
            System.in.read();
            receiver.stop();
            System.in.read();
            receiver.connectAndListen(1235);
            System.in.read();
            receiver.stop();
        };
    }
}

@Component
class DynamicTcpReceiver {

    @Autowired
    private IntegrationFlowContext context;

    private IntegrationFlowRegistration registration;

    public void connectAndListen(int port) throws InterruptedException {
        TcpClientConnectionFactorySpec client = Tcp.netClient("localhost", port)
            .deserializer(TcpCodecs.lf());
        IntegrationFlow flow = IntegrationFlows.from(Tcp.inboundAdapter(client))
            .transform(Transformers.objectToString())
            .handle(System.out::println)
            .get();
        this.registration = context.registration(flow).register();
        client.get().getConnection(); // just open the single shared connection
    }

    public void stop() {
        if (this.registration != null) {
            this.registration.destroy();
            this.registration = null;
        }
    }
}
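As an aside, the 30-second receive window from the requirements could presumably be handled with a socket timeout on the client connection factory spec; mapping the 30 seconds to soTimeout is my assumption about the intended semantics:

TcpClientConnectionFactorySpec client = Tcp.netClient("localhost", port)
    .soTimeout(30_000) // close the connection if nothing is received for 30 seconds
    .deserializer(TcpCodecs.lf());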
EDIT
And this is the server side...
@SpringBootApplication
@EnableScheduling
public class So62867670ServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(So62867670ServerApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(DynamicTcpServer receiver) {
        return args -> { // Just a demo to show starting/stopping
            receiver.tcpListen(1234);
            System.in.read();
            receiver.stop(1234);
            System.in.read();
            receiver.tcpListen(1235);
            System.in.read();
            receiver.stop(1235);
        };
    }
}

@Component
class DynamicTcpServer {

    private static final Logger LOG = LoggerFactory.getLogger(DynamicTcpServer.class);

    @Autowired
    private IntegrationFlowContext flowContext;

    @Autowired
    private ApplicationContext appContext;

    private final Map<Integer, IntegrationFlowRegistration> registrations = new HashMap<>();

    private final Map<String, Entry<Integer, AtomicInteger>> clients = new ConcurrentHashMap<>();

    public void tcpListen(int port) {
        TcpServerConnectionFactorySpec server = Tcp.netServer(port)
            .id("server-" + port)
            .serializer(TcpCodecs.lf());
        server.get().registerListener(msg -> false); // dummy listener so the accept thread doesn't exit
        IntegrationFlow flow = f -> f.handle(Tcp.outboundAdapter(server));
        this.registrations.put(port, flowContext.registration(flow).register());
    }

    public void stop(int port) {
        IntegrationFlowRegistration registration = this.registrations.remove(port);
        if (registration != null) {
            registration.destroy();
        }
    }

    @EventListener
    public void opened(TcpConnectionOpenEvent event) {
        LOG.info(event.toString());
        String connectionId = event.getConnectionId();
        String[] split = connectionId.split(":");
        int port = Integer.parseInt(split[2]);
        this.clients.put(connectionId, new AbstractMap.SimpleEntry<>(port, new AtomicInteger()));
    }

    @EventListener
    public void closed(TcpConnectionCloseEvent event) {
        LOG.info(event.toString());
        this.clients.remove(event.getConnectionId());
    }

    @EventListener
    public void listening(TcpConnectionServerListeningEvent event) {
        LOG.info(event.toString());
    }

    @Scheduled(fixedDelay = 5000)
    public void sender() {
        this.clients.forEach((connectionId, portAndCount) -> {
            IntegrationFlowRegistration registration = this.registrations.get(portAndCount.getKey());
            if (registration != null) {
                LOG.info("Sending to " + connectionId);
                registration.getMessagingTemplate().send(MessageBuilder.withPayload("foo")
                    .setHeader(IpHeaders.CONNECTION_ID, connectionId).build());
                if (portAndCount.getValue().incrementAndGet() > 9) {
                    this.appContext.getBean("server-" + portAndCount.getKey(), TcpNetServerConnectionFactory.class)
                        .closeConnection(connectionId);
                }
            }
        });
    }
}

Spring websocket establishing connection is stuck at 'opening connection'

I am using spring-boot-websocket (Spring Boot version 1.5.10) in my project. I have configured it as below:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends WebSocketMessageBrokerConfigurationSupport
        implements WebSocketMessageBrokerConfigurer {

    @Value("${rabbitmq.host}")
    private String rabbitmqHost;

    @Value("${rabbitmq.stomp.port}")
    private int rabbitmqStompPort;

    @Value("${rabbitmq.username}")
    private String rabbitmqUserName;

    @Value("${rabbitmq.password}")
    private String rabbitmqPassword;

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/topic", "/queue").setRelayHost(rabbitmqHost).setRelayPort(rabbitmqStompPort)
            .setSystemLogin(rabbitmqUserName).setSystemPasscode(rabbitmqPassword);
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry stompEndpointRegistry) {
        stompEndpointRegistry.addEndpoint("/ws").setAllowedOrigins("*").withSockJS();
    }

    @Bean
    @Override
    public WebSocketHandler subProtocolWebSocketHandler() {
        return new CustomSubProtocolWebSocketHandler(clientInboundChannel(), clientOutboundChannel());
    }

    @Override
    public void configureWebSocketTransport(WebSocketTransportRegistration registry) {
        super.configureWebSocketTransport(registry);
    }

    @Override
    public boolean configureMessageConverters(List<MessageConverter> messageConverters) {
        return super.configureMessageConverters(messageConverters);
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        super.configureClientInboundChannel(registration);
    }

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        super.configureClientOutboundChannel(registration);
    }

    @Override
    public void addArgumentResolvers(List<HandlerMethodArgumentResolver> argumentResolvers) {
        super.addArgumentResolvers(argumentResolvers);
    }

    @Override
    public void addReturnValueHandlers(List<HandlerMethodReturnValueHandler> returnValueHandlers) {
        super.addReturnValueHandlers(returnValueHandlers);
    }
}
public class CustomSubProtocolWebSocketHandler extends SubProtocolWebSocketHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomSubProtocolWebSocketHandler.class);

    @Autowired
    private UserCommons userCommons;

    CustomSubProtocolWebSocketHandler(MessageChannel clientInboundChannel,
            SubscribableChannel clientOutboundChannel) {
        super(clientInboundChannel, clientOutboundChannel);
    }

    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        LOGGER.info("New webSocket connection was established: {}", session);
        String token = session.getUri().getQuery().replace("token=", "");
        try {
            String user = Jwts.parser().setSigningKey(TokenConstant.SECRET)
                .parseClaimsJws(token.replace(TokenConstant.TOKEN_PREFIX, "")).getBody().getSubject();
            Optional<UserModel> userModelOptional = userCommons.getUserByEmail(user);
            if (!userModelOptional.isPresent()) {
                LOGGER.error("Invalid token is passed with web socket request");
                throw new DataException(GeneralConstants.EXCEPTION, "Invalid user", HttpStatus.BAD_REQUEST);
            }
        } catch (Exception e) {
            LOGGER.error(GeneralConstants.ERROR, e);
        }
        super.afterConnectionEstablished(session);
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus closeStatus) throws Exception {
        LOGGER.error("webSocket connection was closed");
        LOGGER.error("Reason for closure {} Session: {} ", closeStatus.getReason(), session.getId());
        super.afterConnectionClosed(session, closeStatus);
    }

    @Override
    public void handleTransportError(WebSocketSession session, Throwable exception) throws Exception {
        LOGGER.error("Connection closed unexpectedly");
        LOGGER.error(GeneralConstants.ERROR, exception);
        super.handleTransportError(session, exception);
    }
}
From the client side, I create a SockJS object to establish the connection:
let url = `/ws?token=${localStorage.getItem("access_token")}`;
// Web Socket connection
/* eslint-disable */
let sockJS = new SockJS(url);
let stompClient = Stomp.over(sockJS);
debugger
this.setState({
    stompObject : stompClient,
});
But the connection is not established consistently; most of the time it is stuck at "Opening connection". In the backend log I can see the connection being established and a session created, but in the browser console I can see the client sending messages to the server while the server never acknowledges them.
Sometimes, after refreshing the browser 10-15 times, the connection is established successfully. Is there any mistake in my configuration?
Thank you.
Given that you can hit refresh 10 or 15 times and then get a connection, I wonder if you are dealing with a cookie issue; Chrome is famous for that sort of thing. Close all browser windows and stop the browser, then restart it, clear the browsing history, and attempt the connection again. Also, be sure you read the version of the Spring Boot docs that matches the version you are actually using, and specify the Spring Boot version in your questions and when looking for answers.

Netty Parallel Handler Processing

Following recommendations elsewhere, I am attempting to parallelize my final inbound handler in a Netty pipeline, as follows:
public final class EchoServer {

    private EventLoopGroup group = new NioEventLoopGroup();
    private UnorderedThreadPoolEventExecutor workers = new UnorderedThreadPoolEventExecutor(10);

    public void start(int port) throws InterruptedException {
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioDatagramChannel.class).option(ChannelOption.SO_BROADCAST, true)
                .handler(new ChannelInitializer<NioDatagramChannel>() {
                    @Override
                    protected void initChannel(NioDatagramChannel channel) throws Exception {
                        channel.pipeline().addLast(workers, new SimpleChannelInboundHandler<DatagramPacket>() {
                            @Override
                            public void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) throws Exception {
                                System.err.println(packet);
                                // Simulated database delay that I have to wait for before responding
                                Thread.sleep(1000);
                                ctx.write(new DatagramPacket(Unpooled.copiedBuffer("goodbye", StandardCharsets.ISO_8859_1), packet.sender()));
                            }

                            @Override
                            public void channelReadComplete(ChannelHandlerContext ctx) {
                                ctx.flush();
                            }

                            @Override
                            public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                                cause.printStackTrace();
                            }
                        });
                    }
                });
            b.bind(port).sync().channel().closeFuture().await();
        } finally {
            group.shutdownGracefully();
        }
    }

    public void stop() {
        group.shutdownGracefully();
    }
}
I have ten clients that connect concurrently as a test, and I am measuring the execution time for handling all the requests. As expected, with the 1-second delay and sequential execution it takes just over 10 seconds. I am trying to get execution down to somewhere under 2 seconds to prove parallel handling.
From what I understand, adding the handler to the pipeline with an explicitly assigned executor is supposed to parallelize that handler's work across the threads of the executor.
Instead of seeing an increase in performance, I am finding that my client does not receive the responses when I add the parallel processing. The thread sleep is there to simulate the time it will take to write the incoming data to a database. Am I doing something obviously wrong here?
I worked around the apparent lack of Netty support for parallelizing final UDP processing by using standard Java concurrency mechanisms:
public final class EchoServer {

    private EventLoopGroup group = new NioEventLoopGroup();
    private ExecutorService executors = Executors.newFixedThreadPool(10);

    public void start(int port) throws InterruptedException {
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioDatagramChannel.class).handler(new ChannelInitializer<NioDatagramChannel>() {
                @Override
                protected void initChannel(NioDatagramChannel channel) throws Exception {
                    channel.pipeline().addLast(new SimpleChannelInboundHandler<DatagramPacket>() {
                        @Override
                        public void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) throws Exception {
                            CompletableFuture.runAsync(() -> {
                                System.err.println(packet);
                                try {
                                    Thread.sleep(1000);
                                } catch (InterruptedException e) {
                                    e.printStackTrace();
                                }
                                ctx.writeAndFlush(new DatagramPacket(Unpooled.copiedBuffer("goodbye", StandardCharsets.ISO_8859_1),
                                    packet.sender()));
                            }, executors);
                        }

                        @Override
                        public void channelReadComplete(ChannelHandlerContext ctx) {
                            ctx.flush();
                        }

                        @Override
                        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                            cause.printStackTrace();
                        }
                    });
                }
            });
            b.bind(port).sync().channel().closeFuture().await();
        } finally {
            group.shutdownGracefully();
        }
    }

    public void stop() {
        group.shutdownGracefully();
    }
}
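As for why the first attempt received no responses, one plausible explanation (my assumption, not a verified diagnosis): UnorderedThreadPoolEventExecutor makes no ordering guarantees between events, so the ctx.flush() triggered by channelReadComplete() can run before the ctx.write(...) issued from channelRead0(), leaving the response sitting unflushed in the outbound buffer. If that is the cause, the original executor-based approach should work when the write and flush happen in one step. A sketch of the change inside the original handler:

@Override
public void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) throws Exception {
    System.err.println(packet);
    Thread.sleep(1000); // simulated database delay, running on a worker thread
    // Write and flush in one step, so the response does not depend on
    // channelReadComplete() being invoked after this event on another thread.
    ctx.writeAndFlush(new DatagramPacket(
        Unpooled.copiedBuffer("goodbye", StandardCharsets.ISO_8859_1), packet.sender()));
}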

WebSocket: Is it possible to know from the program the reason onClose was called?

I have a sample WebSocket program, shown below, which works fine.
Whenever the user closes the browser, or there is an exception, or any disconnect, the onClose method is called.
My question: is it possible to know from the program the reason onClose was called?
Please share your views; thanks for reading.
public class Html5Servlet extends WebSocketServlet {

    private AtomicInteger index = new AtomicInteger();

    private static final List<String> tickers = new ArrayList<String>();
    static {
        tickers.add("ajeesh");
        tickers.add("peeyu");
        tickers.add("kidillan");
        tickers.add("entammo");
    }

    private static final long serialVersionUID = 1L;

    public WebSocket doWebSocketConnect(HttpServletRequest req, String resp) {
        //System.out.println("doWebSocketConnect");
        return new StockTickerSocket();
    }

    protected String getMyJsonTicker() throws Exception {
        return "";
    }

    public class StockTickerSocket implements WebSocket.OnTextMessage {

        private Connection connection;
        private Timer timer;

        @Override
        public void onClose(int arg0, String arg1) {
            System.out.println("onClose called!" + arg0);
        }

        @Override
        public void onOpen(Connection connection) {
            //System.out.println("onOpen");
            this.connection = connection;
            this.timer = new Timer();
        }

        @Override
        public void onMessage(String data) {
            //System.out.println("onMessage");
            if (data.indexOf("disconnect") >= 0) {
                connection.close();
                timer.cancel();
            } else {
                sendMessage();
            }
        }

        public void disconnect() {
            System.out.println("disconnect called");
        }

        public void onDisconnect() {
            System.out.println("onDisconnect called");
        }

        private void sendMessage() {
            if (connection == null || !connection.isOpen()) {
                //System.out.println("Connection is closed!!");
                return;
            }
            timer.schedule(new TimerTask() {
                @Override
                public void run() {
                    try {
                        //System.out.println("Running task");
                        connection.sendMessage(getMyJsonTicker());
                    } catch (IOException e) {
                        e.printStackTrace();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }, new Date(), 5000);
        }
    }
}
The signature for onClose is the following:

@Override
public void onClose(int closeCode, String closeReason) {
    System.out.println("onClose called - statusCode = " + closeCode);
    System.out.println("                 reason = " + closeReason);
}

Here int closeCode is one of the registered close status codes (RFC 6455, section 7.4), and String closeReason is an optional (per the protocol spec) close reason message.
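For illustration, here is a small sketch of how a few of the registered close codes from RFC 6455 might be interpreted; the mapping covers only some common codes and is my addition, not part of the original answer:

@Override
public void onClose(int closeCode, String closeReason) {
    // A few common close codes from RFC 6455, section 7.4.1
    String meaning;
    switch (closeCode) {
        case 1000: meaning = "normal closure"; break;
        case 1001: meaning = "going away (e.g. browser tab closed or server shutdown)"; break;
        case 1002: meaning = "protocol error"; break;
        case 1006: meaning = "abnormal closure (connection dropped without a close frame)"; break;
        case 1011: meaning = "server encountered an unexpected condition"; break;
        default:   meaning = "other (see the IANA WebSocket close code registry)"; break;
    }
    System.out.println("onClose: code=" + closeCode + " (" + meaning + "), reason=" + closeReason);
}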
