Following recommendations elsewhere, I am attempting to parallelize the final inbound handler in my Netty pipeline as follows:
public final class EchoServer {
private EventLoopGroup group = new NioEventLoopGroup();
private UnorderedThreadPoolEventExecutor workers = new UnorderedThreadPoolEventExecutor(10);
public void start(int port) throws InterruptedException {
try {
Bootstrap b = new Bootstrap();
b.group(group).channel(NioDatagramChannel.class).option(ChannelOption.SO_BROADCAST, true)
.handler(new ChannelInitializer<NioDatagramChannel>() {
@Override
protected void initChannel(NioDatagramChannel channel) throws Exception {
channel.pipeline().addLast(workers, new SimpleChannelInboundHandler<DatagramPacket>() {
@Override
public void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) throws Exception {
System.err.println(packet);
// Simulated database delay that I have to wait for before responding
Thread.sleep(1000);
ctx.write(new DatagramPacket(Unpooled.copiedBuffer("goodbye", StandardCharsets.ISO_8859_1), packet.sender()));
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
}
});
}
});
b.bind(port).sync().channel().closeFuture().await();
} finally {
group.shutdownGracefully();
}
}
public void stop() {
group.shutdownGracefully();
}
}
I have ten clients that connect concurrently as a test, and I am measuring the execution time for handling all of the requests. As expected, with the 1-second delay and sequential execution it takes just over 10 seconds. I am trying to get execution down to somewhere under 2 seconds to prove parallel handling.
From what I understand, adding a handler to the pipeline with an explicitly assigned executor is supposed to parallelize that handler's work across the threads of that executor.
Instead of seeing an increase in performance, what I am finding is that my client does not receive the responses once I add the parallel processing. The thread sleep is there to simulate the time it will take to write the incoming data to a database. Am I doing something obviously wrong here?
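For reference, here is a rough sketch of the kind of concurrent test client described above (this is my assumption of the setup, not code from the original post; the target port 7000 and the "hello" payload are placeholders, and the enclosing test method is assumed to declare throws Exception):
ExecutorService pool = Executors.newFixedThreadPool(10);
long start = System.nanoTime();
List<Future<?>> futures = new ArrayList<>();
for (int i = 0; i < 10; i++) {
    futures.add(pool.submit(() -> {
        // plain java.net UDP client: send one request, block until the reply arrives
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(5000);
            byte[] payload = "hello".getBytes(StandardCharsets.ISO_8859_1);
            socket.send(new java.net.DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), 7000));
            byte[] buf = new byte[1024];
            java.net.DatagramPacket reply = new java.net.DatagramPacket(buf, buf.length);
            socket.receive(reply); // waits for the "goodbye" response
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }));
}
for (Future<?> f : futures) {
    f.get();
}
System.out.printf("all responses received in %d ms%n", (System.nanoTime() - start) / 1_000_000);
pool.shutdown();
With truly parallel handling on the server this should complete in a little over one second; with sequential handling it takes a little over ten.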
I worked around the apparent lack of Netty support for parallelizing the final stage of UDP processing by using standard Java concurrency mechanisms.
public final class EchoServer {
private EventLoopGroup group = new NioEventLoopGroup();
private ExecutorService executors = Executors.newFixedThreadPool(10);
public void start(int port) throws InterruptedException {
try {
Bootstrap b = new Bootstrap();
b.group(group).channel(NioDatagramChannel.class).handler(new ChannelInitializer<NioDatagramChannel>() {
@Override
protected void initChannel(NioDatagramChannel channel) throws Exception {
channel.pipeline().addLast(new SimpleChannelInboundHandler<DatagramPacket>() {
@Override
public void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) throws Exception {
CompletableFuture.runAsync(() -> {
System.err.println(packet);
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
ctx.writeAndFlush(new DatagramPacket(Unpooled.copiedBuffer("goodbye", StandardCharsets.ISO_8859_1),
packet.sender()));
}, executors);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
}
});
}
});
b.bind(port).sync().channel().closeFuture().await();
} finally {
group.shutdownGracefully();
}
}
public void stop() {
group.shutdownGracefully();
}
}
I'm currently experimenting with the Binance WebSocket API (https://binance-docs.github.io/apidocs/spot/en/#websocket-market-streams), streaming the candlestick data for processing.
As documented, the stream will randomly close after 24 hours. What's the best way to continue the session without interruption? I mean, disconnect/reconnect after 23 hours so the program continues without losing its state.
Here's what I did. I'm using the binance-java-api: https://github.com/binance-exchange/binance-java-api
And here's how I connect:
BinanceApiWebSocketClient client =
BinanceApiClientFactory.newInstance(
appConfig.getApiKey(),
appConfig.getApiSecret(),
appConfig.isUseTestNet(),
appConfig.isUseTestNet())
.newWebSocketClient();
client.onCandlestickEvent(cryptoPair.toLowerCase(), getCandlestickInterval(),
new BinanceApiCallback<>() {
@Override
public void onResponse(final CandlestickEvent evt) {}
To solve this issue, I have used a scheduler/timer to reconnect the session every 12 hours. Since I'm using the Quarkus framework, it's readily available.
Solution:
SessionManager class:
@Singleton
@Slf4j
@RequiredArgsConstructor
public class SessionManagerScheduler {
final BinanceEventHandler binanceEventHandler;
@Scheduled(cron = "0 2 */12 * * ?")
public void reconnectSession() {
log.info("Keep-Alive: Binance Session Via WebSocket -------------------------");
binanceEventHandler.timeout();
}
}
The Binance event handler:
@ApplicationScoped
@Slf4j
@RequiredArgsConstructor
public class BinanceEventHandler {
final AppConfig appConfig;
final CandlestickAccumulator candlestickAccumulator;
final CandlestickMapper candlestickMapper;
private Closeable candleStream = null;
public void start() {
streamCandleEvent();
}
public void timeout() {
try {
candleStream.close();
streamCandleEvent();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
private void streamCandleEvent() {
String cryptoPair = String.join(",", appConfig.getCryptoPairs());
log.info("Start listening to cryptoPair={}", cryptoPair);
candleStream = getClient().onCandlestickEvent(cryptoPair.toLowerCase(), getCandlestickInterval(),
new BinanceApiCallback<>() {
@Override
public void onResponse(final CandlestickEvent evt) {
if (!evt.getBarFinal()) {
return;
}
log.debug("Processing cryptoPair={}, event={}", cryptoPair, evt);
Candlestick candlestick = candlestickMapper.asCandleStick(evt);
candlestickAccumulator.processCandlestickEvent(candlestick);
}
@Override
public void onFailure(final Throwable cause) {
Application.hasError = true;
log.error("Fail connecting to Binance API {}", cause.getMessage());
}
}
);
}
private BinanceApiWebSocketClient getClient() {
return BinanceApiClientFactory.newInstance(
appConfig.getApiKey(),
appConfig.getApiSecret(),
appConfig.isUseTestNet(),
appConfig.isUseTestNet())
.newWebSocketClient();
}
private CandlestickInterval getCandlestickInterval() {
return CandlestickInterval.valueOf(appConfig.getCandlestickInterval());
}
}
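For completeness, here is a minimal sketch (my assumption, not shown in the original post; the class name is illustrative) of how start() could be triggered when the Quarkus application boots, so the first candlestick stream is opened before the scheduler ever fires:
import io.quarkus.runtime.StartupEvent;
import javax.enterprise.context.ApplicationScoped;   // jakarta.* packages on newer Quarkus versions
import javax.enterprise.event.Observes;

@ApplicationScoped
public class BinanceStreamStarter {

    final BinanceEventHandler binanceEventHandler;

    BinanceStreamStarter(BinanceEventHandler binanceEventHandler) {
        this.binanceEventHandler = binanceEventHandler;
    }

    // Quarkus fires StartupEvent once the container is up; open the stream then
    void onStart(@Observes StartupEvent event) {
        binanceEventHandler.start();
    }
}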
I started a project where I implement Apache Kafka.
I already have a working producer that writes data into the queue. So far so good. Now I wanted to program a consumer that reads out all the data in the queue.
This is the corresponding code:
try {
consumer.subscribe(Collections.singletonList("names"));
if (startingPoint != null){
consumer.poll(Duration.ofMillis(0));
consumer.seekToBeginning(consumer.assignment());
}
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
for (ConsumerRecord<String, String> record : records) {
keyValuePairs.add(new String[]{record.key(),record.value()});
System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
}
} catch (Exception e) {
e.printStackTrace();
} finally {
consumer.close();
}
That code doesn't work right now like it is supposed to. Only new records are consumed.
I was able to find out that seekToBeginning() isn't working because no partition is assigned to the consumer at that moment.
If I increase the duration of the poll it works. If I just pause the thread, on the other hand, it doesn't.
Could someone please explain to me why that is the case? I tried to find out by myself and already read something about the Kafka heartbeat, but I still haven't fully understood what exactly happens.
The assignment takes time; polling for 0 will generally mean the poll will exit before it occurs.
You should add a ConsumerRebalanceListener callback to the subscribe() method and perform the seek in onPartitionsAssigned().
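Applied to the plain consumer from the question (same "names" topic; ConsumerRebalanceListener and TopicPartition come from org.apache.kafka.clients.consumer and org.apache.kafka.common), a minimal sketch looks like this:
consumer.subscribe(Collections.singletonList("names"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // called only after the group coordinator has actually assigned partitions,
        // so the seek now has something to act on
        consumer.seekToBeginning(partitions);
    }
});

ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
for (ConsumerRecord<String, String> record : records) {
    keyValuePairs.add(new String[]{record.key(), record.value()});
    System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
}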
EDIT
@SpringBootApplication
public class So69121558Application {
public static void main(String[] args) {
SpringApplication.run(So69121558Application.class, args);
}
@Bean
public ApplicationRunner runner(ConsumerFactory<String, String> cf, KafkaTemplate<String, String> template) {
return args -> {
template.send("so69121558", "test");
Consumer<String, String> consumer = cf.createConsumer("group", "");
consumer.subscribe(Collections.singletonList("so69121558"), new ConsumerRebalanceListener() {
@Override
public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
}
@Override
public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
consumer.seekToBeginning(partitions);
}
});
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
records.forEach(System.out::println);
Thread.sleep(5000);
consumer.close();
};
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so69121558").partitions(1).replicas(1).build();
}
}
Here are a couple of examples of doing it the Spring way - just add one of these (or both) to the above class.
@KafkaListener(id = "so69121558", topics = "so69121558")
void listen(ConsumerRecord<?, ?> rec) {
System.out.println(rec);
}
@KafkaListener(id = "so69121558-1", topics = "so69121558")
void pojoListen(String in) {
System.out.println(in);
}
The seeks are done a bit differently too; here's the complete example:
@SpringBootApplication
public class So69121558Application extends AbstractConsumerSeekAware {
public static void main(String[] args) {
SpringApplication.run(So69121558Application.class, args);
}
@KafkaListener(id = "so69121558", topics = "so69121558")
void listen(ConsumerRecord<?, ?> rec) {
System.out.println(rec);
}
@KafkaListener(id = "so69121558-1", topics = "so69121558")
void pojoListen(String in) {
System.out.println(in);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so69121558").partitions(1).replicas(1).build();
}
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
callback.seekToBeginning(assignments.keySet());
}
}
I'd like to gather connection and request timing metrics for an OkHttpClient instance that calls a particular service. I'm wondering if this approach is correct, and whether my interpretation of the event types makes sense?
Timer callTimer = <new codahale timer>;
Timer connectTimer = <new codahale timer>;
Timer secureConnectTimer = <new codahale timer>;
Timer requestTimer = <new codahale timer>;
// this gets registered with my client
new EventListener() {
// see https://square.github.io/okhttp/events/#eventlistener for info on the ordering of these events
private final Map<Call, Timer.Context> secureConnectTimerContexts = Maps.newConcurrentMap();
private final Map<Call, Timer.Context> connectTimerContexts = Maps.newConcurrentMap();
private final Map<Call, Timer.Context> callTimerContexts = Maps.newConcurrentMap();
private final Map<Call, Timer.Context> requestTimerContexts = Maps.newConcurrentMap();
@Override
public void secureConnectStart(Call call) {
secureConnectTimerContexts.put(call, secureConnectTimer.time());
}
@Override
public void secureConnectEnd(Call call, @Nullable Handshake handshake) {
Timer.Context context = secureConnectTimerContexts.remove(call);
if (Objects.nonNull(context)) {
context.stop();
}
}
@Override
public void connectStart(Call call, InetSocketAddress inetSocketAddress, Proxy proxy) {
connectTimerContexts.put(call, connectTimer.time());
}
@Override
public void connectEnd(Call call, InetSocketAddress inetSocketAddress, Proxy proxy, @Nullable Protocol protocol) {
Timer.Context context = connectTimerContexts.remove(call);
if (Objects.nonNull(context)) {
context.stop();
}
}
@Override
public void connectionAcquired(Call call, Connection connection) {
requestTimerContexts.put(call, requestTimer.time());
}
@Override
public void connectionReleased(Call call, Connection connection) {
Timer.Context context = requestTimerContexts.remove(call);
if (context != null) {
context.stop();
}
}
@Override
public void connectFailed(Call call, InetSocketAddress inetSocketAddress, Proxy proxy,
@Nullable Protocol protocol, IOException ioe) {
Timer.Context context = connectTimerContexts.remove(call);
if (Objects.nonNull(context)) {
context.stop();
}
}
@Override
public void callStart(Call call) {
callTimerContexts.put(call, callTimer.time());
}
@Override
public void callEnd(Call call) {
callFinishedForMetrics(call);
}
@Override
public void callFailed(Call call, IOException ioe) {
callFinishedForMetrics(call);
}
private void callFinishedForMetrics(Call call) {
Timer.Context callTimerContext = callTimerContexts.remove(call);
if (callTimerContext != null) {
callTimerContext.stop();
}
requestTimerContexts.remove(call);
secureConnectTimerContexts.remove(call);
connectTimerContexts.remove(call);
}
}
You can use EventListener.Factory to create a unique listener instance for each Call. That way you don't need all the maps; the Timer.Context objects can just be instance fields of the call-bound EventListener.
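A minimal sketch of that idea (Codahale Timer as in the question; the class name is illustrative, and only the call and connect timers are shown for brevity):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Proxy;
import com.codahale.metrics.Timer;
import okhttp3.Call;
import okhttp3.EventListener;
import okhttp3.Protocol;

class TimingListenerFactory implements EventListener.Factory {
    private final Timer callTimer;
    private final Timer connectTimer;

    TimingListenerFactory(Timer callTimer, Timer connectTimer) {
        this.callTimer = callTimer;
        this.connectTimer = connectTimer;
    }

    @Override
    public EventListener create(Call call) {
        // one listener per Call, so the contexts can live in plain instance fields
        return new EventListener() {
            private Timer.Context callContext;
            private Timer.Context connectContext;

            @Override
            public void callStart(Call call) {
                callContext = callTimer.time();
            }

            @Override
            public void connectStart(Call call, InetSocketAddress inetSocketAddress, Proxy proxy) {
                connectContext = connectTimer.time();
            }

            @Override
            public void connectEnd(Call call, InetSocketAddress inetSocketAddress, Proxy proxy, Protocol protocol) {
                if (connectContext != null) {
                    connectContext.stop();
                }
            }

            @Override
            public void callEnd(Call call) {
                if (callContext != null) {
                    callContext.stop();
                }
            }

            @Override
            public void callFailed(Call call, IOException ioe) {
                if (callContext != null) {
                    callContext.stop();
                }
            }
        };
    }
}
It is registered on the client builder with eventListenerFactory(...) instead of eventListener(...):
OkHttpClient client = new OkHttpClient.Builder()
        .eventListenerFactory(new TimingListenerFactory(callTimer, connectTimer))
        .build();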
I want to write a JUnit unit test case for my WebSocket server endpoint code using embedded Jetty.
I tried the things explained in the link below:
JUnit test with javax.websocket on embedded Jetty throws RejectedExecutionException: NonBlockingThread
I want to test my onMessage callback for the WebSocket.
If I don't use the server.join() method, the connection closes as soon as it opens.
If I use the server.join() method, nothing happens after joining.
Below is my code.
Server startup code:
public class EmbeddedJettyServer {
private final int port;
private Server server;
public EmbeddedJettyServer(int port) {
this.port = port;
}
public void start() throws Exception {
server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setPort(port);
server.addConnector(connector);
// Setup the basic application "context" for this application at "/"
// This is also known as the handler tree (in jetty speak)
ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setContextPath("/");
server.setHandler(context);
try {
// Initialize javax.websocket layer
ServerContainer wscontainer = WebSocketServerContainerInitializer.configureContext(context);
// Add WebSocket endpoint to javax.websocket layer
wscontainer.addEndpoint(WebSocketServer.class);
System.out.println("Begin start");
server.start();
server.dump(System.err);
server.join();
} catch (Throwable t) {
t.printStackTrace(System.err);
}
}
public void stop() throws Exception {
server.stop();
LOGGER.info("Jetty server stopped");
}
public URI getWebsocketUri(Class<WebSocketServer> class1) {
return server.getURI();
}
}
Client Code:
@ClientEndpoint()
public class WebSocketClientJetty {
WebSocketContainer container;
public Session connect(URI uri) throws Exception {
WebSocketContainer container = ContainerProvider.getWebSocketContainer();
try {
// Attempt Connect
Session session = container.connectToServer(WebSocketClientJetty.class,uri);
// return container.connectToServer(WebSocketClientJetty.class, uri);
session.getBasicRemote().sendText("Hello");
// Close session
// session.close();
return session;
} finally {
}
}
public void stop() throws Exception{
if (container instanceof LifeCycle) {
((LifeCycle) container).stop();
}
}
@OnOpen
public void onWebSocketConnect(Session sess)
{
System.out.println("Socket Connected: " + sess);
}
@OnMessage
public void onWebSocketText(String message)
{
System.out.println("Received TEXT message: " + message);
}
@OnClose
public void onWebSocketClose(CloseReason reason)
{
System.out.println("Socket Closed: " + reason);
}
@OnError
public void onWebSocketError(Throwable cause)
{
cause.printStackTrace(System.err);
}
}
Server endpoint code:
@ServerEndpoint(value = "/echo",
encoders={JsonEncoder.class})
public class WebSocketServer {
private static final Logger LOGGER =
@OnOpen
public void onOpen(Session session){
System.out.println("onopen");
some code....
}
@OnMessage
public void onMessage(String message, Session session) throws IOException{
System.out.println("onmessage");
....
}
@OnClose
public void onClose(Session session){
System.out.println("onClose");
...
}
}
JUnit test case:
public class WebSocketJettyTest {
private static EmbeddedJettyServer server;
@ClassRule
public static final ExternalResource integrationServer = new ExternalResource() {
@Override
protected void before() throws Throwable {
System.out.println("Starting...");
server = new EmbeddedJettyServer(8080);
server.start();
System.out.println("Started");
}
};
@Before
public void setUp() throws Exception {
}
@After
public void shutdown() throws Exception {
server.stop();
}
@Test
public void testSocket() throws Exception {
/*URI uri = server.getWebsocketUri(WebSocketServer.class);*/
URI uri = URI.create("ws://localhost:8080/echo");
WebSocketClientJetty client = new WebSocketClientJetty();
Session session = client.connect(uri);
session.getBasicRemote().sendText("hello");
Thread.sleep(6000);
client.stop();
}
}
Drop the call to
server.join();
That just makes the current thread wait until the server thread stops, which is what is making things difficult for you.
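For example, start() can be trimmed to something like this sketch (same classes as in the question, just without the blocking join; shutdown stays in the existing stop() method):
public void start() throws Exception {
    server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setPort(port);
    server.addConnector(connector);

    ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
    context.setContextPath("/");
    server.setHandler(context);

    // Initialize the javax.websocket layer and register the endpoint
    ServerContainer wscontainer = WebSocketServerContainerInitializer.configureContext(context);
    wscontainer.addEndpoint(WebSocketServer.class);

    server.start();   // returns as soon as the server is listening
    // no server.join() here - the JUnit thread keeps running and stop() shuts the server down later
}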
I have a sample WebSocket program, shown below, which works fine.
Whenever the user closes the browser, or if there is any exception or any disconnect, the onClose method is called.
My question: is it possible to know, from the program, the reason onClose was called?
Please share your views. Thanks for reading.
public class Html5Servlet extends WebSocketServlet {
private AtomicInteger index = new AtomicInteger();
private static final List<String> tickers = new ArrayList<String>();
static{
tickers.add("ajeesh");
tickers.add("peeyu");
tickers.add("kidillan");
tickers.add("entammo");
}
/**
*
*/
private static final long serialVersionUID = 1L;
public WebSocket doWebSocketConnect(HttpServletRequest req, String resp) {
//System.out.println("doWebSocketConnect");
return new StockTickerSocket();
}
protected String getMyJsonTicker() throws Exception{
return "";
}
public class StockTickerSocket implements WebSocket.OnTextMessage{
private Connection connection;
private Timer timer;
@Override
public void onClose(int arg0, String arg1) {
System.out.println("onClose called!"+arg0);
}
@Override
public void onOpen(Connection connection) {
//System.out.println("onOpen");
this.connection=connection;
this.timer=new Timer();
}
@Override
public void onMessage(String data) {
//System.out.println("onMessage");
if(data.indexOf("disconnect")>=0){
connection.close();
timer.cancel();
}else{
sendMessage();
}
}
public void disconnect() {
System.out.println("disconnect called");
}
public void onDisconnect()
{
System.out.println("onDisconnect called");
}
private void sendMessage() {
if(connection==null||!connection.isOpen()){
//System.out.println("Connection is closed!!");
return;
}
timer.schedule(new TimerTask() {
@Override
public void run() {
try{
//System.out.println("Running task");
connection.sendMessage(getMyJsonTicker());
}
catch (IOException e) {
e.printStackTrace();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}, new Date(),5000);
}
}
}
The signature for onClose is the following ...
@Override
public void onClose(int closeCode, String closeReason) {
System.out.println("onClose called - statusCode = " + closeCode);
System.out.println(" reason = " + closeReason);
}
Where int closeCode is any of the registered Close Status Codes.
And String closeReason is an optional (per protocol spec) close reason message.
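If you want to branch on it, a small sketch (the code meanings below follow RFC 6455; which code you actually see depends on how the peer closed):
@Override
public void onClose(int closeCode, String closeReason) {
    switch (closeCode) {
        case 1000: // normal closure, e.g. the client called close() cleanly
            System.out.println("Normal close: " + closeReason);
            break;
        case 1001: // going away, e.g. browser tab closed or server shutting down
            System.out.println("Peer going away: " + closeReason);
            break;
        case 1006: // abnormal closure, connection dropped without a close frame
            System.out.println("Connection dropped without a close frame");
            break;
        default:
            System.out.println("Closed with status " + closeCode + ", reason = " + closeReason);
    }
}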