Spark-Streaming CustomReceiver Unknown Host Exception - hadoop

I am new to Spark Streaming. I want to stream from a URL online in order to retrieve its data, so I used the JavaCustomReceiver to stream from the URL.
This is the code I'm using (source)
public class JavaCustomReceiver extends Receiver<String> {
private static final Pattern SPACE = Pattern.compile(" ");
public static void main(String[] args) throws Exception {
SparkConf sparkConf = new SparkConf().setAppName("JavaCustomReceiver");
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(1000));
JavaReceiverInputDStream<String> lines = ssc.receiverStream(
new JavaCustomReceiver("http://stream.meetup.com/2/rsvps", 80));
JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
@Override
public Iterator<String> call(String x) {
return Arrays.asList(SPACE.split(x)).iterator();
}
});
JavaPairDStream<String, Integer> wordCounts = words.mapToPair(
new PairFunction<String, String, Integer>() {
@Override
public Tuple2<String, Integer> call(String s) {
return new Tuple2<>(s, 1);
}
}).reduceByKey(new Function2<Integer, Integer, Integer>() {
@Override
public Integer call(Integer i1, Integer i2) {
return i1 + i2;
}
});
wordCounts.print();
ssc.start();
ssc.awaitTermination();
}
String host = null;
int port = -1;
public JavaCustomReceiver(String host_, int port_) {
super(StorageLevel.MEMORY_AND_DISK_2());
host = host_;
port = port_;
}
public void onStart() {
new Thread() {
@Override
public void run() {
receive();
}
}.start();
}
public void onStop() {
}
private void receive() {
try {
Socket socket = null;
BufferedReader reader = null;
String userInput = null;
try {
// connect to the server
socket = new Socket(host, port);
reader = new BufferedReader(
new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
// Until stopped or connection broken continue reading
while (!isStopped() && (userInput = reader.readLine()) != null) {
System.out.println("Received data '" + userInput + "'");
store(userInput);
}
} finally {
Closeables.close(reader, /* swallowIOException = */ true);
Closeables.close(socket, /* swallowIOException = */ true);
}
restart("Trying to connect again");
} catch (ConnectException ce) {
// restart if could not connect to server
restart("Could not connect", ce);
} catch (Throwable t) {
restart("Error receiving data", t);
}
}
}
However, I keep getting a java.net.UnknownHostException.
How can I fix this? What is wrong with the code I'm using?

After reading the code of the custom receiver referenced, it is clear that it is a TCP receiver that connects to a host:port, not an HTTP receiver that could take a URL. You'll have to change the code to read from an HTTP endpoint.
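For illustration, a minimal sketch (untested, and not part of the original answer) of a receive() that opens the URL as an HTTP stream via java.net.HttpURLConnection instead of a raw socket; the rest of the receiver class stays as in the question:
// Sketch: replace the Socket-based receive() with an HTTP-based one.
// Requires java.net.URL and java.net.HttpURLConnection among the imports.
private void receive() {
    try {
        HttpURLConnection connection = null;
        BufferedReader reader = null;
        try {
            // open the HTTP stream instead of new Socket(host, port)
            URL url = new URL("http://stream.meetup.com/2/rsvps");
            connection = (HttpURLConnection) url.openConnection();
            reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8));
            String line;
            // until stopped or the stream ends, push each line into Spark
            while (!isStopped() && (line = reader.readLine()) != null) {
                store(line);
            }
        } finally {
            Closeables.close(reader, /* swallowIOException = */ true);
            if (connection != null) {
                connection.disconnect();
            }
        }
        restart("Trying to connect again");
    } catch (Throwable t) {
        restart("Error receiving data", t);
    }
}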

Related

Freemarker Debugger framework usage example

I have started working on a FreeMarker debugger using breakpoints etc. The supplied framework is based on Java RMI. So far I get it to suspend at one breakpoint, but then ... nothing.
Is there a very basic example setup for the server part and the client part, other than the debug/imp classes supplied with the sources? That would be of great help.
this is my server class:
class DebuggerServer {
private final int port;
private final String templateName1;
private final Environment templateEnv;
private boolean stop = false;
public DebuggerServer(String templateName) throws IOException {
System.setProperty("freemarker.debug.password", "hello");
port = SecurityUtilities.getSystemProperty("freemarker.debug.port", Debugger.DEFAULT_PORT).intValue();
Configuration cfg = new Configuration();
// Some other recommended settings:
cfg.setIncompatibleImprovements(new Version(2, 3, 20));
cfg.setDefaultEncoding("UTF-8");
cfg.setLocale(Locale.US);
cfg.setTemplateExceptionHandler(TemplateExceptionHandler.RETHROW_HANDLER);
Template template = cfg.getTemplate(templateName);
templateName1 = template.getName();
System.out.println("Debugging " + templateName1);
Map<String, Object> root = new HashMap<>();
Writer consoleWriter = new OutputStreamWriter(System.out);
templateEnv = new Environment(template, null, consoleWriter);
DebuggerService.registerTemplate(template);
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server Acceptor").start();
}
private void startInternal() {
boolean handled = false;
while (!stop) {
List breakPoints = DebuggerService.getBreakpoints(templateName1);
for (int i = 0; i < breakPoints.size(); i++) {
try {
Breakpoint bp = (Breakpoint) breakPoints.get(i);
handled = DebuggerService.suspendEnvironment(templateEnv, templateName1, bp.getLine());
} catch (RemoteException e) {
System.err.println(e.getMessage());
}
}
}
}
public void stop() {
this.stop = true;
}
}
This is the client class:
class DebuggerClientHandler {
private final Debugger client;
private boolean stop = false;
public DebuggerClientHandler(String templateName) throws IOException {
// System.setProperty("freemarker.debug.password", "hello");
// System.setProperty("java.rmi.server.hostname", "192.168.2.160");
client = DebuggerClient.getDebugger(InetAddress.getByName("localhost"), Debugger.DEFAULT_PORT, "hello");
client.addDebuggerListener(environmentSuspendedEvent -> {
System.out.println("Break " + environmentSuspendedEvent.getName() + " at line " + environmentSuspendedEvent.getLine());
// environmentSuspendedEvent.getEnvironment().resume();
});
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server").start();
}
private void startInternal() {
while (!stop) {
}
}
public void stop() {
this.stop = true;
}
public void addBreakPoint(String s, int i) throws RemoteException {
Breakpoint bp = new Breakpoint(s, i);
List breakpoints = client.getBreakpoints();
client.addBreakpoint(bp);
}
}
Liferay IDE (https://github.com/liferay/liferay-ide) has FreeMarker template debug support (https://issues.liferay.com/browse/IDE-976), so somehow they managed to use it. I have never seen it in action though. Other than that, I'm not aware of anything that uses the debug API.

How to use Netty's channel pool map as a ConnectorProvider for a Jax RS client

I have wasted several hours trying to solve an issue with the use of Netty's channel pool map and a JAX-RS client.
I have used Jersey's own Netty connector as inspiration, but exchanged Netty's channel with Netty's channel pool map.
https://jersey.github.io/apidocs/2.27/jersey/org/glassfish/jersey/netty/connector/NettyConnectorProvider.html
My problem is that I have references that I need inside my custom SimpleChannelInboundHandler. However, by the design of Netty's channel pool map, I cannot pass the references through my custom ChannelPoolHandler, because once the pool map has created a pool, the constructor of the channel pool handler never runs again.
This is the method where it acquires a pool and checks out a channel to make an HTTP request.
@Override
public Future<?> apply(ClientRequest request, AsyncConnectorCallback callback) {
final CompletableFuture<Object> completableFuture = new CompletableFuture<>();
try{
HttpRequest httpRequest = buildHttpRequest(request);
// guard against prematurely closed channel
final GenericFutureListener<io.netty.util.concurrent.Future<? super Void>> closeListener =
future -> {
if (!completableFuture.isDone()) {
completableFuture.completeExceptionally(new IOException("Channel closed."));
}
};
try {
ClientRequestDTO clientRequestDTO = new ClientRequestDTO(NettyChannelPoolConnector.this, request, completableFuture, callback);
dtoMap.putIfAbsent(request.getUri(), clientRequestDTO);
// Retrieves a channel pool for the given host
FixedChannelPool pool = this.poolMap.get(clientRequestDTO);
// Acquire a new channel from the pool
io.netty.util.concurrent.Future<Channel> f = pool.acquire();
f.addListener((FutureListener<Channel>) futureWrite -> {
//Succeeded with acquiring a channel
if (futureWrite.isSuccess()) {
Channel channel = futureWrite.getNow();
channel.closeFuture().addListener(closeListener);
try {
if(request.hasEntity()) {
channel.writeAndFlush(httpRequest);
final JerseyChunkedInput jerseyChunkedInput = new JerseyChunkedInput(channel);
request.setStreamProvider(contentLength -> jerseyChunkedInput);
if(HttpUtil.isTransferEncodingChunked(httpRequest)) {
channel.write(jerseyChunkedInput);
} else {
channel.write(jerseyChunkedInput);
}
executorService.execute(() -> {
channel.closeFuture().removeListener(closeListener);
try {
request.writeEntity();
} catch (IOException ex) {
callback.failure(ex);
completableFuture.completeExceptionally(ex);
}
});
channel.flush();
} else {
channel.closeFuture().removeListener(closeListener);
channel.writeAndFlush(httpRequest);
}
} catch (Exception ex) {
System.err.println("Failed to sync and flush http request" + ex.getLocalizedMessage());
}
pool.release(channel);
}
});
} catch (NullPointerException ex) {
System.err.println("Failed to acquire socket from pool " + ex.getLocalizedMessage());
}
} catch (Exception ex) {
completableFuture.completeExceptionally(ex);
return completableFuture;
}
return completableFuture;
}
This is my ChannelPoolHandler
public class SimpleChannelPoolHandler implements ChannelPoolHandler {
private ClientRequestDTO clientRequestDTO;
private boolean ssl;
private URI uri;
private int port;
SimpleChannelPoolHandler(URI uri) {
this.uri = uri;
if(uri != null) {
this.port = uri.getPort() != -1 ? uri.getPort() : "https".equals(uri.getScheme()) ? 443 : 80;
ssl = "https".equalsIgnoreCase(uri.getScheme());
}
}
@Override
public void channelReleased(Channel ch) throws Exception {
System.out.println("Channel released: " + ch.toString());
}
@Override
public void channelAcquired(Channel ch) throws Exception {
System.out.println("Channel acquired: " + ch.toString());
}
@Override
public void channelCreated(Channel ch) throws Exception {
System.out.println("Channel created: " + ch.toString());
int readTimeout = Integer.parseInt(ApplicationEnvironment.getInstance().get("READ_TIMEOUT"));
SocketChannelConfig channelConfig = (SocketChannelConfig) ch.config();
channelConfig.setConnectTimeoutMillis(2000);
ChannelPipeline channelPipeline = ch.pipeline();
if(ssl) {
SslContext sslContext = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
channelPipeline.addLast("ssl", sslContext.newHandler(ch.alloc(), uri.getHost(), this.port));
}
channelPipeline.addLast("client codec", new HttpClientCodec());
channelPipeline.addLast("chunked content writer",new ChunkedWriteHandler());
channelPipeline.addLast("content decompressor", new HttpContentDecompressor());
channelPipeline.addLast("read timeout", new ReadTimeoutHandler(readTimeout, TimeUnit.MILLISECONDS));
channelPipeline.addLast("business logic", new JerseyNettyClientHandler(this.uri));
}
}
And this is my SimpleInboundHandler
public class JerseyNettyClientHandler extends SimpleChannelInboundHandler<HttpObject> {
private final NettyChannelPoolConnector nettyChannelPoolConnector;
private final LinkedBlockingDeque<InputStream> isList = new LinkedBlockingDeque<>();
private final AsyncConnectorCallback asyncConnectorCallback;
private final ClientRequest jerseyRequest;
private final CompletableFuture future;
public JerseyNettyClientHandler(ClientRequestDTO clientRequestDTO) {
this.nettyChannelPoolConnector = clientRequestDTO.getNettyChannelPoolConnector();
ClientRequestDTO cdto = clientRequestDTO.getNettyChannelPoolConnector().getDtoMap().get(clientRequestDTO.getClientRequest());
this.asyncConnectorCallback = cdto.getCallback();
this.jerseyRequest = cdto.getClientRequest();
this.future = cdto.getFuture();
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
if(msg instanceof HttpResponse) {
final HttpResponse httpResponse = (HttpResponse) msg;
final ClientResponse response = new ClientResponse(new Response.StatusType() {
@Override
public int getStatusCode() {
return httpResponse.status().code();
}
@Override
public Response.Status.Family getFamily() {
return Response.Status.Family.familyOf(httpResponse.status().code());
}
@Override
public String getReasonPhrase() {
return httpResponse.status().reasonPhrase();
}
}, jerseyRequest);
for (Map.Entry<String, String> entry : httpResponse.headers().entries()) {
response.getHeaders().add(entry.getKey(), entry.getValue());
}
if((httpResponse.headers().contains(HttpHeaderNames.CONTENT_LENGTH) && HttpUtil.getContentLength(httpResponse) > 0) || HttpUtil.isTransferEncodingChunked(httpResponse)) {
ctx.channel().closeFuture().addListener(future -> isList.add(NettyInputStream.END_OF_INPUT_ERROR));
response.setEntityStream(new NettyInputStream(isList));
} else {
response.setEntityStream(new InputStream() {
@Override
public int read() {
return -1;
}
});
}
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> {
asyncConnectorCallback.response(response);
future.complete(response);
});
}
}
if(msg instanceof HttpContent) {
HttpContent content = (HttpContent) msg;
ByteBuf byteContent = content.content();
if(byteContent.isReadable()) {
byte[] bytes = new byte[byteContent.readableBytes()];
byteContent.getBytes(byteContent.readerIndex(), bytes);
isList.add(new ByteArrayInputStream(bytes));
}
}
if(msg instanceof LastHttpContent) {
isList.add(NettyInputStream.END_OF_INPUT);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> asyncConnectorCallback.failure(cause));
}
future.completeExceptionally(cause);
isList.add(NettyInputStream.END_OF_INPUT_ERROR);
}
}
The references that need to be passed to the SimpleChannelInboundHandler are what is packed into the ClientRequestDTO, as seen in the first code block.
I am not sure, as this is untested code, but it could be achieved with the following:
SimpleChannelPool sPool = poolMap.get(Req.getAddress());
Future<Channel> f = sPool.acquire();
f.get().pipeline().addLast("inbound", new NettyClientInBoundHandler(Req, jbContext, ReportData));
f.addListener(new NettyClientFutureListener(this.Req, sPool));
where Req, jbContext, and ReportData could be input data for the InboundHandler.
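If constructor injection is the blocker, another option (a sketch, untested; AttributeKey and Channel.attr(...) are standard Netty 4 APIs, while ClientRequestDTO is the question's own type) is to attach the per-request state to the channel itself, so the handler can be added once in channelCreated(...) with no constructor arguments:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpObject;
import io.netty.util.AttributeKey;

public final class RequestAttribute {
    // defined once and shared; valueOf(...) returns the same key for the same name
    public static final AttributeKey<ClientRequestDTO> REQUEST_DTO =
            AttributeKey.valueOf("requestDto");
}

// At acquire time, before writing the request:
//   Channel channel = futureWrite.getNow();
//   channel.attr(RequestAttribute.REQUEST_DTO).set(clientRequestDTO);
//   channel.writeAndFlush(httpRequest);

// The inbound handler then reads the DTO back per message:
class AttributeAwareClientHandler extends SimpleChannelInboundHandler<HttpObject> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) {
        ClientRequestDTO dto = ctx.channel().attr(RequestAttribute.REQUEST_DTO).get();
        // ... use dto.getCallback(), dto.getClientRequest(), dto.getFuture() as before
    }
}
Since an attribute lives on the connection, this fits the pool's one-request-per-channel usage shown above; concurrent requests on the same channel would still need a different mechanism.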

Building a Future API on top of Netty

I want to build an API based on Futures (from java.util.concurrent) that is powered by a custom protocol on top of Netty (version 4). The basic idea is to write a simple library that abstracts the underlying Netty implementation and makes it easier to make requests.
Using this library, one should be able to write something like this:
Request req = new Request(...);
Future<Response> responseFuture = new ServerIFace(host, port).call(req);
// For example, let's block until this future is resolved
Response res = responseFuture.get().getResult();
Underneath this code, a Netty client is connected:
public class ServerIFace {
private Bootstrap bootstrap;
private EventLoopGroup workerGroup;
private String host;
private int port;
public ServerIFace(String host, int port) {
this.host = host;
this.port = port;
this.workerGroup = new NioEventLoopGroup();
bootstrap();
}
private void bootstrap() {
bootstrap = new Bootstrap();
bootstrap.group(workerGroup);
bootstrap.channel(NioSocketChannel.class);
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ObjectEncoder());
ch.pipeline().addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(Response.class.getClassLoader())));
ch.pipeline().addLast("response", new ResponseReceiverChannelHandler());
}
});
}
public Future<Response> call(final Request request) throws InterruptedException {
CompletableFuture<Response> responseFuture = new CompletableFuture<>();
Channel ch = bootstrap.connect(host, port).sync().channel();
ch.writeAndFlush(request).addListener((f) -> {
if (f.isSuccess()) {
System.out.println("Wrote successfully");
} else {
f.cause().printStackTrace();
}
});
ChannelFuture closeFuture = ch.closeFuture();
// Have to 'convert' ChannelFuture to java.util.concurrent.Future
closeFuture.addListener((f) -> {
if (f.isSuccess()) {
// How to get this response?
Response response = ((ResponseReceiverChannelHandler) ch.pipeline().get("response")).getResponse();
responseFuture.complete(response);
} else {
f.cause().printStackTrace();
responseFuture.cancel(true);
}
ch.close();
}).sync();
return responseFuture;
}
}
Now, as you can see, in order to abstract Netty's inner ChannelFuture, I have to 'convert' it to Java's Future (I'm aware that ChannelFuture is derived from Future, but that information doesn't seem useful at this point).
Right now, I'm capturing this Response object in the last handler of my inbound part of the client pipeline, the ResponseReceiverChannelHandler.
public class ResponseReceiverChannelHandler extends ChannelInboundHandlerAdapter {
private Response response = null;
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
this.response = (Response)msg;
ctx.close();
}
public Response getResponse() {
return response;
}
}
Since I'm new to Netty and these things in general, I'm looking for a cleaner, thread-safe way of delivering this object to the API user.
Correct me if I'm wrong, but none of the Netty examples show how to achieve this, and most of the client examples just print out whatever they get from the server.
Please note that my main goal here is to learn more about Netty, and that this code has no production purposes.
For reference (although I don't think it's that relevant), here's the server code.
public class Server {
public static class RequestProcessorHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ChannelFuture future;
if (msg instanceof Request) {
Request req = (Request)msg;
Response res = computeResponse(req); // "some function of req" (hypothetical helper)
future = ctx.writeAndFlush(res);
} else {
future = ctx.writeAndFlush("Error, not a request!");
}
future.addListener((f) -> {
if (f.isSuccess()) {
System.out.println("Response sent!");
} else {
System.out.println("Response not sent!");
f.cause().printStackTrace();
}
});
}
}
public int port;
public Server(int port) {
this.port = port;
}
public void run() throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(Request.class.getClassLoader())));
ch.pipeline().addLast(new ObjectEncoder());
// Not really shutting down this threadpool but it's ok for now
ch.pipeline().addLast(new DefaultEventExecutorGroup(2), new RequestProcessorHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture f = b.bind(port).sync();
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
int port;
if (args.length > 0) {
port = Integer.parseInt(args[0]);
} else {
port = 8080;
}
new Server(port).run();
}
}
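Since the question is mainly about a cleaner, thread-safe handover, here is one common pattern as a sketch (untested, not from the original post; Response is the question's own type): create the CompletableFuture per request and let the handler complete it directly, instead of reading mutable state via getResponse() after the channel closes.
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.CompletableFuture;

public class ResponseFutureHandler extends ChannelInboundHandlerAdapter {
    private final CompletableFuture<Response> responseFuture;

    public ResponseFutureHandler(CompletableFuture<Response> responseFuture) {
        this.responseFuture = responseFuture;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // complete(...) safely publishes the response to whichever thread
        // is blocked on responseFuture.get()
        responseFuture.complete((Response) msg);
        ctx.close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        responseFuture.completeExceptionally(cause);
        ctx.close();
    }
}
call() would then install a new ResponseFutureHandler(responseFuture) in that connection's pipeline and return the future immediately, without syncing on closeFuture(); CompletableFuture is thread-safe, so the event loop can deliver the result while the API user blocks on get().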

Http proxy in Netty websocket client to connect to internet

My application is running behind a corporate firewall, and I need to use an HTTP proxy (http://theclientproxy.net:8080) to connect to the internet.
I have used the Netty client example below:
https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/http/websocketx/client
Code:
public final class WebSocketClient {
static final String URL = System.getProperty("url", "wss://127.0.0.1:8080/websocket");
public static void main(String[] args) throws Exception {
URI uri = new URI(URL);
String scheme = uri.getScheme() == null? "ws" : uri.getScheme();
final String host = uri.getHost() == null? "127.0.0.1" : uri.getHost();
// derive the port from the URI scheme, as in the original Netty example
final int port;
if (uri.getPort() == -1) {
    if ("ws".equalsIgnoreCase(scheme)) {
        port = 80;
    } else if ("wss".equalsIgnoreCase(scheme)) {
        port = 443;
    } else {
        port = -1;
    }
} else {
    port = uri.getPort();
}
final boolean ssl = "wss".equalsIgnoreCase(scheme);
final SslContext sslCtx;
if (ssl) {
sslCtx = SslContextBuilder.forClient()
.trustManager(InsecureTrustManagerFactory.INSTANCE).build();
} else {
sslCtx = null;
}
EventLoopGroup group = new NioEventLoopGroup();
try {
final WebSocketClientHandler handler =
new WebSocketClientHandler(
WebSocketClientHandshakerFactory.newHandshaker(
uri, WebSocketVersion.V13, null, true, new DefaultHttpHeaders()));
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) {
ChannelPipeline p = ch.pipeline();
if (sslCtx != null) {
p.addFirst(new HttpProxyHandler(new InetSocketAddress("theclientproxy.net", 8080) ) );
p.addLast(sslCtx.newHandler(ch.alloc(), host, port));
}
p.addLast(
new HttpClientCodec(),
new HttpObjectAggregator(8192),
WebSocketClientCompressionHandler.INSTANCE,
handler);
}
});
Channel ch = b.connect(uri.getHost(), port).sync().channel();
handler.handshakeFuture().sync();
BufferedReader console = new BufferedReader(new InputStreamReader(System.in));
while (true) {
String msg = console.readLine(); //THIS IS NULL IN DATA CENTER LOGS
if (msg == null) {
break;
} else if ("bye".equals(msg.toLowerCase())) {
ch.writeAndFlush(new CloseWebSocketFrame());
ch.closeFuture().sync();
break;
} else if ("ping".equals(msg.toLowerCase())) {
WebSocketFrame frame = new PingWebSocketFrame(Unpooled.wrappedBuffer(new byte[] { 8, 1, 8, 1 }));
ch.writeAndFlush(frame);
} else {
WebSocketFrame frame = new TextWebSocketFrame(msg);
ch.writeAndFlush(frame);
}
}
} finally {
group.shutdownGracefully();
}
}
}
Handler:
public class WebSocketClientHandler extends SimpleChannelInboundHandler<Object> {
private final WebSocketClientHandshaker handshaker;
private ChannelPromise handshakeFuture;
public WebSocketClientHandler(WebSocketClientHandshaker handshaker) {
this.handshaker = handshaker;
}
public ChannelFuture handshakeFuture() {
return handshakeFuture;
}
@Override
public void handlerAdded(ChannelHandlerContext ctx) {
handshakeFuture = ctx.newPromise();
}
@Override
public void channelActive(ChannelHandlerContext ctx) {
handshaker.handshake(ctx.channel());
}
@Override
public void channelInactive(ChannelHandlerContext ctx) {
System.out.println("WebSocket Client disconnected!");
}
@Override
public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
Channel ch = ctx.channel();
if (!handshaker.isHandshakeComplete()) {
try {
handshaker.finishHandshake(ch, (FullHttpResponse) msg);
System.out.println("WebSocket Client connected!");
handshakeFuture.setSuccess();
} catch (WebSocketHandshakeException e) {
System.out.println("WebSocket Client failed to connect");
handshakeFuture.setFailure(e);
}
return;
}
}
}
The application is able to connect to the websocket server endpoint from my local machine successfully.
But in the company datacenter where my application is deployed, I see that the msg value is null and the websocket client is disconnected.
Does that mean my connection is blocked at the firewall? If that is the case, then why was the statement "WebSocket Client connected!" printed at all?
Thanks
The HttpProxyHandler you used is correct.
Just remove the BufferedReader code, as explained in the answer below, when deploying on Linux, Docker, etc.:
Netty WebSocket Client Channel always gets inactive on Linux Server
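Concretely, a minimal sketch (untested) of what the client's main loop could do instead of reading System.in, which has no console attached under Docker or a daemon and therefore returns null immediately:
// After handler.handshakeFuture().sync():
// keep the connection alive with periodic pings scheduled on the event loop
// (scheduleAtFixedRate is part of Netty's EventLoop API; TimeUnit is
// java.util.concurrent.TimeUnit)
ch.eventLoop().scheduleAtFixedRate(
        () -> ch.writeAndFlush(new PingWebSocketFrame(
                Unpooled.wrappedBuffer(new byte[] { 8, 1, 8, 1 }))),
        10, 10, TimeUnit.SECONDS);

// block until the server closes the connection or the client is shut down
ch.closeFuture().sync();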

Netty performs slow at writing

Netty 4.1 (on OpenJDK 1.6.0_32 and CentOS 6.4) message sending is strangely slow. According to the profiler, DefaultChannelHandlerContext.writeAndFlush accounts for the biggest share (60%) of the running time, while the decoding process is not emphasized in the profiler at all. Small messages are being processed, and maybe the bootstrap options are not set correctly (TCP_NODELAY is true and nothing improved)?

A DefaultEventExecutorGroup is used both in the server and in the client to avoid blocking Netty's main event loop; the 'ServerData' and 'ClientData' classes with the business logic run there, and messages are sent from them through context.writeAndFlush(...). Is there a more proper/faster way? Using straight ByteBuf.writeBytes(..) serialization in the encoder and a ReplayingDecoder in the decoder made no difference in encoding speed. Sorry for the lengthy code; neither the 'Netty in Action' book nor the documentation helped.
JProfiler's call tree of the client side: http://i62.tinypic.com/dw4e43.jpg
The server class is:
public class NettyServer
{
EventLoopGroup incomingLoopGroup = null;
EventLoopGroup workerLoopGroup = null;
ServerBootstrap serverBootstrap = null;
int port;
DataServer dataServer = null;
DefaultEventExecutorGroup dataEventExecutorGroup = null;
DefaultEventExecutorGroup dataEventExecutorGroup2 = null;
public ChannelFuture serverChannelFuture = null;
public NettyServer(int port)
{
this.port = port;
dataServer = new DataServer(this);
}
public void run() throws Exception
{
incomingLoopGroup = new NioEventLoopGroup();
workerLoopGroup = new NioEventLoopGroup();
dataEventExecutorGroup = new DefaultEventExecutorGroup(5);
dataEventExecutorGroup2 = new DefaultEventExecutorGroup(5);
try
{
ChannelInitializer<SocketChannel> channelInitializer =
new ChannelInitializer<SocketChannel>()
{
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MessageByteDecoder());
ch.pipeline().addLast(new MessageByteEncoder());
ch.pipeline().addLast(dataEventExecutorGroup, new DataServerInboundHandler(dataServer, NettyServer.this));
ch.pipeline().addLast(dataEventExecutorGroup2, new DataServerDataHandler(dataServer));
}
};
// bootstrap the server
serverBootstrap = new ServerBootstrap();
serverBootstrap.group(incomingLoopGroup, workerLoopGroup)
.channel(NioServerSocketChannel.class)
.childHandler(channelInitializer)
.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
.option(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024)
.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024)
.childOption(ChannelOption.SO_KEEPALIVE, true);
serverChannelFuture = serverBootstrap.bind(port).sync();
serverChannelFuture.channel().closeFuture().sync();
}
finally
{
incomingLoopGroup.shutdownGracefully();
workerLoopGroup.shutdownGracefully();
}
}
}
The client class:
public class NettyClient
{
Bootstrap clientBootstrap = null;
EventLoopGroup workerLoopGroup = null;
String serverHost = null;
int serverPort = -1;
ChannelFuture clientFutureChannel = null;
DataClient dataClient = null;
DefaultEventExecutorGroup dataEventExecutorGroup = new DefaultEventExecutorGroup(5);
DefaultEventExecutorGroup dataEventExecutorGroup2 = new DefaultEventExecutorGroup(5);
public NettyClient(String serverHost, int serverPort)
{
this.serverHost = serverHost;
this.serverPort = serverPort;
}
public void run() throws Exception
{
workerLoopGroup = new NioEventLoopGroup();
try
{
this.dataClient = new DataClient();
ChannelInitializer<SocketChannel> channelInitializer =
new ChannelInitializer<SocketChannel>()
{
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MessageByteDecoder());
ch.pipeline().addLast(new MessageByteEncoder());
ch.pipeline().addLast(dataEventExecutorGroup, new ClientInboundHandler(dataClient, NettyClient.this));
ch.pipeline().addLast(dataEventExecutorGroup2, new ClientDataHandler(dataClient));
}
};
clientBootstrap = new Bootstrap();
clientBootstrap.group(workerLoopGroup);
clientBootstrap.channel(NioSocketChannel.class);
clientBootstrap.option(ChannelOption.SO_KEEPALIVE, true);
clientBootstrap.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
clientBootstrap.option(ChannelOption.TCP_NODELAY, true);
clientBootstrap.option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024);
clientBootstrap.option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024);
clientBootstrap.handler(channelInitializer);
clientFutureChannel = clientBootstrap.connect(serverHost, serverPort).sync();
clientFutureChannel.channel().closeFuture().sync();
}
finally
{
workerLoopGroup.shutdownGracefully();
}
}
}
The message class:
public class Message implements Serializable
{
public static final byte MSG_FIELD = 0;
public static final byte MSG_HELLO = 1;
public static final byte MSG_LOG = 2;
public static final byte MSG_FIELD_RESPONSE = 3;
public static final byte MSG_MAP_KEY_VALUE = 4;
public static final byte MSG_STATS_FILE = 5;
public static final byte MSG_SHUTDOWN = 6;
public byte msgID;
public byte msgType;
public String key;
public String value;
public byte method;
public byte id;
}
The decoder:
public class MessageByteDecoder extends ByteToMessageDecoder
{
private Kryo kryoCodec = new Kryo();
private int contentSize = 0;
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out) //throws Exception
{
if (!buffer.isReadable() || buffer.readableBytes() < 4) // we need at least integer
return;
// read header
if (contentSize == 0) {
contentSize = buffer.readInt();
}
if (buffer.readableBytes() < contentSize)
return;
// read content
byte [] buf = new byte[contentSize];
buffer.readBytes(buf);
Input in = new Input(buf, 0, buf.length);
out.add(kryoCodec.readObject(in, Message.class));
contentSize = 0;
}
}
The encoder:
public class MessageByteEncoder extends MessageToByteEncoder<Message>
{
Kryo kryoCodec = new Kryo();
public MessageByteEncoder()
{
super(false);
}
@Override
protected void encode(ChannelHandlerContext ctx, Message msg, ByteBuf out) throws Exception
{
int offset = out.arrayOffset() + out.writerIndex();
byte [] inArray = out.array();
Output kryoOutput = new OutputWithOffset(inArray, inArray.length, offset + 4);
// serialize message content
kryoCodec.writeObject(kryoOutput, msg);
// write length of the message content at the beginning of the array
out.writeInt(kryoOutput.position());
out.writerIndex(out.writerIndex() + kryoOutput.position());
}
}
Client's business logic run in DefaultEventExecutorGroup:
public class DataClient
{
ChannelHandlerContext ctx;
// ...
public void processData()
{
// ...
while ((line = br.readLine()) != null)
{
// ...
process = new CountDownLatch(columns.size());
for(Column c : columns)
{
// sending column data to the server for processing
ctx.channel().eventLoop().execute(new Runnable() {
@Override
public void run() {
ctx.writeAndFlush(Message.createMessage(msgID, processID, c.key, c.value));
}});
}
// block until all the processed column fields of this row are returned from the server
process.await();
// write processed line to file ...
}
// ...
}
// ...
}
Client's message handling:
public class ClientInboundHandler extends ChannelInboundHandlerAdapter
{
DataClient dataClient = null;
// ...
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg)
{
// dispatch the message to the listeners
Message m = (Message) msg;
switch(m.msgType)
{
case Message.MSG_FIELD_RESPONSE: // message with processed data is received from the server
// decreases the 'process' CountDownLatch in the processData() method
dataClient.setProcessingResult(m.msgID, m.value);
break;
// ...
}
// forward the message to the pipeline
ctx.fireChannelRead(msg);
}
// ...
}
Server's message handling:
public class ServerInboundHandler extends ChannelInboundHandlerAdapter
{
private DataServer dataServer = null;
// ...
@Override
public void channelRead(ChannelHandlerContext ctx, Object obj) throws Exception
{
Message msg = (Message) obj;
switch(msg.msgType)
{
case Message.MSG_FIELD:
dataServer.processField(msg, ctx);
break;
// ...
}
ctx.fireChannelRead(msg);
}
//...
}
Server's business logic run in DefaultEventExecutorGroup:
public class DataServer
{
// ...
public void processField(final Message msg, final ChannelHandlerContext context)
{
context.executor().submit(new Runnable()
{
@Override
public void run()
{
String processedValue = (String) processField(msg.key, msg.value);
final Message responseToClient = Message.createResponseFieldMessage(msg.msgID, processedValue);
// send processed data to the client
context.channel().eventLoop().submit(new Runnable(){
@Override
public void run() {
context.writeAndFlush(responseToClient);
}
});
}
});
}
// ...
}
Please try using CentOS 7.0.
I've had a similar problem:
The same Netty 4 program runs very fast on CentOS 7.0 (about 40k msg/s), but can't write more than about 8k msg/s on CentOS 6.3 and 6.5 (I haven't tried 6.4).
There is no need to submit stuff to the EventLoop. Just call Channel.writeAndFlush(...) directly in your DataClient and DataServer.
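Applied to the question's own code, that simplification might look like this (a sketch; Message.createMessage and Message.createResponseFieldMessage are the question's helpers):
// DataClient.processData(), simplified: writeAndFlush(...) is thread-safe and
// already hands the write off to the channel's event loop internally
for (Column c : columns) {
    ctx.writeAndFlush(Message.createMessage(msgID, processID, c.key, c.value));
}

// DataServer.processField(...), simplified likewise:
String processedValue = (String) processField(msg.key, msg.value);
context.writeAndFlush(Message.createResponseFieldMessage(msg.msgID, processedValue));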
